Adaptive Brownian Dynamics
Abstract
A framework for performant Brownian Dynamics (BD) many-body simulations with adaptive timestepping is presented. In contrast to the Euler-Maruyama scheme of common non-adaptive BD, we employ an embedded Heun-Euler integrator for the propagation of the overdamped coupled Langevin equations of motion. This enables the derivation of a local error estimate and the formulation of criteria for the acceptance or rejection of trial steps and for the control of optimal stepsize. Introducing erroneous bias in the random forces is avoided by Rejection Sampling with Memory (RSwM) due to Rackauckas and Nie, which makes use of the Brownian bridge theorem and guarantees the correct generation of a specified random process even when rejecting trial steps. For test cases of Lennard-Jones fluids in bulk and in confinement, it is shown that adaptive BD solves performance and stability issues of conventional BD, already outperforming the latter even in standard situations. We expect this novel computational approach to BD to be especially helpful in long-time simulations of complex systems, e.g. in non-equilibrium, where concurrent slow and fast processes occur.
I Introduction
Computer simulations have long become an established tool in the investigation of physical phenomena [1, 2, 3]. Complementing experimental results, they build the foundation for the exploration of increasingly complex dynamical systems. From the standpoint of classical statistical mechanics, the simulation of a many-body system consisting of discrete interacting particles can reveal information about its structural correlations as well as its thermodynamic properties. Naturally, this opens up the possibility of tackling many problems in the fields of materials science, soft matter and biophysics, such as investigating the dynamics of macromolecules [4], predicting rheological properties of fluids [5, 6, 7], or exploring non-equilibrium processes which occur e.g. in colloidal suspensions under the influence of external forcing [8].
With the ever-increasing capabilities of computer hardware, a variety of different computational methods have emerged since the middle of the last century. Conceptually, at least three distinct classes of particle-based simulation frameworks can be identified: i) Monte-Carlo (MC), which relies on the stochastic exploration of phase space, ii) Molecular Dynamics (MD), in which the set of ordinary differential equations (ODEs) of Hamiltonian dynamics is integrated to obtain particle trajectories, and iii) Langevin Dynamics, where random processes are incorporated into the Newtonian equations of motion so that the evolution of a system is obtained by numerical integration of the resulting stochastic differential equations (SDEs). Brownian Dynamics (BD) can be seen as a special case of iii), since the underlying stochastic Langevin equation is thereby considered in the overdamped limit, where particle inertia vanishes and the particle coordinates remain as the sole microscopic degrees of freedom.
Notably, a broad range of refined methods have been developed in all three categories, sometimes even intersecting those. Important examples of such extensions are kinetic Monte-Carlo for the approximation of time evolution from reaction rates [9], the addition of thermostats in MD to model thermodynamic coupling [10, 11], event-driven algorithms which enable both MD and BD in hard-particle systems [12, 13], and the adaptation of molecular algorithms to modern hardware [14]. Improvements in the calculation of observables from the resulting particle configurations have been made as well, e.g. by modifying their generation in MC (umbrella sampling, transition matrix MC [15], Wang-Landau sampling [16]) or by utilizing advanced evaluation schemes in MD and BD, such as force sampling [17, 18, 19] or adaptive resolution MD [20].
The efficiency and accuracy of a certain algorithm are always primary concerns, as these properties are essential for applicability and practicability in real-world problems. One therefore aims to design procedures that are both as fast and as precise as possible – yet it is no surprise that those two goals might often be conflicting. Especially in BD, where stochastic processes complicate the numerical treatment, the development of more sophisticated algorithms apparently lags behind that of MD, for example, and one often resorts to alternative or combined simulation techniques [21, 22]. If the full dynamical description of BD is indeed considered, the equations of motion are usually integrated with the simple Euler-Maruyama method [23], where stochasticity is accounted for in each equidistant step via normally distributed random numbers. This can lead to inaccuracies and stability problems, making BD seem inferior to other computational methods.
In this work, we propose a novel approach to BD simulations which rectifies the above shortcomings of conventional BD. To achieve this, we employ an adaptive timestepping algorithm that enables the control of the numerical error as follows. The common Euler-Maruyama method is complemented with a higher-order Heun step to obtain an embedded integrator pair for an estimation of the local discretisation error per trial step. By comparison of this error estimate with a prescribed tolerance, the trial step is either accepted or it is rejected and then retried with a smaller stepsize. Particular care is required after rejections so as to not introduce a bias in the random forces which would violate their desired properties. We therefore use Rejection Sampling with Memory (RSwM) [24] to retain a Gaussian random process even in a scenario where already determined random increments may conditionally be discarded. RSwM is a recently developed algorithm for the adaptive generation of random processes in the numerical solution of SDEs, which we improve and specialize to our context of overdamped Brownian motion and thereby formulate a method for adaptive BD simulations.
We demonstrate the practical advantages of adaptive BD over common BD in simulation results for prototypical bulk equilibrium systems and for more involved cases in non-equilibrium. A notable example which we investigate is the drying of colloidal films at planar surfaces. Especially when dealing with non-trivial mixtures, as e.g. present in common paints and coatings, the dynamics of this process can be inherently complex and its quantitative description turns out to be a major challenge [25, 26]. This stands in contrast to the necessity of understanding and predicting stratification processes in those systems. Stratification leads to a dried film that has multiple layers differing in the concentration of constituent particle species, thereby influencing macroscopic properties of the resulting colloidal film. Therefore, controlling this process is an important measure to tailor colloidal suspensions to their field of application. Advances in this area have been made experimentally [27, 28], by utilizing functional many-body frameworks like dynamical density functional theory (DDFT) [29], and with molecular simulations such as conventional BD [30]. By employing the adaptive BD method, we are able to capture the complex dynamical processes occurring in those systems even in the final dense state. Close particle collisions and jammed states are resolved with the required adjustment of the timestep, necessary for the stability and accuracy of the simulation run in those circumstances. This cannot be achieved easily with common BD.
This paper is structured as follows. In Sec. II, a brief and non-rigorous mathematical introduction to the numerical solution of SDEs is given. Particularly, we illustrate the prerequisites for adaptive and higher-order algorithms in the case of general SDEs and emphasize certain pitfalls. In Sec. III, these considerations are applied to the case of Brownian motion. We construct the embedded Heun-Euler integration scheme in Sec. III.1 and incorporate RSwM in Sec. III.2, which yields the adaptive BD method. Observables can then be sampled from the resulting particle trajectories with the means illustrated in Sec. III.3. In Sec. IV, simulation results of the above-mentioned Lennard-Jones systems are shown and the practical use of adaptive BD is confirmed. In Sec. V, we conclude with a summary of the shown concepts, propose possible improvements for the adaptation of timesteps, and present further ideas for use cases.
II Numerics of Stochastic Differential Equations
Brownian Dynamics of a classical many-body system of $N$ particles in $d$ spatial dimensions with positions $\mathbf{r}_i(t)$, $i = 1, \ldots, N$, at time $t$ and temperature $T$ is described by the overdamped Langevin equation. The trajectory $\mathbf{r}_i(t)$ of particle $i$ satisfies

\dot{\mathbf{r}}_i(t) = \frac{\mathbf{f}_i(\mathbf{r}^N(t), t)}{\gamma_i} + \sqrt{\frac{2 k_B T}{\gamma_i}}\, \mathbf{R}_i(t),    (1)

where $\mathbf{f}_i(\mathbf{r}^N, t)$ is the total force (composed of external and interparticle contributions) acting on particle $i$, with $\mathbf{r}^N = \{\mathbf{r}_1, \ldots, \mathbf{r}_N\}$ denoting the full set of positions, $\gamma_i$ is the friction coefficient of particle $i$ and $k_B$ is Boltzmann's constant; the dot denotes a time derivative. In eq. (1), the right-hand side consists of a deterministic (first summand) and a random contribution (second summand). The random forces are modeled via multivariate Gaussian white noise processes $\mathbf{R}_i(t)$ which satisfy

\langle \mathbf{R}_i(t) \rangle = 0,    (2)
\langle \mathbf{R}_i(t) \otimes \mathbf{R}_j(t') \rangle = \delta_{ij}\, \mathbb{1}\, \delta(t - t'),    (3)

where $\langle \cdot \rangle$ denotes an average over realizations of the random process, $\mathbb{1}$ is the $d \times d$ unit matrix, $\delta_{ij}$ denotes the Kronecker delta, and $\delta(t - t')$ is the Dirac delta function.
One can recognize that eq. (1) has the typical form of an SDE
\mathrm{d}X_t = a(X_t, t)\, \mathrm{d}t + b(X_t, t)\, \mathrm{d}W_t,    (4)

if the dependent random variable $X_t$ is identified with the particle positions $\mathbf{r}^N(t)$ and $W_t$ is a Wiener process corresponding to the integral of the Gaussian processes $\mathbf{R}_i(t)$. As we do not consider hydrodynamic interactions, the random forces in eq. (1) are obtained by a mere scaling of $\mathbf{R}_i(t)$ with the constant prefactors $\sqrt{2 k_B T / \gamma_i}$. Therefore, the noise in BD is additive, since $b$ does not depend on $X_t$ in the sense of eq. (4). This is a crucial property for the construction of a simple higher-order integrator below in Sec. III.
In computer simulations, particle trajectories are obtained from eq. (1) by numerical integration. Contrary to the numerics of ODEs, where higher-order schemes and adaptivity are textbook material, the derivation of corresponding methods for SDEs poses several difficulties which we address below. Due to the complications, SDEs of type (4) are often integrated via the Euler-Maruyama method instead, which follows the notion of the explicit Euler scheme for ODEs. Thereby, the true solution $X_t$ of eq. (4) with initial value $X_0$ is approximated by a discrete sequence $Y_k \approx X_{t_k}$, obtained by partitioning the considered time interval into equidistant subintervals of length $\Delta t$. Then, for $k = 0, 1, 2, \ldots$, a timestep is defined by

Y_{k+1} = Y_k + a(Y_k, t_k)\, \Delta t + b(Y_k, t_k)\, \Delta W_k,    (5)

with Wiener increments $\Delta W_k = W_{t_{k+1}} - W_{t_k}$. An Euler-Maruyama step is also incorporated in the adaptive BD method which we construct below, applying eq. (5) to the overdamped Langevin equation (1).

Crucially, the random increments $\Delta W_k$ in each Euler-Maruyama step (5) have to be constructed from independent and identically distributed normal random variables with expectation value $0$ and variance $\Delta t$. In practice, this is realized by drawing a new random number (or vector thereof) from a pseudo-random number generator obeying the normal distribution in each step and using it as $\Delta W_k$ in eq. (5). The process of obtaining such a scalar (or vectorial) random increment will be denoted in the following by

\Delta W_k \leftarrow \mathcal{N}(0, \Delta t),    (6)

where $\mathcal{N}(\mu, \sigma^2)$ is a scalar (or multivariate) normal distribution with expectation value $\mu$ and variance $\sigma^2$.
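As a point of reference for what follows, a minimal C++ sketch of one fixed-stepsize Euler-Maruyama step, eq. (5) applied to eq. (1) with increments drawn according to eq. (6), could look as follows (hypothetical names; equal friction coefficients for all particles, no boundary conditions or neighbour lists; a sketch only, not the implementation used later in this work).

#include <cmath>
#include <random>
#include <vector>

struct Vec3 { double x, y, z; };

// One fixed-stepsize Euler-Maruyama step, eq. (5), applied to the overdamped
// Langevin equation (1). The vector f holds the total deterministic forces f_i
// of the current configuration (computed elsewhere); gamma and kT denote the
// friction coefficient and the thermal energy k_B T.
void euler_maruyama_step(std::vector<Vec3>& r, const std::vector<Vec3>& f,
                         double dt, double gamma, double kT, std::mt19937& rng)
{
    std::normal_distribution<double> increment(0.0, std::sqrt(dt)); // eq. (6): N(0, dt)
    const double pref = std::sqrt(2.0 * kT / gamma);                // additive noise amplitude
    for (std::size_t i = 0; i < r.size(); ++i) {
        r[i].x += f[i].x / gamma * dt + pref * increment(rng);
        r[i].y += f[i].y / gamma * dt + pref * increment(rng);
        r[i].z += f[i].z / gamma * dt + pref * increment(rng);
    }
}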
As in the case of ODEs, an important measure for the quality of an integration method is its order of convergence. However, what convergence exactly means in the case of SDEs must be carefully reconsidered due to their stochasticity. We refer the reader to the pertinent literature (see e.g. Ref. 23) and only summarize the key concepts and main results in the following.
Since both the approximation and the true solution are random variables, one can define two distinct convergence criteria. For a certain method with discretisation $Y_k$, weak convergence captures the error of average values whereas strong convergence assesses the error of following a specific realization of a random process. One can show that the Euler-Maruyama method has a strong convergence order of 0.5, i.e. when increasing or decreasing the stepsize $\Delta t$, the error of the numerical solution only scales with $\sqrt{\Delta t}$. For general $b(X_t, t)$ in eq. (4), the construction of schemes with higher strong order is complicated due to the occurrence of higher-order stochastic integrals. Practically, this means that the careful evaluation of additional random variables is necessary in each iteration step. These random variables then enable the approximation of the stochastic integrals. There exist schemes of Runge-Kutta type with strong orders up to 2, although mostly only strong order 1 and 1.5 Runge-Kutta methods are used due to practical concerns [31, 32].
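For orientation, the two notions can be summarized by standard textbook definitions (cf. Ref. 23): a discretisation $Y_k$ with stepsize $\Delta t$ approximating the true solution $X_T$ at a fixed time $T$ converges strongly with order $\gamma$ and weakly with order $\beta$ if

\langle |X_T - Y_K| \rangle \le C\, \Delta t^{\gamma}, \qquad
\big| \langle g(X_T) \rangle - \langle g(Y_K) \rangle \big| \le C_g\, \Delta t^{\beta},

for sufficiently small $\Delta t$, suitable constants $C$, $C_g$, and sufficiently smooth test functions $g$, where $Y_K$ denotes the final iterate. In particular, for additive noise as in BD, the strong order of the Euler-Maruyama method improves to 1, since the scheme then coincides with the Milstein scheme.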
In order to incorporate adaptivity, one needs a means of comparison of two integration schemes of different strong order to formulate a local error estimate for the proposed step. If the error that occurs in a specific timestep is too large (we define below precisely what we mean by that), the step is rejected and a retrial with a smaller value of $\Delta t$ is performed. Otherwise, the step is accepted and, based on the previously calculated error, a new optimized stepsize is chosen for the next step, which hence can be larger than the previous one. This protocol makes an optimal and automatic control of the stepsize possible, meaning that $\Delta t$ can both be reduced when the propagation of the SDE is difficult and it can be increased when the error estimate allows to do so. Similar to the case of ODEs, it is computationally advantageous to construct so-called embedded integration schemes, analogous to e.g. Runge-Kutta-Fehlberg integrators [33], which minimize the number of costly evaluations of the right-hand side of eq. (4). Developments have been made in this direction e.g. with embedded stochastic Runge-Kutta methods [34].
There is still one caveat to consider when rejecting a step, in that one has to be careful to preserve the properties of the Wiener process. In the naive approach of simply redrawing new uncorrelated random increments, rejections would alter the random process implicitly. The reason lies in the introduction of an undesired bias. Since large random increments (generally causing larger errors) get rejected more often, the variance of the Wiener process would be systematically decreased, ultimately violating its desired properties. To avoid this effect, it must be guaranteed that once a random increment is chosen, it will not be discarded until the time interval it originally applied to has passed. Consequently, when rejecting a trial step and retrying the numerical propagation of the SDE with smaller time intervals, new random increments cannot be drawn independently for those substeps anymore. The new random increments must instead be created based on the rejected original timestep such that an unbiased Brownian path is still followed.
The formal framework to the above procedure is given by the so-called Brownian bridge theorem [35], which interpolates a Wiener process between known values at two timepoints. If $W(t)$ and $W(t + \Delta t) = W(t) + \Delta W$ are given (e.g. due to the previous rejection of a timestep of length $\Delta t$ where the random increment $\Delta W$ has been drawn), then a new intermediate random value must be constructed by

W(t + q \Delta t) \leftarrow \mathcal{N}\big(W(t) + q \Delta W,\; (1 - q)\, q\, \Delta t\big),    (7)

such that the statistical properties of the to-be-simulated Wiener process are left intact and a substep of length $q \Delta t$ can be tried. The value of $0 < q < 1$ thereby sets the fraction of the original time interval to which the Wiener process shall be interpolated. Equation (7) extends naturally (i.e. component-wise) to the multivariate case and it hence enables the construction of interpolating random vectors in a straightforward manner.
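For illustration, drawing a bridged scalar increment according to eq. (7) amounts to a single Gaussian sample; a hypothetical helper function could read as follows (vector-valued increments follow component-wise).

#include <cmath>
#include <random>

// Given W(t) and the increment dW over [t, t + dt], return a sample of
// W(t + q*dt) from the Brownian bridge (7): mean W(t) + q*dW, variance (1-q)*q*dt.
double brownian_bridge(double W_t, double dW, double dt, double q, std::mt19937& rng)
{
    std::normal_distribution<double> bridge(W_t + q * dW, std::sqrt((1.0 - q) * q * dt));
    return bridge(rng);
}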
With this idea in mind, several adaptive timestepping algorithms for SDEs have been designed [36, 37, 38, 39, 40, 41, 42]. Still, most of these approaches are quite restrictive in the choice of timesteps (e.g. only allowing discrete variations such as halving or doubling) [36, 42], include only the deterministic or only the random part in an a priori error estimation [39, 41, 43], or store rejected timesteps in a costly manner, e.g. in the form of Brownian trees [36]. In particular, the above methods rely on precomputed Brownian paths and do not illustrate an ad hoc generation, which is desirable from a performance and memory consumption standpoint in a high-dimensional setting like BD.
In contrast, a very flexible and performant class of adaptive timestepping algorithms called Rejection Sampling with Memory (RSwM) has recently been developed by Rackauckas and Nie [24]. Their work provides the arguably optimal means for the adaptive numerical solution of SDEs while still being computationally efficient in the generation of random increments as well as in the handling of rejections. We therefore use RSwM in the construction of an adaptive algorithm and specialize the method to Brownian motion in the following.
III Application to Brownian Dynamics
Based on the remarks of Sec. II, we next proceed to apply the general framework to the case of BD with the overdamped Langevin equation (1) forming the underlying SDE. An embedded integration scheme is constructed which allows the derivation of an error estimate and an acceptance criterion in each step. Furthermore, the application of RSwM for handling rejected timesteps in BD is shown and discussed. We also illustrate how the calculation of observables from sampling of phase space functions has to be altered in a variable timestep scenario.
III.1 Embedded Heun-Euler method
Regarding the overdamped Langevin equation (1), a major simplification exists compared to the general remarks made in Sec. II. Due to the noise term being trivial, some higher-order schemes can be constructed by only evaluating the deterministic forces for different particle configurations. Crucially, no iterated stochastic integrals are needed, which would have to be approximated in general higher strong-order integrators by using additional random variables [32]. In the following, we apply a scheme similar to the one suggested by Lamba, Mattingly, and Stuart [41] for general SDEs to eq. (1) and term it embedded Heun-Euler method due to its resemblance to the corresponding ODE integrators. Two different approximations $\bar{\mathbf{r}}_i$ and $\tilde{\mathbf{r}}_i$ of the particle positions at time $t + \Delta t$ are calculated in each trial step by

\bar{\mathbf{r}}_i = \mathbf{r}_i(t) + \frac{\Delta t}{\gamma_i}\, \mathbf{f}_i(\mathbf{r}^N(t), t) + \sqrt{\frac{2 k_B T}{\gamma_i}}\, \Delta\mathbf{W}_i,    (8)

\tilde{\mathbf{r}}_i = \mathbf{r}_i(t) + \frac{\Delta t}{2 \gamma_i} \left[ \mathbf{f}_i(\mathbf{r}^N(t), t) + \mathbf{f}_i(\bar{\mathbf{r}}^N, t + \Delta t) \right] + \sqrt{\frac{2 k_B T}{\gamma_i}}\, \Delta\mathbf{W}_i.    (9)

Equation (8) is the conventional Euler-Maruyama step, and hence constitutes the application of eq. (5) to the overdamped Langevin equation (1). Equation (9) resembles the second-order Heun algorithm or midpoint scheme for ODEs [44] and has been formally derived for SDEs in the context of stochastic Runge-Kutta methods [45]. Since the deterministic forces are evaluated at the initial particle configuration $\mathbf{r}^N(t)$ both in eq. (8) and eq. (9), we have constructed an embedded integration method. This is favorable regarding computational cost, since $\mathbf{f}_i(\mathbf{r}^N(t), t)$ is evaluated once in eq. (8) and reused in eq. (9). Only one additional computation of the deterministic forces at the intermediate particle configuration $\bar{\mathbf{r}}^N$ is then needed in the Heun step (9).
In each trial step, the same realization of random vectors $\Delta\mathbf{W}_i$ must be used in both eqs. (8) and (9). Recall that the random displacements have to obey the properties of multivariate Wiener increments to model the non-deterministic forces. In conventional BD with fixed stepsize $\Delta t$, these random vectors can therefore be drawn independently via $\Delta\mathbf{W}_i \leftarrow \mathcal{N}(0, \Delta t\, \mathbb{1})$ for each particle $i$. If rejections of trial steps are possible, $\Delta\mathbf{W}_i$ must instead be constructed as described in Sec. III.2.
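A sketch of one embedded trial step according to eqs. (8) and (9) is given below, building on the Vec3 type of the earlier sketch; hypothetical names, a user-supplied force routine, and equal friction coefficients are assumed. Note that the same random displacements enter both approximations and that only one additional force evaluation is required.

#include <cmath>
#include <vector>

// Embedded Heun-Euler trial step, eqs. (8) and (9). f0 holds the deterministic
// forces at the initial configuration r and is reused in both approximations;
// dW holds the (possibly bridged) Wiener increments of the current trial step.
void heun_euler_trial(const std::vector<Vec3>& r, const std::vector<Vec3>& f0,
                      const std::vector<Vec3>& dW, double dt, double gamma, double kT,
                      std::vector<Vec3> (*forces)(const std::vector<Vec3>&),
                      std::vector<Vec3>& r_euler, std::vector<Vec3>& r_heun)
{
    const double pref = std::sqrt(2.0 * kT / gamma);
    const std::size_t N = r.size();
    r_euler.resize(N);
    for (std::size_t i = 0; i < N; ++i)            // Euler-Maruyama prediction, eq. (8)
        r_euler[i] = { r[i].x + f0[i].x / gamma * dt + pref * dW[i].x,
                       r[i].y + f0[i].y / gamma * dt + pref * dW[i].y,
                       r[i].z + f0[i].z / gamma * dt + pref * dW[i].z };
    const std::vector<Vec3> f1 = forces(r_euler);  // one additional force evaluation
    r_heun.resize(N);
    for (std::size_t i = 0; i < N; ++i)            // Heun correction, eq. (9)
        r_heun[i] = { r[i].x + 0.5 * (f0[i].x + f1[i].x) / gamma * dt + pref * dW[i].x,
                      r[i].y + 0.5 * (f0[i].y + f1[i].y) / gamma * dt + pref * dW[i].y,
                      r[i].z + 0.5 * (f0[i].z + f1[i].z) / gamma * dt + pref * dW[i].z };
}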
When the embedded Heun-Euler step (8) and (9) is applied to BD, the improvement over the conventional method is twofold. Firstly, the Heun step (9) can be used as a better propagation method, as already analyzed by Fixman [46] and Iniesta and de la Torre [47]. Several further higher-order schemes, mostly of Runge-Kutta type, have been used in BD simulations [45, 48]. These methods often lead to increased accuracy and even efficiency due to bigger timesteps becoming achievable, which outweighs the increased computational cost per step. Since the prefactor of the random force is trivial (i.e. constant) in the overdamped Langevin equation (1), higher strong-order schemes are easily constructible. This situation stands in contrast to the more complicated SDEs of type (4) with general noise term $b(X_t, t)$.
Secondly, with two approximations of different order at hand, assessing their discrepancy allows one to obtain an estimate of the discretisation error in each step, which is a fundamental prerequisite in the construction of adaptive timestepping algorithms. For this, we exploit the additive structure of the noise term in eq. (1) again and recognize that such an error estimate can be obtained by only comparing the deterministic parts of eqs. (8) and (9). Nevertheless, note that the random displacements are already contained in $\bar{\mathbf{r}}^N$. This makes the deterministic part of eq. (9) implicitly dependent on the realization of $\Delta\mathbf{W}_i$, which is opposed to Ref. 41, where an error estimate is defined without involving the random increments at all.
At a given time $t$, one can construct the automatic choice of an appropriate stepsize as follows. For this, the error of a trial step of length $\Delta t$ is evaluated.

We define a particle-wise error

e_i = |\Delta\tilde{\mathbf{r}}_i - \Delta\bar{\mathbf{r}}_i|,    (10)

with $\Delta\bar{\mathbf{r}}_i = \bar{\mathbf{r}}_i - \mathbf{r}_i(t)$ and $\Delta\tilde{\mathbf{r}}_i = \tilde{\mathbf{r}}_i - \mathbf{r}_i(t)$. For each particle $i$, this error is compared to a tolerance

\tau_i = \epsilon_{\mathrm{abs}} + |\Delta\tilde{\mathbf{r}}_i|\, \epsilon_{\mathrm{rel}},    (11)

consisting of an absolute and a relative part with user-defined coefficients $\epsilon_{\mathrm{abs}}$ and $\epsilon_{\mathrm{rel}}$. Note that $\Delta\tilde{\mathbf{r}}_i$ is used in eq. (11), since it captures the true particle displacement more accurately than $\Delta\bar{\mathbf{r}}_i$ due to its higher order. Additionally, we stress that $|\Delta\tilde{\mathbf{r}}_i|$ decreases on average for shorter timesteps, such that $\tau_i$ indirectly depends on the trial $\Delta t$, which limits the accumulation of errors after multiple small steps.

Then a total error estimate

E = \left\| \frac{e_i}{\tau_i} \right\|    (12)
can be calculated for the trial step. While the Euclidean norm is the canonical choice in eq. (10), there is a certain freedom of choosing an appropriate norm in eq. (12). The 2- or $\infty$-norm, defined respectively as

\| x \|_2 = \sqrt{\frac{1}{N} \sum_{i=1}^{N} x_i^2},    (13)
\| x \|_\infty = \max_i |x_i|,    (14)

may both come up as natural and valid options (note that we normalise the standard 2-norm by $\sqrt{N}$ to obtain an intensive quantity). However, in eq. (12), where a reduction from particle-wise errors to a global scalar error takes place, this choice has crucial implications for the kind of error that is being controlled. If the 2-norm is used, then $\epsilon_{\mathrm{abs}}$ and $\epsilon_{\mathrm{rel}}$ set the mean absolute and relative tolerance for all particles. In practice for large particle number $N$, this can lead to substantial single-particle errors becoming lost in the global average. Therefore, it is advisable to use the $\infty$-norm for the reduction in eq. (12) to be able to set a maximum single-particle absolute and relative tolerance, i.e. if $E \le 1$, then $e_i \le \tau_i$ for all $i$.
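Combining eqs. (10)-(12) with the $\infty$-norm reduction (14), the global error estimate of a trial step can be computed as in the following sketch, again using the Vec3 type introduced in the sketch of Sec. II (hypothetical names; a simplified illustration only).

#include <algorithm>
#include <cmath>
#include <vector>

// Global error estimate E according to eqs. (10)-(12): the largest particle-wise
// error relative to its tolerance (infinity norm, eq. (14)). A trial step is
// acceptable if the returned value satisfies E <= 1.
double error_estimate(const std::vector<Vec3>& r, const std::vector<Vec3>& r_euler,
                      const std::vector<Vec3>& r_heun, double abs_tol, double rel_tol)
{
    auto dist = [](const Vec3& a, const Vec3& b) {
        const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    };
    double E = 0.0;
    for (std::size_t i = 0; i < r.size(); ++i) {
        const double err = dist(r_heun[i], r_euler[i]);               // eq. (10)
        const double tol = abs_tol + rel_tol * dist(r_heun[i], r[i]); // eq. (11)
        E = std::max(E, err / tol);                                   // eqs. (12), (14)
    }
    return E;
}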
Following the design of adaptive ODE solvers and ignoring stochasticity, an expansion of an embedded pair of methods with orders $p$ and $p + 1$ shows that an error estimate of type (12) is of order $p + 1$. Thus, a timestep of length $q_\star \Delta t$ with

q_\star = E^{-1/(p+1)}    (15)

could have been chosen to marginally satisfy the tolerance requirement $E \le 1$.
Considering the recommendation of Ref. 24, which discusses the application of such a timestep scaling factor to embedded methods for SDEs, we set

q = \alpha\, E^{-\beta}.    (16)

Here both a more conservative exponent $\beta$ is chosen than in eq. (15) and also a safety factor $\alpha < 1$ is introduced, as we want to account for stochasticity and for the low order of our integrators, which results in a low order of the error estimate (12).
With the choice (16), one can distinguish two possible scenarios in each trial step:

• $E \le 1$: Accept the trial step, i.e. set $\mathbf{r}_i(t + \Delta t) = \tilde{\mathbf{r}}_i$ and advance the particle positions with eq. (9), and then continue with $t \leftarrow t + \Delta t$.

• $E > 1$: Reject the trial step and retry with a smaller timestep.
In both cases, the timestep is adapted afterwards via $\Delta t \leftarrow q\, \Delta t$. Here and in the following, the notation $a \leftarrow b$ denotes an assignment of the value $b$ to the variable $a$.
It is advisable to restrict the permissible range of values for $q$ by defining lower and upper bounds $q_{\min}$ and $q_{\max}$ such that the adaptation of $\Delta t$ is done with

\Delta t \leftarrow \min\!\big(q_{\max}, \max(q_{\min}, q)\big)\, \Delta t.    (17)
While rather generous bounds are commonly chosen for ODE solvers, due to stochasticity and the possibility of drawing "difficult" random increments, $q_{\min}$ should certainly be decreased in the case of SDEs to avoid multiple re-rejections. Vice versa, a conservative choice of $q_{\max}$ prevents an overcorrection of the timestep in case of "fortunate" random events. We choose a small $q_{\min}$ and a moderate $q_{\max}$ in practical applications to achieve a rapid adaptation in case of rejected trial steps and a careful approach to larger timesteps after accepted moves.
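The resulting stepsize update of eqs. (16) and (17) is then essentially a one-liner; the following sketch keeps the safety factor and the exponent as parameters, since their concrete values are tuning choices and not prescribed here.

#include <algorithm>
#include <cmath>

// Timestep scaling factor according to eqs. (16) and (17): q = alpha * E^(-beta),
// clamped to the interval [q_min, q_max]. Usage: dt *= timestep_factor(...).
double timestep_factor(double E, double alpha, double beta, double q_min, double q_max)
{
    const double q = alpha * std::pow(E, -beta);
    return std::min(q_max, std::max(q_min, q));
}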
One can also impose limits for the range of values of $\Delta t$ itself, such as a maximum bound $\Delta t_{\max}$. Restriction by a minimum value $\Delta t_{\min}$, however, could lead to a forced continuation of the simulation with an actual local error that lies above the user-defined tolerance. Thus, this is not recommended. In our test cases described below, we see no need to restrict the timestep, as the adaptive algorithm does not show unstable behavior without such a restriction.
Most concepts of this section can be generalized in a straightforward manner to non-overdamped Langevin dynamics, where particle inertia is explicitly considered in the stochastic equations of motion. However, the embedded Heun-Euler method (8) and (9) might not be a suitable integration scheme in this case. We therefore outline the modifications that are necessary in the construction of an adaptive algorithm for general Langevin dynamics in Appendix A.
III.2 Rejection Sampling with Memory in BD
We still have to prescribe the generation of the random vectors $\Delta\mathbf{W}_i$ that appear in both eqs. (8) and (9). For clarity, we consider a single trial step of length $\Delta t$ and denote the set of corresponding random increments with $\Delta W$ (the sub- and superscripts are dropped). Obviously, $\Delta W$ can no longer be chosen independently in general but has to incorporate previously rejected timesteps as long as they are relevant, i.e. as long as their original time interval has not passed yet. For this, we apply the Rejection Sampling with Memory (RSwM) algorithm to BD. Rackauckas and Nie [24] describe three variants of RSwM which they refer to as RSwM1, RSwM2, and RSwM3, and which differ in the generality of the possible timesteps and in algorithmic complexity. We aim to reproduce RSwM3 because it enables optimal timestepping and still ensures correctness in special but rare cases such as re-rejections.
Common to all of the RSwM algorithms is the storage of parts of rejected random increments on a stack. We refer to elements of this stack as future information, since they have to be considered in the construction of future steps. This becomes relevant as soon as a step is rejected due to a too large error. Then a retrial is performed by decreasing $\Delta t$ and drawing bridging random vectors via eq. (7). The difference between rejected and bridging random vectors must not be forgotten but rather be stored on the future information stack, cf. fig. 1.
On the other hand, if a step is accepted, new random vectors have to be prepared for the next time interval. If no future information is on the stack, this can be done conventionally via drawing Gaussian distributed random vectors according to eq. (6). If the stack is not empty and thus future information is available, then elements of the stack are popped one after another, i.e. they are taken successively from the top. The random vectors as well as the time interval of each popped element are accumulated in temporary variables which then hold the sum of those respective time intervals and random vectors. The stack could be empty before the accumulated time reaches the new $\Delta t$, i.e. there could still be a remaining gap. In this case, one draws new random vectors to compensate for the difference and adds them to the accumulated ones, cf. fig. 2, before attempting the trial step. Otherwise, if the future information reaches further than $\Delta t$, then there is one element that passes $\Delta t$. One takes this element, splits it into "before $\Delta t$" and "after $\Delta t$" and draws bridging random vectors for "before $\Delta t$" according to eq. (7), which are again added to the accumulated ones. The rest of this element ("after $\Delta t$") can be pushed back to the future information stack and the step can be tried with the accumulated vectors set as $\Delta\mathbf{W}_i$ in eqs. (8) and (9), cf. fig. 3.
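The following strongly simplified sketch (scalar noise, a single degree of freedom, hypothetical names) illustrates the stack bookkeeping described above; it corresponds to the RSwM2 level of the algorithm, i.e. the in-use stack introduced below and further edge cases are deliberately omitted, and it is not the listing of alg. 2.

#include <cmath>
#include <random>
#include <stack>

struct NoiseEntry { double dt; double dW; };           // stored future increment

// Rejection branch (cf. fig. 1): bridge the rejected increment dW over [0, dt]
// down to the substep q*dt via eq. (7) and keep the remainder as future information.
void reject_step(std::stack<NoiseEntry>& future, double& dt, double& dW,
                 double q, std::mt19937& rng)
{
    std::normal_distribution<double> bridge(q * dW, std::sqrt((1.0 - q) * q * dt));
    const double dW_sub = bridge(rng);
    future.push({ (1.0 - q) * dt, dW - dW_sub });      // increment for the interval (q*dt, dt]
    dt *= q;
    dW  = dW_sub;
}

// Acceptance branch (cf. figs. 2 and 3): assemble the increment for the next
// interval of length dt_next from stored future information, splitting an
// overshooting entry with the bridge (7) or topping up with fresh noise.
double prepare_next(std::stack<NoiseEntry>& future, double dt_next, std::mt19937& rng)
{
    double t_acc = 0.0, dW_acc = 0.0;
    while (!future.empty() && t_acc + future.top().dt <= dt_next) {
        t_acc  += future.top().dt;                     // accumulate whole entries that fit
        dW_acc += future.top().dW;
        future.pop();
    }
    if (t_acc >= dt_next) return dW_acc;               // next interval exactly covered
    const double gap = dt_next - t_acc;
    if (!future.empty()) {                             // top entry reaches past dt_next: split it
        NoiseEntry e = future.top(); future.pop();
        const double q = gap / e.dt;
        std::normal_distribution<double> bridge(q * e.dW, std::sqrt((1.0 - q) * q * e.dt));
        const double dW_sub = bridge(rng);
        future.push({ e.dt - gap, e.dW - dW_sub });    // remainder stays future information
        return dW_acc + dW_sub;
    }
    std::normal_distribution<double> fresh(0.0, std::sqrt(gap));
    return dW_acc + fresh(rng);                        // stack exhausted: draw fresh noise
}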
At this stage, we have constructed RSwM2 for BD, which is not capable of handling all edge cases yet, as pointed out in Ref. 24. If future information is popped from the stack to prepare for the next step, and this next step is then rejected, we have lost all popped information unrecoverably. To circumvent this, one adds a second stack that stores information which is currently used for the construction of $\Delta W$. We refer to elements of this stack as information in use. If a step is rejected, the information in use can be moved back to the future information stack so that no elements are lost in multiple retries, cf. fig. 4. With this additional bookkeeping, correctness and generality are ensured in all cases and the RSwM3 algorithm is complete.
Notably, with the structuring of information into stacks where only the top element is directly accessible ("last in first out"), the chronological time order is automatically kept intact so that one only has to store time intervals and no absolute timepoints. Furthermore, searching or sorting of elements is prevented entirely, which makes all operations $\mathcal{O}(1)$ and leads to efficient implementations.
We point out that the original RSwM3 rejection branch as given in Ref. 24 was not entirely correct and draw a comparison to our rectifications in Appendix B, which have been brought to attention (this issue has been discussed in https://github.com/SciML/DiffEqNoiseProcess.jl/pull/80) and have since been fixed in the reference implementation DifferentialEquations.jl [50]. Crucially, the correction not only applies to the case of BD, but is relevant for the solution of general SDEs as well. A full pseudocode listing of one adaptive BD trial step utilizing RSwM3 is given in alg. 2 in Appendix C along with further explanation of technical details. The corresponding implementation in C++ can be found at https://gitlab.uni-bayreuth.de/bt306964/mbd.
III.3 Sampling of observables
Within BD, observables can be obtained from the sampling of configuration space functions. As an example, consider the one-body density profile $\rho(\mathbf{r})$, which is defined as the average of the density operator $\hat{\rho}(\mathbf{r}) = \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{r}_i(t))$.
In simulations of equilibrium or steady states, one can use a time-average over a suitably long interval $t_1 \le t \le t_2$ to measure such quantities, i.e. for a general operator $\hat{A}(\mathbf{r}^N; \cdot)$,

A(\cdot) = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \mathrm{d}t\, \hat{A}(\mathbf{r}^N(t); \cdot).    (18)

Note that the remaining dependence indicated by $\cdot$ can consist of arbitrary scalar or vectorial quantities or also be empty. For example, the remaining arguments can be $\mathbf{r}, \mathbf{r}'$ or $\mathbf{r}$ for general two- or one-body fields, the scalar distance $r$ for the isotropic radial distribution function, or they can be absent for bulk quantities such as pressure or heat capacity.
Practically, $\hat{A}$ is evaluated in each step and a histogram resolved in the remaining arguments is accumulated, which yields $A$ after normalization. Considering the numerical discretisation of $[t_1, t_2]$ into $K$ timesteps of constant length $\Delta t$ within a conventional BD simulation, eq. (18) is usually implemented as

A \approx \frac{1}{K} \sum_{k=1}^{K} \hat{A}(\mathbf{r}^N(t_k)).    (19)
In adaptive BD with varying timestep length, one cannot use eq. (19) directly, since this would cause disproportionately many contributions to $A$ from small timesteps and would thus lead to a biased evaluation of any observable. Formally, the quadrature in eq. (18) now has to be evaluated numerically at non-equidistant integration points.
Therefore, if the time interval $[t_1, t_2]$ is discretized into non-equidistant timepoints $t_k$ with stepsizes $\Delta t_k = t_k - t_{k-1}$, $k = 1, \ldots, K$,

A \approx \frac{1}{t_2 - t_1} \sum_{k=1}^{K} \Delta t_k\, \hat{A}(\mathbf{r}^N(t_k))    (20)

constitutes a generalization of eq. (19) for this case that enables a straightforward sampling of observables within adaptive BD.
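As an example, a z-resolved density histogram with timestep weights according to eq. (20) can be accumulated as in the following sketch (Vec3 as in the sketch of Sec. II; hypothetical names; particles assumed to lie in 0 <= z < z_max).

#include <algorithm>
#include <cstddef>
#include <vector>

// Accumulate a z-resolved one-body density histogram with weights dt_k, cf. eq. (20).
// sample() is called once per accepted step; density() normalizes by the total
// elapsed time and the bin volume.
struct DensityHistogram {
    std::vector<double> bins;
    double z_max;
    double t_total = 0.0;
    DensityHistogram(std::size_t n_bins, double zmax) : bins(n_bins, 0.0), z_max(zmax) {}
    void sample(const std::vector<Vec3>& r, double dt) {
        for (const Vec3& p : r) {
            const std::size_t b = std::min(bins.size() - 1,
                static_cast<std::size_t>(p.z / z_max * bins.size()));
            bins[b] += dt;                              // each contribution weighted by dt_k
        }
        t_total += dt;
    }
    double density(std::size_t b, double bin_volume) const {
        return bins[b] / (t_total * bin_volume);        // time average, cf. eq. (20)
    }
};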
An alternative technique can be employed in scenarios where the state of the system shall be sampled on a sparser time grid than the one given by the integration timesteps. Then, regular sampling points can be defined that must be hit by the timestepping procedure (e.g. by artificially shortening the preceding step). On this regular time grid, eq. (19) can be used again. Especially in non-equilibrium situations and for the measurement of time-dependent correlation functions such as the van Hove two-body correlation function [52, 53, 54, 55, 56, 57], this method might be beneficial since quantities can still be evaluated at certain timepoints rather than having to construct a time-resolved histogram consisting of finite time intervals. Note, however, that timestepping to predefined halting points is not yet considered in alg. 2.
IV Simulation results
To test and illustrate the adaptive BD algorithm, we investigate the truncated and shifted Lennard-Jones fluid with interaction potential

\phi(r) = \begin{cases} \phi_{\mathrm{LJ}}(r) - \phi_{\mathrm{LJ}}(r_c), & r \le r_c, \\ 0, & r > r_c, \end{cases}    (21)

where

\phi_{\mathrm{LJ}}(r) = 4 \epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right]    (22)

and $r$ is the interparticle distance. We set a fixed cutoff radius $r_c$ throughout the next sections and use reduced Lennard-Jones units, which yield the reduced timescale $\tau = \sigma^2 \gamma / \epsilon$.
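For reference, the pair potential (21), (22) and the magnitude of the corresponding pair force in reduced units (sigma = epsilon = 1) read as follows in code; this is a standard textbook implementation and not specific to our program.

#include <cmath>

// Lennard-Jones pair potential (22) and its truncated and shifted form (21) in
// reduced units, together with the magnitude of the pair force -dphi_LJ/dr.
double lj(double r) {
    const double ir6 = 1.0 / std::pow(r, 6);
    return 4.0 * (ir6 * ir6 - ir6);
}
double lj_truncated_shifted(double r, double r_cut) {
    return (r <= r_cut) ? lj(r) - lj(r_cut) : 0.0;      // eq. (21)
}
double lj_force(double r, double r_cut) {
    if (r > r_cut) return 0.0;
    const double ir6 = 1.0 / std::pow(r, 6);
    return 24.0 * (2.0 * ir6 * ir6 - ir6) / r;          // -dphi_LJ/dr
}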
A common problem in conventional BD simulations is the choice of an appropriate timestep. Obviously, a too small value of $\Delta t$ – while leading to accurate trajectories – has a strong performance impact, hindering runs which would reveal long-time behavior and prohibiting extensive sampling periods, which are desirable from the viewpoint of the time average (20). Still, $\Delta t$ must be kept below a certain threshold above which results might be wrong or the simulation becomes unstable. Unfortunately, due to the absence of any intrinsic error estimates, judgement of a chosen $\Delta t$ is generally not straightforward. For instance, one can merely observe the stability of a single simulation run and accept $\Delta t$ if sensible output is produced and certain properties of the system (such as its energy) are well-behaved. Another possibility is the costly conduction of several simulation runs with differing timesteps, thereby cross-validating gathered results. Consequently, a true a priori choice of the timestep is not possible in general and test runs cannot always be avoided.
With adaptive timestepping, this problem is entirely prevented as one does not need to make a conscious choice for $\Delta t$ at all. Instead, the maximum local error of a step is restricted by the user-defined tolerance (11), ensuring correctness of results up to a controllable discretisation error. This does come at the moderate cost of overhead due to the additional operations per step necessary in the embedded Heun-Euler method and the RSwM algorithm. However, as we demonstrate in the following, the benefits of this method far outweigh the cost even in simple situations.
IV.1 Lennard-Jones bulk fluid in equilibrium
In the following, we compare results from conventional BD to those obtained with adaptive BD. We first consider a bulk system with periodic boundary conditions, consisting of Lennard-Jones particles initialized on a simple cubic lattice; temperature and mean density are chosen such that, in the process of equilibration, a gaseous and a liquid phase emerge and the system therefore becomes inhomogeneous.
With non-adaptive Euler-Maruyama BD, a fixed timestep is chosen heuristically such that the simulation consistently converges to this state. This value is small enough to avoid the severe problems which occur reproducibly for larger timesteps, where the simulation occasionally crashes or produces sudden jumps in energy due to faulty particle displacements.
In contrast, the timestepping of an adaptive BD simulation run is shown both as a timeseries and as a histogram in fig. 5. The tolerance coefficients $\epsilon_{\mathrm{abs}}$ and $\epsilon_{\mathrm{rel}}$ in eq. (11) are thereby set to fixed values and the $\infty$-norm is used in the reduction from particle-wise to global error (12). One can see that large stepsizes occur without the error exceeding the tolerance threshold, and the majority of steps can be executed with a timestep larger than the fixed value chosen above. On the other hand, the algorithm is able to detect moves that would cause large errors, where it decreases $\Delta t$ appropriately. It is striking that in the shown sample, minimum timesteps occur which lie far below the stepsize chosen in the fixed-timestep BD run above, which indicates that although the simulation is stable for this value, there are still steps which produce substantial local errors in the particle trajectories. For even larger values of $\Delta t$, it is those unresolved collision events that cause unphysical particle displacements which then cascade and crash the simulation run. In comparison, the adaptive BD run yields a mean timestep that is larger than the heuristically chosen fixed timestep.
[Fig. 5: Evolution and histogram of the adaptive timestep for the equilibrium Lennard-Jones bulk fluid.]
Performance and overhead
Per step, due to an additional evaluation via the Heun method (9), twice the computational effort is needed to calculate the deterministic forces compared to a single evaluation of the Euler-Maruyama step (8). This procedure alone has the benefit of increased accuracy though, and it hence makes larger timesteps feasible.
The computational overhead due to adaptivity with RSwM3 comes mainly from storing random increments on both stacks and applying this information in the construction of new random forces. Therefore, potential for further optimization lies in a cache-friendly organization of the stacks in memory as well as in circumventing superfluous access and copy instructions altogether. The latter considerations suggest that avoiding rejections and, more importantly, avoiding re-rejections is crucial for good performance and reasonable memory consumption of the algorithm. As already noted, in practice this can be accomplished with a small value of $q_{\min}$ that allows for a rapid reduction of $\Delta t$ in the case of unfortunate random events and a conservative value of $q_{\max}$ to avoid too large timestepping after moves with fortunate random increments. In our implementation, the cost of the RSwM routines is estimated to constitute only a small fraction of the total runtime in common situations.
IV.2 Non-equilibrium – formation of colloidal films
While the benefits of adaptive BD are already significant in equilibrium, its real advantages over conventional BD become particularly clear in non-equilibrium situations. Due to the rich phenomenology – which still lacks a thorough understanding – and the important practical applications, the dynamics of colloidal suspensions near substrates and interfaces has been the center of attention in many recent works [25, 27, 30, 58, 28, 26, 29]. Nevertheless, the simulation of time-dependent interfacial processes is far from straightforward and especially for common BD, stability issues are expected with increasing packing fraction.
In the following, we apply adaptive BD to systems of Lennard-Jones particles and simulate evaporation of the implicit solvent. This is done by introducing an external potential that models the fixed substrate surface as well as a moving air-solvent interface. As in Ref. 29, we set
V_{\mathrm{ext}}(z, t) = \epsilon_w \left\{ \exp\!\left( -\frac{z - \sigma_i/2}{\lambda} \right) + \exp\!\left( \frac{z - z_{\mathrm{int}}(t) + \sigma_i/2}{\lambda} \right) \right\}    (23)

to only vary in the $z$-direction and assume periodic boundary conditions in the remaining two directions. The value of $\lambda$ controls the softness of the substrate and the air-solvent interface while $\epsilon_w$ sets their strength. We distinguish between the different particle sizes $\sigma_i$ to account for the offset of the particle centers where the external potential is evaluated. The position $z_{\mathrm{int}}(t)$ of the air-solvent interface is time-dependent and follows a linear motion with initial position $z_0$ and constant velocity $v$.
In the following, systems in a box which is elongated in the $z$-direction are considered. To ensure dominating non-equilibrium effects, values for the air-solvent interface velocity $v$ are chosen which yield large Péclet numbers $\mathrm{Pe} = v H / D \gg 1$, where $H$ is the initial height of the suspension, $D = k_B T / \gamma$ is the Einstein-Smoluchowski diffusion coefficient, and $\gamma = 3 \pi \eta_s \sigma$ due to Stokes (with solvent viscosity $\eta_s$).
When attempting molecular simulations of such systems in a conventional approach, one is faced with a non-trivial choice of the timestep length since it has to be large enough to be efficient in the dilute phase but also small enough to capture the motion of the dense final state of the system. A cumbersome solution would be a subdivision of the simulation into subsequent time intervals, thereby choosing timesteps that suit the density of the current state. This method is beset by problems as the density profile becomes inhomogeneous and is not known a priori.
We show that by employing the adaptive BD method of Sec. III, these issues become non-existent. Concerning the physical results of the simulation runs, the automatically chosen timestep is indeed closely connected to the increasing packing fraction as well as to the structural properties of the respective colloidal system, as will be discussed below. Similar test runs to the ones shown in the following, but carried out with constant timestepping and a naive choice of $\Delta t$, frequently lead to instabilities in the high-density regimes, ultimately resulting in unphysical trajectories or crashes of the program. Due to the possibility of stable and accurate simulations of closely packed phases with adaptive BD, we focus on the investigation of the final conformation of the colloidal suspension.
IV.2.1 Single species, moderate driving
Firstly, a single-species Lennard-Jones system is studied in an elongated box. The Lennard-Jones particles are initialized on a simple cubic lattice and the velocity of the air-solvent interface is set to a moderate value. We choose a small absolute tolerance to accommodate for the smaller particle displacements in the dense phase and relax the relative tolerance.
As one can see in fig. 6, the timestep is automatically adjusted as a reaction to the increasing density. In the course of the simulation run, the average number density increases substantially, although the particles first accumulate near the air-solvent interface. Astonishingly, even the freezing transition at the end of the simulation run can be captured effortlessly. This illustrates the influence of collective order on the chosen timestep. With rising density of the Lennard-Jones fluid, the timestep decreases on average due to the shorter mean free path of the particles and more frequent collisions. At this stage, $\Delta t$ varies significantly and very small timesteps maneuver the system safely through jammed states of the disordered fluid. In the process of crystallization, the timestep decreases rapidly to accommodate for the reduced free path of the particles before it shortly relaxes to a plateau when crystal order is achieved. Additionally, the variance of $\Delta t$ decreases and, contrary to the behaviour in the liquid phase, fewer jammed states can be observed. This is due to the crystal order of the solid phase, which prevents frequent close encounters of particles and hence alleviates the need for a rapid reduction of $\Delta t$.
[Fig. 6: Timestep evolution during the drying simulation of the single-species Lennard-Jones system with moderate driving.]
IV.2.2 Single species, strong driving
If the evaporation rate is increased via a faster moving air-solvent interface, the final structure of the colloidal suspension is altered. Particularly, with rising air-solvent interface velocity $v$, a perfect crystallization process is hindered and defects in the crystal structure occur. This is reflected in the timestepping evolution, as jammed states still happen frequently in the dense regime due to misaligned particles. Therefore, unlike in the previous case of no defects, sudden jumps to very small timesteps can still be observed, as depicted in fig. 7.
If the velocity of the air-solvent interface is increased even further, amorphous states can be reached, where no crystal order prevails. Nevertheless, our method is still able to resolve the particle trajectories of those quenched particle configurations so that the simulation remains both stable and accurate even in such demanding circumstances.
[Fig. 7: Timestep evolution for strong driving, where crystal defects cause recurring jumps to very small timesteps.]
IV.2.3 Binary mixture
Returning to stratification phenomena, we consider mixtures of particles differing in size. Depending on the Péclet numbers $\mathrm{Pe}_b$ and $\mathrm{Pe}_s$ of the big (subscript $b$) and small (subscript $s$) particle species and their absolute values (i.e. whether they are small or large compared to unity), different structures of the final phase can emerge, ranging from "small-on-top" or "big-on-top" layering to more complicated conformations [29]. For large Péclet numbers $\mathrm{Pe}_b, \mathrm{Pe}_s \gg 1$, studies of Fortini et al. [58] have shown the formation of a "small-on-top" structure, i.e. the accumulation of the small particle species near the moving interface. Additionally, in the immediate vicinity of the air-solvent boundary, a thin layer of big particles remains trapped due to their low mobility.
In the following, a binary mixture of Lennard-Jones particles with two different diameters $\sigma_b > \sigma_s$ is simulated in an elongated box, and the velocity of the air-solvent interface is set as before. We initialize big and small particles uniformly in the simulation domain and particularly focus our analysis on the structure of the final dense phase.
As the simulation advances in time, the observations of Fortini et al. [58] can be verified. A thin layer of big particles at the air-solvent interface followed by a broad accumulation of small particles emerges. The timestepping evolution shows similar behaviour to the single-species case shown in fig. 6. On average, the value of $\Delta t$ decreases, and throughout the simulation, jumps to low values occur repeatedly when interparticle collisions have to be resolved accurately.
As the system approaches the dense regime, the final particle distribution of the dried colloidal suspension can be investigated. One can see that a thin layer of big particles develops in close proximity to the substrate, similar to the one forming throughout the simulation at the air-solvent interface. This process can again be explained by the lower mobility of the big particles compared to the small ones, which prevents a uniform diffusion away from the substrate.
Moreover, as the packing fraction increases further, the structure of the interfacial layers of the trapped big species changes. While only a single peak is visible at first, a second peak develops in the last stages of the evaporation process. This phenomenon occurs both at the substrate as well as at the air-solvent interface, although it appears earlier and more pronounced at the former.
Even for this simple model mixture of colloidal particles which differ only in diameter, the final conformation after evaporation of the implicit solvent possesses an intricate structure. Both at the substrate as well as at the top of the film, a primary and secondary layer of the big particle species build up. Those layers enclose a broad accumulation of the small particle species, which is by no means uniform but rather develops a concentration gradient in the positive $z$-direction, outlined by peaks of the respective density profile close to the big particle layers. The formation of the described final state is illustrated in fig. 8, where the density profiles of both particle species are shown for two timepoints at the end of the simulation run.
[Fig. 8: Density profiles of both particle species at two timepoints near the end of the drying simulation of the binary mixture.]
V Conclusion
In this work, we have constructed a novel method for BD simulations by employing recently developed algorithms for the adaptive numerical solution of SDEs to the case of Brownian motion as described on the level of the overdamped Langevin equation (1). For the evaluation of a local error estimate in each trial step, we have complemented the simple Euler-Maruyama scheme (8) found in common BD with a higher-order Heun step (9). By comparison of their discrepancy with a user-defined tolerance (11) composed of an absolute and a relative contribution, we were able to impose a criterion (16) for the acceptance or rejection of the trial step and for the adaptation of $\Delta t$. Special care was thereby required in the reduction from particle-wise errors (10) to a global scalar error estimate (12).
Due to the stochastic nature of Brownian motion, the rejection and subsequent retrial of a timestep could not be done naively by redrawing uncorrelated random vectors in this case. Instead, a sophisticated method had to be employed to ensure the validity of the statistical properties of the random process in eq. (1) and hence to avoid biased random forces from a physical point of view. We have illustrated that the Brownian bridge theorem (7) resolves this problem formally, and that RSwM [24] provides a feasible implementation method based on this theorem. Hence, we specialized RSwM to the case of Brownian motion in Sec. III.2 and constructed the fully adaptive BD scheme outlined in alg. 2. A correction to the original algorithm of Ref. 24 is given in Appendix B and a generalization of adaptive BD to non-overdamped Langevin dynamics is outlined in Appendix A.
To test our framework, we applied the adaptive BD method to both equilibrium and non-equilibrium Lennard-Jones systems, focusing on the analysis of individual trajectories. Even in the standard case of a phase-separating bulk fluid, we could verify that the use of RSwM induced no significant computational overhead and that a performance gain could be achieved compared to the fixed-timestep Euler-Maruyama method. This is complemented by practical convenience, since $\Delta t$ need not be chosen a priori.
The real advantages of adaptive BD become clear in more demanding situations where a fixed timestep would ultimately lead to inaccuracy and instability of the simulation without manual intervention. We have shown this with non-equilibrium systems of drying colloidal suspensions and have modeled the rapid evaporation of the implicit solvent by a moving interface as in Ref. 29. With our method, we achieved efficient timestepping and unconditional stability of the simulation even in the final stages where the packing fraction increases rapidly and the final structural configuration of the dried film develops. Particularly, in a single-species Lennard-Jones system, the freezing transition could be captured effortlessly even if crystal defects remained as a result of strong external driving. In the more elaborate case of a binary mixture of different-size particles, the intricate structure of the colloidal film could be resolved accurately. Here, we reported the development of a dual layer of the big particle species both at the substrate as well as at the top interface, while a concentration gradient of the small particle species could be observed in-between. This shows that the method can be used to predict the arrangement of stratified films efficiently and with great detail, which could be helpful in the determination of their macroscopic properties.
We expect similar benefits in all sorts of situations where fundamental changes within the system call for more accurate or more relaxed numerical timestepping. For instance, this could be the case in sheared systems [59, 60], for sedimenting [61, 62] or cluster-forming [63] colloidal suspensions, in microrheological simulations [64], for phase transitions and especially the glass transition [65], in the formation of crystals and quasicrystals [66, 67], in active matter [68, 69, 70, 71], possibly including active crystallization [72, 73], as well as for responsive colloids [74]. In any respect, adaptive BD can help in a more accurate measurement of observables, since the evaluation of the average (20) is not hampered by erroneous particle configurations, which might occur in conventional BD. Especially for quantities with a one-sided bias, such as absolute or square values of particle forces and velocities, the absence of outliers in the set of samples used in eq. (20) is essential to yield valid results.
We finally illustrate possible improvements of the adaptive BD method itself, where several aspects come to mind. Firstly, the choices of the safety factor and exponent in eq. (16) as well as $q_{\min}$ and $q_{\max}$ in eq. (17) were made mostly heuristically. We note that in Ref. 24, the influence of those parameters on the performance of the algorithm has been investigated, but emphasize that results could vary for our high-dimensional setting of Brownian motion.
Secondly, recently developed adaptation methods for the stepsize which employ control theory could be helpful. Thereby, $\Delta t$ is not merely scaled after each trial step by a locally determined factor as evaluated in eq. (16). Instead, one constructs a proportional-integral-derivative (PID) controller for the selection of new timesteps. This means that the adaptation is not only influenced proportionally by the error estimate of the current step, but rather it is also determined by integral and differential contributions, i.e. the memory of previous stepsizes and the local change of the stepsize. Adaptive timestepping with control theory has already been applied to ODEs [75, 76, 77] and SDEs [40]. This method could help in our case to reduce the number of unnecessarily small steps even further. In our present implementation, a conservatively chosen $q_{\max}$ in eq. (17) keeps the number of rejections low because it restricts the growth of $\Delta t$ after moves with coincidentally low error. But it also has the effect of preventing a fast relaxation of $\Delta t$ after a sudden drop, e.g. due to an unfortunate random event. Then, many steps are needed for $\Delta t$ to grow back to its nominal value, because the maximum gain in each step is limited by $q_{\max}$. With a control theory approach, a mechanism could be established which permits sudden drops to low values of $\Delta t$ but still ensures a rapid relaxation afterwards.
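As a rough guide, a proportional-integral (PI) update of the timestep, written in a generic textbook form (not the specific controllers of Refs. 40, 75-77; the exponents $k_I$ and $k_P$ are tuning parameters and $E_n$ denotes the normalized error estimate of step $n$), reads

\Delta t_{n+1} = \Delta t_n \left( \frac{1}{E_n} \right)^{k_I} \left( \frac{E_{n-1}}{E_n} \right)^{k_P},

which reduces to the purely proportional adaptation of eq. (16) (up to the safety factor) for $k_P = 0$ and otherwise retains memory of the previous error estimate, thereby damping abrupt changes of the stepsize.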
Finally, the adaptive BD method could be augmented to include hydrodynamic interactions. This requires a careful treatment of the random forces, as they now incorporate the particle configuration $\mathbf{r}^N$. Therefore, the noise is no longer additive, which complicates the numerical scheme necessary for a correct discretisation of the overdamped Langevin equation. Still, instead of eqs. (8) and (9), existing methods for SDEs with non-additive noise could be employed within the presented framework to yield an adaptive timestepping procedure for BD with hydrodynamic interactions.
In summary, to capture the dynamics of systems that undergo structural changes such as the ones shown above, and to gain further understanding of the implications regarding e.g. the resulting macroscopic properties, sophisticated simulation methods are needed. Adaptive BD provides the means to treat these problems with great accuracy while still being computationally feasible and robust, which is a crucial trait from a practical standpoint.
Acknowledgements.
We thank Christopher Rackauckas, Tobias Eckert and Daniel de las Heras for useful comments. This work is supported by the German Research Foundation (DFG) via project number 436306241.

Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
- Frenkel and Smit [2002] D. Frenkel and B. Smit, Understanding Molecular Simulation: From Algorithms to Applications, 2nd ed., Computational Science Series No. 1 (Academic Press, San Diego, 2002).
- Hansen and McDonald [2013] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids: With Applications to Soft Matter, 4th ed. (Elsevier/AP, Amsterdam, 2013).
- Allen and Tildesley [2009] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, reprinted ed., Oxford Science Publications (Clarendon Press, Oxford, 2009).
- Karplus and McCammon [2002] M. Karplus and J. A. McCammon, “Molecular dynamics simulations of biomolecules,” Nat. Struct Biol. 9, 646–652 (2002).
- Squires and Brady [2005] T. M. Squires and J. F. Brady, “A simple paradigm for active and nonlinear microrheology,” Phys. Fluids 17, 073101 (2005).
- Reinhardt, Scacchi, and Brader [2014] J. Reinhardt, A. Scacchi, and J. M. Brader, “Microrheology close to an equilibrium phase transition,” J. Chem. Phys. 140, 144901 (2014).
- Puertas and Voigtmann [2014] A. M. Puertas and T. Voigtmann, “Microrheology of colloidal systems,” J. Phys. Condens. Matter 26, 243101 (2014).
- Geigenfeind, de las Heras, and Schmidt [2020] T. Geigenfeind, D. de las Heras, and M. Schmidt, “Superadiabatic demixing in nonequilibrium colloids,” Commun. Phys. 3, 23 (2020).
- Gillespie [1976] D. T. Gillespie, “A general method for numerically simulating the stochastic time evolution of coupled chemical reactions,” J. Comput. Phys. 22, 403–434 (1976).
- Andersen [1980] H. C. Andersen, “Molecular dynamics simulations at constant pressure and/or temperature,” J. Chem. Phys. 72, 2384–2393 (1980).
- Nosé [1984] S. Nosé, “A molecular dynamics method for simulations in the canonical ensemble,” Mol. Phys. 52, 255–268 (1984).
- Alder and Wainwright [1959] B. J. Alder and T. E. Wainwright, “Studies in Molecular Dynamics. I. General Method,” J. Chem. Phys. 31, 459–466 (1959).
- Scala, Voigtmann, and De Michele [2007] A. Scala, T. Voigtmann, and C. De Michele, “Event-driven Brownian dynamics for hard spheres,” J. Chem. Phys. 126, 134109 (2007).
- Colberg and Höfling [2011] P. H. Colberg and F. Höfling, “Highly accelerated simulations of glassy dynamics using GPUs: Caveats on limited floating-point precision,” Comput. Phys. Commun. 182, 1120–1129 (2011).
- Wang, Tay, and Swendsen [1999] J.-S. Wang, T. K. Tay, and R. H. Swendsen, “Transition Matrix Monte Carlo Reweighting and Dynamics,” Phys. Rev. Lett. 82, 476–479 (1999).
- Landau, Tsai, and Exler [2004] D. P. Landau, S.-H. Tsai, and M. Exler, “A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling,” Am. J. Phys. 72, 1294–1302 (2004).
- de las Heras and Schmidt [2018] D. de las Heras and M. Schmidt, “Better Than Counting: Density Profiles from Force Sampling,” Phys. Rev. Lett. 120, 218001 (2018).
- Rotenberg [2020] B. Rotenberg, “Use the force! Reduced variance estimators for densities, radial distribution functions, and local mobilities in molecular simulations,” J. Chem. Phys. 153, 150902 (2020).
- Coles et al. [2021] S. W. Coles, E. Mangaud, D. Frenkel, and B. Rotenberg, “Reduced variance analysis of molecular dynamics simulations by linear combination of estimators,” J. Chem. Phys. 154, 191101 (2021).
- Fritsch et al. [2012] S. Fritsch, S. Poblete, C. Junghans, G. Ciccotti, L. Delle Site, and K. Kremer, “Adaptive resolution molecular dynamics simulation through coupling to an internal particle reservoir,” Phys. Rev. Lett. 108, 170602 (2012).
- Rossky, Doll, and Friedman [1978] P. J. Rossky, J. D. Doll, and H. L. Friedman, “Brownian dynamics as smart Monte Carlo simulation,” J. Chem. Phys. 69, 4628–4633 (1978).
- Jardat et al. [1999] M. Jardat, O. Bernard, P. Turq, and G. R. Kneller, “Transport coefficients of electrolyte solutions from Smart Brownian dynamics simulations,” J. Chem. Phys. 110, 7993–7999 (1999).
- Kloeden and Platen [1992] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, 1st ed. (Springer-Verlag Berlin Heidelberg, 1992).
- Rackauckas and Nie [2017a] C. Rackauckas and Q. Nie, “Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory,” Discrete Contin. Dyn. Syst. - B 22, 2731–2761 (2017a).
- van der Kooij and Sprakel [2015] H. M. van der Kooij and J. Sprakel, “Watching paint dry; more exciting than it seems,” Soft Matter 11, 6353–6359 (2015).
- Schulz and Keddie [2018] M. Schulz and J. L. Keddie, “A critical and quantitative review of the stratification of particles during the drying of colloidal films,” Soft Matter 14, 6181–6197 (2018).
- Makepeace et al. [2017] D. K. Makepeace, A. Fortini, A. Markov, P. Locatelli, C. Lindsay, S. Moorhouse, R. Lind, R. P. Sear, and J. L. Keddie, “Stratification in binary colloidal polymer films: Experiment and simulations,” Soft Matter 13, 6969–6980 (2017).
- Trueman et al. [2012] R. E. Trueman, E. Lago Domingues, S. N. Emmett, M. W. Murray, J. L. Keddie, and A. F. Routh, “Autostratification in Drying Colloidal Dispersions: Experimental Investigations,” Langmuir 28, 3420–3428 (2012).
- He et al. [2021] B. He, I. Martin-Fabiani, R. Roth, G. I. Tóth, and A. J. Archer, “Dynamical Density Functional Theory for the Drying and Stratification of Binary Colloidal Dispersions,” Langmuir 37, 1399–1409 (2021).
- Fortini and Sear [2017] A. Fortini and R. P. Sear, “Stratification and Size Segregation of Ternary and Polydisperse Colloidal Suspensions during Drying,” Langmuir 33, 4796–4805 (2017).
- Burrage and Burrage [1996] K. Burrage and P. Burrage, “High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations,” Appl. Numer. Math. 22, 81–101 (1996).
- Rößler [2010] A. Rößler, “Runge–Kutta Methods for the Strong Approximation of Solutions of Stochastic Differential Equations,” SIAM J. Numer. Anal. 48, 922–952 (2010).
- Fehlberg [1970] E. Fehlberg, “Klassische Runge-Kutta-Formeln vierter und niedrigerer Ordnung mit Schrittweiten-Kontrolle und ihre Anwendung auf Wärmeleitungsprobleme,” Computing 6, 61–71 (1970).
- Rößler [2003] A. Rößler, “Embedded Stochastic Runge-Kutta Methods,” PAMM 2, 461–462 (2003).
- Ibe [2013] O. C. Ibe, Markov Processes for Stochastic Modeling (Elsevier, 2013).
- Gaines and Lyons [1997] J. G. Gaines and T. J. Lyons, “Variable Step Size Control in the Numerical Solution of Stochastic Differential Equations,” SIAM J. Appl. Math. 57, 1455–1484 (1997).
- Mauthner [1998] S. Mauthner, “Step size control in the numerical solution of stochastic differential equations,” J. Comput. Appl. Math. 100, 93–109 (1998).
- Burrage and Burrage [2002] P. M. Burrage and K. Burrage, “A Variable Stepsize Implementation for Stochastic Differential Equations,” SIAM J. Sci. Comput. 24, 848–864 (2002).
- Lamba [2003] H. Lamba, “An adaptive timestepping algorithm for stochastic differential equations,” J. Comput. Appl. Math. 161, 417–430 (2003).
- Burrage, Herdiana, and Burrage [2004] P. Burrage, R. Herdiana, and K. Burrage, “Adaptive stepsize based on control theory for stochastic differential equations,” J. Comput. Appl. Math. 170, 317–336 (2004).
- Lamba, Mattingly, and Stuart [2006] H. Lamba, J. C. Mattingly, and A. M. Stuart, “An adaptive Euler-Maruyama scheme for SDEs: Convergence and stability,” IMA J. Numer. Anal. 27, 479–506 (2006).
- Sotiropoulos and Kaznessis [2008] V. Sotiropoulos and Y. N. Kaznessis, “An adaptive time step scheme for a system of stochastic differential equations with multiple multiplicative noise: Chemical Langevin equation, a proof of concept,” J. Chem. Phys. 128, 014103 (2008).
- Ilie [2012] S. Ilie, “Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation,” J. Chem. Phys. 137, 234110 (2012).
- Stuart and Humphries [1996] A. M. Stuart and A. R. Humphries, Dynamical Systems and Numerical Analysis, Cambridge Monographs on Applied and Computational Mathematics No. 2 (Cambridge University Press, Cambridge ; New York, 1996).
- Honeycutt [1992] R. L. Honeycutt, “Stochastic Runge-Kutta algorithms. I. White noise,” Phys. Rev. A 45, 600–603 (1992).
- Fixman [1986] M. Fixman, “Implicit algorithm for Brownian dynamics of polymers,” Macromolecules 19, 1195–1204 (1986).
- Iniesta and de la Torre [1990] A. Iniesta and J. de la Torre, “A Second-order algorithm for the simulation of the Brownian dynamics of macromolecular models,” J. Chem. Phys. 92, 2015–2018 (1990).
- Heyes and Brańka [2000] D. M. Heyes and A. C. Brańka, “More efficient Brownian dynamics algorithms,” Mol. Phys. 98, 1949–1960 (2000).
- Note [1] This issue has been discussed in https://github.com/SciML/DiffEqNoiseProcess.jl/pull/80.
- Rackauckas and Nie [2017b] C. Rackauckas and Q. Nie, “DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia,” J. Open Res. Softw. 5, 15 (2017b).
- Note [2] The corresponding implementation in C++ can be found in https://gitlab.uni-bayreuth.de/bt306964/mbd.
- Yeomans-Reyna et al. [2003] L. Yeomans-Reyna, H. Acuña-Campa, F. d. J. Guevara-Rodríguez, and M. Medina-Noyola, “Self-consistent theory of collective Brownian dynamics: Theory versus simulation,” Phys. Rev. E 67, 021108 (2003).
- Archer, Hopkins, and Schmidt [2007] A. J. Archer, P. Hopkins, and M. Schmidt, “Dynamics in inhomogeneous liquids and glasses via the test particle limit,” Phys. Rev. E 75, 040501 (2007).
- Hopkins et al. [2010] P. Hopkins, A. Fortini, A. J. Archer, and M. Schmidt, “The van Hove distribution function for Brownian hard spheres: Dynamical test particle theory and computer simulations for bulk dynamics,” J. Chem. Phys. 133, 224505 (2010).
- Stopper, Roth, and Hansen-Goos [2016] D. Stopper, R. Roth, and H. Hansen-Goos, “Structural relaxation and diffusion in a model colloid-polymer mixture: Dynamical density functional theory and simulation,” J. Phys. Condens. Matter 28, 455101 (2016).
- Stopper et al. [2018] D. Stopper, A. L. Thorneywork, R. P. A. Dullens, and R. Roth, “Bulk dynamics of Brownian hard disks: Dynamical density functional theory versus experiments on two-dimensional colloidal hard spheres,” J. Chem. Phys. 148, 104501 (2018).
- Treffenstädt and Schmidt [2021] L. L. Treffenstädt and M. Schmidt, “Universality in Driven and Equilibrium Hard Sphere Liquid Dynamics,” Phys. Rev. Lett. 126, 058002 (2021).
- Fortini et al. [2016] A. Fortini, I. Martín-Fabiani, J. L. De La Haye, P.-Y. Dugas, M. Lansalot, F. D’Agosto, E. Bourgeat-Lami, J. L. Keddie, and R. P. Sear, “Dynamic Stratification in Drying Films of Colloidal Mixtures,” Phys. Rev. Lett. 116, 118301 (2016).
- Laurati et al. [2012] M. Laurati, K. J. Mutch, N. Koumakis, J. Zausch, C. P. Amann, A. B. Schofield, G. Petekidis, J. F. Brady, J. Horbach, M. Fuchs, and S. U. Egelhaaf, “Transient dynamics in dense colloidal suspensions under shear: Shear rate dependence,” J. Phys. Condens. Matter 24, 464104 (2012).
- Jahreis and Schmidt [2020] N. Jahreis and M. Schmidt, “Shear-induced deconfinement of hard disks,” Colloid Polym Sci 298, 895–906 (2020).
- Archer and Malijevský [2011] A. J. Archer and A. Malijevský, “On the interplay between sedimentation and phase separation phenomena in two-dimensional colloidal fluids,” Mol. Phys. 109, 1087–1099 (2011).
- Royall et al. [2007] C. P. Royall, J. Dzubiella, M. Schmidt, and A. van Blaaderen, “Nonequilibrium Sedimentation of Colloids on the Particle Scale,” Phys. Rev. Lett. 98, 188304 (2007).
- Bleibel et al. [2011] J. Bleibel, S. Dietrich, A. Domínguez, and M. Oettel, “Shock Waves in Capillary Collapse of Colloids: A Model System for Two-Dimensional Screened Newtonian Gravity,” Phys. Rev. Lett. 107, 128302 (2011).
- Carpen and Brady [2005] I. C. Carpen and J. F. Brady, “Microrheology of colloidal dispersions by Brownian dynamics simulations,” J. Rheol. 49, 1483–1502 (2005).
- Löwen, Hansen, and Roux [1991] H. Löwen, J.-P. Hansen, and J.-N. Roux, “Brownian dynamics and kinetic glass transition in colloidal suspensions,” Phys. Rev. A 44, 1169–1181 (1991).
- Archer, Rucklidge, and Knobloch [2015] A. J. Archer, A. M. Rucklidge, and E. Knobloch, “Soft-core particles freezing to form a quasicrystal and a crystal-liquid phase,” Phys. Rev. E 92, 012324 (2015).
- Archer, Rucklidge, and Knobloch [2013] A. J. Archer, A. M. Rucklidge, and E. Knobloch, “Quasicrystalline Order and a Crystal-Liquid State in a Soft-Core Fluid,” Phys. Rev. Lett. 111, 165501 (2013).
- Volpe, Gigan, and Volpe [2014] G. Volpe, S. Gigan, and G. Volpe, “Simulation of the active Brownian motion of a microswimmer,” Am. J. Phys. 82, 659 (2014).
- Farage, Krinninger, and Brader [2015] T. F. F. Farage, P. Krinninger, and J. M. Brader, “Effective interactions in active Brownian suspensions,” Phys. Rev. E 91, 042310 (2015).
- Paliwal et al. [2017] S. Paliwal, V. Prymidis, L. Filion, and M. Dijkstra, “Non-equilibrium surface tension of the vapour-liquid interface of active Lennard-Jones particles,” J. Chem. Phys. 147, 084902 (2017).
- Paliwal et al. [2018] S. Paliwal, J. Rodenburg, R. van Roij, and M. Dijkstra, “Chemical potential in active systems: Predicting phase equilibrium from bulk equations of state?” New J. Phys. 20, 015003 (2018).
- Omar et al. [2021] A. K. Omar, K. Klymko, T. GrandPre, and P. L. Geissler, “Phase Diagram of Active Brownian Spheres: Crystallization and the Metastability of Motility-Induced Phase Separation,” Phys. Rev. Lett. 126, 188002 (2021).
- Turci and Wilding [2021] F. Turci and N. B. Wilding, “Phase Separation and Multibody Effects in Three-Dimensional Active Brownian Particles,” Phys. Rev. Lett. 126, 038002 (2021).
- Baul and Dzubiella [2021] U. Baul and J. Dzubiella, “Structure and dynamics of responsive colloids with dynamical polydispersity,” J. Phys. Condens. Matter 33, 174002 (2021).
- Gustafsson [1994] K. Gustafsson, “Control-theoretic techniques for stepsize selection in implicit Runge-Kutta methods,” ACM Trans. Math. Softw. 20, 496–517 (1994).
- Söderlind [2002] G. Söderlind, “Automatic Control and Adaptive Time-Stepping,” Numer. Algorithms 31, 281–310 (2002).
- Söderlind [2003] G. Söderlind, “Digital filters in adaptive time-stepping,” ACM Trans. Math. Softw. 29, 1–26 (2003).
- Verlet [1967] L. Verlet, “Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules,” Phys. Rev. 159, 98–103 (1967).
- Swope et al. [1982] W. C. Swope, H. C. Andersen, P. H. Berens, and K. R. Wilson, “A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters,” J. Chem. Phys. 76, 637–649 (1982).
- Milstein [2003] G. N. Milstein, “Quasi-symplectic methods for Langevin-type equations,” IMA J. Numer. Anal. 23, 593–626 (2003).
- Burrage, Lenane, and Lythe [2007] K. Burrage, I. Lenane, and G. Lythe, “Numerical Methods for Second-Order Stochastic Differential Equations,” SIAM J. Sci. Comput. 29, 245–264 (2007).
- van Gunsteren and Berendsen [1982] W. van Gunsteren and H. Berendsen, “Algorithms for Brownian dynamics,” Mol. Phys. 45, 637–647 (1982).
- Brünger, Brooks, and Karplus [1984] A. Brünger, C. L. Brooks, and M. Karplus, “Stochastic boundary conditions for molecular dynamics simulations of ST2 water,” Chem. Phys. Lett. 105, 495–500 (1984).
- Skeel and Izaguirre [2002] R. D. Skeel and J. A. Izaguirre, “An impulse integrator for Langevin dynamics,” Mol. Phys. 100, 3885–3891 (2002).
- Goga et al. [2012] N. Goga, A. J. Rzepiela, A. H. de Vries, S. J. Marrink, and H. J. C. Berendsen, “Efficient Algorithms for Langevin and DPD Dynamics,” J. Chem. Theory Comput. 8, 3637–3649 (2012).
- Leimkuhler and Matthews [2015] B. Leimkuhler and C. Matthews, Molecular Dynamics, Interdisciplinary Applied Mathematics, Vol. 39 (Springer International Publishing, Cham, 2015).
Appendix A Generalization to Langevin dynamics
In the following, we transfer the concepts of Sec. III to general (non-overdamped) Langevin dynamics. Thereby, the momenta $\mathbf{p}_i(t) = m_i \dot{\mathbf{r}}_i(t)$ of particles with masses $m_i$, $i = 1, \dots, N$, are explicitly considered in the Langevin equation
$$m_i \ddot{\mathbf{r}}_i(t) = -\gamma \dot{\mathbf{r}}_i(t) + \mathbf{f}_i(\mathbf{r}^N(t)) + \mathbf{R}_i(t), \qquad (24)$$
where the notation is that of eq. (1). Depending on the choice of the friction coefficient $\gamma$, two special cases can be identified. BD is recovered in the diffusive regime for large $\gamma$. On the other hand, for vanishing friction and random forces, i.e. $\gamma \to 0$, deterministic Hamiltonian equations of motion are obtained.
The SDE (24) can be reformulated as a pair of first-order equations for the particle positions $\mathbf{r}_i(t)$ and velocities $\mathbf{v}_i(t)$,
$$\dot{\mathbf{r}}_i(t) = \mathbf{v}_i(t), \qquad (25a)$$
$$m_i \dot{\mathbf{v}}_i(t) = -\gamma \mathbf{v}_i(t) + \mathbf{f}_i(\mathbf{r}^N(t)) + \mathbf{R}_i(t). \qquad (25b)$$
From a numerical perspective, the treatment of eqs. (25) differs significantly from that of the overdamped Langevin equation (1), since the second-order nature makes more involved timestepping procedures for $\mathbf{r}_i(t)$ and $\mathbf{v}_i(t)$ possible.
This can be illustrated easily when eqs. (25) are considered in the Hamiltonian case. Then, via the separation of $\mathbf{r}_i(t)$ and $\mathbf{v}_i(t)$, simple semi-implicit methods are readily available, such as the Euler-Cromer or the well-known velocity Verlet method, which is commonly used in MD [78, 79]. From a fundamental point of view, the use of symplectic algorithms is necessary for Hamiltonian systems to ensure the correct description of physical conservation laws and hence to achieve numerical stability for long-time behaviour. While the above-mentioned integrators possess this property, most explicit algorithms, such as the forward Euler and the classic Runge-Kutta method, fail in this regard.
When going from Hamiltonian to Langevin dynamics, one is faced with an appropriate generalization of the familiar deterministic integration schemes to include dissipative and random forces. This is addressed by so-called quasi-symplectic methods, i.e. integrators for SDEs of type (25) that degenerate to symplectic ones when both the noise and the friction term vanish [80]. Still, it is not straightforward to construct schemes that are suitable for arbitrary $\gamma$, and different strategies have been used in Langevin-based molecular simulations [81, 82, 83, 84, 85, 86].
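To make the structure of such a symplectic update explicit, the following minimal C++ sketch shows one velocity Verlet step for a single particle in the deterministic limit; the Vec3 type and the harmonic placeholder force are illustrative assumptions and not part of the reference implementation.

```cpp
// Minimal sketch (illustrative, not from the reference implementation):
// one velocity Verlet step in the deterministic limit (no friction, no noise).
#include <array>

using Vec3 = std::array<double, 3>;

// placeholder force; a real simulation would evaluate interparticle forces here
Vec3 force(const Vec3& r) { return {-r[0], -r[1], -r[2]}; }

void velocity_verlet_step(Vec3& r, Vec3& v, double m, double dt) {
    Vec3 f = force(r);
    for (int a = 0; a < 3; ++a) v[a] += 0.5 * dt * f[a] / m;  // half kick
    for (int a = 0; a < 3; ++a) r[a] += dt * v[a];            // drift
    f = force(r);                                             // force at new position
    for (int a = 0; a < 3; ++a) v[a] += 0.5 * dt * f[a] / m;  // second half kick
}
```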
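As an illustration of the quasi-symplectic idea, the sketch below implements a BAOAB-type splitting step in the spirit of Ref. [86]; the parameter names and the reuse of the placeholder force routine from the previous sketch are assumptions. For vanishing friction the middle (O) substep reduces to the identity and the scheme degenerates to velocity Verlet, in line with the definition above.

```cpp
// Hedged sketch of a BAOAB-type splitting step (cf. Leimkuhler and Matthews [86]).
// gamma is the friction coefficient, kBT the thermal energy; for gamma -> 0 the
// O-substep becomes the identity and the scheme reduces to velocity Verlet.
#include <array>
#include <cmath>
#include <random>

using Vec3 = std::array<double, 3>;
Vec3 force(const Vec3& r);  // same placeholder force as in the previous sketch

void baoab_step(Vec3& r, Vec3& v, double m, double dt,
                double gamma, double kBT, std::mt19937& gen) {
    std::normal_distribution<double> normal(0.0, 1.0);
    Vec3 f = force(r);
    for (int a = 0; a < 3; ++a) v[a] += 0.5 * dt * f[a] / m;   // B: half kick
    for (int a = 0; a < 3; ++a) r[a] += 0.5 * dt * v[a];       // A: half drift
    const double c1 = std::exp(-gamma * dt / m);               // O: friction ...
    const double c2 = std::sqrt((1.0 - c1 * c1) * kBT / m);    // ... and noise
    for (int a = 0; a < 3; ++a) v[a] = c1 * v[a] + c2 * normal(gen);
    for (int a = 0; a < 3; ++a) r[a] += 0.5 * dt * v[a];       // A: half drift
    f = force(r);
    for (int a = 0; a < 3; ++a) v[a] += 0.5 * dt * f[a] / m;   // B: second half kick
}
```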
To incorporate adaptive timestepping, one must again choose a pair of integrators of different order to calculate an error estimate per step. The above considerations suggest, however, that at least for the higher-order method, a quasi-symplectic scheme is advisable for optimal efficiency and accuracy, as it is expected to perform better than a naive application of the embedded Heun-Euler method (8) and (9) to general Langevin dynamics. Nevertheless, the latter could still be an adequate option when friction dominates.
Appendix B Correctness of RSwM3 rejection branch
Although the effect is likely subtle or even unmeasurable in most situations, the rejection branch was handled incorrectly in the original RSwM3 algorithm as described in Ref. 24 and formerly implemented in DifferentialEquations.jl [50]. We list a pseudocode version thereof in alg. 1, where a specialization to our case of BD is already performed for ease of comparison with alg. 2.
To see why alg. 1 fails in some cases, consider multiple elements being present on the stack of used increments, e.g. as depicted in fig. 9, where it contains three elements. Then, in lines 2–11 of alg. 1, elements are transferred successively from this stack back to the future information stack and their time intervals are accumulated as long as the remaining portion of the rejected step is still larger than the next goal timestep (this is only the case for the green element in fig. 9). After that, the Brownian bridge theorem is applied to the entire remaining interval. The interpolation therefore spans all elements still left on the stack of used increments, i.e. the blue and violet one in fig. 9, which implies that intermediate values of the Brownian path are not considered if this stack holds two or more elements at this point.
To yield a valid interpolation in this situation, the Brownian bridge must instead only be applied to the immediately following single element at the top of the stack of used increments (the blue one in fig. 9). In alg. 2, lines 4–19 therefore replace the logic of alg. 1, which fixes the above issue by always considering only the single element that crosses the goal timestep in the application of eq. (7). This element is then interpolated at the goal timestep, for which an auxiliary variable is introduced that yields the corresponding fraction of the element. Note that the stack of used increments always contains at least one element when entering the rejection branch (assuming a proper initialization of the simulation, cf. Appendix C).
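For concreteness, a schematic C++ sketch of the required single-element bridging is given below; the Increment struct and function names are illustrative and do not reproduce the pseudocode of alg. 2. It splits one stored increment at a given fraction of its time interval such that both parts remain consistent with the originally drawn Brownian path.

```cpp
// Schematic sketch (illustrative names): Brownian-bridge split of a single
// stored increment at the fraction "frac" of its time interval. The left part
// is Gaussian with mean frac*dW and variance frac*(1-frac)*dt; the right part
// is the remainder, so no drawn randomness is discarded.
#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Increment {
    double dt;               // length of the time interval
    std::vector<double> dW;  // random increments of all coordinates
};

std::pair<Increment, Increment> bridge_split(const Increment& el, double frac,
                                             std::mt19937& gen) {
    std::normal_distribution<double> normal(0.0, 1.0);
    const double sigma = std::sqrt(frac * (1.0 - frac) * el.dt);
    Increment left{frac * el.dt, std::vector<double>(el.dW.size())};
    Increment right{(1.0 - frac) * el.dt, std::vector<double>(el.dW.size())};
    for (std::size_t i = 0; i < el.dW.size(); ++i) {
        left.dW[i] = frac * el.dW[i] + sigma * normal(gen);
        right.dW[i] = el.dW[i] - left.dW[i];
    }
    return {left, right};
}
```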
Appendix C Implementation of embedded Heun-Euler trial step with RSwM3
The embedded Heun-Euler trial step with RSwM3 is the integral part of the adaptive BD method, for which we provide a pseudocode implementation in the following and explain its technical details as well as its setting within a full simulation framework. At the beginning of the simulation, the future information stack must be empty and a finite positive value of the first timestep is chosen heuristically (this choice is irrelevant, however, since the algorithm immediately adapts to acceptable values). Gaussian random increments with zero mean and variance given by the timestep are drawn for the initial trial step and pushed onto the stack of used increments. This completes the initialization of the simulation run, and a loop over trial steps can be started (until a certain simulation time or number of steps is reached).
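A minimal sketch of this initialization could look as follows; the stack names, the particle number and the value of the initial timestep are assumptions for illustration.

```cpp
// Illustrative sketch of the initialization: draw Gaussian increments with zero
// mean and variance dt0 for all 3N coordinates and push them onto the stack of
// increments entering the first trial step; the future information stack starts empty.
#include <cmath>
#include <random>
#include <stack>
#include <vector>

struct Increment { double dt; std::vector<double> dW; };

Increment draw_increment(double dt, std::size_t n_coords, std::mt19937& gen) {
    std::normal_distribution<double> normal(0.0, std::sqrt(dt));
    Increment el{dt, std::vector<double>(n_coords)};
    for (auto& w : el.dW) w = normal(gen);
    return el;
}

int main() {
    const std::size_t N = 1000;          // number of particles (assumed)
    const double dt0 = 1e-5;             // heuristic first timestep (assumed)
    std::mt19937 gen(42);
    std::stack<Increment> used, future;  // future information stack starts empty
    used.push(draw_increment(dt0, 3 * N, gen));
    // ... loop over trial steps follows here
}
```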
Then, in each trial step, which is listed in alg. 2, the Euler and Heun approximations are evaluated via eqs. (8) and (9), and the adaptation factor is calculated via eqs. (16) and (17). This requires the error estimate (12), which is obtained from the two approximations via eqs. (10) and (11).
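As an illustration of how such a controller can be realized, the following sketch computes a scaled error norm from the two approximations and turns it into a stepsize adaptation factor; the mixed absolute/relative tolerances, the safety factor, the exponent and the clamping bounds are common choices assumed here and need not coincide with eqs. (10)–(17).

```cpp
// Hedged sketch of the stepsize controller: the error norm, safety factor,
// exponent and bounds are illustrative assumptions, not the paper's eqs. (10)-(17).
#include <algorithm>
#include <cmath>
#include <vector>

double adaptation_factor(const std::vector<double>& x_euler,
                         const std::vector<double>& x_heun,
                         double abstol, double reltol) {
    // scaled error norm built from the difference of the two approximations
    double sum = 0.0;
    for (std::size_t i = 0; i < x_euler.size(); ++i) {
        const double scale = abstol + reltol * std::abs(x_heun[i]);
        const double e = (x_heun[i] - x_euler[i]) / scale;
        sum += e * e;
    }
    const double E = std::sqrt(sum / x_euler.size());
    // larger errors yield smaller factors; the exponent depends on the order of the pair
    const double q = 0.9 * std::pow(std::max(E, 1e-12), -0.5);
    return std::clamp(q, 0.2, 10.0);  // avoid drastic stepsize jumps
}
```

In such a scheme, a trial step is rejected whenever the returned factor falls below unity, and the next timestep is rescaled by this factor in either case.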
If the adaptation factor is smaller than unity, the proposed step is rejected and a retrial must be performed. For a detailed description of the rejection branch (lines 4–19), we refer to Appendix B, where our changes to the original RSwM3 algorithm are illustrated as well. If a step is accepted, i.e. if the adaptation factor is at least unity, the particle positions and the physical time of the system are updated accordingly (line 21) before the random increments and the stack of used increments are reset (line 22). Then, in lines 23–42, new random increments are constructed for the next trial step.
For this, elements on the future information stack stemming from previously rejected steps have to be accounted for as long as parts of them lie within the goal timestep. Therefore, the elements are transferred successively from the future information stack to the stack of used increments, and their time intervals and random increments are accumulated into the timestep and random increment of the upcoming trial step. If the accumulated time at some point exceeds the goal timestep, the corresponding element is interpolated at the goal timestep via the Brownian bridge theorem (7) in lines 29–34 and one proceeds with the next trial step. Conversely, if all elements of the future information stack were popped and a gap remains between the accumulated time and the goal timestep, new Gaussian random increments are drawn for this gap, added to the accumulated random increment, and pushed onto the stack of used increments in lines 39–41 before attempting the next trial step. Note that these accumulated quantities persist across trial steps and are only reset after accepted moves.
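The following schematic sketch summarizes this accepted-step bookkeeping; it reuses the hypothetical Increment struct, bridge_split and draw_increment from the earlier sketches and is not a transcription of alg. 2.

```cpp
// Schematic sketch of constructing the random increment for the next trial step
// after an accepted move (illustrative names; see alg. 2 for the actual method).
void build_next_increment(double dt_goal, std::size_t n_coords,
                          std::stack<Increment>& future,
                          std::stack<Increment>& used,
                          Increment& step, std::mt19937& gen) {
    step.dt = 0.0;
    step.dW.assign(n_coords, 0.0);
    // consume leftovers from previously rejected steps first
    while (!future.empty()) {
        Increment el = future.top();
        future.pop();
        if (step.dt + el.dt > dt_goal) {
            // this element crosses the goal timestep: bridge it there
            const double frac = (dt_goal - step.dt) / el.dt;
            auto parts = bridge_split(el, frac, gen);
            step.dt += parts.first.dt;
            for (std::size_t i = 0; i < n_coords; ++i) step.dW[i] += parts.first.dW[i];
            used.push(parts.first);     // part entering the upcoming step
            future.push(parts.second);  // tail remains relevant beyond this step
            return;
        }
        step.dt += el.dt;
        for (std::size_t i = 0; i < n_coords; ++i) step.dW[i] += el.dW[i];
        used.push(el);
    }
    // fill a possible remaining gap with fresh Gaussian increments
    if (step.dt < dt_goal) {
        Increment gap = draw_increment(dt_goal - step.dt, n_coords, gen);
        step.dt += gap.dt;
        for (std::size_t i = 0; i < n_coords; ++i) step.dW[i] += gap.dW[i];
        used.push(gap);
    }
}
```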
Throughout the algorithm, bookkeeping of random increments and corresponding time intervals is established by pushing those elements that enter the current random increment onto the stack of used increments, and by storing elements that become relevant beyond the current step on the future information stack. This procedure is especially important after bridging an element at the goal timestep, where the two resulting parts are pushed onto the stack of used increments and the future information stack, respectively (lines 14, 15 and 31, 32). Consequently, no drawn random increments are ever lost.