
Adaptive Brownian Dynamics

Florian Sammüller and Matthias Schmidt (Matthias.Schmidt@uni-bayreuth.de), Theoretische Physik II, Physikalisches Institut, Universität Bayreuth, D-95447 Bayreuth, Germany
(July 3, 2025)
Abstract

A framework for performant Brownian Dynamics (BD) many-body simulations with adaptive timestepping is presented. Contrary to the Euler-Maruyama scheme in common non-adaptive BD, we employ an embedded Heun-Euler integrator for the propagation of the overdamped coupled Langevin equations of motion. This enables the derivation of a local error estimate and the formulation of criteria for the acceptance or rejection of trial steps and for the control of optimal stepsize. Introducing erroneous bias in the random forces is avoided by Rejection Sampling with Memory (RSwM) due to Rackauckas and Nie, which makes use of the Brownian bridge theorem and guarantees the correct generation of a specified random process even when rejecting trial steps. For test cases of Lennard-Jones fluids in bulk and in confinement, it is shown that adaptive BD solves performance and stability issues of conventional BD, already outperforming the latter even in standard situations. We expect this novel computational approach to BD to be especially helpful in long-time simulations of complex systems, e.g. in non-equilibrium, where concurrent slow and fast processes occur.

I Introduction

Computer simulations have long become an established tool in the investigation of physical phenomena [1, 2, 3]. Complementing experimental results, they build the foundation for the exploration of increasingly complex dynamical systems. From the standpoint of classical statistical mechanics, the simulation of a many-body system consisting of discrete interacting particles can reveal information about its structural correlation as well as its thermodynamic properties. Naturally, this opens up the possibility of tackling many problems in the fields of material science, soft matter and biophysics, such as investigating the dynamics of macromolecules [4], predicting rheological properties of fluids [5, 6, 7], or exploring non-equilibrium processes which occur e.g. in colloidal suspensions under the influence of external forcing [8].

With the ever-increasing capabilities of computer hardware, a variety of different computational methods have emerged since the middle of the last century. Conceptually, at least three distinct classes of particle-based simulation frameworks can be identified: i) Monte-Carlo (MC), which relies on the stochastic exploration of phase space, ii) Molecular Dynamics (MD), in which the set of ordinary differential equations (ODEs) of Hamiltonian dynamics is integrated to obtain particle trajectories, and iii) Langevin Dynamics, where random processes are incorporated into the Newtonian equations of motion, so that the evolution of the system is obtained by numerical integration of the resulting stochastic differential equations (SDEs). Brownian Dynamics (BD) can be seen as a special case of iii), since the underlying stochastic Langevin equation is thereby considered in the overdamped limit, where particle inertia vanishes and only particle coordinates remain as the sole microscopic degrees of freedom.

Notably, a broad range of refined methods have been developed in all three categories, sometimes even intersecting those. Important examples of such extensions are kinetic Monte-Carlo for the approximation of time evolution from reaction rates [9], the addition of thermostats in MD to model thermodynamic coupling [10, 11], event-driven algorithms which enable both MD and BD in hard-particle systems [12, 13], and the adaptation of molecular algorithms to modern hardware [14]. Improvements in the calculation of observables from the resulting particle configurations have been made as well, e.g. by modifying their generation in MC (umbrella sampling, transition matrix MC [15], Wang-Landau sampling [16]) or by utilizing advanced evaluation schemes in MD and BD, such as force sampling [17, 18, 19] or adaptive resolution MD [20].

The efficiency and accuracy of a certain algorithm are always primary concerns, as these properties are essential for applicability and practicability in real-world problems. One therefore aims to design procedures that are both as fast and as precise as possible – yet it is no surprise that those two goals might often be conflicting. Especially in BD, where stochastic processes complicate the numerical treatment, the development of more sophisticated algorithms apparently lags behind that of MD, for example, and one often resorts to alternative or combined simulation techniques [21, 22]. If the full dynamical description of BD is indeed considered, the equations of motion are usually integrated with the simple Euler-Maruyama method [23], where stochasticity is accounted for in each equidistant step via normally distributed random numbers. This can lead to inaccuracies and stability problems, making BD seem inferior to other computational methods.

In this work, we propose a novel approach to BD simulations which rectifies the above shortcomings of conventional BD. To achieve this, we employ an adaptive timestepping algorithm that enables the control of the numerical error as follows. The common Euler-Maruyama method is complemented with a higher-order Heun step to obtain an embedded integrator pair for an estimation of the local discretisation error per trial step. By comparison of this error estimate with a prescribed tolerance, the trial step is either accepted or it is rejected and then retried with a smaller stepsize. Particular care is required after rejections so as to not introduce a bias in the random forces which would violate their desired properties. We therefore use Rejection Sampling with Memory (RSwM) [24] to retain a Gaussian random process even in a scenario where already determined random increments may conditionally be discarded. RSwM is a recently developed algorithm for the adaptive generation of random processes in the numerical solution of SDEs, which we improve and specialize to our context of overdamped Brownian motion and thereby formulate a method for adaptive BD simulations.

We demonstrate the practical advantages of adaptive BD over common BD in simulation results for prototypical bulk equilibrium systems and for more involved cases in non-equilibrium. A notable example which we investigate is the drying of colloidal films at planar surfaces. Especially when dealing with non-trivial mixtures, as e.g. present in common paints and coatings, the dynamics of this process can be inherently complex and its quantitative description turns out to be a major challenge [25, 26]. This stands in contrast to the necessity of understanding and predicting stratification processes in those systems. Stratification leads to a dried film that has multiple layers differing in the concentration of constituent particle species, thereby influencing macroscopic properties of the resulting colloidal film. Therefore, controlling this process is an important measure to tailor colloidal suspensions to their field of application. Advances in this area have been made experimentally [27, 28], by utilizing functional many-body frameworks like dynamical density functional theory (DDFT) [29], and with molecular simulations such as conventional BD [30]. By employing the adaptive BD method, we are able to capture the complex dynamical processes occurring in those systems even in the final dense state. Close particle collisions and jammed states are resolved with the required adjustment of the timestep, necessary for the stability and accuracy of the simulation run in those circumstances. This cannot easily be achieved with common BD.

This paper is structured as follows. In Sec. II, a brief and non-rigorous mathematical introduction to the numerical solution of SDEs is given. Particularly, we illustrate the prerequisites for adaptive and higher-order algorithms in the case of general SDEs and emphasize certain pitfalls. In Sec. III, these considerations are applied to the case of Brownian motion. We construct the embedded Heun-Euler integration scheme in Sec. III.1 and incorporate RSwM in Sec. III.2, which yields the adaptive BD method. Observables can then be sampled from the resulting particle trajectories with the means illustrated in Sec. III.3. In Sec. IV, simulation results for the aforementioned Lennard-Jones systems are shown and the practical use of adaptive BD is confirmed. In Sec. V, we conclude with a summary of the shown concepts, propose possible improvements for the adaptation of timesteps, and present further ideas for use cases.

II Numerics of Stochastic Differential Equations

Brownian Dynamics of a classical many-body system of $N$ particles in $d$ spatial dimensions with positions $\mathbf{r}^{N}(t)=(\mathbf{r}^{(1)}(t),\dots,\mathbf{r}^{(N)}(t))$ at time $t$ and temperature $T$ is described by the overdamped Langevin equation. The trajectory of particle $i$ satisfies

$$\dot{\mathbf{r}}^{(i)}(t)=\frac{1}{\gamma^{(i)}}\mathbf{F}^{(i)}(\mathbf{r}^{N}(t))+\sqrt{\frac{2k_{B}T}{\gamma^{(i)}}}\,\mathbf{R}^{(i)}(t) \qquad (1)$$

where $\mathbf{F}^{(i)}(\mathbf{r}^{N}(t))$ is the total force (composed of external and interparticle contributions) acting on particle $i$, $\gamma^{(i)}$ is the friction coefficient of particle $i$ and $k_{B}$ is Boltzmann's constant; the dot denotes a time derivative. In eq. (1), the right-hand side consists of a deterministic (first summand) and a random contribution (second summand). The random forces are modeled via multivariate Gaussian white noise processes $\mathbf{R}^{(i)}(t)$ which satisfy

$$\langle\mathbf{R}^{(i)}(t)\rangle = 0, \qquad (2)$$
$$\langle\mathbf{R}^{(i)}(t)\mathbf{R}^{(j)}(t')\rangle = \mathbf{I}\,\delta_{ij}\,\delta(t-t'), \qquad (3)$$

where $\langle\cdot\rangle$ denotes an average over realizations of the random process, $\mathbf{I}$ is the $d\times d$ unit matrix, $\delta_{ij}$ denotes the Kronecker delta, and $\delta(\cdot)$ is the Dirac delta function.

One can recognize that eq. (1) has the typical form of an SDE

$$\mathrm{d}X(t)=f(X(t),t)\,\mathrm{d}t+g(X(t),t)\,\mathrm{d}W(t), \qquad (4)$$

if the dependent random variable $X$ is identified with the particle positions $\mathbf{r}^{N}$ and $W$ is a Wiener process corresponding to the integral of the Gaussian processes $\mathbf{R}^{N}=(\mathbf{R}^{(1)},\dots,\mathbf{R}^{(N)})$. As we do not consider hydrodynamic interactions, the random forces in eq. (1) are obtained by a mere scaling of $\mathbf{R}^{(i)}(t)$ with the constant prefactors $\sqrt{2k_{B}T/\gamma^{(i)}}$. Therefore, the noise in BD is additive, since $g(X(t),t)=\mathrm{const.}$ in the sense of eq. (4). This is a crucial property for the construction of a simple higher-order integrator below in Sec. III.

In computer simulations, particle trajectories are obtained from eq. (1) by numerical integration. Contrary to the numerics of ODEs, where higher-order schemes and adaptivity are textbook material, the derivation of corresponding methods for SDEs poses several difficulties which we address below. Due to these complications, SDEs of type (4) are often integrated via the Euler-Maruyama method instead, which follows the notion of the explicit Euler scheme for ODEs. Thereby, the true solution of eq. (4) with initial value $X(0)=X_{0}$ is approximated in $t\in[0,T]$ by partitioning the time interval into $n$ equidistant subintervals of length $\Delta t=T/n$. Then, for $0\leq k<n$, a timestep is defined by

$$X_{k+1}=X_{k}+f(X_{k},t_{k})\,\Delta t+g(X_{k},t_{k})\,\Delta W_{k} \qquad (5)$$

with Wiener increments $\Delta W_{k}$. An Euler-Maruyama step is also incorporated in the adaptive BD method which we construct below, applying eq. (5) to the overdamped Langevin equation (1).

Crucially, the random increments $\Delta W_{k}$ in each Euler-Maruyama step (5) have to be constructed from independent and identically distributed normal random variables with expectation value $\mathrm{E}(\Delta W_{k})=0$ and variance $\mathrm{Var}(\Delta W_{k})=\Delta t$. In practice, this is realized by drawing a new random number $R$ (or vector thereof) from a pseudo-random number generator obeying the normal distribution $\mathcal{N}(0,\Delta t)=\sqrt{\Delta t}\,\mathcal{N}(0,1)$ in each step $k$ and setting $\Delta W_{k}=R$ in eq. (5). The process of obtaining such a scalar (or vectorial) random increment $R$ will be denoted in the following by

$$R\sim\mathcal{N}(\mu,\eta) \qquad (6)$$

where $\mathcal{N}(\mu,\eta)$ is a scalar (or multivariate) normal distribution with expectation value $\mu$ and variance $\eta$.
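As a concrete illustration, the Euler-Maruyama recursion (5) with increments drawn as in eq. (6) can be sketched in a few lines of Python. The function name and the Ornstein-Uhlenbeck test case below are our own illustrative choices, not part of the method description:

```python
import numpy as np

def euler_maruyama(f, g, x0, T, n, rng):
    """Integrate dX = f(X,t) dt + g(X,t) dW on [0, T] with n equidistant steps, eq. (5)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        t = k * dt
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt), eq. (6)
        x[k + 1] = x[k] + f(x[k], t) * dt + g(x[k], t) * dW
    return x

# Example: Ornstein-Uhlenbeck process dX = -X dt + 0.5 dW (additive noise, as in BD)
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.5, 1.0, 1.0, 1000, rng)
```

Note that the noise function `g` is constant here, mirroring the additive-noise structure of eq. (1).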

As in the case of ODEs, an important measure for the quality of an integration method is its order of convergence. However, what convergence exactly means in the case of SDEs must be carefully reconsidered due to their stochasticity. We refer the reader to the pertinent literature (see e.g. Ref. 23) and only summarize the key concepts and main results in the following.

Since both the approximation $X_{k}$ and the true solution $X(t_{k})$ are random variables, one can define two distinct convergence criteria. For a certain method with discretisation $\Delta t\rightarrow 0$, weak convergence captures the error of average values, whereas strong convergence assesses the error of following a specific realization of a random process. One can show that the Euler-Maruyama method has a strong convergence order of 0.5, i.e. when increasing or decreasing the stepsize $\Delta t$, the error of the numerical solution only scales with $\Delta t^{0.5}$. For general $g(X(t),t)$ in eq. (4), the construction of schemes with higher strong order is complicated due to the occurrence of higher-order stochastic integrals. Practically, this means that the careful evaluation of additional random variables is necessary in each iteration step. These random variables then enable the approximation of the stochastic integrals. There exist schemes of Runge-Kutta type with strong orders up to 2, although mostly only strong order 1 and 1.5 Runge-Kutta methods are used due to practical concerns [31, 32].

In order to incorporate adaptivity, one needs a means of comparing two integration schemes of different strong order to formulate a local error estimate for the proposed step. If the error that occurs in a specific timestep is too large (we define below precisely what we mean by that), the step is rejected and a retrial with a smaller value of $\Delta t$ is performed. Otherwise, the step is accepted and, based on the previously calculated error, a new optimized stepsize is chosen for the next step, which hence can be larger than the previous one. This protocol makes an optimal and automatic control of the stepsize possible, meaning that $\Delta t$ can be reduced when the propagation of the SDE is difficult and increased when the error estimate allows it. Similar to the case of ODEs, it is computationally advantageous to construct so-called embedded integration schemes, analogous to e.g. Runge-Kutta-Fehlberg integrators [33], which minimize the number of costly evaluations of the right-hand side of eq. (4). Developments have been made in this direction e.g. with embedded stochastic Runge-Kutta methods [34].

There is still one caveat to consider when rejecting a step, in that one has to be careful to preserve the properties of the Wiener process. In the naive approach of simply redrawing new uncorrelated random increments, rejections would alter the random process implicitly. The reason lies in the introduction of an undesired bias. Since large random increments (generally causing larger errors) get rejected more often, the variance of the Wiener process would be systematically decreased, ultimately violating its desired properties. To avoid this effect, it must be guaranteed that once a random increment is chosen, it will not be discarded until the time interval it originally applied to has passed. Consequently, when rejecting a trial step and retrying the numerical propagation of the SDE with smaller time intervals, new random increments cannot be drawn independently for those substeps anymore. The new random increments must instead be created based on the rejected original timestep such that an unbiased Brownian path is still followed.

The formal framework for the above procedure is given by the so-called Brownian bridge theorem [35], which interpolates a Wiener process between known values at two timepoints. If $W(0)=0$ and $W(\Delta t)=R$ are given (e.g. due to the previous rejection of a timestep of length $\Delta t$ where the random increment $R$ has been drawn), then a new intermediate random value must be constructed by

$$W(q\Delta t)\sim\mathcal{N}\bigl(qR,\,(1-q)q\Delta t\bigr),\quad 0<q<1, \qquad (7)$$

such that the statistical properties of the to-be-simulated Wiener process are left intact and a substep $q\Delta t$ can be tried. The value of $q$ thereby sets the fraction of the original time interval to which the Wiener process shall be interpolated. Equation (7) extends naturally (i.e. component-wise) to the multivariate case and hence enables the construction of interpolating random vectors in a straightforward manner.
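The interpolation rule (7) is straightforward to implement and to verify numerically. The following sketch (with a hypothetical function name) draws bridged samples and can be checked against the prescribed mean $qR$ and variance $(1-q)q\Delta t$:

```python
import numpy as np

def bridge_sample(R, dt, q, rng):
    """Sample W(q*dt) given W(0)=0 and W(dt)=R via the Brownian bridge, eq. (7)."""
    assert 0.0 < q < 1.0
    mean = q * R
    var = (1.0 - q) * q * dt
    return rng.normal(mean, np.sqrt(var))

# Statistical check: bridged samples should have mean q*R and variance (1-q)*q*dt
rng = np.random.default_rng(1)
R, dt, q = 0.8, 0.1, 0.25
samples = np.array([bridge_sample(R, dt, q, rng) for _ in range(200_000)])
```

For a multivariate process, the same rule is simply applied component-wise.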

With this idea in mind, several adaptive timestepping algorithms for SDEs have been designed [36, 37, 38, 39, 40, 41, 42]. Still, most of these approaches are quite restrictive in the choice of timesteps (e.g. only allowing discrete variations such as halving or doubling) [36, 42], involve the deterministic or random part only separately into an a-priori error estimation [39, 41, 43], or store rejected timesteps in a costly manner, e.g. in the form of Brownian trees [36]. In particular, the above methods rely on precomputed Brownian paths and do not illustrate an ad hoc generation, which is desirable from a performance and memory consumption standpoint in a high-dimensional setting like BD.

In contrast, a very flexible and performant class of adaptive timestepping algorithms called Rejection Sampling with Memory (RSwM) has recently been developed by Rackauckas and Nie [24]. Their work provides the arguably optimal means for the adaptive numerical solution of SDEs while still being computationally efficient in the generation of random increments as well as in the handling of rejections. We therefore use RSwM in the construction of an adaptive algorithm and specialize the method to Brownian motion in the following.

III Application to Brownian Dynamics

Based on the remarks of Sec. II, we next proceed to apply the general framework to the case of BD with the overdamped Langevin equation (1) forming the underlying SDE. An embedded integration scheme is constructed which allows the derivation of an error estimate and an acceptance criterion in each step. Furthermore, the application of RSwM for handling rejected timesteps in BD is shown and discussed. We also illustrate how the calculation of observables from sampling of phase space functions has to be altered in a variable timestep scenario.

III.1 Embedded Heun-Euler method

Regarding the overdamped Langevin equation (1), a major simplification exists compared to the general remarks made in Sec. II. Due to the noise term being trivial, some higher-order schemes can be constructed by only evaluating the deterministic forces for different particle configurations. Crucially, no iterated stochastic integrals are needed, which would have to be approximated in general higher strong-order integrators by using additional random variables [32]. In the following, we apply a scheme similar to the one suggested by Lamba, Mattingly, and Stuart [41] for general SDEs to eq. (1) and term it the embedded Heun-Euler method due to its resemblance to the corresponding ODE integrators. Two different approximations $\bar{\mathbf{r}}^{N}_{k+1}$ and $\mathbf{r}^{N}_{k+1}$ are calculated in each trial step by

$$\bar{\mathbf{r}}^{(i)}_{k+1}=\mathbf{r}^{(i)}_{k}+\frac{1}{\gamma^{(i)}}\mathbf{F}^{(i)}(\mathbf{r}^{N}_{k})\,\Delta t_{k}+\sqrt{\frac{2k_{B}T}{\gamma^{(i)}}}\,\mathbf{R}^{(i)}_{k}, \qquad (8)$$
$$\mathbf{r}^{(i)}_{k+1}=\mathbf{r}^{(i)}_{k}+\frac{1}{2\gamma^{(i)}}\left(\mathbf{F}^{(i)}(\mathbf{r}^{N}_{k})+\mathbf{F}^{(i)}(\bar{\mathbf{r}}^{N}_{k+1})\right)\Delta t_{k}+\sqrt{\frac{2k_{B}T}{\gamma^{(i)}}}\,\mathbf{R}^{(i)}_{k}. \qquad (9)$$

Equation (8) is the conventional Euler-Maruyama step, and hence constitutes the application of eq. (5) to the overdamped Langevin equation (1). Equation (9) resembles the second-order Heun algorithm or midpoint scheme for ODEs [44] and has been formally derived for SDEs in the context of stochastic Runge-Kutta methods [45]. Since the deterministic forces $\mathbf{F}^{N}$ are evaluated at the initial particle configuration $\mathbf{r}_{k}^{N}$ both in eq. (8) and eq. (9), we have constructed an embedded integration method. This is favorable regarding computational cost, since the numerical result of $\mathbf{F}^{N}(\mathbf{r}_{k}^{N})$ is evaluated once in eq. (8) and reused in eq. (9). Only one additional computation of the deterministic forces at the intermediate particle configuration $\bar{\mathbf{r}}_{k+1}^{N}$ is then needed in the Heun step (9).

In each trial step, the same realization of random vectors $\mathbf{R}^{N}_{k}$ must be used in both eqs. (8) and (9). Recall that the random displacements have to obey the properties of multivariate Wiener increments to model the non-deterministic forces. In conventional BD with fixed stepsize $\Delta t_{k}=\Delta t=\mathrm{const.}$, these random vectors can therefore be drawn independently via $\mathbf{R}^{(i)}_{k}\sim\mathcal{N}(0,\Delta t)$ for each particle $i$. If rejections of trial steps are possible, $\mathbf{R}^{(i)}_{k}$ must instead be constructed as described in Sec. III.2.
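A minimal sketch of one embedded trial step, eqs. (8) and (9), might look as follows in Python. The units, the uniform friction constant and the function names are illustrative assumptions; the point is that the force evaluation $\mathbf{F}^N(\mathbf{r}^N_k)$ is computed only once and shared, and that both approximations use the same random increments:

```python
import numpy as np

kB_T = 1.0      # thermal energy (assumed units)
gamma = 1.0     # friction coefficient, taken equal for all particles here

def heun_euler_step(r, forces, dt, R):
    """One embedded trial step: Euler-Maruyama predictor (8) and Heun corrector (9).

    r      : (N, d) particle positions
    forces : callable returning the (N, d) deterministic force array
    R      : (N, d) Wiener increments, shared by both approximations
    """
    noise = np.sqrt(2.0 * kB_T / gamma) * R
    F0 = forces(r)                                        # evaluated once, reused below
    r_euler = r + F0 / gamma * dt + noise                 # eq. (8)
    F1 = forces(r_euler)                                  # the one extra force evaluation
    r_heun = r + 0.5 * (F0 + F1) / gamma * dt + noise     # eq. (9)
    return r_euler, r_heun

# Example with a harmonic restoring force F = -r and noise switched off for clarity
rng = np.random.default_rng(2)
r0 = rng.normal(size=(4, 3))
re, rh = heun_euler_step(r0, lambda x: -x, 0.1, np.zeros((4, 3)))
```

With $F=-r$ and zero noise, the predictor gives $0.9\,r_0$ and the corrector $0.905\,r_0$, which can serve as a quick sanity check of an implementation.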

When the embedded Heun-Euler step (8) and (9) is applied to BD, the improvement over the conventional method is twofold. Firstly, the Heun step (9) can be used as a better propagation method, as already analyzed by Fixman [46] and Iniesta and de la Torre [47]. Several further higher-order schemes, mostly of Runge-Kutta type, have been used in BD simulations [45, 48]. These methods often lead to increased accuracy and even efficiency, since larger timesteps become achievable, which outweighs the increased computational cost per step. Since the prefactor of the random force is trivial (i.e. constant) in the overdamped Langevin equation (1), higher strong-order schemes are easily constructed. This situation stands in contrast to the more complicated SDEs of type (4) with general noise term $g(X(t),t)$.

Secondly, with two approximations of different order at hand, assessing their discrepancy yields an estimate of the discretisation error in each step, which is a fundamental prerequisite for the construction of adaptive timestepping algorithms. For this, we exploit the additive structure of the noise term in eq. (1) again and recognize that such an error estimate can be obtained by comparing only the deterministic parts of eqs. (8) and (9). Nevertheless, note that the random displacements are already contained in $\bar{\mathbf{r}}^{N}_{k+1}$. This makes the deterministic part of eq. (9) implicitly dependent on the realization of $\mathbf{R}^{N}_{k}$, which is opposed to Ref. 41, where an error estimate is defined without involving the random increments at all.

At a given step $k\rightarrow k+1$, the choice of an appropriate $\Delta t_{k}$ can be automated. For this, the error of a trial step of length $\Delta t$ is evaluated.

We define a particle-wise error

$$E^{(i)}=\lVert\Delta\bar{\mathbf{r}}^{(i)}-\Delta\mathbf{r}^{(i)}\rVert=\frac{\Delta t}{2\gamma^{(i)}}\left\lVert\mathbf{F}^{(i)}(\bar{\mathbf{r}}^{N}_{k+1})-\mathbf{F}^{(i)}(\mathbf{r}^{N}_{k})\right\rVert \qquad (10)$$

with $\Delta\bar{\mathbf{r}}^{(i)}=\bar{\mathbf{r}}^{(i)}_{k+1}-\mathbf{r}^{(i)}_{k}$ and $\Delta\mathbf{r}^{(i)}=\mathbf{r}^{(i)}_{k+1}-\mathbf{r}^{(i)}_{k}$. For each particle $i$, this error is compared to a tolerance

$$\tau^{(i)}=\epsilon_{\mathrm{abs}}+\epsilon_{\mathrm{rel}}\lVert\Delta\mathbf{r}^{(i)}\rVert \qquad (11)$$

consisting of an absolute and a relative part with user-defined coefficients $\epsilon_{\mathrm{abs}}$ and $\epsilon_{\mathrm{rel}}$. Note that $\Delta\mathbf{r}^{(i)}$ is used in eq. (11), since it captures the true particle displacement more accurately than $\Delta\bar{\mathbf{r}}^{(i)}$ due to its higher order. Additionally, we stress that $\Delta\mathbf{r}^{(i)}$ decreases on average for shorter timesteps, such that $\tau^{(i)}$ indirectly depends on the trial $\Delta t$, which limits the accumulation of errors after multiple small steps.

Then a total error estimate

$$\mathcal{E}=\left\lVert\left(\frac{E^{(i)}}{\tau^{(i)}}\right)_{1\leq i\leq N}\right\rVert \qquad (12)$$

can be calculated for the trial step. While the Euclidean norm is the canonical choice in eq. (10), there is a certain freedom in choosing an appropriate norm $\lVert\cdot\rVert$ in eq. (12). The $2$- or $\infty$-norm, defined respectively as

$$\left\lVert(x^{(i)})_{1\leq i\leq N}\right\rVert_{2}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}|x^{(i)}|^{2}}, \qquad (13)$$
$$\left\lVert(x^{(i)})_{1\leq i\leq N}\right\rVert_{\infty}=\max_{1\leq i\leq N}|x^{(i)}|, \qquad (14)$$

may both come up as natural and valid options (note that we normalise the standard 2-norm by $\sqrt{N}$ to obtain an intensive quantity). However, in eq. (12), where a reduction from particle-wise errors to a global scalar error takes place, this choice has crucial implications for the kind of error that is being controlled. If the $2$-norm is used, then $\epsilon_{\mathrm{abs}}$ and $\epsilon_{\mathrm{rel}}$ set the mean absolute and relative tolerance for all particles. In practice, for large particle number $N$, this can lead to substantial single-particle errors becoming lost in the global average. Therefore, it is advisable to use the $\infty$-norm for the reduction in eq. (12) to be able to set a maximum single-particle absolute and relative tolerance, i.e. if $\mathcal{E}<1$, then $E^{(i)}<\tau^{(i)}$ for all $i=1,\dots,N$.
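Putting eqs. (10)-(12) with the $\infty$-norm (14) together, the error reduction can be sketched as below. This is a simplified illustration operating directly on the position arrays (names are ours):

```python
import numpy as np

def error_estimate(r_euler, r_heun, r_old, eps_abs, eps_rel):
    """Total error estimate, eq. (12), with the infinity-norm reduction over particles.

    r_euler, r_heun, r_old : (N, d) arrays of positions
    """
    # particle-wise error, eq. (10): Euclidean norm of the displacement discrepancy
    E = np.linalg.norm(r_euler - r_heun, axis=1)
    # higher-order displacement enters the tolerance, eq. (11)
    dr = np.linalg.norm(r_heun - r_old, axis=1)
    tau = eps_abs + eps_rel * dr
    # infinity-norm reduction, eq. (14): controls the worst single particle
    return np.max(E / tau)

# Small worked example with two particles
r_old = np.zeros((2, 3))
r_heun = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
r_euler = r_heun + np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0]])
err = error_estimate(r_euler, r_heun, r_old, eps_abs=0.01, eps_rel=0.05)
```

Here the particle-wise errors are $0.1$ and $0.2$ against tolerances $0.06$ and $0.11$, so $\mathcal{E}=0.2/0.11\approx 1.82$ and the trial step would be rejected.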

Following the design of adaptive ODE solvers and ignoring stochasticity, an expansion of an embedded pair of methods with orders $p$ and $p-1$ shows that an error estimate of type (12) is of order $p$. Thus, a timestep of length $q\Delta t$ with

$$q=\mathcal{E}^{-\frac{1}{p}} \qquad (15)$$

could have been chosen to marginally satisfy the tolerance requirement $\mathcal{E}<1$.

Considering the recommendation of Ref. 24 which discusses the application of such a timestep scaling factor to embedded methods for SDEs, we set

$$q=\left(\frac{1}{\alpha\mathcal{E}}\right)^{2}. \qquad (16)$$

Here, both a more conservative exponent than in eq. (15) is chosen and a safety factor $\alpha=2$ is introduced, as we want to account for stochasticity and for the low order of our integrators, which results in a low order of the error estimate (12).

With the choice (16), one can distinguish two possible scenarios in each trial step:

  • $q\geq 1$: Accept the trial step, i.e. set $\Delta t_{k}=\Delta t$ and advance the particle positions with eq. (9), and then continue with $k+1\rightarrow k+2$.

  • $q<1$: Reject the trial step and retry $k\rightarrow k+1$ with a smaller timestep.

In both cases, the timestep is adapted afterwards via $\Delta t\leftarrow q\Delta t$. Here and in the following, the notation $a\leftarrow b$ denotes an assignment of the value $b$ to the variable $a$.

It is advisable to restrict the permissible range of values for $q$ by defining lower and upper bounds $q_{\mathrm{min}}\leq q\leq q_{\mathrm{max}}$ such that the adaptation of $\Delta t$ is done with

$$q\leftarrow\min(q_{\mathrm{max}},\max(q_{\mathrm{min}},q)). \qquad (17)$$

While commonly chosen as $q_{\mathrm{max}}\approx 10$ and $q_{\mathrm{min}}\approx 0.2$ for ODE solvers, due to stochasticity and the possibility of drawing “difficult” random increments, $q_{\mathrm{min}}$ should certainly be decreased in the case of SDEs to avoid multiple re-rejections. Vice versa, a conservative choice of $q_{\mathrm{max}}$ prevents an overcorrection of the timestep in case of “fortunate” random events. We set $q_{\mathrm{max}}=1.2$ and $q_{\mathrm{min}}=0.001$ in practical applications to achieve a rapid adaptation in case of rejected trial steps and a careful approach to larger timesteps after accepted moves.
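The accept/reject logic with the scaling factor (16) and the clamping (17) can be condensed into a small controller. The sketch below uses our parameter values $\alpha=2$, $q_{\mathrm{min}}=0.001$, $q_{\mathrm{max}}=1.2$; function names are illustrative:

```python
def stepsize_factor(err, alpha=2.0, q_min=0.001, q_max=1.2):
    """Timestep scaling factor q, eq. (16), clamped to [q_min, q_max] as in eq. (17)."""
    q = (1.0 / (alpha * err)) ** 2
    return min(q_max, max(q_min, q))

def adapt(dt, err):
    """Accept if q >= 1, reject otherwise; in both cases rescale dt by q."""
    q = stepsize_factor(err)
    return q >= 1.0, q * dt

# A very small error estimate yields q clamped to q_max and an accepted step;
# a large error yields q < 1 and a rejection with a strongly reduced timestep.
accepted, dt_new = adapt(0.1, 0.25)
```

Note that with $\alpha=2$, acceptance ($q\geq 1$) corresponds to $\mathcal{E}\leq 0.5$, i.e. the error must undershoot the tolerance by the safety margin.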

One can also impose limits on the range of values of $\Delta t$, such as a maximum bound $\Delta t_{\mathrm{max}}\geq\Delta t$. Restriction by a minimum value $\Delta t_{\mathrm{min}}\leq\Delta t$, however, could lead to a forced continuation of the simulation with an actual local error that lies above the user-defined tolerance. Thus, this is not recommended. In our test cases described below, we see no need to restrict the timestep, as the adaptive algorithm does not show unstable behavior without such a restriction.

Most concepts of this section can be generalized in a straightforward manner to non-overdamped Langevin dynamics, where particle inertia is explicitly considered in the stochastic equations of motion. However, the embedded Heun-Euler method (8) and (9) might not be a suitable integration scheme in this case. We therefore outline the modifications that are necessary in the construction of an adaptive algorithm for general Langevin dynamics in Appendix A.

III.2 Rejection Sampling with Memory in BD

Figure 1: The trial step is rejected (a) and a retrial with a smaller value of $\Delta t$ is performed (b) if the discrepancy between eq. (8) and eq. (9) is large. To preserve the properties of the Brownian motion, the Brownian bridge (BB) theorem (7) is used to interpolate the random process at the intermediate timepoint $q\Delta t$. The difference between the bridged random sample and the rejected original random sample is stored along with the remaining time difference onto the stack $S_{f}$. This is indicated in (b), where $S_{f}$ now contains one element that holds the residual time interval (horizontal segment) and random increment (vertical arrow). Thus, in future steps, the Brownian path can be reconstructed from elements on $S_{f}$ and the properties of the Wiener process remain intact. Note that for correctness in case of re-rejections, one requires a second stack $S_{u}$ as explained below and in fig. 4.
Figure 2: After an accepted step has been performed, a new random increment is prepared for the next trial step of length $\Delta t$. If available, future information – stemming from previously rejected trial steps – has to be incorporated in the generation of new random vectors in order to retain the properties of Brownian motion. In the shown case, the future information stack contains one element, which is popped and accumulated to the new random increment and time interval. The stack is now empty and a difference $\Delta t_{\mathrm{gap}}$ to the goal timestep $\Delta t$ remains. For this gap, a new uncorrelated random increment has to be generated from $\mathcal{N}(0,\Delta t_{\mathrm{gap}})$ to complete the preparation of the next trial step.
Figure 3: The situation is similar to fig. 1, where a step is rejected (a) and then retried (b). Here, the future information reaches further than the goal timestep Δt. In this case, the Brownian bridge (BB) has to be applied for the interpolation of an element from the future information stack. Generally, one pops the elements of the future stack S_f one after another and accumulates the random increments and time intervals until the element which crosses Δt is reached, to which the bridging theorem is then applied. In the shown example, the interpolation is done with the first element of S_f, since it already surpasses Δt. As in fig. 1, the remainder of the bridged increment and time interval is pushed back onto S_f.
Figure 4: To keep track of the random increments that are used in the construction of 𝐑, a second stack S_u is introduced. In (a), the situation of fig. 2 is shown as an example, where 𝐑 consists of an element of S_f (stemming from a previous rejection) and a Gaussian contribution 𝐑_gap ∼ 𝒩(0, Δt_gap). In case of rejection, the elements that were popped from S_f as well as newly drawn increments such as 𝐑_gap would be lost irrecoverably. By using S_u as intermediate storage, all contributions to 𝐑 can be transferred back to S_f such that no information about drawn random increments is lost and the same Brownian path is still followed. These considerations apply as well when random vectors are drawn via the Brownian bridge theorem, as e.g. in figs. 1 and 3.

We still have to prescribe the generation of the random vectors 𝐑_k^N that appear in both eqs. (8) and (9). For clarity, we consider a single trial step and denote the set of corresponding random increments by 𝐑 (the sub- and superscripts are dropped). Obviously, 𝐑 can in general no longer be chosen independently but has to incorporate previously rejected timesteps as long as they are relevant, i.e. as long as their original time interval has not yet passed. For this, we apply the Rejection Sampling with Memory (RSwM) algorithm to BD. Rackauckas and Nie [24] describe three variants of RSwM, which they refer to as RSwM1, RSwM2, and RSwM3, and which differ in the generality of the possible timesteps and in algorithmic complexity. We aim to reproduce RSwM3 because it enables optimal timestepping and still ensures correctness in special but rare cases such as re-rejections.

Common to all RSwM algorithms is the storage of parts of rejected random increments on a stack S_f. We refer to elements of this stack as future information, since they have to be considered in the construction of future steps. This becomes relevant as soon as a step is rejected due to a too large error. Then a retrial is performed by decreasing Δt and drawing bridging random vectors via eq. (7). The difference between the rejected and the bridging random vectors must not be forgotten but rather be stored on the future information stack, cf. fig. 1.
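To make the bridging step concrete, the following minimal Python sketch splits a Wiener increment at a retrial point. It assumes the standard form of the Brownian bridge theorem underlying eq. (7): given a total increment 𝐑 over an interval Δt, the increment over the first fraction q of the interval is Gaussian with mean q𝐑 and variance q(1−q)Δt per component. The function name is illustrative and not taken from our implementation.

```python
import numpy as np

def bridge_split(R, dt, q, rng):
    """Split a Wiener increment R over an interval of length dt at the
    fraction q via the Brownian bridge: the increment over [0, q*dt] is
    Gaussian with mean q*R and variance q*(1 - q)*dt per component; the
    remainder is fixed by the constraint R_first + R_rest = R."""
    R = np.asarray(R, dtype=float)
    R_first = q * R + np.sqrt(q * (1.0 - q) * dt) * rng.standard_normal(R.shape)
    R_rest = R - R_first  # pushed onto S_f together with the time (1 - q)*dt
    return R_first, R_rest

rng = np.random.default_rng(0)
dt = 1e-4
R = np.sqrt(dt) * rng.standard_normal(3)   # rejected increment over dt
R_first, R_rest = bridge_split(R, dt, 0.5, rng)
assert np.allclose(R_first + R_rest, R)    # the Brownian path is preserved
```

Because the two parts sum to the original increment by construction, the previously drawn Brownian path is followed exactly, only resolved at a finer time resolution.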

On the other hand, if a step is accepted, new random vectors have to be prepared for the next time interval Δt. If no future information is on the stack, this can be done conventionally by drawing Gaussian distributed random vectors according to 𝐑 ∼ 𝒩(0, Δt). If the stack S_f is not empty and future information is thus available, its elements are popped one after another, i.e. they are taken successively from the top. The random vectors as well as the time interval of each popped element are accumulated in temporary variables, which then hold the sum of the respective time intervals and random vectors. The stack may become empty before the accumulated time reaches Δt, i.e. a difference Δt_gap may remain. In this case, one draws new random vectors 𝐑_gap ∼ 𝒩(0, Δt_gap) to compensate for the difference and adds them to the accumulated ones, cf. fig. 2, before attempting the trial step. Otherwise, if the future information reaches further than Δt, there is one element that crosses Δt. One takes this element, splits it into a part before Δt and a part after Δt, and draws bridging random vectors for the former according to eq. (7), which are again added to the accumulated ones. The remainder of this element (after Δt) is pushed back onto the future information stack, and the step Δt can be tried with the accumulated vectors set as 𝐑 in eqs. (8) and (9), cf. fig. 3.

At this stage, we have constructed RSwM2 for BD, which is not yet capable of handling all edge cases, as pointed out in Ref. 24. If future information is popped from the stack to prepare 𝐑 for the next step, and this next step is then rejected, all popped information is lost irrecoverably. To circumvent this, one adds a second stack S_u that stores the information which is currently used for the construction of 𝐑. We refer to elements of this stack as information in use. If a step is rejected, the information in use can be moved back to the future information stack so that no elements are lost in multiple retries, cf. fig. 4. With this additional bookkeeping, correctness and generality are ensured in all cases and the RSwM3 algorithm is complete.
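The stack bookkeeping described above can be summarized in a schematic Python sketch. This is a simplified skeleton of the RSwM logic, not the pseudocode of alg. 2; the function names, the data layout of the stack elements, and the handling of edge cases are illustrative assumptions.

```python
import numpy as np

def draw(dt, rng, dim=3):
    """Fresh Wiener increment over an interval dt: N(0, dt) per component."""
    return np.sqrt(dt) * rng.standard_normal(dim)

def bridge(R, dt, q, rng):
    """Brownian bridge: increment over the first fraction q of an interval
    of length dt whose total increment R is already known."""
    return q * R + np.sqrt(q * (1.0 - q) * dt) * rng.standard_normal(R.shape)

def prepare_increment(dt_goal, S_f, S_u, rng, dim=3):
    """Assemble the random increment R for the next trial step of length
    dt_goal, consuming future information from S_f; everything that enters
    R is recorded on S_u so that a later rejection can restore it."""
    t_acc = 0.0
    R_acc = np.zeros(dim)
    while S_f:
        dt_i, R_i = S_f.pop()
        if t_acc + dt_i <= dt_goal:            # element fits entirely
            t_acc += dt_i
            R_acc += R_i
            S_u.append((dt_i, R_i))
        else:                                  # element crosses dt_goal: bridge it
            q = (dt_goal - t_acc) / dt_i
            R_before = bridge(R_i, dt_i, q, rng)
            S_f.append((dt_i * (1 - q), R_i - R_before))  # remainder back to S_f
            S_u.append((dt_i * q, R_before))
            return R_acc + R_before
    dt_gap = dt_goal - t_acc                   # stack ran empty: fill the gap
    R_gap = draw(dt_gap, rng, dim)
    S_u.append((dt_gap, R_gap))
    return R_acc + R_gap

def on_reject(S_f, S_u):
    """Move all information in use back to the future stack, preserving
    chronological order for subsequent pops (RSwM3 bookkeeping)."""
    while S_u:
        S_f.append(S_u.pop())
```

Since only pushes and pops at the top of the plain Python lists are used, each operation is constant time, mirroring the "last in, first out" organization discussed in the text.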

Notably, by structuring the information into stacks where only the top element is directly accessible (“last in, first out”), the chronological time order is automatically kept intact, so that one only has to store time intervals and no absolute timepoints. Furthermore, searching or sorting of elements is avoided entirely, which makes all operations 𝒪(1) and leads to efficient implementations.

We point out that the original RSwM3 rejection branch as given in Ref. 24 was not entirely correct; we draw a comparison to our rectifications in Appendix B. This issue has been discussed in https://github.com/SciML/DiffEqNoiseProcess.jl/pull/80 and has since been fixed in the reference implementation DifferentialEquations.jl [50]. Crucially, the correction not only applies to the case of BD, but is relevant for the solution of general SDEs as well. A full pseudocode listing of one adaptive BD trial step utilizing RSwM3 is given in alg. 2 in Appendix C along with further explanation of technical details. The corresponding implementation in C++ can be found at https://gitlab.uni-bayreuth.de/bt306964/mbd.

III.3 Sampling of observables

Within BD, observables can be obtained from the sampling of configuration space functions. As an example, consider the one-body density profile ρ(𝐫), which is defined as the average of the density operator ρ̂(𝐫, 𝐫^N) = ∑_{i=1}^{N} δ(𝐫 − 𝐫_i).

In simulations of equilibrium or steady states, one can use a time average over a suitably long interval [0, T] to measure such quantities, i.e. for a general operator Â(X, 𝐫^N),

A(X)\approx\frac{1}{T}\int_{0}^{T}\mathrm{d}t\,\hat{A}(X,\mathbf{r}^{N}(t)). (18)

Note that the remaining dependence on X can consist of arbitrary scalar or vectorial quantities or can also be empty. For example, X = (𝐫, 𝐫′) or X = 𝐫 for general two- or one-body fields, X = r for the isotropic radial distribution function, or X = ∅ for bulk quantities such as pressure or heat capacity.

Practically, Â(X, 𝐫^N) is evaluated in each step and an X-resolved histogram is accumulated, which yields A(X) after normalization. Considering the numerical discretisation of [0, T] into n timesteps of constant length Δt within a conventional BD simulation, eq. (18) is usually implemented as

A(X)\approx\frac{1}{n}\sum_{k=1}^{n}\hat{A}(X,\mathbf{r}_{k}^{N}). (19)

In adaptive BD with varying timestep length, one cannot use eq. (19) directly, since this would cause disproportionately many contributions to A(X) from small timesteps and would thus lead to a biased evaluation of any observable. Formally, the quadrature in eq. (18) now has to be evaluated numerically at non-equidistant integration points.

Therefore, if the time interval [0, T] is discretized into n non-equidistant timepoints 0 = t_1 < t_2 < … < t_n < t_{n+1} = T with Δt_k = t_{k+1} − t_k,

A(X)\approx\frac{1}{T}\sum_{k=1}^{n}\Delta t_{k}\,\hat{A}(X,\mathbf{r}_{k}^{N}) (20)

constitutes a generalization of eq. (19) for this case that enables a straightforward sampling of observables within adaptive BD.
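As an illustration, the time-weighted average (20) applied to the planar density profile ρ(z) can be sketched in a few lines of Python. The function name, the argument layout, and the restriction to a planar geometry are illustrative assumptions, not part of our implementation.

```python
import numpy as np

def sample_density_profile(trajectory, z_edges, box_area):
    """Time-weighted accumulation of a one-body density profile rho(z)
    from non-equidistant timesteps in the spirit of eq. (20): each
    configuration contributes with weight dt_k instead of 1/n.
    trajectory: iterable of (dt_k, positions) with positions of shape (N, 3).
    """
    hist = np.zeros(len(z_edges) - 1)
    T = 0.0
    for dt_k, pos in trajectory:
        counts, _ = np.histogram(pos[:, 2], bins=z_edges)
        hist += dt_k * counts   # weight by the timestep length
        T += dt_k
    dz = np.diff(z_edges)
    return hist / (T * box_area * dz)   # normalize to a number density
```

With all Δt_k equal, the expression reduces to the conventional average (19), so the same routine can serve both fixed and adaptive timestepping.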

An alternative technique can be employed in scenarios where the state of the system shall be sampled on a sparser time grid than the one given by the integration timesteps. Then, regular sampling points can be defined that must be hit exactly by the timestepping procedure (e.g. by artificially shortening the preceding step). On this regular time grid, eq. (19) can be used again. Especially in non-equilibrium situations and for the measurement of time-dependent correlation functions such as the van Hove two-body correlation function [52, 53, 54, 55, 56, 57], this method might be beneficial, since quantities can still be evaluated at certain timepoints rather than having to construct a time-resolved histogram consisting of finite time intervals. Note, however, that timestepping to predefined halting points is not yet considered in alg. 2.
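The shortening of a step to hit a regular sampling grid amounts to a simple clamp of the proposed timestep. The following helper is hypothetical and only illustrates the idea; the adaptive stepsize selection itself remains untouched.

```python
def next_timestep(t, dt_proposed, dt_sample):
    """Shorten the proposed adaptive timestep so that the regular
    sampling points t = k * dt_sample are hit exactly (illustrative
    helper, not part of alg. 2)."""
    t_next_sample = (int(t / dt_sample) + 1) * dt_sample
    return min(dt_proposed, t_next_sample - t)
```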

IV Simulation results

To test and illustrate the adaptive BD algorithm, we investigate the truncated and shifted Lennard-Jones fluid with interaction potential

\Phi_{\mathrm{LJTS}}(r)=\begin{cases}\Phi_{\mathrm{LJ}}(r)-\Phi_{\mathrm{LJ}}(r_{c}),&r<r_{c}\\0,&r\geq r_{c}\end{cases} (21)

where

\Phi_{\mathrm{LJ}}(r)=4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right] (22)

and r is the interparticle distance. We set a cutoff radius of r_c = 2.5σ throughout the next sections and use reduced Lennard-Jones units, which yield the reduced timescale τ = σ²γ/ϵ.
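Equations (21) and (22) translate directly into code; the following minimal Python sketch (with illustrative function names and reduced units where ϵ = σ = 1) makes the shift at the cutoff explicit.

```python
import numpy as np

def phi_lj(r, eps=1.0, sigma=1.0):
    """Plain Lennard-Jones pair potential, eq. (22)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def phi_ljts(r, eps=1.0, sigma=1.0, r_c=2.5):
    """Truncated and shifted Lennard-Jones potential, eq. (21): shifted
    by phi_lj(r_c) so that it vanishes continuously at the cutoff."""
    r = np.asarray(r, dtype=float)
    return np.where(r < r_c, phi_lj(r, eps, sigma) - phi_lj(r_c, eps, sigma), 0.0)
```

The shift removes the discontinuity of a bare truncation at r = r_c, at the price of a slightly raised potential minimum.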

A common problem in conventional BD simulations is the choice of an appropriate timestep. Obviously, a too small value of Δt – while leading to accurate trajectories – has a strong performance impact, hindering runs which would reveal long-time behavior and prohibiting extensive sampling periods, which are desirable from the viewpoint of the time average (20). Still, Δt must be kept below a certain threshold above which results might be wrong or the simulation becomes unstable. Unfortunately, due to the absence of any intrinsic error estimate, judging a chosen Δt is generally not straightforward. For instance, one can merely observe the stability of a single simulation run and accept Δt if sensible output is produced and certain properties of the system (such as its energy) are well-behaved. Another possibility is the costly execution of several simulation runs with differing timesteps, thereby cross-validating the gathered results. Consequently, a true a priori choice of the timestep is not possible in general and test runs cannot always be avoided.

With adaptive timestepping, this problem is entirely prevented, as one does not need to make a conscious choice for Δt at all. Instead, the maximum local error of a step is restricted by the user-defined tolerance (11), ensuring correctness of results up to a controllable discretisation error. This does come at the moderate cost of overhead due to the additional operations per step necessary in the embedded Heun-Euler method and the RSwM algorithm. However, as we demonstrate in the following, the benefits of this method far outweigh the cost even in simple situations.
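The acceptance logic can be sketched as follows. This Python fragment is written in the spirit of eqs. (10)-(12) and (16)-(17) but is not their exact form: the mixed tolerance, the ∞-norm reduction, the exponent α, the safety factor, and the clamping bounds q_min and q_max are illustrative stand-ins.

```python
import numpy as np

def local_error(dr_euler, dr_heun, r_new, eps_abs=0.05, eps_rel=0.05):
    """Particle-wise discrepancy between the Euler-Maruyama and Heun
    proposals, scaled by a mixed absolute/relative tolerance and reduced
    to a scalar with the inf-norm (a sketch, not the exact eqs. (10)-(12))."""
    tol = eps_abs + eps_rel * np.linalg.norm(r_new, axis=1)
    return (np.linalg.norm(dr_heun - dr_euler, axis=1) / tol).max()

def adapt_timestep(dt, E, q_min=0.2, q_max=1.5, alpha=0.5, safety=0.9):
    """Accept the trial step if the normalized error E is at most one and
    rescale dt by a clamped factor q ~ E**(-alpha) (illustrative stand-in
    for eqs. (16)-(17))."""
    q = q_max if E == 0.0 else safety * E ** (-alpha)
    q = min(max(q, q_min), q_max)
    return E <= 1.0, q * dt
```

A rejected step (E > 1) is retried with the reduced Δt and bridged random increments; an accepted step carries the moderately enlarged Δt over to the next trial.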

IV.1 Lennard-Jones bulk fluid in equilibrium

In the following, we compare results from conventional BD to those obtained with adaptive BD. We first consider a bulk system of size 7×7×6σ³ with periodic boundary conditions at temperature k_B T = 0.8ϵ, consisting of N = 100 Lennard-Jones particles initialized on a simple cubic lattice. In the process of equilibration, a gaseous and a liquid phase emerge, and the system therefore becomes inhomogeneous.

With non-adaptive Euler-Maruyama BD, a timestep Δt_fix = 10⁻⁴τ is chosen to consistently converge to this state. This value is small enough to avoid severe problems, which occur reproducibly for Δt_fix ≳ 5·10⁻⁴τ, where the simulation occasionally crashes or produces sudden jumps in energy due to faulty particle displacements.

In contrast, the timestepping of an adaptive BD simulation run is shown both as a timeseries and as a histogram in fig. 5. The tolerance coefficients in eq. (11) are thereby set to ε_abs = 0.05σ and ε_rel = 0.05, and the ∞-norm is used in the reduction from particle-wise to global error (12). One can see that large stepsizes up to Δt ≈ 6·10⁻⁴τ occur without the error exceeding the tolerance threshold. The majority of steps can be executed with a timestep larger than the value Δt_fix = 10⁻⁴τ. On the other hand, the algorithm is able to detect moves that would cause large errors, where it decreases Δt appropriately. It is striking that in the shown sample, minimum timesteps as small as Δt = 3·10⁻⁶τ occur. This is far below the stepsize of Δt_fix = 10⁻⁴τ chosen in the fixed-timestep BD run above, which indicates that although the simulation is stable for this value, there are still steps which produce substantial local errors in the particle trajectories. For even larger values of Δt_fix, it is those unresolved collision events that cause unphysical particle displacements, which then cascade and crash the simulation run. In comparison, the adaptive BD run yields a mean timestep of Δt̄ = 3·10⁻⁴τ, which is larger than the heuristically chosen fixed timestep.

Figure 5: The evolution of chosen timesteps for accepted moves is shown in (a). To accentuate the distribution of values of Δt further, moving averages taken over the surrounding n_avg points of a respective timestep record are depicted. One can see that the timestep Δt varies rapidly in a broad range between Δt ≈ 3·10⁻⁶τ and Δt ≈ 6·10⁻⁴τ around a mean value of Δt ≈ 2.8·10⁻⁴τ. In the inset of (a), a close-up of the timestep behaviour is given, which reveals the rapid reduction of Δt at jammed states and the quick recovery afterwards. In (b), the relative distribution of the data in (a) is illustrated. It is evident that the majority of steps can be executed with a large timestep, leading to increased performance of the BD simulation. On the other hand, in the rare event of a step which would produce large errors, the timestep is decreased appropriately to values far below those that would be chosen in a fixed-timestep BD run.
Performance and overhead

Per step, due to the additional evaluation of the Heun method (9), twice the computational effort is needed to calculate the deterministic forces compared to a single evaluation of the Euler-Maruyama step (8). This procedure alone, however, has the benefit of increased accuracy and hence makes larger timesteps feasible.

The computational overhead due to adaptivity with RSwM3 comes mainly from storing random increments on both stacks and applying this information in the construction of new random forces. Therefore, potential for further optimization lies in a cache-friendly organization of the stacks in memory as well as in circumventing superfluous access and copy instructions altogether. The latter considerations suggest that avoiding rejections, and more importantly avoiding re-rejections, is crucial for good performance and reasonable memory consumption of the algorithm. As already noted, in practice this can be accomplished with a small value of q_min, which allows for a rapid reduction of Δt in the case of unfortunate random events, and a conservative value of q_max to avoid too large timesteps after moves with fortunate random increments. In our implementation, the cost of the RSwM routines is estimated to lie below 10% of the total runtime in common situations.

IV.2 Non-equilibrium – formation of colloidal films

While the benefits of adaptive BD are already significant in equilibrium, its real advantages over conventional BD become particularly clear in non-equilibrium situations. Due to the rich phenomenology – which still lacks a thorough understanding – and the important practical applications, the dynamics of colloidal suspensions near substrates and interfaces has been the center of attention in many recent works [25, 27, 30, 58, 28, 26, 29]. Nevertheless, the simulation of time-dependent interfacial processes is far from straightforward and especially for common BD, stability issues are expected with increasing packing fraction.

In the following, we apply adaptive BD to systems of Lennard-Jones particles and simulate evaporation of the implicit solvent. This is done by introducing an external potential that models the fixed substrate surface as well as a moving air-solvent interface. As in Ref. 29, we set

V^{(i)}_{\mathrm{ext}}(z,t)=B\left(\mathrm{e}^{-\kappa(z-\sigma^{(i)}/2)}+\mathrm{e}^{\kappa(z+\sigma^{(i)}/2-L(t))}\right) (23)

to vary only in the z-direction, and assume periodic boundary conditions in the remaining two directions. The value of κ controls the softness of the substrate and of the air-solvent interface, while B sets their strength. We distinguish between the different particle sizes σ^(i) to account for the offset of the particle centers at which the external potential is evaluated. The position L(t) of the air-solvent interface is time-dependent and follows a linear motion L(t) = L₀ − vt with initial position L₀ and constant velocity v.

In the following, systems in a box which is elongated in the z-direction are considered. To ensure dominating non-equilibrium effects, values for the air-solvent interface velocity are chosen which yield large Péclet numbers Pe^(i) = L₀v/D^(i) ≫ 1, where D^(i) = k_B T/γ^(i) is the Einstein-Smoluchowski diffusion coefficient and γ^(i) ∝ σ^(i) due to Stokes friction.
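The external potential (23) is straightforward to evaluate; a minimal Python sketch reads as follows. The default values B = 7000ϵ and κ = 1/σ are those quoted in the caption of fig. 6, while L₀ and v are illustrative placeholders that must be set per system.

```python
import numpy as np

def v_ext(z, t, B=7000.0, kappa=1.0, sigma_i=1.0, L0=50.0, v=1.0):
    """Soft-wall external potential of eq. (23) in reduced LJ units: a
    fixed substrate at z = 0 and an air-solvent interface moving as
    L(t) = L0 - v*t; sigma_i is the diameter of species i."""
    L_t = L0 - v * t
    return B * (np.exp(-kappa * (z - sigma_i / 2))
                + np.exp(kappa * (z + sigma_i / 2 - L_t)))
```

The two exponentials confine the particles between the substrate and the interface; as t grows, the second wall sweeps towards the substrate and compresses the suspension.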

When attempting molecular simulations of such systems in a conventional approach, one is faced with a non-trivial choice of the timestep length Δt_fix, since it has to be large enough to be efficient in the dilute phase but also small enough to capture the motion in the dense final state of the system. A cumbersome solution would be a subdivision of the simulation into subsequent time intervals, thereby choosing timesteps that suit the density of the current state. This method is beset by problems, as the density profile becomes inhomogeneous and is not known a priori.

We show that by employing the adaptive BD method of Sec. III, these issues become non-existent. Concerning the physical results of the simulation runs, the automatically chosen timestep is indeed closely connected to the increasing packing fraction as well as to the structural properties of the respective colloidal system, as will be discussed below. Similar test runs as the ones shown in the following but carried out with constant timestepping and a naive choice of Δt_fix frequently lead to instabilities in the high-density regimes, ultimately resulting in unphysical trajectories or crashes of the program. Due to the possibility of stable and accurate simulations of closely packed phases with adaptive BD, we focus on the investigation of the final conformation of the colloidal suspension.

IV.2.1 Single species, moderate driving

Firstly, a single-species Lennard-Jones system is studied and the box size is chosen as 8×8×50σ³. The Lennard-Jones particles are initialized on a simple cubic lattice with lattice constant 2σ, and the velocity of the air-solvent interface is set to v = 1σ/τ. We set ε_abs = 0.01σ to accommodate the smaller particle displacements in the dense phase and relax the relative tolerance to ε_rel = 0.1.

As one can see in fig. 6, the timestep is automatically adjusted as a reaction to the increasing density. In the course of the simulation run, the average number density increases from approximately 0.07σ⁻³ to 1.4σ⁻³, although the particles first accumulate near the air-solvent interface. Remarkably, even the freezing transition at the end of the simulation run can be captured effortlessly. This illustrates the influence of collective order on the chosen timestep. With rising density of the Lennard-Jones fluid, the timestep decreases on average due to the shorter mean free path of the particles and more frequent collisions. At this stage, Δt varies significantly, and very small timesteps maneuver the system safely through jammed states of the disordered fluid. In the process of crystallization, the timestep decreases rapidly to accommodate the reduced free path of the particles before it shortly relaxes to a plateau when crystal order is achieved. Additionally, the variance of Δt decreases and, contrary to the behaviour in the liquid phase, fewer jammed states can be observed. This is due to the crystal order of the solid phase, which prevents frequent close encounters of particles and hence alleviates the need for a rapid reduction of Δt.

Figure 6: The timestepping evolution in a system with a moving interface modeled via eq. (23) with parameters B = 7000ϵ, κ = 1/σ (adopted from Ref. 29), and v = 1σ/τ is shown. We depict both the actual timeseries, whose envelope visualizes the maximum and minimum values of Δt, as well as a moving average over the surrounding n_avg points to uncover the mean chosen timestep. As the density increases and the propagation of the overdamped Langevin equation becomes more difficult, the timestep is systematically decreased. This is an automatic process that needs no user input and that can even handle the freezing transition which occurs at the end of the simulation. To illustrate this process, typical snapshots of the system are given at different timepoints, which are marked in the timestepping plot as A, B, and C and correspond to a dilute, a prefreezing and a crystallized state. Especially in the transition from B to C, frequent particle collisions occur, which are resolved carefully by the adaptive BD method.

IV.2.2 Single species, strong driving

If the evaporation rate is increased via a faster moving air-solvent interface, the final structure of the colloidal suspension is altered. In particular, with rising air-solvent velocity v, a perfect crystallization process is hindered and defects in the crystal structure occur. This is reflected in the timestepping evolution, as jammed states still happen frequently in the dense regime due to misaligned particles. Therefore, unlike in the previous case of no defects, sudden jumps to very small timesteps can still be observed, as depicted in fig. 7.

If the velocity of the air-solvent interface is increased even further, amorphous states can be reached, where no crystal order prevails. Nevertheless, our method is still able to resolve the particle trajectories of those quenched particle configurations so that the simulation remains both stable and accurate even in such demanding circumstances.

Figure 7: Time evolution and moving average of the timestep Δt for the air-solvent interface velocity increased to v = 50σ/τ in a larger system of box size 10×10×100σ³. In this case, defects are induced during the crystallization process, which prevent a perfect crystal order of the final particle configuration. Jammed states still occur frequently, and the timestep has to adapt rapidly to resolve particle displacements correctly. This is depicted in the inset, which shows that sudden jumps to small values of Δt still occur in the high-density regime, indicating error-prone force evaluations due to prevailing defects in the crystal structure.

IV.2.3 Binary mixture

Returning to stratification phenomena, we consider mixtures of particles differing in size. Depending on the Péclet numbers of the big (subscript b) and small (subscript s) particle species and their absolute values (i.e. whether Pe ≪ 1 or Pe ≫ 1), different structures of the final phase can emerge, ranging from “small-on-top” or “big-on-top” layering to more complicated conformations [29]. For large Péclet numbers 1 ≪ Pe_s < Pe_b, studies of Fortini et al. [58] have shown the formation of a “small-on-top” structure, i.e. the accumulation of the small particle species near the moving interface. Additionally, in the immediate vicinity of the air-solvent boundary, a thin layer of big particles remains trapped due to their low mobility.

In the following, a binary mixture of Lennard-Jones particles with diameters σ_b = σ and σ_s = 0.5σ is simulated in a system of size 10×10×100σ³, and the velocity of the air-solvent interface is set to v = 1σ/τ as before. We initialize N_b = 768 big and N_s = 4145 small particles uniformly in the simulation domain and particularly focus our analysis on the structure of the final dense phase.

As the simulation advances in time, the observations of Fortini et al. [58] can be verified. A thin layer of big particles at the air-solvent interface, followed by a broad accumulation of small particles, emerges. The timestepping evolution shows similar behaviour to the single-species case shown in fig. 6. On average, the value of Δt decreases, and throughout the simulation, jumps to low values occur repeatedly when interparticle collisions have to be resolved accurately.

As the system approaches the dense regime, the final particle distribution of the dried colloidal suspension can be investigated. One can see that a thin layer of big particles develops in close proximity to the substrate, similar to the one forming throughout the simulation at the air-solvent interface. This process can again be explained by the lower mobility of the big particles compared to the small ones, which prevents a uniform diffusion away from the substrate.

Moreover, as the packing fraction increases further, the structure of the interfacial layers of the trapped big species changes. While only a single peak is visible at first, a second peak develops in the last stages of the evaporation process. This phenomenon occurs both at the substrate and at the air-solvent interface, although it appears earlier and more pronounced at the former.

Even for this simple model mixture of colloidal particles which differ only in diameter, the final conformation after evaporation of the implicit solvent possesses an intricate structure. Both at the substrate and at the top of the film, a primary and a secondary layer of the big particle species build up. These layers enclose a broad accumulation of the small particle species, which is by no means uniform but rather develops a concentration gradient in the positive z-direction, outlined by peaks of the respective density profile close to the big particle layers. The formation of the described final state is illustrated in fig. 8, where the density profiles of both particle species are shown for two timepoints at the end of the simulation run.

Figure 8: The density profiles ρ_b(z) and ρ_s(z) of the big and small particle species in a stratifying colloidal suspension of a binary Lennard-Jones mixture with particle diameters σ_b = σ and σ_s = 0.5σ are shown at two timepoints of the simulation. In (a), single layers of the big species have already emerged near the substrate and the air-solvent interface, which enclose the dominating small particles in the middle of the box. At a later time (b), when the air-solvent interface has moved further towards the substrate and the packing fraction has hence increased, a second layer of the big particles forms at both interfaces and the final concentration gradient of the small species manifests within the dried film. Crucially, the intricate details of the final conformation demand an accurate numerical treatment of the dynamics of the closely packed colloidal suspension, to which adaptive BD offers a feasible solution.

V Conclusion

In this work, we have constructed a novel method for BD simulations by applying recently developed algorithms for the adaptive numerical solution of SDEs to the case of Brownian motion as described on the level of the overdamped Langevin equation (1). For the evaluation of a local error estimate in each trial step, we have complemented the simple Euler-Maruyama scheme (8) found in common BD with a higher-order Heun step (9). By comparing their discrepancy with a user-defined tolerance (11) composed of an absolute and a relative contribution, we were able to impose a criterion (16) for the acceptance or rejection of the trial step and for the adaptation of Δt. Special care was thereby required in the reduction from particle-wise errors (10) to a global scalar error estimate (12).

Due to the stochastic nature of Brownian motion, the rejection and subsequent retrial of a timestep could not be done naively by redrawing uncorrelated random vectors in this case. Instead, a sophisticated method had to be employed to ensure the validity of the statistical properties of the random process in eq. (1) and hence to avoid biased random forces from a physical point of view. We have illustrated that the Brownian bridge theorem (7) resolves this problem formally, and that RSwM [24] provides a feasible implementation method based on this theorem. Hence, we specialized RSwM to the case of Brownian motion in Sec. III.2 and constructed the fully adaptive BD scheme outlined in alg. 2. A correction to the original algorithm of Ref. 24 is given in Appendix B and a generalization of adaptive BD to non-overdamped Langevin dynamics is outlined in Appendix A.

To test our framework, we applied the adaptive BD method to both equilibrium and non-equilibrium Lennard-Jones systems, focusing on the analysis of individual trajectories. Even in the standard case of a phase-separating bulk fluid, we could verify that the use of RSwM induced no significant computational overhead and that a performance gain could be achieved compared to the fixed-timestep Euler-Maruyama method. This is complemented by practical convenience, since Δt need not be chosen a priori.

The real advantages of adaptive BD become clear in more demanding situations where a fixed timestep would ultimately lead to inaccuracy and instability of the simulation without manual intervention. We have shown this with non-equilibrium systems of drying colloidal suspensions and have modeled the rapid evaporation of the implicit solvent by a moving interface as in Ref. 29. With our method, we achieved efficient timestepping and unconditional stability of the simulation even in the final stages where the packing fraction increases rapidly and the final structural configuration of the dried film develops. Particularly, in a single-species Lennard-Jones system, the freezing transition could be captured effortlessly even if crystal defects remained as a result of strong external driving. In the more elaborate case of a binary mixture of different-size particles, the intricate structure of the colloidal film could be resolved accurately. Here, we reported the development of a dual layer of the big particle species both at the substrate as well as at the top interface, while a concentration gradient of the small particle species could be observed in-between. This shows that the method can be used to predict the arrangement of stratified films efficiently and with great detail, which could be helpful in the determination of their macroscopic properties.

We expect similar benefits in all sorts of situations where fundamental changes within the system call for more accurate or more relaxed numerical timestepping. For instance, this could be the case in sheared systems [59, 60], for sedimenting [61, 62] or cluster-forming [63] colloidal suspensions, in microrheological simulations [64], for phase transitions and especially the glass transition [65], in the formation of crystals and quasicrystals [66, 67], in active matter [68, 69, 70, 71], possibly including active crystallization [72, 73], as well as for responsive colloids [74]. In any case, adaptive BD can help in a more accurate measurement of observables, since the evaluation of the average (20) is not hampered by erroneous particle configurations, which might occur in conventional BD. Especially for quantities with a one-sided bias, such as absolute or square values of particle forces and velocities, the absence of outliers in the set of samples used in eq. (20) is essential to yield valid results.

We finally outline possible improvements of the adaptive BD method itself, where several aspects come to mind. Firstly, the choices of the parameters α\alpha in eq. (16) as well as qminq_{\mathrm{min}} and qmaxq_{\mathrm{max}} in eq. (17) were made mostly heuristically. We note that the influence of those parameters on the performance of the algorithm has been investigated in Ref. 24, but emphasize that the results could differ in our high-dimensional setting of Brownian motion.
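For orientation, the interplay of eqs. (16) and (17) amounts to computing a proportional factor from the error estimate and then clamping it. The Python sketch below assumes a generic power-law form with safety factor alpha; the exponent and all default values are illustrative assumptions, since only the clamping structure is meant to be shown here:

```python
def stepsize_factor(err, alpha=0.9, qmin=0.2, qmax=1.2):
    """Sketch of the stepsize control in the spirit of eqs. (16) and (17):
    compute a proportional factor q from the tolerance-scaled error
    estimate err (err = 1 means "exactly on tolerance"), then clamp it
    to [qmin, qmax].  The power law and the defaults are assumptions,
    not the paper's exact choices.  A result q < 1 triggers rejection."""
    q = alpha * max(err, 1e-12) ** -0.5
    return min(qmax, max(qmin, q))
```

The clamping bounds encode the trade-off discussed below: a small qmax limits the growth of the timestep after steps with coincidentally low error, at the price of slow recovery after a drop.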

Secondly, recently developed stepsize adaptation methods which employ control theory could be helpful. In such schemes, Δt\Delta t is not merely scaled after each trial step by a locally determined factor qq as evaluated in eq. (16). Instead, one constructs a proportional-integral-derivative (PID) controller for the selection of new timesteps. This means that the adaptation is not only influenced proportionally by the error estimate of the current step, but is also determined by integral and differential contributions, i.e. the memory of previous stepsizes and the local change of the stepsize. Adaptive timestepping with control theory has already been applied to ODEs [75, 76, 77] and SDEs [40]. In our case, this method could reduce the number of unnecessarily small steps even further. In our present implementation, a conservatively chosen qmaxq_{\mathrm{max}} in eq. (17) keeps the number of rejections low because it restricts the growth of Δt\Delta t after moves with coincidentally low error. But it also prevents a fast relaxation of Δt\Delta t after a sudden drop, e.g. due to an unfortunate random event. Then, many steps are needed for Δt\Delta t to grow back to its nominal value, because the maximum gain in each step is limited by qmaxq_{\mathrm{max}}. With a control theory approach, a mechanism could be established which permits sudden drops to low values of Δt\Delta t but still ensures a rapid relaxation afterwards.
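A minimal proportional-integral (PI) controller along the lines of Refs. 75–77 and 40 can be sketched as follows. The gains beta1 and beta2, the clamping bounds, and the class name PIController are illustrative assumptions, not values taken from the cited works:

```python
class PIController:
    """Sketch of a PI stepsize controller.  The factor for the next
    timestep depends on the current scaled error err and, through the
    memory term, on the error of the previous step."""

    def __init__(self, beta1=0.6, beta2=0.2, qmin=0.2, qmax=5.0):
        self.beta1, self.beta2 = beta1, beta2
        self.qmin, self.qmax = qmin, qmax
        self.err_prev = 1.0  # pretend the previous step was on tolerance

    def factor(self, err):
        err = max(err, 1e-12)
        # proportional part err**(-beta1), memory part err_prev**beta2
        q = err ** -self.beta1 * self.err_prev ** self.beta2
        self.err_prev = err
        return min(self.qmax, max(self.qmin, q))
```

Compared with the purely proportional rule, a one-off error spike still shrinks the step, but the memory term lets the stepsize recover within a step or two instead of crawling back at the maximum rate allowed by qmax.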

Finally, the adaptive BD method could be augmented to include hydrodynamic interactions. This requires a careful treatment of the random forces, as they now incorporate the particle configuration 𝐫N\mathbf{r}^{N}. Therefore, the noise is no longer additive, which complicates the numerical scheme necessary for a correct discretisation of the overdamped Langevin equation. Still, instead of eqs. (8) and (9), existing methods for SDEs with non-additive noise could be employed within the presented framework to yield an adaptive timestepping procedure for BD with hydrodynamic interactions.

In summary, to capture the dynamics of systems that undergo structural changes such as the ones shown above, and to gain further understanding of the implications regarding e.g. the resulting macroscopic properties, sophisticated simulation methods are needed. Adaptive BD provides the means to treat these problems with great accuracy while still being computationally feasible and robust, which is a crucial trait from a practical standpoint.

Acknowledgements.
We thank Christopher Rackauckas, Tobias Eckert and Daniel de las Heras for useful comments. This work is supported by the German Research Foundation (DFG) via project number 436306241.

Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

  • Frenkel and Smit [2002] D. Frenkel and B. Smit, Understanding Molecular Simulation: From Algorithms to Applications, 2nd ed., Computational Science Series No. 1 (Academic Press, San Diego, 2002).
  • Hansen and McDonald [2013] J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids: With Applications to Soft Matter, 4th ed. (Elsevier/AP, Amsterdam, 2013).
  • Allen and Tildesley [2009] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids, reprinted ed., Oxford Science Publications (Clarendon Press, Oxford, 2009).
  • Karplus and McCammon [2002] M. Karplus and J. A. McCammon, “Molecular dynamics simulations of biomolecules,” Nat. Struct Biol. 9, 646–652 (2002).
  • Squires and Brady [2005] T. M. Squires and J. F. Brady, “A simple paradigm for active and nonlinear microrheology,” Phys. Fluids 17, 073101 (2005).
  • Reinhardt, Scacchi, and Brader [2014] J. Reinhardt, A. Scacchi, and J. M. Brader, “Microrheology close to an equilibrium phase transition,” J. Chem. Phys. 140, 144901 (2014).
  • Puertas and Voigtmann [2014] A. M. Puertas and T. Voigtmann, “Microrheology of colloidal systems,” J. Phys. Condens. Matter 26, 243101 (2014).
  • Geigenfeind, de las Heras, and Schmidt [2020] T. Geigenfeind, D. de las Heras, and M. Schmidt, “Superadiabatic demixing in nonequilibrium colloids,” Commun. Phys. 3, 23 (2020).
  • Gillespie [1976] D. T. Gillespie, “A general method for numerically simulating the stochastic time evolution of coupled chemical reactions,” J. Comput. Phys. 22, 403–434 (1976).
  • Andersen [1980] H. C. Andersen, “Molecular dynamics simulations at constant pressure and/or temperature,” J. Chem. Phys. 72, 2384–2393 (1980).
  • Nosé [1984] S. Nosé, “A molecular dynamics method for simulations in the canonical ensemble,” Mol. Phys. 52, 255–268 (1984).
  • Alder and Wainwright [1959] B. J. Alder and T. E. Wainwright, “Studies in Molecular Dynamics. I. General Method,” J. Chem. Phys. 31, 459–466 (1959).
  • Scala, Voigtmann, and De Michele [2007] A. Scala, T. Voigtmann, and C. De Michele, “Event-driven Brownian dynamics for hard spheres,” J. Chem. Phys. 126, 134109 (2007).
  • Colberg and Höfling [2011] P. H. Colberg and F. Höfling, “Highly accelerated simulations of glassy dynamics using GPUs: Caveats on limited floating-point precision,” Comput. Phys. Commun. 182, 1120–1129 (2011).
  • Wang, Tay, and Swendsen [1999] J.-S. Wang, T. K. Tay, and R. H. Swendsen, “Transition Matrix Monte Carlo Reweighting and Dynamics,” Phys. Rev. Lett. 82, 476–479 (1999).
  • Landau, Tsai, and Exler [2004] D. P. Landau, S.-H. Tsai, and M. Exler, “A new approach to Monte Carlo simulations in statistical physics: Wang-Landau sampling,” Am. J. Phys. 72, 1294–1302 (2004).
  • de las Heras and Schmidt [2018] D. de las Heras and M. Schmidt, “Better Than Counting: Density Profiles from Force Sampling,” Phys. Rev. Lett. 120, 218001 (2018).
  • Rotenberg [2020] B. Rotenberg, “Use the force! Reduced variance estimators for densities, radial distribution functions, and local mobilities in molecular simulations,” J. Chem. Phys. 153, 150902 (2020).
  • Coles et al. [2021] S. W. Coles, E. Mangaud, D. Frenkel, and B. Rotenberg, “Reduced variance analysis of molecular dynamics simulations by linear combination of estimators,” J. Chem. Phys. 154, 191101 (2021).
  • Fritsch et al. [2012] S. Fritsch, S. Poblete, C. Junghans, G. Ciccotti, L. Delle Site, and K. Kremer, “Adaptive resolution molecular dynamics simulation through coupling to an internal particle reservoir,” Phys. Rev. Lett. 108, 170602 (2012).
  • Rossky, Doll, and Friedman [1978] P. J. Rossky, J. D. Doll, and H. L. Friedman, “Brownian dynamics as smart Monte Carlo simulation,” J. Chem. Phys. 69, 4628–4633 (1978).
  • Jardat et al. [1999] M. Jardat, O. Bernard, P. Turq, and G. R. Kneller, “Transport coefficients of electrolyte solutions from Smart Brownian dynamics simulations,” J. Chem. Phys. 110, 7993–7999 (1999).
  • Kloeden and Platen [1992] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, 1st ed. (Springer-Verlag Berlin Heidelberg, 1992).
  • Rackauckas and Nie [2017a] C. Rackauckas and Q. Nie, “Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory,” Discrete Contin. Dyn. Syst. - B 22, 2731–2761 (2017a).
  • van der Kooij and Sprakel [2015] H. M. van der Kooij and J. Sprakel, “Watching paint dry; more exciting than it seems,” Soft Matter 11, 6353–6359 (2015).
  • Schulz and Keddie [2018] M. Schulz and J. L. Keddie, “A critical and quantitative review of the stratification of particles during the drying of colloidal films,” Soft Matter 14, 6181–6197 (2018).
  • Makepeace et al. [2017] D. K. Makepeace, A. Fortini, A. Markov, P. Locatelli, C. Lindsay, S. Moorhouse, R. Lind, R. P. Sear, and J. L. Keddie, “Stratification in binary colloidal polymer films: Experiment and simulations,” Soft Matter 13, 6969–6980 (2017).
  • Trueman et al. [2012] R. E. Trueman, E. Lago Domingues, S. N. Emmett, M. W. Murray, J. L. Keddie, and A. F. Routh, “Autostratification in Drying Colloidal Dispersions: Experimental Investigations,” Langmuir 28, 3420–3428 (2012).
  • He et al. [2021] B. He, I. Martin-Fabiani, R. Roth, G. I. Tóth, and A. J. Archer, “Dynamical Density Functional Theory for the Drying and Stratification of Binary Colloidal Dispersions,” Langmuir 37, 1399–1409 (2021).
  • Fortini and Sear [2017] A. Fortini and R. P. Sear, “Stratification and Size Segregation of Ternary and Polydisperse Colloidal Suspensions during Drying,” Langmuir 33, 4796–4805 (2017).
  • Burrage and Burrage [1996] K. Burrage and P. Burrage, “High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations,” Appl. Numer. Math. 22, 81–101 (1996).
  • Rößler [2010] A. Rößler, “Runge–Kutta Methods for the Strong Approximation of Solutions of Stochastic Differential Equations,” SIAM J. Numer. Anal. 48, 922–952 (2010).
  • Fehlberg [1970] E. Fehlberg, “Klassische Runge-Kutta-Formeln vierter und niedrigerer Ordnung mit Schrittweiten-Kontrolle und ihre Anwendung auf Wärmeleitungsprobleme,” Computing 6, 61–71 (1970).
  • Rößler [2003] A. Rößler, “Embedded Stochastic Runge-Kutta Methods,” PAMM 2, 461–462 (2003).
  • Ibe [2013] O. C. Ibe, Markov Processes for Stochastic Modeling (Elsevier, 2013).
  • Gaines and Lyons [1997] J. G. Gaines and T. J. Lyons, “Variable Step Size Control in the Numerical Solution of Stochastic Differential Equations,” SIAM J. Appl. Math. 57, 1455–1484 (1997).
  • Mauthner [1998] S. Mauthner, “Step size control in the numerical solution of stochastic differential equations,” J. Comput. Appl. Math. 100, 93–109 (1998).
  • Burrage and Burrage [2002] P. M. Burrage and K. Burrage, “A Variable Stepsize Implementation for Stochastic Differential Equations,” SIAM J. Sci. Comput. 24, 848–864 (2002).
  • Lamba [2003] H. Lamba, “An adaptive timestepping algorithm for stochastic differential equations,” J. Comput. Appl. Math. 161, 417–430 (2003).
  • Burrage, Herdiana, and Burrage [2004] P. Burrage, R. Herdiana, and K. Burrage, “Adaptive stepsize based on control theory for stochastic differential equations,” J. Comput. Appl. Math. 170, 317–336 (2004).
  • Lamba, Mattingly, and Stuart [2006] H. Lamba, J. C. Mattingly, and A. M. Stuart, “An adaptive Euler-Maruyama scheme for SDEs: Convergence and stability,” IMA J. Numer. Anal. 27, 479–506 (2006).
  • Sotiropoulos and Kaznessis [2008] V. Sotiropoulos and Y. N. Kaznessis, “An adaptive time step scheme for a system of stochastic differential equations with multiple multiplicative noise: Chemical Langevin equation, a proof of concept,” J. Chem. Phys. 128, 014103 (2008).
  • Ilie [2012] S. Ilie, “Variable time-stepping in the pathwise numerical solution of the chemical Langevin equation,” J. Chem. Phys. 137, 234110 (2012).
  • Stuart and Humphries [1996] A. M. Stuart and A. R. Humphries, Dynamical Systems and Numerical Analysis, Cambridge Monographs on Applied and Computational Mathematics No. 2 (Cambridge University Press, Cambridge ; New York, 1996).
  • Honeycutt [1992] R. L. Honeycutt, “Stochastic Runge-Kutta algorithms. I. White noise,” Phys. Rev. A 45, 600–603 (1992).
  • Fixman [1986] M. Fixman, “Implicit algorithm for Brownian dynamics of polymers,” Macromolecules 19, 1195–1204 (1986).
  • Iniesta and de la Torre [1990] A. Iniesta and J. de la Torre, “A Second-order algorithm for the simulation of the Brownian dynamics of macromolecular models,” J. Chem. Phys. 92, 2015–2018 (1990).
  • Heyes and Brańka [2000] D. M. Heyes and A. C. Brańka, “More efficient Brownian dynamics algorithms,” Mol. Phys. 98, 1949–1960 (2000).
  • Note [1] This issue has been discussed in https://github.com/SciML/DiffEqNoiseProcess.jl/pull/80.
  • Rackauckas and Nie [2017b] C. Rackauckas and Q. Nie, “DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia,” J. Open Res. Softw. 5, 15 (2017b).
  • Note [2] The corresponding implementation in C++ can be found in https://gitlab.uni-bayreuth.de/bt306964/mbd.
  • Yeomans-Reyna et al. [2003] L. Yeomans-Reyna, H. Acuña-Campa, F. d. J. Guevara-Rodríguez, and M. Medina-Noyola, “Self-consistent theory of collective Brownian dynamics: Theory versus simulation,” Phys. Rev. E 67, 021108 (2003).
  • Archer, Hopkins, and Schmidt [2007] A. J. Archer, P. Hopkins, and M. Schmidt, “Dynamics in inhomogeneous liquids and glasses via the test particle limit,” Phys. Rev. E 75, 040501 (2007).
  • Hopkins et al. [2010] P. Hopkins, A. Fortini, A. J. Archer, and M. Schmidt, “The van Hove distribution function for Brownian hard spheres: Dynamical test particle theory and computer simulations for bulk dynamics,” J. Chem. Phys. 133, 224505 (2010).
  • Stopper, Roth, and Hansen-Goos [2016] D. Stopper, R. Roth, and H. Hansen-Goos, “Structural relaxation and diffusion in a model colloid-polymer mixture: Dynamical density functional theory and simulation,” J. Phys. Condens. Matter 28, 455101 (2016).
  • Stopper et al. [2018] D. Stopper, A. L. Thorneywork, R. P. A. Dullens, and R. Roth, “Bulk dynamics of Brownian hard disks: Dynamical density functional theory versus experiments on two-dimensional colloidal hard spheres,” J. Chem. Phys. 148, 104501 (2018).
  • Treffenstädt and Schmidt [2021] L. L. Treffenstädt and M. Schmidt, “Universality in Driven and Equilibrium Hard Sphere Liquid Dynamics,” Phys. Rev. Lett. 126, 058002 (2021).
  • Fortini et al. [2016] A. Fortini, I. Martín-Fabiani, J. L. De La Haye, P.-Y. Dugas, M. Lansalot, F. D’Agosto, E. Bourgeat-Lami, J. L. Keddie, and R. P. Sear, “Dynamic Stratification in Drying Films of Colloidal Mixtures,” Phys. Rev. Lett. 116, 118301 (2016).
  • Laurati et al. [2012] M. Laurati, K. J. Mutch, N. Koumakis, J. Zausch, C. P. Amann, A. B. Schofield, G. Petekidis, J. F. Brady, J. Horbach, M. Fuchs, and S. U. Egelhaaf, “Transient dynamics in dense colloidal suspensions under shear: Shear rate dependence,” J. Phys. Condens. Matter 24, 464104 (2012).
  • Jahreis and Schmidt [2020] N. Jahreis and M. Schmidt, “Shear-induced deconfinement of hard disks,” Colloid Polym Sci 298, 895–906 (2020).
  • Archer and Malijevský [2011] A. J. Archer and A. Malijevský, “On the interplay between sedimentation and phase separation phenomena in two-dimensional colloidal fluids,” Mol. Phys. 109, 1087–1099 (2011).
  • Royall et al. [2007] C. P. Royall, J. Dzubiella, M. Schmidt, and A. van Blaaderen, “Nonequilibrium Sedimentation of Colloids on the Particle Scale,” Phys. Rev. Lett. 98, 188304 (2007).
  • Bleibel et al. [2011] J. Bleibel, S. Dietrich, A. Domínguez, and M. Oettel, “Shock Waves in Capillary Collapse of Colloids: A Model System for Two-Dimensional Screened Newtonian Gravity,” Phys. Rev. Lett. 107, 128302 (2011).
  • Carpen and Brady [2005] I. C. Carpen and J. F. Brady, “Microrheology of colloidal dispersions by Brownian dynamics simulations,” J. Rheol. 49, 1483–1502 (2005).
  • Löwen, Hansen, and Roux [1991] H. Löwen, J.-P. Hansen, and J.-N. Roux, “Brownian dynamics and kinetic glass transition in colloidal suspensions,” Phys. Rev. A 44, 1169–1181 (1991).
  • Archer, Rucklidge, and Knobloch [2015] A. J. Archer, A. M. Rucklidge, and E. Knobloch, “Soft-core particles freezing to form a quasicrystal and a crystal-liquid phase,” Phys. Rev. E 92, 012324 (2015).
  • Archer, Rucklidge, and Knobloch [2013] A. J. Archer, A. M. Rucklidge, and E. Knobloch, “Quasicrystalline Order and a Crystal-Liquid State in a Soft-Core Fluid,” Phys. Rev. Lett. 111, 165501 (2013).
  • Volpe, Gigan, and Volpe [2014] G. Volpe, S. Gigan, and G. Volpe, “Simulation of the active Brownian motion of a microswimmer,” Am. J. Phys. 82, 659 (2014).
  • Farage, Krinninger, and Brader [2015] T. F. F. Farage, P. Krinninger, and J. M. Brader, “Effective interactions in active Brownian suspensions,” Phys. Rev. E 91, 042310 (2015).
  • Paliwal et al. [2017] S. Paliwal, V. Prymidis, L. Filion, and M. Dijkstra, “Non-equilibrium surface tension of the vapour-liquid interface of active Lennard-Jones particles,” J. Chem. Phys. 147, 084902 (2017).
  • Paliwal et al. [2018] S. Paliwal, J. Rodenburg, R. van Roij, and M. Dijkstra, “Chemical potential in active systems: Predicting phase equilibrium from bulk equations of state?” New J. Phys. 20, 015003 (2018).
  • Omar et al. [2021] A. K. Omar, K. Klymko, T. GrandPre, and P. L. Geissler, “Phase Diagram of Active Brownian Spheres: Crystallization and the Metastability of Motility-Induced Phase Separation,” Phys. Rev. Lett. 126, 188002 (2021).
  • Turci and Wilding [2021] F. Turci and N. B. Wilding, “Phase Separation and Multibody Effects in Three-Dimensional Active Brownian Particles,” Phys. Rev. Lett. 126, 038002 (2021).
  • Baul and Dzubiella [2021] U. Baul and J. Dzubiella, “Structure and dynamics of responsive colloids with dynamical polydispersity,” J. Phys. Condens. Matter 33, 174002 (2021).
  • Gustafsson [1994] K. Gustafsson, “Control-theoretic techniques for stepsize selection in implicit Runge-Kutta methods,” ACM Trans. Math. Softw. 20, 496–517 (1994).
  • Söderlind [2002] G. Söderlind, “Automatic Control and Adaptive Time-Stepping,” Numer. Algorithms 31, 281–310 (2002).
  • Söderlind [2003] G. Söderlind, “Digital filters in adaptive time-stepping,” ACM Trans. Math. Softw. 29, 1–26 (2003).
  • Verlet [1967] L. Verlet, “Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules,” Phys. Rev. 159, 98–103 (1967).
  • Swope et al. [1982] W. C. Swope, H. C. Andersen, P. H. Berens, and K. R. Wilson, “A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters,” J. Chem. Phys. 76, 637–649 (1982).
  • Milstein [2003] G. N. Milstein, “Quasi-symplectic methods for Langevin-type equations,” IMA J. Numer. Anal. 23, 593–626 (2003).
  • Burrage, Lenane, and Lythe [2007] K. Burrage, I. Lenane, and G. Lythe, “Numerical Methods for Second-Order Stochastic Differential Equations,” SIAM J. Sci. Comput. 29, 245–264 (2007).
  • van Gunsteren and Berendsen [1982] W. van Gunsteren and H. Berendsen, “Algorithms for brownian dynamics,” Mol. Phys. 45, 637–647 (1982).
  • Brünger, Brooks, and Karplus [1984] A. Brünger, C. L. Brooks, and M. Karplus, “Stochastic boundary conditions for molecular dynamics simulations of ST2 water,” Chem. Phys. Lett. 105, 495–500 (1984).
  • Skeel and Izaguirre [2002] R. D. Skeel and J. A. Izaguirre, “An impulse integrator for Langevin dynamics,” Mol. Phys. 100, 3885–3891 (2002).
  • Goga et al. [2012] N. Goga, A. J. Rzepiela, A. H. de Vries, S. J. Marrink, and H. J. C. Berendsen, “Efficient Algorithms for Langevin and DPD Dynamics,” J. Chem. Theory Comput. 8, 3637–3649 (2012).
  • Leimkuhler and Matthews [2015] B. Leimkuhler and C. Matthews, Molecular Dynamics, Interdisciplinary Applied Mathematics, Vol. 39 (Springer International Publishing, Cham, 2015).

Appendix A Generalization to Langevin dynamics

In the following, we transfer the concepts of Sec. III to general (non-overdamped) Langevin dynamics. Here, the momenta of particles with masses m(i)m^{(i)}, i=1,,Ni=1,\dots,N, are considered explicitly in the Langevin equation

m(i)𝐫¨(i)(t)=𝐅(i)(𝐫N(t))γ(i)𝐫˙(i)(t)+2γ(i)kBT𝐑(i)(t)m^{(i)}\ddot{\mathbf{r}}^{(i)}(t)=\mathbf{F}^{(i)}(\mathbf{r}^{N}(t))-\gamma^{(i)}\dot{\mathbf{r}}^{(i)}(t)+\sqrt{2\gamma^{(i)}k_{B}T}\mathbf{R}^{(i)}(t) (24)

where the notation is that of eq. (1). Depending on the choice of γ(i)\gamma^{(i)}, two special cases can be identified. BD is recovered in the diffusive regime for large γ(i)\gamma^{(i)}. On the other hand, for vanishing friction and random forces, i.e. γ(i)=0\gamma^{(i)}=0, deterministic Hamiltonian equations of motion are obtained.

The SDE (24) can be reformulated as a pair of first-order equations for the particle positions 𝐫N\mathbf{r}^{N} and velocities 𝐯N=𝐫˙N\mathbf{v}^{N}=\dot{\mathbf{r}}^{N},

m(i)𝐯˙(i)(t)=𝐅(i)(𝐫N(t))γ(i)𝐯(i)(t)+2γ(i)kBT𝐑(i)(t),\displaystyle\begin{split}m^{(i)}\dot{\mathbf{v}}^{(i)}(t)&=\mathbf{F}^{(i)}(\mathbf{r}^{N}(t))-\gamma^{(i)}\mathbf{v}^{(i)}(t)\\ &\qquad+\sqrt{2\gamma^{(i)}k_{B}T}\mathbf{R}^{(i)}(t),\end{split} (25a)
𝐫˙(i)(t)\displaystyle\dot{\mathbf{r}}^{(i)}(t) =𝐯(i)(t).\displaystyle=\mathbf{v}^{(i)}(t). (25b)

From a numerical perspective, the treatment of eqs. (25) differs significantly from that of the overdamped Langevin equation (1), since the second-order nature makes more involved timestepping procedures for 𝐫N\mathbf{r}^{N} and 𝐯N\mathbf{v}^{N} possible.

This can be illustrated easily when eqs. (25) are considered in the Hamiltonian case. Then, owing to the separation of 𝐫N\mathbf{r}^{N} and 𝐯N\mathbf{v}^{N}, simple semi-implicit methods are readily available, such as the Euler-Cromer method or the well-known velocity Verlet method commonly used in MD [78, 79]. From a fundamental point of view, the use of symplectic algorithms is necessary for Hamiltonian systems to ensure the correct description of physical conservation laws and hence to achieve numerical stability in the long-time behaviour. While the above-mentioned integrators possess this property, most explicit algorithms, such as the forward Euler and the classic Runge-Kutta method, fail in this regard.

When going from Hamiltonian to Langevin dynamics, one is faced with an appropriate generalization of the familiar deterministic integration schemes to include dissipative and random forces. This is addressed by so-called quasi-symplectic methods, which are integrators for SDEs of type (25) that degenerate to symplectic ones when both the noise and the friction term vanish [80]. Still, it is not straightforward to construct schemes that are suitable for arbitrary γ(i)\gamma^{(i)}, and different strategies have been used in Langevin-based molecular simulations [81, 82, 83, 84, 85, 86].
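As a concrete illustration of this class of integrators, consider the BAOAB splitting of Leimkuhler and Matthews [86]: for γ(i)=0\gamma^{(i)}=0 the friction and noise substeps become trivial and the scheme reduces to the symplectic velocity Verlet method, while for finite friction the Ornstein-Uhlenbeck substep is treated exactly. The Python sketch below is an example of the quasi-symplectic idea, not a scheme used in the present work:

```python
import numpy as np

def baoab_step(r, v, force, m, gamma, kT, dt, rng):
    """One BAOAB step for the underdamped Langevin equation (24), with
    friction written as gamma * v as in the text.  For gamma = 0 the
    O-substep is the identity and the step is exactly velocity Verlet."""
    v = v + 0.5 * dt * force(r) / m               # B: half kick
    r = r + 0.5 * dt * v                          # A: half drift
    c1 = np.exp(-gamma * dt / m)                  # O: exact OU update
    c2 = np.sqrt(kT / m * (1.0 - c1 ** 2))
    v = c1 * v + c2 * rng.standard_normal(np.shape(v))
    r = r + 0.5 * dt * v                          # A: half drift
    v = v + 0.5 * dt * force(r) / m               # B: half kick
    return r, v
```

In the Hamiltonian limit this integrator conserves energy over long times, which is the degenerate symplectic behaviour required of a quasi-symplectic method.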

To incorporate adaptive timestepping, one must again choose a pair of integrators of different orders to calculate an error estimate per step. The above considerations suggest, however, that at least for the higher-order method, a quasi-symplectic scheme is advisable for optimal efficiency and accuracy, as it is expected to perform better than a naive application of the embedded Heun-Euler method (8) and (9) to general Langevin dynamics. Nevertheless, the latter could still be an adequate option when friction dominates.

Then, with the formalism described in Sec. III, the acceptance or rejection of trial steps and the use of RSwM are straightforward, so that adaptive algorithms involving the Langevin equation (24) could be a feasible means for the simulation of dissipative particle dynamics in the future.

Appendix B Correctness of RSwM3 rejection branch

Although the effect is likely subtle or even unmeasurable in most situations, in the original RSwM3 algorithm as described in Ref. 24 and formerly implemented in DifferentialEquations.jl [50], the q<1q<1 branch was handled incorrectly. We list a pseudocode version thereof in alg. 1, where a specialization to our case of BD is already performed for ease of comparison with alg. 2.

To see why alg. 1 fails in some cases, consider multiple elements present on the stack SuS_{u}, e.g. as depicted in fig. 9 where SuS_{u} contains three elements. Then in lines 2–11 of alg. 1, elements are transferred successively from SuS_{u} back to SfS_{f} and their time intervals are accumulated in Δts\Delta t_{s} as long as the remainder ΔtΔts\Delta t-\Delta t_{s} is still larger than the next goal timestep qΔtq\Delta t (this is only the case for the green element in fig. 9). After that, the Brownian bridge theorem is applied to (ΔtK,𝐑K)=(ΔtΔts,𝐑𝐑s)(\Delta t_{K},\mathbf{R}_{K})=(\Delta t-\Delta t_{s},\mathbf{R}-\mathbf{R}_{s}). Therefore, the interpolation includes all the remaining elements of the stack SuS_{u}, i.e. the blue and violet one in fig. 9, which implies that intermediate values of the Brownian path are not considered if SuS_{u} holds two or more elements at this point.

To yield a valid interpolation in this situation, the Brownian bridge must instead only be applied to the immediately following single element at the top of SuS_{u} (the blue one in fig. 9). In alg. 2, lines 4–19 therefore replace the logic of alg. 1, which fixes the above issue by always considering only the single element that crosses qΔtq\Delta t in the application of eq. (7). This element is then interpolated at qΔtq\Delta t, for which the variable ΔtM\Delta t_{M} is introduced to yield the corresponding fraction qMq_{M} of the element. Note that SuS_{u} always contains at least one element when entering the rejection branch (assuming a proper initialization of the simulation, cf. Appendix C).
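For concreteness, the corrected q<1q<1 branch of alg. 2 (lines 4–19) can be transcribed as follows; this is a plain-Python sketch with stacks represented as lists (top at the end) and a hypothetical function name, not the actual implementation:

```python
import numpy as np

def reject_step(dt, R, q, S_u, S_f, rng):
    """Corrected rejection branch: pop elements from S_u until the
    accumulated time, counted from the right end of the step, crosses
    the new goal timestep q*dt; apply the Brownian bridge (7) only to
    the crossing element and split it between S_f (discarded right
    part) and S_u (kept left part)."""
    dts = 0.0
    Rs = np.zeros_like(R)
    R_bridge = np.zeros_like(R)  # the loop below always bridges once
    while S_u:
        dtu, Ru = S_u.pop()
        dts += dtu
        Rs = Rs + Ru
        if dts < (1.0 - q) * dt:
            S_f.append((dtu, Ru))            # lies entirely beyond q*dt
        else:
            dtM = dts - (1.0 - q) * dt       # kept part of this element
            qM = dtM / dtu
            var = (1.0 - qM) * qM * dtu
            R_bridge = qM * Ru + np.sqrt(var) * rng.standard_normal(np.shape(R))
            S_f.append(((1.0 - qM) * dtu, Ru - R_bridge))
            S_u.append((dtM, R_bridge))
            break
    return q * dt, R - Rs + R_bridge
```

Note that the bridge is applied to the single popped element only, never to an interval spanning several stack entries, which is precisely the fix over alg. 1.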

Algorithm 1 Former q<1q<1 branch of RSwM3 where the bridge theorem might be applied to a wrong interval
1:Δts0\Delta t_{s}\leftarrow 0, 𝐑s0\mathbf{R}_{s}\leftarrow 0
2:while SuS_{u} not empty do
3:     Pop top of SuS_{u} as (Δtu,𝐑u)(\Delta t_{u},\mathbf{R}_{u})
4:     if Δts+Δtu<(1q)Δt\Delta t_{s}+\Delta t_{u}<(1-q)\Delta t then
5:         ΔtsΔts+Δtu\Delta t_{s}\leftarrow\Delta t_{s}+\Delta t_{u}, 𝐑s𝐑s+𝐑u\mathbf{R}_{s}\leftarrow\mathbf{R}_{s}+\mathbf{R}_{u}
6:         Push (Δtu,𝐑u)(\Delta t_{u},\mathbf{R}_{u}) onto SfS_{f}
7:     else
8:         Push (Δtu,𝐑u)(\Delta t_{u},\mathbf{R}_{u}) back onto SuS_{u}
9:         break
10:     end if
11:end while
12:ΔtKΔtΔts\Delta t_{K}\leftarrow\Delta t-\Delta t_{s}, 𝐑K𝐑𝐑s\mathbf{R}_{K}\leftarrow\mathbf{R}-\mathbf{R}_{s}
13:qKqΔt/ΔtKq_{K}\leftarrow q\Delta t/\Delta t_{K}
14:𝐑bridge𝒩(qK𝐑K,(1qK)qKΔtK)\mathbf{R}_{\mathrm{bridge}}\sim\mathcal{N}(q_{K}\mathbf{R}_{K},(1-q_{K})q_{K}\Delta t_{K})
15:Push ((1qK)ΔtK,𝐑K𝐑bridge)((1-q_{K})\Delta t_{K},\mathbf{R}_{K}-\mathbf{R}_{\mathrm{bridge}}) onto SfS_{f}
16:ΔtqΔt\Delta t\leftarrow q\Delta t, 𝐑𝐑bridge\mathbf{R}\leftarrow\mathbf{R}_{\mathrm{bridge}}
Figure 9: A scenario is shown for which the original implementation of the RSwM3 rejection branch fails. For clarity, we only display the time intervals which reside on the stack SuS_{u} without their corresponding random vectors. In the presented case, the wrong interval (“is bridged”) is selected in the original RSwM3 implementation to which the Brownian bridge theorem (7) is applied. This is fixed in alg. 2, where the correct element (“should be bridged”) is considered via the calculation of ΔtM\Delta t_{M}.

Appendix C Implementation of embedded Heun-Euler trial step with RSwM3

The embedded Heun-Euler trial step with RSwM3 is the integral part of the adaptive BD method, for which we provide a pseudocode implementation in the following and explain its technical details as well as its setting in a full simulation framework. At the beginning of the simulation, SfS_{f} must be empty and a finite positive value of the first timestep Δt\Delta t is chosen heuristically (this choice is irrelevant, however, since the algorithm immediately adapts Δt\Delta t to acceptable values). Gaussian random increments are drawn for the initial trial step via 𝐑𝒩(0,Δt)\mathbf{R}\sim\mathcal{N}(0,\Delta t) which are pushed onto the stack SuS_{u}. This completes the initialization of the simulation run and a loop over trial steps can be started (until a certain simulation time or number of steps is reached).

Then, in each trial step, which is listed in alg. 2, the Euler and Heun approximations 𝐫¯k+1\bar{\mathbf{r}}_{k+1} and 𝐫k+1\mathbf{r}_{k+1} are evaluated via eqs. (8) and (9) and the adaptation factor qq is calculated via eqs. (16) and (17). This requires an error estimate (12) which is obtained from the two approximations via eqs. (10) and (11).
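In simplified form (a uniform friction coefficient γ\gamma, a shared Brownian increment with variance Δt\Delta t per component, and a plain max norm standing in for the tolerance-weighted error norm of eqs. (10)–(12)), the embedded pair can be sketched as:

```python
import numpy as np

def heun_euler_trial(r, F, dt, R, gamma=1.0, kT=1.0):
    """Embedded Heun-Euler pair in the spirit of eqs. (8) and (9):
    both approximations share the same random displacement, so their
    difference isolates the local error of the deterministic drift."""
    noise = np.sqrt(2.0 * kT / gamma) * R
    f0 = F(r)
    r_euler = r + dt * f0 / gamma + noise                       # eq. (8)
    r_heun = r + 0.5 * dt * (f0 + F(r_euler)) / gamma + noise   # eq. (9)
    err = np.max(np.abs(r_heun - r_euler))  # local error estimate
    return r_heun, r_euler, err
```

Only one additional force evaluation per trial step is needed, since the Euler result doubles as the predictor of the Heun corrector.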

If q<1q<1, the proposed step is rejected and a retrial must be performed. For a detailed description of the rejection branch (lines 4–19), we refer to Appendix B, where our changes to the original RSwM3 q<1q<1 case are illustrated as well. If a step is accepted, i.e. q≥1q\geq 1, the particle positions and physical time of the system are updated accordingly (line 21) before resetting random increments 𝐑\mathbf{R} and stack SuS_{u} (line 22). Then, in lines 23–42, new random increments are constructed for the next trial step.

For this, elements on SfS_{f} stemming from previously rejected steps have to be accounted for as long as parts of them lie within the goal timestep Δt\Delta t. Therefore, the elements are transferred successively from SfS_{f} to SuS_{u} and their time intervals and random increments are accumulated in Δts\Delta t_{s} and 𝐑\mathbf{R}. If at some point Δts\Delta t_{s} exceeds Δt\Delta t, the corresponding element is interpolated at Δt\Delta t via the Brownian bridge theorem (7) in lines 29–34 and one proceeds with the next trial step. Conversely, if all elements of SfS_{f} were popped and a gap Δtgap\Delta t_{\mathrm{gap}} remains between Δts\Delta t_{s} and the goal timestep Δt\Delta t, new Gaussian random increments 𝐑gap𝒩(0,Δtgap)\mathbf{R}_{\mathrm{gap}}\sim\mathcal{N}(0,\Delta t_{\mathrm{gap}}) are drawn, added to 𝐑\mathbf{R}, and pushed onto SuS_{u} in lines 39–41 before attempting the next trial step. Note that the values of 𝐑\mathbf{R} persist across trial steps and are only reset after accepted moves.

Throughout the algorithm, bookkeeping of random increments and corresponding time intervals is established by pushing those elements that are involved in the current random increment 𝐑\mathbf{R} onto SuS_{u}, and by storing elements that become relevant beyond the current step on SfS_{f}. This procedure is especially important after bridging an element at the goal timestep, where the two resulting parts are pushed onto SuS_{u} and SfS_{f} respectively (lines 14, 15 and 31, 32). Consequently, no drawn random increments are ever lost.
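The bookkeeping of the acceptance branch (lines 22–42) can be transcribed analogously; again a plain-Python sketch with stacks as lists (top at the end) and a hypothetical function name:

```python
import numpy as np

def accept_step(dt_next, S_f, rng, dim):
    """Rebuild the random increment R for the next trial step of length
    dt_next from leftover elements on S_f, bridging the element that
    crosses dt_next and drawing fresh Gaussian noise for any remaining
    gap.  Returns R and the new stack S_u."""
    S_u, dts = [], 0.0
    R = np.zeros(dim)
    while S_f:
        dtf, Rf = S_f.pop()
        if dts + dtf < dt_next:
            dts += dtf
            R = R + Rf
            S_u.append((dtf, Rf))
        else:                                 # element crosses dt_next
            qM = (dt_next - dts) / dtf
            var = (1.0 - qM) * qM * dtf
            R_bridge = qM * Rf + np.sqrt(var) * rng.standard_normal(dim)
            S_f.append(((1.0 - qM) * dtf, Rf - R_bridge))
            S_u.append((qM * dtf, R_bridge))
            dts += qM * dtf
            R = R + R_bridge
            break
    dt_gap = dt_next - dts
    if dt_gap > 0.0:
        R_gap = np.sqrt(dt_gap) * rng.standard_normal(dim)
        R = R + R_gap
        S_u.append((dt_gap, R_gap))
    return R, S_u
```

In both branches, the sum of the time intervals on the stacks remains consistent with the current goal timestep, so no drawn increments are lost or double-counted.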

Algorithm 2 Embedded Heun-Euler trial step with RSwM3
1: Calculate $\bar{\mathbf{r}}_{k+1}$ and $\mathbf{r}_{k+1}$ via eqs. (8) and (9)
2: Calculate $q$ via eqs. (10), (11), (12), (16) and (17)
3: if $q < 1$ then $\triangleright$ cf. fig. 1, Appendix B
4:     $\Delta t_s \leftarrow 0$, $\mathbf{R}_s \leftarrow 0$
5:     while $S_u$ not empty do
6:         Pop top of $S_u$ as $(\Delta t_u, \mathbf{R}_u)$
7:         $\Delta t_s \leftarrow \Delta t_s + \Delta t_u$, $\mathbf{R}_s \leftarrow \mathbf{R}_s + \mathbf{R}_u$
8:         if $\Delta t_s < (1 - q)\,\Delta t$ then
9:             Push $(\Delta t_u, \mathbf{R}_u)$ onto $S_f$
10:        else
11:            $\Delta t_M \leftarrow \Delta t_s - (1 - q)\,\Delta t$
12:            $q_M \leftarrow \Delta t_M / \Delta t_u$
13:            $\mathbf{R}_{\mathrm{bridge}} \sim \mathcal{N}(q_M \mathbf{R}_u, (1 - q_M) q_M \Delta t_u)$
14:            Push $((1 - q_M)\Delta t_u, \mathbf{R}_u - \mathbf{R}_{\mathrm{bridge}})$ onto $S_f$
15:            Push $(q_M \Delta t_u, \mathbf{R}_{\mathrm{bridge}})$ onto $S_u$
16:            break
17:        end if
18:    end while
19:    $\Delta t \leftarrow q\,\Delta t$, $\mathbf{R} \leftarrow \mathbf{R} - \mathbf{R}_s + \mathbf{R}_{\mathrm{bridge}}$
20: else
21:    Do step: $t \leftarrow t + \Delta t$, $\mathbf{r}_k \leftarrow \mathbf{r}_{k+1}$, $\Delta t \leftarrow q\,\Delta t$
22:    Empty $S_u$, $\Delta t_s \leftarrow 0$, $\mathbf{R} \leftarrow 0$
23:    while $S_f$ not empty do
24:        Pop top of $S_f$ as $(\Delta t_f, \mathbf{R}_f)$
25:        if $\Delta t_s + \Delta t_f < \Delta t$ then
26:            $\Delta t_s \leftarrow \Delta t_s + \Delta t_f$, $\mathbf{R} \leftarrow \mathbf{R} + \mathbf{R}_f$
27:            Push $(\Delta t_f, \mathbf{R}_f)$ onto $S_u$
28:        else $\triangleright$ cf. fig. 3
29:            $q_M \leftarrow (\Delta t - \Delta t_s)/\Delta t_f$
30:            $\mathbf{R}_{\mathrm{bridge}} \sim \mathcal{N}(q_M \mathbf{R}_f, (1 - q_M) q_M \Delta t_f)$
31:            Push $((1 - q_M)\Delta t_f, \mathbf{R}_f - \mathbf{R}_{\mathrm{bridge}})$ onto $S_f$
32:            Push $(q_M \Delta t_f, \mathbf{R}_{\mathrm{bridge}})$ onto $S_u$
33:            $\Delta t_s \leftarrow \Delta t_s + q_M \Delta t_f$, $\mathbf{R} \leftarrow \mathbf{R} + \mathbf{R}_{\mathrm{bridge}}$
34:            break
35:        end if
36:    end while
37:    $\Delta t_{\mathrm{gap}} \leftarrow \Delta t - \Delta t_s$
38:    if $\Delta t_{\mathrm{gap}} > 0$ then $\triangleright$ cf. fig. 2
39:        $\mathbf{R}_{\mathrm{gap}} \sim \mathcal{N}(0, \Delta t_{\mathrm{gap}})$
40:        $\mathbf{R} \leftarrow \mathbf{R} + \mathbf{R}_{\mathrm{gap}}$
41:        Push $(\Delta t_{\mathrm{gap}}, \mathbf{R}_{\mathrm{gap}})$ onto $S_u$
42:    end if
43: end if
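The rejection branch (lines 4–19 of Algorithm 2) can likewise be sketched in one dimension: the current increment, spanning $\Delta t$, is truncated to the shrunken trial interval $q\Delta t$, and the discarded tail is moved to the future stack for later reuse. Python lists serve as stacks; the function name `shrink_after_reject` and the variable names are our own, and the recomputation of the trial positions is omitted.

```python
import numpy as np

def shrink_after_reject(dt, R, q, S_u, S_f, rng):
    """Truncate the current random increment R (spanning dt) to the
    reduced trial interval q*dt after a rejected step (1D sketch).
    Elements of S_u lying beyond q*dt are pushed onto S_f; the element
    straddling q*dt is split by a Brownian bridge."""
    dt_s, R_s = 0.0, 0.0
    R_bridge = 0.0                       # stays 0 if the loop exhausts S_u
    while S_u:
        dt_u, R_u = S_u.pop()            # walk backward from the end of dt
        dt_s += dt_u
        R_s += R_u
        if dt_s < (1.0 - q) * dt:        # element lies entirely beyond q*dt
            S_f.append((dt_u, R_u))
        else:                            # element straddles q*dt: bridge it
            dt_M = dt_s - (1.0 - q) * dt
            q_M = dt_M / dt_u
            R_bridge = rng.normal(q_M * R_u,
                                  np.sqrt((1.0 - q_M) * q_M * dt_u))
            S_f.append(((1.0 - q_M) * dt_u, R_u - R_bridge))
            S_u.append((q_M * dt_u, R_bridge))
            break
    return q * dt, R - R_s + R_bridge    # shrunken interval and increment

# two increments composing R = 0.3 over dt = 0.2; shrink to q*dt = 0.15
rng = np.random.default_rng(3)
S_u = [(0.1, 0.5), (0.1, -0.2)]
S_f = []
dt_new, R_new = shrink_after_reject(0.2, 0.3, 0.75, S_u, S_f, rng)
```

In this example the second increment straddles $q\Delta t$, so its tail moves to $S_f$ while its bridged head remains on $S_u$; the increments left on $S_u$ again compose the new $\mathbf{R}$ exactly.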