Xiaocheng Shang,*^{a} Martin Kröger^{a} and Benedict Leimkuhler*^{b}
^{a}Department of Materials, Polymer Physics, ETH Zürich, CH-8093 Zürich, Switzerland. E-mail: x.shang@mat.ethz.ch
^{b}School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, Edinburgh, EH9 3FD, UK. E-mail: b.leimkuhler@ed.ac.uk
First published on 25th October 2017
We discuss the design of state-of-the-art numerical methods for molecular dynamics, focusing on the demands of soft matter simulation, where the purposes include sampling and dynamics calculations both in and out of equilibrium. We discuss the characteristics of different algorithms, including their essential conservation properties, the convergence of averages, and the accuracy of numerical discretizations. Formulations of the equations of motion which are suited to both equilibrium and nonequilibrium simulation include Langevin dynamics, dissipative particle dynamics (DPD), and the more recently proposed “pairwise adaptive Langevin” (PAdL) method, which, like DPD but unlike Langevin dynamics, conserves momentum and better matches the relaxation rate of orientational degrees of freedom. PAdL is easy to code and suitable for a variety of problems in nonequilibrium soft matter modeling; our simulations of polymer melts indicate that this method can also provide dramatic improvements in computational efficiency. Moreover we show that PAdL gives excellent control of the relaxation rate to equilibrium. In the nonequilibrium setting, we further demonstrate that while PAdL allows the recovery of accurate shear viscosities at higher shear rates than are possible using the DPD method at identical timestep, it also outperforms Langevin dynamics in terms of stability and accuracy at higher shear rates.
The state-of-the-art in molecular dynamics and its limitations are well documented in recent reviews.^{2,3} Let us briefly review the challenges of simulating polymeric systems,^{4–7} which constitute a broad area of research of pharmaceutical, materials, chemical, biological and physical relevance, and an area where simulation times easily exceed available resources. While the system size required to avoid significant finite-size effects scales, with the polymerization degree N, as N^{3/2} for flexible chains and as N^{3} for stiff chains, the longest relaxation time scales as N^{2} for dilute systems and as N^{3} (or larger) for concentrated and entangled systems; the total simulation time is thus ∼N^{6} for a concentrated polymer solution, while a typical polymerization degree is N ≈ 10^{4}–10^{5} for a synthetic polymer or a biopolymer like hyaluronan. Such systems can be investigated qualitatively using coarse-graining strategies and multiscale modeling approaches of various kinds.^{8,9} However, atomistic simulation of polymeric systems is limited to the study of a few molecules, or of concentrated, possibly semicrystalline systems containing simple polymers like polyethylene with N ≤ 2000 over a duration of a few tens of nanoseconds^{10–14} and subject to deformation or flow.^{15–17} Highly entangled polymeric systems with practically relevant N, and their confined counterparts like polymer brushes,^{5,18–20} are still out of reach for atomistic simulations. The limitations are even more severe for polymers in nanocomposites,^{6,21,22} branched or hyperbranched polymers like dendronized polymers,^{20,23–26} and polyelectrolytes.^{27–29}
This article addresses stochastic simulation techniques for polymer models based on generalizations of Brownian or Langevin dynamics. The challenges of simulation are well exemplified by two model polymer systems: (i) the single polymer chain in implicit solvent, and (ii) a polymer melt. Both models combine aspects of sampling and dynamical approximation. In the case of a single polymer, the dynamics of the thermostat plays a crucial role in describing the relaxation behavior. It is important in this setting to mimic the underlying internal dynamical processes of the molecule while correctly modeling the exchange of energy between polymer and bath. For the melt, the key difficult quantities are rheological properties like shear viscosity or normal stress difference and disentanglement time, and the system dynamics is typically dominated by the frequent collisions of particles. We restrict our attention to the polymer melt case in this article, because it is computationally more demanding.
Designing effective algorithms involves the selection of a formulation or modeling framework, choice of parameterization, and design of numerical discretization. These choices cannot be isolated from each other. In practice the form of the numerical discretization is strongly dependent on the formulation used, and the choice of parameterization will depend on both the goals of simulation and the numerical scheme.
The common feature of all the most popular formulations in use is that they are designed to facilitate sampling in the canonical ensemble. That is, when applied to a conservative system of N_{t} particles with Hamiltonian energy function H, they are designed to drive the system toward the canonical equilibrium state with probability density
ρ_{β}(q_{1}, q_{2},…, q_{Nt}, p_{1}, p_{2},…, p_{Nt}) = Z^{−1}exp(−βH).
A wide range of different approaches can be designed to sample from the canonical distribution; each such formulation has certain desirable properties, but also certain limitations. For example, it is well known that the dynamical response of a system simulated using Langevin dynamics will strongly depend on the friction coefficient. However, since Langevin dynamics relies on a strong assumption of scale separation, it is possible that there is no choice of the friction that gives a satisfactory dynamical approximation^{30–33} (see also discussions on time scales associated with Langevin dynamics^{33}). In some cases a “gentle” form of Langevin dynamics is used where internal relaxation modes within the polymer are the dominant feature of interest, or else one uses Nosé–Hoover^{34,35} or stochastic velocity rescaling,^{36–38} both of which are in some sense “gentle” alternatives to Langevin dynamics.^{39} Other approaches include the pairwise Nosé–Hoover thermostat^{40} and multiparticle collision dynamics.^{41} However, we restrict our attention to methods that are derived directly as discretizations of stochastic differential equations. If the goal is only to sample the equilibrium structures of the system under study (as often needed in protein modeling^{42,43}), one may use the coefficient of friction as a free parameter and optimize the choice to enhance the rate of convergence to equilibrium or increase the effective sample size^{44} associated with a certain family of observables. Whereas standard molecular dynamics methods such as Langevin dynamics and its overdamped limit (Brownian dynamics) are appropriate for modeling systems in or near thermodynamic equilibrium, these methods do not take into account the possibility of an underlying flow, and are thus, unless modified, inappropriate for situations where the underlying flow of the system cannot be predicted beforehand (e.g., when dealing with interfaces or nonuniform flow).
More generally, as we coarse-grain the system, the hydrodynamic transport properties become increasingly important, which calls for a formulation that preserves momentum and Galilean invariance. A method which addresses these issues is the dissipative particle dynamics (DPD) method of Hoogerbrugge and Koelman^{45} which conserves momentum “exactly” (i.e., up to rounding errors) at each step. We find in our studies that the recently proposed pairwise adaptive Langevin (PAdL) method,^{46} which mimics DPD in the stationary setting, is preferable to DPD in nonequilibrium applications, e.g., for shear flows. The use of DPD or PAdL addresses a further problem with Langevin dynamics, namely the nonphysical screening of hydrodynamic interactions observed for Langevin dynamics.^{47}
Once the formulation is chosen, typically a set of stochastic differential equations (SDEs), it is necessary to replace it for computation with a discretized formulation, by applying a specific numerical method. Particularly for large, computationally demanding applications, the simulation must proceed at the largest possible timestep in order to provide meaningful answers to the questions of interest. Since we must work within a fixed computational budget, one should ultimately compare different numerical methods in terms of accuracy of observables for fixed computational work. On the other hand, if, as here, we wish to separate the numerical error (due to discrete approximation) from the statistical error (due to collecting finite numbers of samples) it is better to analyze these two types of error separately. This is especially true in a metastable system, i.e., one whose dynamics are limited by rare transitions, as for example protein folding or glassy system modeling,^{48,49} but generally speaking virtually any particle simulation will be subject to the timestep issue when pushed to deliver results on laboratory-relevant timescales. Thus we are led to seek methods that are optimized to deliver the desired properties in the most cost-effective manner, and the large timestep needed means that discretization bias becomes relevant. For molecular and mesoscale modeling, the most fundamental type of error incurred is the error in averages. We say that a method is accurate for long term averaging if the distribution generated by the numerical method converges in the limit of large time and small timestep to a stationary distribution which is close, in the sense of distributions, to the corresponding stationary distribution of the SDE.^{50,51} The other major source of error in large scale simulations can be viewed as the error due to insufficient sampling of a stochastic quantity. 
Such error is always present but may be difficult to estimate, since in practice we do not necessarily know the underpinning distribution a priori, nor the normalization constant needed to compute a robust average. To put this another way, if the simulation remains confined to a certain group of states of the molecule for its entire duration, this may be the consequence of strong local confinement rather than an indication that these are the most important states. Analyzing and comparing numerical methods for sampling thus requires balancing the issues of accuracy and sampling efficiency, and it is crucial to realize that different methods may have very different levels of bias and rates of convergence, which are highly dependent on the choice of timestep. In this article we explore these issues in detail for model systems, first calculating the bias and using it to select a timestep that controls the attainable accuracy, then (for that choice of timestep) assessing the rate of convergence to the equilibrium average.
The key findings of this article are as follows. First, we compare the performance of Langevin dynamics, DPD, and PAdL for several benchmark calculations, in particular showing that the PAdL method provides higher accuracy (for given computational budget) than Langevin dynamics and DPD, in both equilibrium and nonequilibrium applications, with careful study of the trade-off between numerical bias and convergence rate in simulation. Second, we compare our numerical schemes with exact results regarding the relaxation behavior for a benchmark model, thus clarifying the performance of the methods in dynamics-oriented modeling applications. Third, we develop a careful procedure to quantify the sampling efficiency of various methods by comparing the effective sample size. A fourth advance in the current article is the demonstration that PAdL allows the recovery of accurate shear viscosity using larger shear rates than are otherwise possible using DPD (at identical timestep), while PAdL outperforms Langevin dynamics in terms of stability and accuracy at higher shear rates. Finally, we emphasize that this article provides specific details regarding the implementation of all the various methods, which are often lacking in the literature.
The rest of the article is organized as follows. In Section 2, we review a variety of numerical methods for polymer melt simulation. We describe, in Section 3, various physical quantities that are used to evaluate the simulation performance of each method. Section 4 presents numerical experiments in both equilibrium and nonequilibrium cases, comparing the performance of numerous popular numerical methods in practical examples. Our findings are summarized in Section 5.
(1)
σ^{2} = 2γk_{B}T. (2)
(3)
dp_{i} = γmu_{i}dt − γp_{i}dt + σm^{1/2}dW_{i}, (4)
(5)
(6)
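As an illustration of how such a scheme is implemented, the following minimal Python sketch (our own illustrative code, not the production implementation used in this article) applies the BAOAB splitting, one of the two Langevin splitting schemes (SVV and BAOAB) compared in Section 4, to a one-dimensional harmonic oscillator; the O step solves the Ornstein–Uhlenbeck part exactly, consistent with the fluctuation–dissipation relation (2):

```python
import numpy as np

def baoab_step(q, p, force, h, gamma, kT, m=1.0, rng=None):
    """One BAOAB step of Langevin dynamics: half kick (B), half drift (A),
    exact Ornstein-Uhlenbeck update (O), half drift (A), half kick (B)."""
    rng = rng or np.random.default_rng()
    p = p + 0.5 * h * force(q)          # B: half momentum kick
    q = q + 0.5 * h * p / m             # A: half position drift
    c = np.exp(-gamma * h)              # O: exact solve of the OU part
    p = c * p + np.sqrt(kT * m * (1.0 - c * c)) * rng.standard_normal(np.shape(p))
    q = q + 0.5 * h * p / m             # A: half position drift
    p = p + 0.5 * h * force(q)          # B: half momentum kick
    return q, p

# usage: harmonic oscillator U(q) = q^2/2; a long run should give <q^2> near kT
rng = np.random.default_rng(0)
q, p, kT = 1.0, 0.0, 1.0
samples = []
for _ in range(200000):
    q, p = baoab_step(q, p, lambda x: -x, h=0.1, gamma=1.0, kT=kT, rng=rng)
    samples.append(q)
mean_q2 = float(np.mean(np.square(samples)))
```

Reordering the sub-steps yields the other splitting variants (such as the OBAB orderings mentioned below).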
The equations of motion of the DPD system can be written as
We have observed^{62} that standard DPD methods perform similarly in all the quantities that we have tested. Therefore, following ref. 46, Shardlow's S1 splitting method (i.e., the DPD-S1 scheme)^{63} was used to represent the standard DPD formulation. As in Langevin dynamics, we can define the DPD-S1 (OBAB) method as
(7)
Due to the fact that the dissipative force depends on relative velocities, DPD is Galilean-invariant, which makes it a profile-unbiased thermostat (PUT)^{65,66} by construction and an ideal thermostat for nonequilibrium molecular dynamics (NEMD).^{54} The PUT allows the simulation itself to define the local streaming velocity (for more details, see ref. 65–67) and thus there is no need to additionally subtract the underlying streaming velocity in nonequilibrium applications.
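To make the pairwise structure concrete, the following illustrative sketch (a naive double loop without neighbor lists; all names and parameter values are ours) computes the dissipative and random DPD forces. Each pair contribution acts along the line of centers and is applied with opposite signs to the two particles, so the total momentum change vanishes up to rounding error; and since only relative velocities enter, the forces are Galilean invariant:

```python
import numpy as np

def dpd_pair_forces(q, p, gamma, sigma, rc, h, m=1.0, rng=None):
    """Pairwise DPD dissipative and random forces (no conservative part).
    q, p: (n, 3) positions and momenta. One noise per pair (W_ij = W_ji)."""
    rng = rng or np.random.default_rng()
    n = len(q)
    F = np.zeros_like(q)
    for i in range(n):
        for j in range(i + 1, n):
            rij = q[i] - q[j]
            r = np.linalg.norm(rij)
            if r >= rc:
                continue
            e = rij / r                  # unit vector along the line of centers
            w = 1.0 - r / rc             # a common choice of weight function
            vij = (p[i] - p[j]) / m      # relative velocity: Galilean invariant
            theta = rng.standard_normal()
            f = (-gamma * w * w * np.dot(e, vij)
                 + sigma * w * theta / np.sqrt(h)) * e
            F[i] += f                    # Newton's third law: equal and
            F[j] -= f                    # opposite pair contributions
    return F

rng = np.random.default_rng(1)
q = rng.random((5, 3))                   # particles in a unit box, all within rc
p = rng.standard_normal((5, 3))
F = dpd_pair_forces(q, p, gamma=4.5, sigma=3.0, rc=2.0, h=0.01, rng=rng)
net = float(np.abs(F.sum(axis=0)).max())  # vanishes up to rounding error
```

In a production code σ and γ would of course be linked by the fluctuation–dissipation relation σ^{2} = 2γk_{B}T.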
The equations of motion of the momentum-conserving PAdL thermostat are given by
(8) |
(9) |
According to the invariant distribution (9), the auxiliary variable ξ is Gaussian distributed with mean γ and variance (βμ)^{−1}. That is, the auxiliary variable will fluctuate around its mean value during simulation, and moreover we can vary the value of the friction in order to recover the dynamics of DPD in a wide range of friction regimes. Therefore, the PAdL thermostat can be viewed as the standard DPD system with an adaptive friction coefficient (i.e., an adaptive DPD thermostat). Furthermore, we point out that the PAdL thermostat inherits key properties of DPD (such as Galilean invariance and momentum conservation) required for consistent hydrodynamics. Note also that the PAdL thermostat effectively reduces to the standard DPD formulation in the large thermal mass limit (i.e., μ → ∞).
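The feedback mechanism is easiest to see in the single-particle adaptive Langevin dynamics, of which PAdL is the pairwise generalization. The sketch below is our own Euler-type discretization of a one-dimensional example (not PAdL's pairwise control law), showing the friction ξ fluctuating around the value γ dictated by the fluctuation–dissipation relation:

```python
import numpy as np

def adl_step(q, p, xi, h, kT, mu, sigma, rng):
    """One Euler-type step of single-particle adaptive Langevin dynamics:
    dq = p dt, dp = -U'(q) dt - xi p dt + sigma dW,
    dxi = (p^2 - kT) dt / mu   (Nose-Hoover negative feedback),
    for the harmonic potential U(q) = q^2 / 2."""
    p += h * (-q - xi * p) + np.sqrt(h) * sigma * rng.standard_normal()
    xi += h * (p * p - kT) / mu
    q += h * p
    return q, p, xi

rng = np.random.default_rng(0)
kT, mu, gamma = 1.0, 1.0, 1.0
sigma = np.sqrt(2.0 * gamma * kT)   # fluctuation-dissipation fixes the mean of xi
q, p, xi = 0.0, 1.0, 0.0
xis = np.empty(400000)
for t in range(len(xis)):
    q, p, xi = adl_step(q, p, xi, h=0.005, kT=kT, mu=mu, sigma=sigma, rng=rng)
    xis[t] = xi
mean_xi, var_xi = float(np.mean(xis)), float(np.var(xis))
```

Consistently with the Gaussian stationary distribution quoted above, mean_xi fluctuates around γ = 1 and var_xi around (βμ)^{−1} = kT/μ = 1; reducing μ tightens the feedback coupling, which is the mechanism exploited later to tune the sampling efficiency.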
The splitting method of PAdL proposed in ref. 46 has been adopted in this article:
(10) |
(11) |
(12) |
(13) |
(14) |
(15) |
(16) |
(17) |
It should be noted that the SAC is closely related to the statistical error bar: the statistical error bar of the estimated mean is the standard deviation of the time series divided by the square root of the “effective sample size”, defined as N_{s}/τ. Thus, the SAC is an estimate of the average number of iterations needed to draw an effectively independent sample from a correlated chain. Therefore, the SAC directly measures the efficiency of the sampling: a lower value of the SAC corresponds to a larger effective sample size, i.e., more efficient sampling.
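In this article the SAC is obtained from a weighted-sum fit of the autocorrelation function, eqn (15) and (16); as a simple illustration of the underlying quantity, the following sketch (all names are ours) estimates τ directly from a time series by summing the empirical autocorrelations with a positivity cutoff, a common and cruder alternative to the fitting procedure:

```python
import numpy as np

def integrated_act(x, max_lag=None):
    """Estimate the statistical inefficiency tau = 1 + 2 * sum_k rho_k of a
    time series x; the effective sample size is then len(x) / tau."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    var = np.dot(xc, xc) / n
    max_lag = max_lag or n // 4
    tau = 1.0
    for k in range(1, max_lag):
        rho = np.dot(xc[:-k], xc[k:]) / ((n - k) * var)
        if rho < 0.0:          # crude positivity cutoff to limit estimator noise
            break
        tau += 2.0 * rho
    return tau

# usage: an AR(1) series x_t = phi x_{t-1} + noise has tau = (1 + phi)/(1 - phi)
rng = np.random.default_rng(0)
phi, n = 0.9, 200000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
tau = integrated_act(x)
ess = n / tau                  # tau should come out near (1+0.9)/(1-0.9) = 19
```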
(18) |
T = 〈f_{T}〉, (19)
Correlation functions, which characterize the relevant dynamical properties, are often studied in molecular dynamics. Of particular interest in polymer melts is the orientational autocorrelation function (OAF) of the end-to-end vector of polymer chains, which characterizes the relaxation of the polymer chains and is evaluated by choosing for f the end-to-end vector R^{(α)}_{ee}(t) of chain α, while all M chains contribute to C_{ee}(t) as individual samples. For this vector-valued f the product in eqn (14) is a scalar product. Due to head–tail symmetry 〈R^{(α)}_{ee}〉 = 0, and thus the subtracted mean of f in eqn (14) vanishes in that case.
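A sketch of how such an OAF can be accumulated from a stored trajectory (array shapes and names are ours; the product below is the scalar product mentioned above, and no mean is subtracted since it vanishes by symmetry):

```python
import numpy as np

def end_to_end_oaf(traj, lags):
    """Normalized OAF C(t) = <R_ee(t0) . R_ee(t0 + t)> / <|R_ee|^2>, averaged
    over all chains and time origins; traj: (frames, chains, beads, 3)."""
    Ree = traj[:, :, -1, :] - traj[:, :, 0, :]    # end-to-end vector per chain
    norm = np.mean(np.sum(Ree * Ree, axis=-1))
    C = []
    for lag in lags:
        if lag == 0:
            C.append(1.0)
            continue
        dots = np.sum(Ree[:-lag] * Ree[lag:], axis=-1)
        C.append(float(np.mean(dots)) / norm)
    return np.array(C)

# usage: synthetic two-bead "chains" whose end-to-end vector decorrelates with
# memory a per frame, so C(lag) should come out close to a**lag
rng = np.random.default_rng(0)
a, frames, chains = 0.95, 5000, 30
R = rng.standard_normal((chains, 3))
traj = np.zeros((frames, chains, 2, 3))
for t in range(frames):
    traj[t, :, 1, :] = R
    R = a * R + np.sqrt(1.0 - a * a) * rng.standard_normal((chains, 3))
C = end_to_end_oaf(traj, lags=[0, 10, 50])
```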
u_{i} = γ̇(q_{i}·e^{y})e^{x} = κ·q_{i}, κ = γ̇ e^{x} ⊗ e^{y} (20)
The Irving–Kirkwood stress tensor^{81} subject to LEBC can be written as
(21) |
(22) |
(23) |
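As an illustration of how the xy component of such a stress tensor is assembled for a single configuration (sign conventions for stress versus pressure tensor vary in the literature; with the convention below the shear viscosity follows as η = −〈σ_xy〉/γ̇, and all names are our own):

```python
import numpy as np

def stress_xy(q, v, pairs, fpairs, V, shear_rate, m=1.0):
    """xy component of an Irving-Kirkwood-type stress for one configuration
    under planar shear with streaming velocity u_x = shear_rate * y.
    q, v: (N, 3) positions and velocities; pairs: (M, 2) index pairs with
    pair forces fpairs: (M, 3) acting on the first particle of each pair."""
    c = v.copy()
    c[:, 0] -= shear_rate * q[:, 1]           # peculiar velocity: subtract streaming
    kin = -np.sum(m * c[:, 0] * c[:, 1]) / V  # kinetic contribution
    i, j = pairs[:, 0], pairs[:, 1]
    rij = q[i] - q[j]
    vir = -np.sum(rij[:, 0] * fpairs[:, 1]) / V   # virial (configurational) part
    return kin + vir

# usage: a two-particle configuration with one interacting pair
q = np.array([[1.0, 2.0, 0.0], [0.0, 2.0, 0.0]])
v = np.array([[2.0, 1.0, 0.0], [3.0, -1.0, 0.0]])
pairs = np.array([[0, 1]])
fpairs = np.array([[0.0, 2.0, 0.0]])
sxy = stress_xy(q, v, pairs, fpairs, V=2.0, shear_rate=0.5)
```

Averaging σ_xy over snapshots and dividing by −γ̇ then yields a viscosity estimate of the kind reported in Fig. 4.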
Adjacent beads along the same polymer chain interact, in addition, via the finitely extensible nonlinear elastic (FENE) potential:
Overall, the total potential energy of the system is defined as
(24) |
A simple and popular choice of the weight function, as in ref. 58, was adopted in this article:
(25) |
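The pair interactions can be sketched as follows; the WCA (purely repulsive, cut-and-shifted Lennard-Jones) form and the FENE constants k = 30 and R0 = 1.5 are the conventional Kremer–Grest choices, stated here as assumptions rather than read off eqn (24) and (25):

```python
import numpy as np

def wca(r, eps=1.0, sig=1.0):
    """Purely repulsive Lennard-Jones (WCA): cut and shifted at 2^(1/6) sigma."""
    rc = 2.0 ** (1.0 / 6.0) * sig
    if r >= rc:
        return 0.0
    s6 = (sig / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6) + eps

def fene(r, k=30.0, R0=1.5):
    """FENE bond potential: diverges as r -> R0 (finite extensibility)."""
    assert r < R0, "FENE bond cannot be stretched beyond R0"
    return -0.5 * k * R0 * R0 * np.log(1.0 - (r / R0) ** 2)

def dpd_weight(r, rc=1.0):
    """A simple linear weight function w(r) = 1 - r/rc for r < rc, else 0."""
    return max(0.0, 1.0 - r / rc)

# spot checks: the WCA energy at r = sigma equals eps; the weight decays linearly
u_contact = wca(1.0)        # = eps = 1.0
w_half = dpd_weight(0.5)    # = 0.5
```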
A system (bead number density ρ_{d} = 0.84) consisting of M = 30 identical linear chains with N = 20 beads (unit mass m) on each chain was simulated, where the following parameter set was used: k_{B} = ε = r_{0} = m = 1 (defining reduced units) at T = 1 in reduced units. The thermal mass in PAdL was initially chosen as μ = 10. It should be noted that the simulated system is usually referred to as “unentangled” since N ≤ 85.^{85} Pre-equilibrated initial configurations were obtained using an existing hybrid approach:^{88} Initially, a mixture of phantom and excluded volume FENE chains was placed randomly into the simulation box at a density that exceeds the target density. A subsequent molecular dynamics algorithm with integration time step, force shape and force strength control was used to achieve a prescribed minimum distance (here 0.9) between all pairs of particles, while attempting to maintain local and global characteristics (such as the form factor) of the chain conformations. During this process the most inefficient chains were removed from the system, until the target density was reached. The initial momenta were independent and identically distributed (i.i.d.) normal random variables with mean zero and variance k_{B}T. Unless otherwise stated, the system was simulated for 1000 reduced time units in each case, but only the last 80% of the snapshots were collected to calculate the various quantities described in the preceding section.
Fig. 1 compares the configurational temperature (19) control for a variety of methods with a range of friction coefficients. Note that SVV and BAOAB are two different splitting methods of Langevin dynamics. It can be clearly seen that, while maintaining a similar accuracy control of the configurational temperature, the BAOAB method allows the use of much larger (at least doubled) stepsizes compared to the SVV method, especially in the large friction limit (γ = 40.5), where superconvergence (i.e., fourth order convergence to the invariant distribution, indicated by the dashed black line in the figure) was observed (as in ref. 50 and 69). All the other methods tested show second order convergence according to the dashed order line. This again illustrates the importance of optimal design of numerical methods. It is also interesting to note that the relative error slightly rises as we increase the friction coefficient for the SVV method whereas, for the BAOAB method, the relative error decreases.
Fig. 1 Double logarithmic plot of the relative error, i.e., the ratio between the absolute error of configurational temperature f_{T}(18) and the preset temperature T, against stepsize by using various numerical methods introduced in Section 2 with a variety of friction coefficients in a standard setting of polymer melts as described in Section 4.1. Note that the relative error of DPD and PAdL appears to show little dependence on the friction coefficients, thus the result of γ = 0.5 only is shown. The system was simulated for 1000 reduced time units in each case but only the last 80% of the snapshots were collected to calculate the static quantity f_{T}. Five different runs were averaged to further reduce the sampling errors. The stepsizes tested began at h = 0.005 and were increased incrementally by 10% until all methods became unstable. The horizontal solid black line indicates 1% relative error in sample mean (11) accuracy of configurational temperature, based on which the stepsizes for each method were chosen in equilibrium simulations, unless otherwise stated. Dashed black lines represent the second and fourth order convergence to the invariant distribution. |
While the relative error of both SVV and BAOAB methods depends on the friction coefficients in Langevin dynamics, the two pairwise thermostats (i.e., DPD and PAdL) appear to show little dependence on the friction coefficients. Although it seems that the DPD method is as accurate as SVV, the PAdL method is superior to both of them (even slightly better than BAOAB at low friction, i.e., γ = 0.5).
The accuracy and rates of convergence for each method (and for each observable) depend in a nontrivial way on stepsize and so we cannot expect to use the same stepsize for different numerical integrators. In performing comparisons, it is crucial to develop a careful procedure to quantify sampling convergence in relation to the accuracy desired. In our studies we use a fixed accuracy threshold to select the stepsize for each method (in equilibrium) and then, for this choice of stepsize, which will be different for each method, we use the configurational temperature SAC (see Section 3.1) to estimate the convergence rate. The detailed protocol is as follows:
1. Choose a suitable observable (for instance, the configurational temperature throughout this article);
2. Determine the stepsize, h, for each method by requiring an identical accuracy of the sample mean 〈f_{T}〉 (for instance, 1% relative error, marked as the horizontal solid black line, as shown in Fig. 1);
3. The number of samples, N_{s}, for each method is subsequently determined, since the total simulation time is kept fixed;
4. Approximate the SAC via eqn (16), which is based on a weighted-sum fitting of the normalized autocorrelation function;
5. Calculate the effective sample size, N_{s}/τ_{T}, for each method, which characterizes the sampling efficiency.
In order to measure the sampling efficiency of the various methods (three different values of the thermal mass, μ, of PAdL are included), we plot the normalized configurational temperature autocorrelation function (CTAF) and its corresponding fitted curve based on the weighted sum (15) in Fig. 2. The stepsize, corresponding to the ∼1% relative error case, for each method was determined with the help of the horizontal solid black line in Fig. 1. As can be seen from the normalized autocorrelation functions in Fig. 2, the samples of various methods decorrelate very differently. It is interesting to note that while both Langevin dynamics and DPD exhibit “monotonic-like” decays, PAdL (with a wide range of the thermal mass) oscillates, which we believe is due to the presence of the additional Nosé–Hoover control.
Fig. 2 Weighted sum fittings, employing (15), of the normalized configurational temperature autocorrelation function (14) by using various methods with fixed stepsizes shown in Table 1 in the low friction regime of γ = 0.5 with sample mean accuracy of ∼1% relative error in configurational temperature. Note that the horizontal axis is scaled to the reduced physical time, rather than the number of steps, for convenience. |
As can be seen from Table 1, the SAC (τ_{T}) of DPD is significantly larger than that of the alternatives. Moreover, the SAC of BAOAB is about half that of SVV, owing to the fact that BAOAB can use about double the stepsize of SVV and thus only about half as many samples are needed to achieve a similar sample mean accuracy of the configurational temperature. Although the SAC of PAdL with μ = 10 is already smaller than that of either Langevin dynamics or DPD, the SAC of PAdL can be decreased further by lowering the value of the thermal mass μ. This should not come as a great surprise, since μ determines how strongly the negative feedback loops (8) couple to the physical system. Therefore, the PAdL method offers direct control over the sampling efficiency, a capability the other methods lack.
Method | h | N_{s} | 〈f_{T}〉 | I | τ_{T} | N_{s}/τ_{T}
---|---|---|---|---|---|---
SVV | 0.005 | 160001 | 1.0105 | 0.4412 | 175.5 | 911.7 |
BAOAB | 0.01 | 80001 | 1.0134 | 0.4863 | 96.3 | 830.7 |
DPD | 0.004 | 200001 | 1.0093 | 1.1270 | 562.5 | 355.6 |
PAdL μ = 10 | 0.012 | 66668 | 0.9903 | 0.2703 | 44.1 | 1511.7 |
PAdL μ = 1 | 0.012 | 66668 | 0.9902 | 0.0959 | 15.0 | 4444.5 |
PAdL μ = 0.1 | 0.012 | 66668 | 0.9902 | 0.0249 | 3.2 | 20833.8 |
Since different stepsizes, which result in different sample counts (N_{s}) when the total simulation time is fixed, were used for different methods in order to achieve a similar sample mean accuracy, one should further compare the sampling efficiency by computing the “effective sample size” (N_{s}/τ_{T}) instead of just the SAC (τ_{T}), which corresponds to cases where N_{s} is identical in each method. One can see from Table 1 that the effective sample sizes of SVV and BAOAB are very close to each other, since they are just two different splitting methods of the same stochastic (Langevin) dynamics. While the effective sample size of DPD is roughly half that of Langevin dynamics (either SVV or BAOAB), that of PAdL with μ = 10 is over 50% larger than Langevin's. Further reducing the value of the thermal mass in PAdL leads to even more efficient sampling than the alternatives: the effective sample size of PAdL with μ = 1 is more than four times that of DPD and Langevin dynamics; with μ = 0.1, this increases to a factor of 20. There is a limit to how much μ can be reduced, however, without introducing numerical instability and thus requiring a smaller timestep. Note that although the thermal mass μ in PAdL has a strong influence on the sampling efficiency, the sampling accuracy appears to depend little on it (for instance, the long term behavior of PAdL over a wide range of thermal masses μ is almost indistinguishable in Fig. 1). Therefore, unless otherwise stated, μ = 0.1 will be used in subsequent comparisons.
Fig. 3 compares the relaxation of polymer chains in a melt under the different dynamics. In particular, we measured the orientational autocorrelation function (OAF) of the end-to-end vector of polymer chains defined in Section 3.3. It is believed that, for such a dense system, the long-time diffusion depends only on the interactions between beads and does not arise from the associated thermostat.^{53} Therefore, the reference decay was calculated by using Hamiltonian dynamics (i.e., switching off the thermostat, γ = 0). Since the OAFs obtained by SVV and BAOAB are almost indistinguishable, the SVV method was used for Langevin dynamics. (Note that in what follows, unless otherwise stated, Langevin dynamics was computed using the benchmark SVV method.)
It can be seen from Fig. 3 (left) that the OAF of Langevin dynamics depends strongly on the friction coefficient. To be more precise, the OAF starts to deviate (significantly) from the reference decay as we increase the friction coefficient. Although a relatively small friction (i.e., γ = 0.5 in this parameter setting) was suggested in ref. 53, chosen not only to minimize the effects of the Langevin thermostat but also to be large enough to stabilize the system in the long time limit, visible discrepancies were still observed in the case of γ = 0.5 in our numerical experiments. In stark contrast, the OAFs of the PAdL method for a wide range of friction coefficients are almost indistinguishable from the reference decay, as shown in Fig. 3 (right). Very similar behavior was also observed for the DPD method, which implies that the projection of both the dissipative and random forces (i.e., the thermostat) onto the line of centers (and thus the conservation of momentum) may play a role in preserving the correct relaxation behavior in the case of the pairwise thermostats. Since a relatively small friction has been widely used in the literature on polymer melts, in what follows we restrict our attention to comparing the various methods in the low friction regime of γ = 0.5.
The shear viscosity was extracted by using formula (22) outlined in Section 3.4 and plotted in Fig. 4. All methods appear to show similar behavior, except that the range of shear rates in DPD is greatly limited (that is, the largest usable shear rate in DPD is γ̇ = 0.08, compared with γ̇ = 1.2 in both Langevin and PAdL). Error bars were included, but they are relatively small (particularly in the large shear rate regime) and thus not visible.
Fig. 4 Comparisons of the computed shear viscosity in a standard setting of polymer melts as described in Section 4.1 against shear rate γ̇ by using various methods (in the low friction regime of γ = 0.5) with Lees–Edwards boundary conditions. The simulation details of the figure are the same as in Fig. 1 except that the same stepsize of h = 0.01 was used for all methods. Note that the error bars were included but are visible only at relatively small shear rates. Note also that we did short runs to highlight the deviations, while the errors decrease further and the viscosity reaches the Newtonian plateau at small shear rates upon increasing the length of the runs, as is well known from previous studies. |
In Fig. 5, we plot the evolution of the shear stress against the shear strain for PAdL and Langevin dynamics. While PAdL was perfectly stable for all runs performed, Langevin dynamics occasionally becomes unstable at a shear rate of γ̇ = 1; therefore, a smaller shear rate of γ̇ = 0.5 was used here instead in order to provide comparisons. As can be seen from the figure, for both methods, as the shear strain increases, the shear stress rises rapidly from zero to its maximum at a shear strain of around 2.5 before relaxing to its steady state, consistent with the results in Fig. 4. At startup the system tends to deform affinely and builds up stress before relaxation takes over. The larger the shear rate, the more pronounced the shear stress maximum, as the deformation remains closer to affine shear. During this phase, particles are forced into close proximity, resulting in strong Lennard-Jones repulsion and quickly producing enormous shear stress. At small rates, by contrast, no maximum is expected, and none was observed by us. Overall, the behavior is in good agreement with previous studies.^{89–92} We also investigated the evolution (including DPD whenever possible) over a wide range of shear rates and did not observe significant differences between the methods.
Fig. 6 plots the standard deviation in the computed shear viscosity over a wide range of shear rates by using various methods subject to Lees–Edwards boundary conditions. Once again, all methods behave very similarly. As we increase the shear rate, the standard deviation decreases, which is consistent with the observations in Fig. 4. Note that the DPD method appears only in the smallest shear rate case. In the regimes of small and moderate shear rates, the standard deviations of the methods are very similar. However, in the high shear rate (γ̇ = 1) case, Langevin dynamics has not only a visibly larger (≈60%) standard deviation but also a smaller range of usable stepsizes than the PAdL method.
Fig. 6 Double logarithmic plot of the standard deviation in the computed shear viscosity at the specified rates against stepsize by using various methods (in the low friction regime of γ = 0.5) in the presence of Lees–Edwards boundary conditions. The simulation details of the figure are the same as in Fig. 1 except the stepsizes tested began at h = 0.01. |
We further investigate in Fig. 7 the stepsize effects on the alignment angle of the polymer chains subject to LEBC at a relatively high shear rate of γ̇ = 1. It appears that while the absolute error of the alignment angle of PAdL remains below around 0.04 degrees, the corresponding error of Langevin dynamics is around six times larger. This clearly demonstrates the superiority of PAdL over Langevin dynamics in nonequilibrium simulations, especially at relatively high shear rates.
Fig. 7 For the polymer melts modeled using Lees–Edwards boundary conditions, the absolute error (in degrees) of the flow alignment angle (23) produced by the PAdL and Langevin algorithms (in the low friction regime of γ = 0.5) is plotted against the stepsize (in semi-log scale) for a shear rate of γ̇ = 1. The reference value of the alignment angle was obtained by using the PAdL method with a sufficiently small stepsize of h = 0.001 (20 different runs). The simulation details of the figure are the same as in Fig. 1 except that 20 different runs were averaged to further reduce the sampling errors. |
In terms of sampling accuracy in equilibrium simulations, we have observed that the BAOAB and PAdL methods outperform the SVV and DPD methods over a wide range of friction coefficients. We have also found that the OAF, which characterizes the orientational relaxation of the polymer chains, of both pairwise (momentum-conserving) thermostats (DPD and PAdL) perfectly matches the reference decay and depends only weakly on the friction coefficient over a wide range. The OAF of Langevin dynamics, by contrast, depends strongly on the friction coefficient; moreover, a clear discrepancy was observed even at the commonly used low friction of γ = 0.5. We have further developed a careful procedure to quantify the sampling efficiency of the various methods. By comparing the effective sample size, we found that PAdL substantially outperforms the alternatives, particularly with a relatively small thermal mass of μ = 0.1, for which, remarkably, about a twentyfold increase in the effective sample size was achieved in comparison with the alternative approaches (see Table 1).
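The effective sample size comparison rests on the SAC, τ(k_max) = 1 + 2 Σ_{k=1}^{k_max} ρ(k), where ρ(k) is the normalized autocorrelation of the observable (see footnote †). A minimal sketch, assuming a simple truncation at k_max rather than the paper's exact windowing procedure:

```python
import numpy as np

def effective_sample_size(x, k_max=None):
    """Effective sample size N_eff = N / tau, where the statistical
    autocorrelation count (SAC) is tau = 1 + 2 * sum_{k=1}^{k_max} rho(k)
    and rho(k) is the normalized autocorrelation of the time series x.
    A plain truncation at k_max is used here; in practice a windowing
    rule for choosing k_max is advisable.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    if k_max is None:
        k_max = N // 10
    xc = x - x.mean()
    var = np.dot(xc, xc) / N
    rho = np.array([np.dot(xc[:N - k], xc[k:]) / (N * var)
                    for k in range(1, k_max + 1)])
    tau = 1.0 + 2.0 * rho.sum()
    return N / tau
```

For uncorrelated samples τ ≈ 1 and N_eff ≈ N; strong positive correlation inflates τ and shrinks N_eff accordingly.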
We then focused on stability issues in nonequilibrium simulations. We have demonstrated that, at a stepsize of h = 0.01, the largest usable shear rate was around γ̇ = 0.08 for DPD, compared with γ̇ = 1.2 for both Langevin dynamics and PAdL, in the standard polymer-melt setting described in Section 4.1. Thus, in agreement with previous studies,^{61} DPD is not recommended for nonequilibrium simulations at relatively large shear rates, where the mean flow dissipation rates begin to overwhelm the thermostat, limiting its use in practice. Langevin dynamics and PAdL perform rather similarly at relatively low shear rates. For the investigation of even smaller rates, where the FENE polymer exhibits a Newtonian plateau in the shear viscosity, thermodynamically guided methods are more suitable.^{93,94} Nevertheless, we have illustrated that while both methods share a similar relation between shear stress and shear strain, Langevin dynamics performed unreliably at a relatively high shear rate of γ̇ = 1 for stepsizes of h = 0.01 and above: it not only produced a larger (by about 60%) standard deviation of the computed shear viscosity than PAdL, but also resulted in a significantly larger (about six times) absolute error in the flow alignment angle.
For each particle i, the step begins with half updates of the positions and momenta,
q^{n+1/2}_{i} = q^{n}_{i} + hv^{n}_{i}/2,
p^{n+1/4}_{i} = p^{n}_{i} + hF^{C}_{i}(q^{n+1/2})/2.
The pairwise stochastic momentum updates (which take the momenta from p^{n+1/4}_{i} to p^{n+2/4}_{i} and subsequently to p^{n+3/4}_{i}) enclose the update of the auxiliary variable,
ξ^{n+1} = ξ^{n} + hG(q^{n+1/2},p^{n+2/4})/2,
after which the step is completed, for each particle i, by
p^{n+1}_{i} = p^{n+3/4}_{i} + hF^{C}_{i}(q^{n+1/2})/2,
q^{n+1}_{i} = q^{n+1/2}_{i} + hv^{n+1}_{i}/2.
The remaining updates, in which the conservative force is evaluated at q^{n} and q^{n+1}, form a standard velocity Verlet step applied after the pairwise stochastic sweep has brought the momenta to p^{n+2/4}_{i}. For each particle i,
p^{n+3/4}_{i} = p^{n+2/4}_{i} + hF^{C}_{i}(q^{n})/2,
q^{n+1}_{i} = q^{n}_{i} + hv^{n+3/4}_{i},
p^{n+1}_{i} = p^{n+3/4}_{i} + hF^{C}_{i}(q^{n+1})/2.
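The deterministic kick–drift–kick skeleton shared by the schemes above can be sketched as follows; the `stochastic_kick` hook is a hypothetical stand-in for the pairwise dissipative/random momentum updates, which are not reproduced here:

```python
import numpy as np

def velocity_verlet_step(q, p, m, force, h, stochastic_kick=None):
    """One step of the deterministic kick-drift-kick (velocity Verlet)
    skeleton underlying the schemes above. `stochastic_kick`, if given,
    stands in for the pairwise dissipative/random momentum updates
    (e.g. a Shardlow-type sweep) applied before the conservative part.
    """
    if stochastic_kick is not None:
        p = stochastic_kick(q, p)          # p^n      -> p^{n+2/4}
    p = p + 0.5 * h * force(q)             # p^{n+2/4} -> p^{n+3/4}
    q = q + h * p / m                      # q^n      -> q^{n+1}
    p = p + 0.5 * h * force(q)             # p^{n+3/4} -> p^{n+1}
    return q, p
```

Without the stochastic hook this reduces to plain velocity Verlet, which conserves energy to O(h^2) over long runs, a quick sanity check for any implementation of the conservative part.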
Footnotes
† Note that the SAC is often referred to as the “integrated autocorrelation time”^{44,72,73} in the computational statistics literature. We avoid that term here because a time is a well-defined physical quantity, whereas the formula counts the steps of an iterative procedure. Since we are mostly interested in how quickly the samples decorrelate in terms of the number of steps, we use the word “count” in order to avoid confusion with physical time.
‡ It should be noted that MATLAB's “autocorr” function, despite its name, does not calculate the autocorrelation function (14); however, 1 + τ(k_{max}) is just two times the sum of the first 1 + k_{max} lags returned by “autocorr”.
This journal is © The Royal Society of Chemistry 2017