Ulf D.
Schiller
a,
Timm
Krüger
b and
Oliver
Henrich
*c
aDepartment of Materials Science and Engineering, Clemson University, 161 Sirrine Hall, Clemson, SC 29634, USA
bSchool of Engineering, The University of Edinburgh, Edinburgh EH9 3FB, Scotland, UK
cDepartment of Physics, SUPA, University of Strathclyde, Glasgow G4 0NG, Scotland, UK. E-mail: oliver.henrich@strath.ac.uk
First published on 21st November 2017
The deformability of soft condensed matter often requires the modelling of hydrodynamics to gain a quantitative understanding. This, in turn, calls for specialised methods that can resolve the multiscale nature of soft matter systems. We review a number of the most popular simulation methods that have emerged: Langevin dynamics, dissipative particle dynamics, multi-particle collision dynamics (sometimes also referred to as stochastic rotation dynamics), and the lattice-Boltzmann method. We conclude this review with a short glance at current compute architectures for high-performance computing and at community codes for soft matter simulation.
Nonlinear coupling mechanisms between different components of the physical model, such as the order-flow coupling in liquid crystals, occur frequently in soft matter. They often preclude accurate analytical solutions and make simulations preferable, or even outright indispensable. For a quantitative understanding of the response and dynamical behaviour it is crucial to find a suitable coarse-grained description that expresses a large number of degrees of freedom through a much smaller number of effective degrees of freedom whilst retaining the correct overall physical behaviour, thereby allowing the relevant time and length scales to be bridged. An example is flowing soft matter: quite generally, flow is of particular importance precisely because of the deformability of soft matter. Usually it can be assumed that the flow is incompressible and characterised by low Reynolds numbers. Hydrodynamic interactions, however, lead to long-ranged, collective interactions that are notoriously difficult to treat with analytical models.
These examples highlight only a few of the complications that arise when modelling soft matter. They have led to the formulation of specialised simulation methods, which are the focus of this tutorial review. Due to space limitations we can unfortunately cover only the most common and versatile methods. In the following, we present a short overview of these relatively recent developments, glance at some typical applications of these simulation methods, and make suggestions for further reading.
[eqn (1)]
〈Δri〉 = 0, (2)
[eqn (3)–(11)]
In this review, we therefore focus on mesoscopic methods that maintain a coarse-grained description of the solvent with explicit momentum transport. One can distinguish two broad classes of mesoscopic methods, namely particle-based and lattice models. Particle-based methods, such as dissipative particle dynamics (DPD) and multi-particle collision dynamics (MPC), represent the solvent by a system of interacting particles. The “coarse-graining” of the molecular details is achieved by implementing the interactions through collective collisions that satisfy the local conservation laws. Particle-based methods maintain a continuous phase space, and thermal fluctuations are inherently present. DPD is essentially a momentum-conserving version of the Langevin thermostat, and its algorithm is closely related to molecular dynamics based on Newton's equations of motion. In contrast, MPC is not developed as a time-discrete scheme for integrating a continuous equation of motion, but is based on discrete streaming and collision steps similar to Bird's direct simulation Monte Carlo (DSMC) method.10 Both DPD and MPC can be shown to satisfy an H-theorem.11–13 In addition, MPC can easily be switched from a micro-canonical to a canonical ensemble by augmenting the collision rule with a thermostat.14,15
Lattice models, such as finite-element models or the lattice Boltzmann method (LBM), represent the solvent by hydrodynamic fields on a discrete lattice. Thermal fluctuations can be reintroduced by means of stochastic collisions, if needed. In lattice models, the solvent viscosity (and other transport coefficients) is directly linked to the simulation parameters and can thus be set without the need for calibration. Moreover, the LBM can be rigorously derived from kinetic theory, which provides a systematic route to extending its applicability to, e.g., multiphase fluids or Knudsen flows.16,17
Both particle-based and lattice mesoscopic methods define a discrete dynamics that can be shown to reproduce Navier–Stokes hydrodynamics asymptotically. In this sense, the parameters of the coarse-grained model represent constitutive relations on the macroscopic level that can be tuned to reproduce the transport properties of the real physical system, cf. Section 1.2. Moreover, the methods allow implementation of boundary conditions and provide systematic means of coupling solute particles to the solvent medium. For these reasons, mesoscopic methods are perfectly suited to study complex phenomena in soft matter systems, both in and out of equilibrium.
[eqn (12)–(16)]
Arguably the most important dimensionless number characterising hydrodynamic flow is the Reynolds number that quantifies the ratio of inertial and viscous momentum transport. In simulations of soft matter and microflows, the Reynolds number is typically small such that nonlinear inertial effects do not play an important role. The flow velocity is usually a result of external fields, and hence the Reynolds number can be used to tune the parameters that determine the magnitude of the driving forces of the simulation in relation to the viscosity of the fluid.
In terms of standard dimensionless groups, the ratio of the Mach number and the Reynolds number is proportional to the Knudsen number that quantifies the importance of rarefaction effects that can occur in microflows and lead to deviations from continuum Navier–Stokes behaviour. Knudsen numbers beyond Kn ≳ 0.1 indicate that the flow is in the transition regime where non-continuum effects can become significant. The Péclet number quantifies the relative importance of convective and diffusive transport of solutes which is related to the importance of thermal fluctuations, i.e., for small Péclet numbers Brownian motion dominates hydrodynamic advection. In microflows, the ratio Pe/Re is typically very large and it may not be feasible to use a sufficiently large number of particles to reproduce the Péclet number. In such systems one can seek a compromise and simulate at a higher Reynolds number and/or lower Péclet number, as long as one stays in the correct hydrodynamic regime which should be carefully validated.
In order to determine the simulation parameters, it is also instructive to consider the time scales of interest.18 The main hydrodynamic time scales are the acoustic time scale τcs = L/cs, the viscous (kinematic) time scale τν = L2/ν, the diffusive time scale τD = L2/Ds, and the Stokes time scale τS = L/u. These time scales can be related to each other using dimensionless numbers
[eqn (17)]
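These relations between time scales and dimensionless groups are easy to check numerically. The following is a minimal Python sketch, with illustrative parameter values of our own choosing (not from the article), of the four time scales listed above and the standard dimensionless groups built from the same quantities:

```python
# Minimal sketch: hydrodynamic time scales and the dimensionless groups
# that relate them.  All numerical values below are illustrative only.

def time_scales(L, u, cs, nu, Ds):
    """The four hydrodynamic time scales discussed in the text."""
    return {
        "acoustic":  L / cs,        # tau_cs = L / c_s
        "viscous":   L**2 / nu,     # tau_nu = L^2 / nu
        "diffusive": L**2 / Ds,     # tau_D  = L^2 / D_s
        "stokes":    L / u,         # tau_S  = L / u
    }

def dimensionless(L, u, cs, nu, Ds):
    """Standard dimensionless groups built from the same quantities."""
    Re = u * L / nu     # inertial vs viscous momentum transport
    Ma = u / cs         # flow speed vs speed of sound
    Pe = u * L / Ds     # convective vs diffusive solute transport
    Kn = Ma / Re        # proportional to the Knudsen number (up to an O(1) factor)
    return {"Re": Re, "Ma": Ma, "Pe": Pe, "Kn": Kn}

# Ratios of time scales reproduce the dimensionless numbers:
# tau_nu / tau_S = Re,  tau_D / tau_S = Pe,  tau_cs / tau_S = Ma.
```

Choosing, say, a micron-sized object in water-like solvent (L = 1e-6 m, u = 1e-3 m/s, cs = 1.5e3 m/s, ν = 1e-6 m²/s, Ds = 1e-11 m²/s) immediately exposes the scale separation discussed above: Re = 1e-3, Ma ≈ 7e-7, while Pe = 100, so Pe/Re is very large.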
[eqn (18) and (19)]
The drag force depends on the relative separation rij = ri − rj and velocity vij = vi − vj of the beads,
FDij = −γωD(rij)(r̂ij·vij)r̂ij (20)
[eqn (21)]
Various schemes have been proposed to perform the time integration of eqn (18).26 The simplest integrator that can be considered a bare minimum requirement is based on a modified velocity-Verlet algorithm:
[eqn (22)]
ri(t + h) = ri(t) + hvi(t + h/2), (23)
[eqn (24) and (25)]
Note the square root of the time step h which appears in eqn (22) and (24) when the stochastic Wiener process is discretised. In DPD the velocities vi at the end of the time step depend on the drag forces FDij which in turn depend on the relative velocities. Hence, rather than just taking the drag forces based on the intermediate velocities at the half time step, a number of flavours of the DPD algorithm solve eqn (25) in a self-consistent manner. In its simplest form the drag forces are recalculated once using the velocities vi(t + h) as obtained in eqn (25) and are then used in a final update of the velocities at t + h. This improves the performance significantly.
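The scheme just described can be sketched in a few lines of Python. The parameter values, the linear weight function and the function names below are standard DPD choices and our own notation, not quantities taken from the article; the conservative force is omitted for brevity:

```python
# Minimal sketch of one DPD time step with a modified velocity-Verlet
# scheme and one self-consistent recalculation of the drag, as described
# in the text.  Parameters (rc, gamma, kBT) are illustrative only.
import numpy as np

rc, gamma, kBT = 1.0, 4.5, 1.0
sigma = np.sqrt(2.0 * gamma * kBT)             # fluctuation-dissipation relation

def pair_forces(r, v, h, rng):
    """Drag + random DPD forces on each bead (conservative part omitted)."""
    F = np.zeros_like(r)
    n = len(r)
    for i in range(n):
        for j in range(i + 1, n):
            rij = r[i] - r[j]
            d = np.linalg.norm(rij)
            if d >= rc or d == 0.0:
                continue
            e = rij / d
            w = 1.0 - d / rc                   # weight w^R(r); w^D = (w^R)^2
            FD = -gamma * w * w * np.dot(e, v[i] - v[j]) * e
            # dividing by sqrt(h) here yields the sqrt(h) scaling of the
            # discretised Wiener process once the force is multiplied by h
            FR = sigma * w * rng.standard_normal() / np.sqrt(h) * e
            fij = FD + FR
            F[i] += fij                        # Newton's third law:
            F[j] -= fij                        # momentum is conserved pairwise
    return F

def dpd_step(r, v, h, rng, m=1.0):
    """One modified velocity-Verlet step."""
    F = pair_forces(r, v, h, rng)
    v_half = v + 0.5 * h * F / m               # half kick
    r_new = r + h * v_half                     # drift, cf. eqn (23)
    F_new = pair_forces(r_new, v_half, h, rng)
    v_new = v_half + 0.5 * h * F_new / m       # provisional velocities
    F_new = pair_forces(r_new, v_new, h, rng)  # recompute drag once, cf. eqn (25)
    v_new = v_half + 0.5 * h * F_new / m
    return r_new, v_new
```

For brevity this sketch redraws the random numbers at every force evaluation; a careful implementation draws the random variates once per time step and recomputes only the drag part in the self-consistent update. The pairwise antisymmetric forces make total momentum conservation manifest.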
The local and pairwise interactions in DPD fulfil Newton's third law, conserve momentum and angular momentum, guarantee Galilean invariance and yield hydrodynamic conservation laws on larger length scales owing to the particle-based nature of the algorithm. A version of DPD with energy conservation can also be formulated.27 Together with a modified predictor-corrector algorithm for the integration22 or a self-consistent velocity-Verlet algorithm,28 DPD permits the use of larger time steps than atomistic MD modelling.
Because pairwise interactions have to be calculated, DPD is computationally relatively expensive and “slower” than other methods, such as multi-particle collision dynamics or the lattice-Boltzmann method. Another disadvantage of DPD is that momentum transport is tightly coupled to particle transport and the Schmidt number is typically very low. However, schemes for arbitrarily large Schmidt numbers have been presented.32
Further details on DPD can be found in the reviews by Nielsen,33 Moeendarbary23,24 and Liu.34
Another application area is electrohydrodynamic effects such as electro-osmosis and electrophoresis.36 The method was recently used to model slip boundary conditions close to hydrophobic surfaces. Interestingly, it was shown that both confinement and mobility of the surface charges have a dramatic effect on the hydrodynamic properties of the electric double layer and the electro-osmotic flow.37
DPD has been employed to simulate the dynamics and rheology of red blood cells (RBCs) suspended in blood plasma, cf.Fig. 1.38,39 These cells are essentially biconcave viscoelastic shells (membranes) made of a lipid bilayer and a cytoskeleton, filled with a Newtonian haemoglobin solution. Numerically, all fluid and solid components are discretised as DPD particles: the external blood plasma with a viscosity of about ηex ≈ 1.2 mPa s, the internal haemoglobin solution (ηin ≈ 6 mPa s), and the RBC membrane as a collection of DPD particles subject to additional viscoelastic forces. The membrane particles at locations {ri} form a triangular mesh and have the potential
V({ri}) = Vplane + Vbend + Vsurf + Vvol (26)
Fig. 1 Visualisation of aggregation of red blood cells. Simulated reversible rouleaux are formed by a low-dimensional model (upper) and multiscale model (lower). The left column corresponds to low shear rates, the centre column to moderate shear rates, and the right column to high shear rates.38 Copyright (2011) National Academy of Sciences.
ri(t + h) = ri(t) + hvi(t) (27)
vi(t + h) = uC(t) + Δi(t) (28)
Fig. 2 The basic algorithm of multi-particle collision dynamics consists of successive streaming and collision steps. The cell grid is shifted randomly every time step to restore Galilean invariance.12,41 |
The collision cells introduce an artificial reference frame and thus break Galilean invariance. If the mean free path is smaller than the cell size a, repeated collisions between the same particles lead to correlations, and the transport coefficients become dependent on imposed flow fields. In order to restore Galilean invariance, a random shift of the cell grid is performed before each collision step.12,41 In practice, the shift can be performed by moving all particles by a random vector s with components uniformly distributed in [−a/2,a/2] before the collision step and back after the collisions. The grid shift promotes momentum transfer between the cells and thus can lead to additional correlations in the transport coefficients.12,41 Huang et al.42 have carefully analysed the velocity correlations and the characteristic scales over which the correct hydrodynamic correlations and the long-time tail emerge.
The original multi-particle collision algorithm is referred to as stochastic rotation dynamics (SRD). It consists of a random rotation of the relative velocities vi,C = vi − uC of the particles in a cell11
ΔSRi(t) = v‖i,C(t) + v⊥i,C(t)cos(α) + (v⊥i,C(t) × â)sin(α) (29)
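The streaming step, random grid shift and stochastic rotation collision combine into a very compact algorithm. The following is a hedged 2D Python sketch (our own notation and parameter choices, not from the article); in two dimensions the random rotation reduces to a rotation by +α or −α:

```python
# Minimal 2D sketch of one MPC-SRD step: streaming, random grid shift
# (to restore Galilean invariance), and stochastic rotation of the
# relative velocities in each cell.  Cell size a, periodic box of
# n_cells x n_cells cells; all parameter values are illustrative.
import numpy as np

def srd_step(r, v, h, a=1.0, alpha=np.pi / 2, n_cells=8, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    L = a * n_cells
    r = (r + h * v) % L                        # streaming step
    s = rng.uniform(-a / 2, a / 2, size=2)     # random grid shift
    cells = np.floor(((r + s) % L) / a).astype(int)
    idx = cells[:, 0] * n_cells + cells[:, 1]  # flat cell index
    for cell in np.unique(idx):
        mask = idx == cell
        uC = v[mask].mean(axis=0)              # cell centre-of-mass velocity
        th = alpha * rng.choice([-1.0, 1.0])   # random sense of rotation
        R = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        # collision: new velocity = u_C + rotated relative velocity
        v[mask] = uC + (v[mask] - uC) @ R.T
    return r, v
```

Because the rotation acts only on the relative velocities, both the momentum and the kinetic energy of each cell are conserved exactly, which is the defining property of the micro-canonical SRD collision.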
[eqn (30)]
A drawback of both MPC-SR−a and MPC-AT−a algorithms is that they generate a non-symmetric stress tensor and hence do not conserve angular momentum. This can be mitigated by imposing angular momentum conservation as a constraint, which leads to an additional term in the collision rule
[eqn (31) and (32)]
These coarse-grained fields are the slow variables of the discrete-time dynamics that exhibit hydrodynamic behaviour on macroscopic time and length scales. The corresponding transport coefficients emerge from the micro-scale transport during streaming and collisions and hence contain both kinetic and collisional contributions. There are two possible routes to derive transport coefficients of the MPC fluid. The first uses a projection-operator formalism and relates the transport coefficients to equilibrium fluctuations of the hydrodynamic fields.45–47 In the second approach, transport coefficients are determined through analysis of non-equilibrium steady-state situations. By virtue of the fluctuation–dissipation theorem, the linear response of the system to imposed gradients allows one to calculate transport coefficients identical to those obtained from equilibrium fluctuations.43,48,49
The latter approach leads to generic expressions for the diffusion coefficient,
[eqn (33)–(37)]
A similar approach can be used to impose boundary conditions on suspended objects such as colloidal particles. A fluid particle that collides with the colloid acquires a new velocity where the normal and tangential components are drawn from Maxwellian-like distributions.52 In dense suspensions, there may be multiple collisions with more than one colloid (or a wall) during a time step, and repeated scattering is necessary to avoid an artificial depletion interaction between the colloids. The thermal wall approach is effective for suspended objects that are much larger than the mean free path λ.
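The stochastic reflection just described can be sketched as follows. This is a hedged illustration in our own notation (parameter values and function names are not from the article): the velocity component along the outward surface normal is drawn from a Rayleigh distribution, the tangential components from a Maxwell–Boltzmann (Gaussian) distribution, all relative to the local surface velocity:

```python
# Sketch of a thermal ("Maxwellian-like") reflection rule for a fluid
# particle hitting a wall or colloid surface.  kBT and m are in
# simulation units; all values are illustrative.
import numpy as np

def thermal_reflection(normal, u_surf, kBT=1.0, m=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)                     # outward unit normal
    vth = np.sqrt(kBT / m)                     # thermal velocity scale
    # normal component: Rayleigh-distributed flux velocity (inverse CDF)
    vn = vth * np.sqrt(-2.0 * np.log(rng.uniform(1e-12, 1.0)))
    # tangential components: Gaussian, projected off the normal
    vt = vth * rng.standard_normal(3)
    vt -= np.dot(vt, n) * n
    return u_surf + vn * n + vt
```

The reflected particle always leaves the surface (positive normal component), and on average the rule thermalises the fluid at the surface temperature, which is what enforces the no-slip condition in a statistical sense.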
For suspended objects with internal degrees of freedom, such as polymers or cells, a different coupling approach can be used. The particles are typically updated by a molecular dynamics scheme that takes into account intra- and inter-molecular forces. For the coupling to the fluid, the particles or monomers are considered point-like and interact with the fluid particles by means of the MPC collisions. The momentum exchange between the fluid particles and the monomers can be included as a force in the molecular dynamics update. In order to accurately capture the hydrodynamic interactions, the average number of monomers per collision cell should be smaller than 1, and the monomers should be neutrally buoyant, i.e., the monomer mass mm should be on the order of the solvent mass per cell, ncm. Typically, the size a of the MPC cells has to be on the order of the bond length of the polymer. The MPC point-particle coupling has been used extensively to simulate polymeric and colloidal suspensions.40
Repulsive interactions between different species of a binary mixture can be achieved in a similar manner as for excluded volume interaction between sub-cells. However, instead of exchanging momentum between the entire sub-cells, only collisions between particles of type A and B are taken into account. Additional MPC collisions at the cell level are incorporated for momentum exchange between particles of the same type A or B, which allows the overall viscosity to be tuned.53–55 The mixture model can be extended to amphiphilic and ternary fluids and allows the phase behaviour of complex fluids to be studied on large time scales. An alternative approach to binary mixtures is the introduction of a colour charge, similar to colour models in the LBM. The collision process takes into account the concentration of the species by exchanging momentum such that the colour-weighted momentum in each collision cell points in the direction of the colour gradient. While it can be used in two and three dimensions, the colour model does not include thermal fluctuations of the order parameter. Nonetheless, it has been shown that this model leads to phase separation, satisfies the Laplace equation and can be extended to simulate ternary mixtures.56–59
Viscoelastic behaviour can be modelled within MPC by using dumbbell springs instead of point particles. The MPC algorithm can still be performed in the usual manner, where in the streaming step the centre of mass of the dumbbells moves ballistically while the relative coordinates are updated according to the bond interaction. The collision step is applied to the end-points of the dumbbells and proceeds in the usual way for the various collision rules. An MPC fluid consisting of harmonic dumbbells can capture the orientation and elongation in shear flow and thus reproduces the viscoelastic behaviour of a Maxwell fluid.60 However, due to the possible infinite bond extension, harmonic dumbbells do not reproduce non-equilibrium properties such as shear-thinning. If FENE dumbbells are employed instead, the proper behaviour corresponding to a dilute polymer solution is found.61 Another alternative is to introduce a constraint of constant mean square bond length where the equilibrium value corresponds to Gaussian chains.62 The dumbbell fluid exhibits shear thinning, where at large shear rates ηb ∼ Wi−2/3 with the Weissenberg number Wi = kBT/(2DK0). Moreover, due to hydrodynamic interactions, the fluid exhibits a non-vanishing second normal stress with the same shear rate dependence.62
MPC can also be applied to non-equilibrium flows where analytical theories are still limited due to the lack of applicable variational principles. One particular example of a driven dissipative system is microfluidic droplets in a Hele-Shaw geometry. A train of microfluidic droplets can be modelled within 2D-MPC using a frictional coupling on disc-like droplets.69,70 The results confirm quantitatively that a far-field approximation of the dipolar hydrodynamic interactions remains valid even at high densities.69 Moreover, a confinement-induced coupling of longitudinal and transverse oscillations was discovered where the longitudinal motion of the droplets is induced by the boundary conditions of the flow field.69,70 In the presence of thermal fluctuations, the droplet oscillations exhibit instabilities that were also observed in simulations.70 MPC thus paves the way to a systematic investigation of the governing principles of non-equilibrium systems using well controlled model systems, which at the same time promises to aid the development of novel methods in statistical physics.
Another area of high interest where timely contributions have been made using MPC is the hydrodynamics of swimmers.71,72 Swimming behaviour can be produced by a number of mechanisms which differ in the characteristics of the flow field they produce and, consequently, the governing hydrodynamic interactions. For example, peritrichous bacteria use rotating helical flagella for self-propulsion, cf.Fig. 3. The mechanism of synchronisation and bundling of flagella has been studied using MPC.68,73,74 The dependence of characteristic times for synchronisation and bundling on the number of flagella, separation, and motor torque was found to be governed by hydrodynamic interactions. The swimming properties of a flagellar model bacterium consisting of a spherocylindrical body have also been simulated using MPC.74 These simulation models can be extended to study swimming near surfaces where confinement effects and non-equilibrium effects are expected to play an important role, similar to the microfluidic droplet systems described above.
Fig. 3 Synchronisation and bundling of helical flagella are governed by hydrodynamic interactions and can be modelled using an MPC fluid. Reproduced from ref. 68 with permission from the Royal Society of Chemistry. |
MPC is uniquely suited to studying inhomogeneous systems with temperature gradients. The investigation of thermodiffusion and thermophoresis using MPC was pioneered by Ripoll and co-workers.75–81 The flow field induced by thermophoretic motion of a colloidal particle in an MPC fluid was found to be Stokeslet-like for a fixed particle and dipolar for a freely drifting particle, cf.Fig. 4.75 The study of thermophoretic motion using MPC has inspired a number of applications, for instance, the induced flow field around a fixed colloidal particle gives rise to a net solvent flow which can be exploited as a microfluidic pump.75 Another interesting application is the case where the temperature gradient is generated locally by a solute particle, e.g., by inhomogeneous light absorption or inhomogeneous chemical reactions. This idea was subsequently used to design a thermophoretic nanoswimmer consisting of two linked monomers and self-phoretic Janus particles.76,78 Rotational motion due to thermophoresis was first demonstrated in MPC simulations of a micro-gear,79 and subsequently for a micro-turbine with anisotropic blades80 and a catalytic micro-rotor driven by diffusiophoresis in a concentration gradient.81
Fig. 4 Flow field around a freely drifting colloidal sphere in a temperature gradient. Reproduced from ref. 75 with permission from the Royal Society of Chemistry. |
Characteristic of the LBM is the full discretisation of time, configuration space and velocity space.89 The discretisations of configuration and velocity space are matched in such a way that the resulting algorithm becomes favourably simple and nearly local. These features lead to the high numerical efficiency and parallelisability that are today extensively exploited throughout the soft matter community.
fi(x + cih, t + h) = fi*(x,t) = fi(x,t) + Ωi(x,t) + Si(x,t) (38)
Fig. 5 The lattice Boltzmann equation, eqn (38), can be understood as subsequent local collision and non-local propagation steps. Each arrow indicates the magnitude and direction of one of the nine distributions fi on a 2D lattice during each of the stages of the algorithm. Lattice nodes are shown as grey circles. |
The term Ωi in eqn (38) is the collision operator that describes the effective intermolecular interactions that modify the distribution function fi related to each lattice direction. It can be written in the general form
[eqn (39)–(41)]
The simplest choice of the eigenvalues is the single-relaxation-time (SRT) approximation λk = τ−1, known as the lattice-BGK collision model after Bhatnagar, Gross and Krook (BGK). The relaxation time τ is related to the kinematic viscosity ν via ν = (τ − h/2)cs2 with the speed of sound cs. The multiple-relaxation-time (MRT) model offers a larger number of free parameters than BGK; these parameters can be used to increase the accuracy and stability of the algorithm. MRT also allows different shear and bulk viscosities. Other collision operators are the so-called two-relaxation-time (TRT) model93 or the entropic LBM.94
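The collide-and-stream structure of eqn (38) with a BGK collision operator fits in a few lines of Python. The sketch below uses the standard D2Q9 velocity set, weights and second-order equilibrium (cf. eqn (41)); these are textbook choices, not values reproduced from the article, and lattice units with h = 1 are assumed:

```python
# Minimal D2Q9 lattice-BGK sketch: moments, equilibrium, BGK collision,
# periodic propagation.  Lattice units, time step h = 1.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # velocity set
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # lattice weights
cs2 = 1.0 / 3.0                                         # speed of sound squared

def equilibrium(rho, u):
    """Second-order equilibrium distribution on an (nx, ny) lattice."""
    cu = np.einsum('id,xyd->xyi', c, u)                 # c_i . u
    uu = np.einsum('xyd,xyd->xy', u, u)[..., None]      # u . u
    return rho[..., None] * w * (1 + cu / cs2
                                 + cu**2 / (2 * cs2**2) - uu / (2 * cs2))

def lbm_step(f, tau):
    """One collision + propagation step of eqn (38), periodic boundaries."""
    rho = f.sum(-1)                                     # zeroth moment
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None] # first moment / rho
    f += (equilibrium(rho, u) - f) / tau                # BGK collision
    for i in range(9):                                  # propagation
        f[..., i] = np.roll(f[..., i], tuple(c[i]), axis=(0, 1))
    return f
```

Since the equilibrium carries the same density and momentum as the incoming distributions, the collision conserves mass and momentum at every node, and periodic streaming conserves them globally.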
The source term Si in eqn (38) is related to the local volumetric force F through95
[eqn (42)]
The density ρ, momentum density ρu, and momentum flux Π of the fluid are the zeroth, first, and second moments, respectively:86,92
ρ = Σi fi, ρu = Σi fici, Π = Σi ficici (43)
The momentum flux Π = Σ + σ contains the Euler stress Σ = ρuu + pI and the deviatoric stress tensor σ = ρν[∇u + (∇u)T]. The link between the LBE (eqn (38)) and the macroscopic Navier–Stokes equation can be established through a Chapman–Enskog expansion.83,86,92
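The moments of eqn (43) are plain sums over the discrete velocity set. The following single-node sketch (standard D2Q9 constants assumed; our own function names) also verifies that the second moment of the equilibrium distribution reproduces the Euler stress Σ = ρuu + pI quoted above, with p = ρcs2:

```python
# Hydrodynamic moments of a D2Q9 distribution at one lattice node,
# cf. eqn (43).  Standard D2Q9 constants; lattice units.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0

def equilibrium(rho, u):
    """Second-order equilibrium at a single node (rho scalar, u 2-vector)."""
    cu = c @ u
    return rho * w * (1 + cu / cs2 + cu**2 / (2 * cs2**2) - u @ u / (2 * cs2))

def moments(f):
    """Zeroth, first and second velocity moments of the distributions."""
    rho = f.sum()                              # density
    mom = f @ c                                # momentum density rho*u
    Pi = np.einsum('i,ia,ib->ab', f, c, c)     # momentum flux tensor
    return rho, mom, Pi
```

For the equilibrium distribution the momentum flux reduces exactly to the Euler part ρuu + ρcs2 I; the deviatoric stress σ only appears once the distributions depart from equilibrium.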
A direct consequence of the discreteness of the underlying lattice is that Galilean invariance is broken in the LBM, which manifests itself as O(u2) errors in the viscosity. Hence the LBM is valid in the incompressible limit which corresponds to low Mach numbers. It is also possible to devise mitigating schemes.96,97 They tend, however, to take away some of LBM's simplicity.
In the standard LBM algorithm, hydrodynamic flow occurs without thermal fluctuations. A scheme of fluctuating LBM can be devised which satisfies a fluctuation–dissipation theorem.98 The thermalisation of the so-called ghost modes, the additional degrees of freedom that are not directly related to the ten hydrodynamic observables, leads to equipartition of the fluctuating energy on all length scales. This scheme has been recently extended from single phase fluids to binary mixtures.99
The bounce-back (BB) boundary conditions100 take direct advantage of the kinetic nature of LBM. As such, there is no counterpart in conventional Navier–Stokes solvers, showing the advantage of LBM for applications with complex geometries. A post-collision population fi* initially moving from a fluid site x to a solid site x + cih reverts its direction (denoted by cī = −ci) during the BB procedure, only to reach its origin at the end of the propagation step:
fī(x, t + h) = fi*(x,t) (44)
BB leads to a no-slip boundary condition at a plane half-way between the fluid and the solid nodes with second-order accuracy.101 Despite its staircase-like discretisation that can lead to numerical artefacts, the BB method is stable, simple to implement and computationally efficient. It is often employed for colloidal particles and porous media.102,103 More sophisticated boundary conditions exist that allow boundaries at other locations than half-way between lattice nodes, for example interpolated BB104 or multireflection.105 We refer to a comprehensive review92 for further information on fluid-structure interactions for soft matter applications.
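The half-way bounce-back rule of eqn (44) can be sketched directly on a D2Q9 lattice. In the sketch below (our own notation; standard D2Q9 velocity set; periodic boundaries away from the obstacle), a post-collision population that would land on a solid node is returned, reversed, to its origin:

```python
# Half-way bounce-back on a D2Q9 lattice, cf. eqn (44).  `solid` is a
# boolean obstacle mask; everything else streams periodically.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])    # index i-bar with c_ibar = -c_i

def stream_bounce_back(f_star, solid):
    f_new = np.empty_like(f_star)
    for i in range(9):                          # ordinary propagation
        f_new[..., i] = np.roll(f_star[..., i], tuple(c[i]), axis=(0, 1))
    for i in range(9):
        # fluid nodes whose neighbour at x + c_i is solid:
        hits = np.roll(solid, tuple(-c[i]), axis=(0, 1)) & ~solid
        # the population that would have entered the solid reappears at
        # its origin with reversed direction, f_ibar(x, t+h) = f_i*(x, t)
        f_new[hits, opp[i]] = f_star[hits, i]
    return f_new
```

Because every population either streams to another fluid node or is bounced back to its origin, the total mass on the fluid nodes is conserved exactly, which is one reason for the robustness of the BB rule in porous-media applications.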
In order to model immersed soft objects, such as polymers106 and biological cells,107 the immersed boundary method (IBM)108 is commonly used. The basic idea of the IBM is to introduce a Lagrangian mesh that captures the shape of the immersed object and can move relative to the Eulerian lattice of the fluid. The mesh and the lattice are coupled through velocity interpolation (Eulerian to Lagrangian) and force spreading (Lagrangian to Eulerian) steps. It is possible to equip the Lagrangian objects with suitable elastic properties, such as dilation, shear and bending resistance.
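The two IBM coupling steps can be sketched with a simple kernel. The 2D sketch below uses a bilinear ("hat") kernel for brevity; Peskin's smoother 4-point kernel is the more common choice in practice, and all function names here are our own:

```python
# IBM coupling sketch: interpolate the Eulerian velocity to a Lagrangian
# marker, and spread the marker force back onto the grid (the adjoint
# operation).  Lattice spacing 1, periodic boundaries.
import numpy as np

def hat(x):
    """1D hat kernel with support |x| < 1."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def interpolate(u_grid, X):
    """Velocity at Lagrangian position X from grid field u_grid[nx, ny, 2]."""
    nx, ny, _ = u_grid.shape
    i0, j0 = int(np.floor(X[0])), int(np.floor(X[1]))
    U = np.zeros(2)
    for i in (i0, i0 + 1):
        for j in (j0, j0 + 1):
            wgt = hat(X[0] - i) * hat(X[1] - j)   # kernel weights sum to 1
            U += wgt * u_grid[i % nx, j % ny]
    return U

def spread(F_grid, X, F):
    """Spread marker force F at position X onto the grid (adjoint of above)."""
    nx, ny, _ = F_grid.shape
    i0, j0 = int(np.floor(X[0])), int(np.floor(X[1]))
    for i in (i0, i0 + 1):
        for j in (j0, j0 + 1):
            wgt = hat(X[0] - i) * hat(X[1] - j)
            F_grid[i % nx, j % ny] += wgt * F
    return F_grid
```

Using the same kernel for interpolation and spreading makes the two operations adjoint, so the total force (and hence momentum exchange) is conserved by construction.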
Several studies have been published about the combination of thermal fluctuations and immersed objects, such as colloids or polymers. Ahlrichs and Dünweg106 added thermal fluctuations to the particle phase and restored momentum conservation by including opposite fluid forces. In contrast, Ollila et al.109 claim that the addition of Langevin noise to the particle phase can be avoided, an idea that is closer to Einstein's original notion of Brownian motion. There does not yet seem to be a consensus on the conditions under which the two approaches are equivalent or one of them becomes unphysical.
Several methods have emerged to date. The Shan–Chen (SC) model,110 for instance, is a phenomenological approach which is based on so-called pseudo-potentials that mimic the microscopic interactions between the constituents. For a fluid mixture with an arbitrary number of components, each component σ is subjected to the short-range interaction force
[eqn (45)]
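A minimal sketch of this interaction force for a single component reads as follows. The force F(x) = −g ψ(x) Σi wi ψ(x + ci) ci and the pseudo-potential ψ(ρ) = ρ0(1 − exp(−ρ/ρ0)) are the standard Shan–Chen choices from the literature; the parameter values g and ρ0 below are illustrative, not values from the article:

```python
# Shan-Chen pseudo-potential force sketch on a periodic D2Q9 lattice.
# g < 0 gives attraction and hence liquid-vapour coexistence.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def psi(rho, rho0=1.0):
    """Common pseudo-potential choice, bounded for large densities."""
    return rho0 * (1.0 - np.exp(-rho / rho0))

def shan_chen_force(rho, g=-5.0):
    """Interaction force F(x) = -g psi(x) sum_i w_i psi(x + c_i) c_i."""
    p = psi(rho)
    F = np.zeros(rho.shape + (2,))
    for i in range(1, 9):
        # psi at the neighbour x + c_i, periodic wrapping via roll
        p_nb = np.roll(p, tuple(-c[i]), axis=(0, 1))
        F += w[i] * p_nb[..., None] * c[i]
    return -g * p[..., None] * F
```

The force vanishes identically for a uniform density, and on a periodic lattice the pairwise nature of the interaction makes the total force sum to zero, so global momentum is conserved.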
An alternative approach is based on free energy (FE) models.111,112 In contrast to the SC method with its emergent surface tension properties, FE methods are based on a top-down approach, starting directly from the relevant target equation that describes the dynamics of the complex fluid. Therefore, as opposed to the SC model, thermodynamic consistency is straightforwardly realised in FE models. This is because the latter use the free energy functional as input, which means thermodynamic coupling terms between order parameters can be directly specified. Hence, FE models can be systematically extended to incorporate additional physical effects. They also provide better control of the thermodynamic state, an important aspect close to interfaces and boundaries. For example, the Landau free energy density for a binary mixture of two immiscible liquids with identical density ρ for both liquids can be written as113
[eqn (46)]
[eqn (47)]
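To illustrate the FE route, the following sketch assumes the common Landau form f = (A/2)φ2 + (B/4)φ4 + (κ/2)|∇φ|2 with A < 0 and B, κ > 0 (a standard textbook choice, not necessarily the exact form of eqn (46)). The chemical potential μ = Aφ + Bφ3 − κ∇2φ then follows by functional differentiation and vanishes in the two bulk phases φ = ±(−A/B)1/2:

```python
# Chemical potential of a Landau phi^4 free energy on a periodic grid
# (illustrative parameter values; lattice spacing 1).
import numpy as np

def laplacian(phi):
    """Second-order periodic finite-difference Laplacian."""
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)

def chemical_potential(phi, A=-0.0625, B=0.0625, kappa=0.04):
    """mu = A*phi + B*phi**3 - kappa * lap(phi); with A = -B the bulk
    phases sit at phi = +/- 1, where mu vanishes."""
    return A * phi + B * phi**3 - kappa * laplacian(phi)
```

Gradients of this chemical potential provide the thermodynamic force on the fluid, which is how FE models specify the coupling between order parameter and flow.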
More recently, hybrid approaches have emerged.114 In order to understand their novelty, it is necessary to realise that any additional equation of motion beside the Navier–Stokes equation that is involved in the dynamics of the complex fluid can be solved as well with an LBM-style method by defining additional sets of either scalar, vectorial or tensorial distribution functions. The corresponding observables are then obtained by taking moments of these distribution functions in just the same fashion. The equilibrium distributions are similar to eqn (41), but have different coefficients owing to the different target equation. A good example of this full-LB approach has been given for the dynamics of liquid crystals,115 where a partial differential equation for the tensorial order parameter, often referred to as the Q-tensor, needs to be solved on top of the Navier–Stokes equation. Full-LB approaches can feature increased stability compared to alternative methods. On the downside, however, they require more memory and more floating-point operations per time step.
Hybrid schemes do not define additional sets of distribution functions and use the LBM only for the Navier–Stokes part of the problem. Any additional partial differential equation is solved with finite-difference schemes. In the above example of liquid crystals and based on a D3Q19 LB model, this reduces the memory requirements by almost 79%: instead of 19 distributions for the Navier–Stokes equation and 5 × 19 distributions for the 5 independent components of the Q-tensor in a full-LB approach, a hybrid model uses only 19 distributions for Navier–Stokes and holds the 5 independent components of the Q-tensor directly rather than reconstructing them from moments of the distribution. This, of course, is an extreme example due to the tensorial nature of the order parameter.
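The memory comparison above is easily verified:

```python
# Per-node storage for a D3Q19 model with a 5-component tensor order
# parameter: full-LB stores 19 + 5*19 distributions, the hybrid scheme
# stores 19 distributions plus the 5 tensor components directly.
full_lb = 19 + 5 * 19        # 114 values per lattice node
hybrid = 19 + 5              #  24 values per lattice node
saving = 1 - hybrid / full_lb
print(f"{saving:.1%}")       # ~78.9%, i.e. "almost 79%"
```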
Like other models that involve phase boundaries or interfaces, both SC and FE models suffer from spurious currents in regions of large order parameter gradients. They can be reduced by higher-order isotropic schemes.116 We refer to the relevant literature85,86 for more details on multi-phase and multi-component mixtures.
With regard to multi-component systems like binary112 or amphiphilic mixtures119 of immiscible fluids, the LBM also has a number of advantages. The modelling of droplet coalescence or necking phenomena usually requires expensive interface-tracking algorithms that can trigger numerical instabilities. In the LBM, these systems can be modelled with two sets of LB distribution functions, or with one set of distributions and a level set in hybrid methods. The position of the interface is then simply defined by a value of the level set or by the isosurface where the densities of both fluids have the same value. These models have also been successfully applied to nanoparticle-stabilised emulsions (so-called Pickering emulsions and bijels, Fig. 6)118 and wetting phenomena.120
Fig. 6 3D visualisation of a bicontinuous interfacially jammed emulsion gel (bijel). Shown are the particles and the two fluids. The two pictures at the bottom depict the bicontinuity of the fluids and the attachment of the particles to the interface.118 Reprinted with permission from Physical Review E. Copyright (2011) by the American Physical Society.
Free energy models and their ability to express thermodynamic forces on solutes and solvents through gradients of chemical potentials come into their own when the composition and local structure of the complex fluid becomes increasingly intricate. This, for example, is the case in liquid crystals,115 where the anisotropic local order structure is described through a tensorial order parameter and gives rise to spatially varying rheological properties. Hybrid approaches, rather than full-LB schemes, have been used for large-scale mesoscopic simulations of 3D liquid crystalline structures at very low shear rates.121 Similar concepts have been used to study active liquid crystals and gels.114 They can be described through the same methodology with modified free energy functionals that account for the active components and additional forces.
Other interesting applications comprise charged soft matter systems and electrokinetic phenomena. An important step forward was the development of the link-flux method,122 by which the leakage of charge across interfaces and surfaces could be prevented. The scheme has been used in the mesoscopic study of the electrophoresis of charged colloids123 and polyelectrolytes.124 Recently, the scheme has been improved to reduce spurious currents.125
The so-called rise of the machines has indeed transformed HPC from an exotic niche technology into an indispensable tool for science and engineering. In fact, since the beginning of this millennium, the computing power of the number one facility in the twice-yearly published TOP500 list has increased by more than a factor of 40 000. With the advent of even more powerful graphics cards and many-core processors, an end to this astonishing development is not in sight.
At the same time, HPC architectures are rapidly diversifying and becoming more complex. This will make it even more difficult to program them and harvest their computational power in the future. This development began about a decade ago and is directly linked to the breakdown of so-called Dennard scaling. As the size and capacitance of integrated circuits decreased (and continues to decrease according to Moore's law), Dennard scaling also allowed them to operate at lower voltages. Hence, the power gains in terms of lower capacitance and operating voltage could be "invested" in a higher clock frequency, making processors ever faster whilst keeping the power consumption more or less constant. This design principle became unviable as leakage currents in the devices grew and the operating voltage could not be reduced any further. Faster processors can therefore only be created through more compute cores that run calculations in parallel.
In this section, we will glance at the latest developments in the area of accelerators and coprocessors and describe the requirements that these increasingly heterogeneous computing architectures pose to the programmer. Finally, we will name a few examples of production codes used for soft matter research.
At the time of writing, the latest generation of Nvidia's Pascal architecture-based Tesla P100 GPUs delivers a peak performance of around 5 TFLOPS (5 × 1012 double-precision floating-point operations per second) on 3584 CUDA cores at a power input of around 300 W, roughly that of 3–4 laptops. To put this into perspective, the fastest supercomputer in November 2000 was the ASCI White at Lawrence Livermore National Laboratory, which achieved about the same performance with a power input of 3 MW (and another 3 MW for cooling, adding up to an electricity bill of roughly $6 million p.a.). The Tesla P100 also features more than 500 GB/s of memory bandwidth and a unified memory space to which both GPU and CPU can point. This alleviates the burden of having to copy data back and forth between CPU (host) and GPU (device), a process that uses what is traditionally the slowest part of a computer and always came at significant cost.
However, in order to unleash the power of the GPU, algorithmic parallelism has to be fully exposed and mapped to the architectural parallelism of the hardware on which the code is deployed. The so-called task or thread-level parallelism (TLP) that modern accelerators offer through their hundreds or thousands of compute cores working in parallel is only one way to expose parallelism. Another concept, which dates back to the early days of vector processors, is known as data parallelism. The basic idea behind it is that one single instruction processes a chunk of similar data elements at the same time. Today, virtually all types of processors, including multicore CPUs, use advanced vector extensions (AVX), which are extensions to the x86 instruction set. These are also known as SIMD (single instruction, multiple data) instructions. Obtaining good performance on any kind of accelerator, coprocessor or many-core chip depends increasingly on whether data parallelism can be exploited. Quite often this necessitates a complete redesign of the memory layout and memory access pattern, which can be prohibitive in the case of big legacy codes.
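The contrast between element-wise and data-parallel processing, and the role of the memory layout, can be illustrated with a short sketch. Here NumPy stands in for SIMD hardware, since its whole-array operations on contiguous data are internally mapped onto vector instructions; the array names and sizes are purely illustrative.

```python
import numpy as np

n = 1000
rng = np.random.default_rng(42)

# Array-of-structures layout: particle coordinates interleaved row by row
aos = rng.random((n, 3))

# Structure-of-arrays layout: one contiguous array per coordinate,
# the pattern that vector units can stream through efficiently
x, y, z = aos[:, 0].copy(), aos[:, 1].copy(), aos[:, 2].copy()

# Scalar loop: one element per "instruction", no data parallelism
norms_loop = np.empty(n)
for i in range(n):
    norms_loop[i] = (aos[i, 0]**2 + aos[i, 1]**2 + aos[i, 2]**2) ** 0.5

# Data-parallel form: single expressions over whole contiguous arrays
norms_vec = np.sqrt(x * x + y * y + z * z)
```

Both variants compute the same distances; the point is that the second form exposes the parallelism (and the contiguous memory layout) that an AVX-capable chip needs, whereas the loop hides it.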
Intel's Xeon Phi "Knights Landing" (KNL), a many-integrated-core (MIC) processor, gets its performance from SIMD instructions on very large vector units. Hence, without exposing data parallelism, the performance will be noticeably degraded. In fact, data parallelism is probably even more crucial for MICs than for GPUs. That said, obtaining consistently good performance with these devices, GPU or MIC coprocessors alike, remains generally challenging when real-world problems rather than simplistic benchmarks are considered. Performance measurements gained with one and the same code can vary substantially depending on the individual scientific problem.
Nevertheless, MICs offer a clear advantage over GPUs in terms of portability. In order to port a code to the GPU, it is often necessary to rewrite large parts. This frequently results in more complex and much longer source code compared to its CPU counterpart. Once the investment has been made, the performance gain can be impressive, but the code can only be deployed on GPUs, or, when CUDA has been used, only on Nvidia GPUs.
Code written in standard programming languages such as C, C++ and Fortran using the most common abstractions for parallel programming, i.e., the message passing interface (MPI) for distributed memory or OpenMP and OpenACC for shared memory architectures, can be compiled for the Xeon Phi coprocessor. Hence, the code can run on a variety of architectures, ranging from multicore CPUs in laptops and workstations, through many-core processors and computing clusters, to supercomputers with on-node coprocessors. Although the same fundamental principles apply as for the GPU in order to obtain good performance (exposing sufficient data parallelism and providing a cache-coherent memory layout and access pattern), the entry barrier is much lower and the portability significantly higher. Moreover, performance improvements made for the Xeon Phi will equally benefit runs of the code on ordinary multicore CPUs, and vice versa.
A prime example of a community code is the large-scale atomic/molecular massively parallel simulator (LAMMPS),126 developed by Sandia National Laboratories, USA. LAMMPS is modular and relatively easy to extend, which makes it an attractive code to base individual research projects on. Its latest version features a vast number of models, particle types, force fields, ensembles and integrators. In fact, to our knowledge LAMMPS is the only code that contains an implementation of virtually all methods mentioned in this review. That said, each method naturally comes in a multitude of different flavours and modifications, and the LAMMPS implementation contains only some of these features. The MPC model, for instance, uses the stochastic rotation dynamics version, whereas the LB model is an implementation of Ollila's above-mentioned algorithm. The existing implementations form an excellent starting point for further development of LAMMPS, which is one of the reasons why new features are constantly added with every release. LAMMPS has specific strengths for coarse-grained modelling, making it suitable for soft matter research in general. The code shows very good performance up to millions of simulated particles and thousands of cores, and the number of multi-GPU- and many-core-enabled force fields and features is continuously growing.
Another highly versatile software package for performing many-particle molecular dynamics simulations, with special emphasis on the coarse-grained models used in soft matter research, is ESPResSo (Extensible Simulation Package for RESearch on SOft matter).127,128 It is commonly used to simulate systems such as polymers, colloids, ferrofluids and biological systems, for example DNA or lipid membranes. ESPResSo also contains a unique selection of efficient algorithms for treating Coulomb interactions. Recently, several grid-based algorithms, such as a lattice-Boltzmann solver and an electrokinetics solver, have been added.
Another interesting code is HOOMD-blue,129 a general-purpose multi-GPU-enabled molecular dynamics simulation toolkit from the University of Michigan, USA. Besides Brownian and Langevin dynamics, DPD and a number of ensembles, the hard-particle capabilities and a variety of shape classes for Monte Carlo simulations are particularly worth mentioning. Other important community codes are GROMACS130 and NAMD,131 which are popular in the biomolecular community. They are both highly optimised and perform exceptionally well in terms of parallel efficiency, but lack the general extensibility that, for instance, LAMMPS offers.
Excellent parallel performance is perhaps not trivial, but it is much easier to achieve for lattice Boltzmann codes. The algorithm requires only communication between nearest neighbours on a regular grid and is intrinsically parallel—unlike MD-type algorithms, where only the introduction of cutoffs, neighbour lists and sophisticated communication patterns renders the problem parallelisable. Nevertheless, a true LBM community code has so far not emerged. Both OpenLB,132 developed at the Karlsruhe Institute of Technology, Germany, and Palabos,133 the open source project developed in a collaboration between the University of Geneva and FlowKit Ltd Lausanne, Switzerland, probably come closest to this endeavour. waLBerla,134 developed by the University of Erlangen-Nürnberg, is another LBM code with particular strengths in solving partial differential equations that are coupled to the fluid flow. Another open source LBM code specifically for soft matter is Ludwig.135 It is being developed in a collaboration between different European project partners in the UK, France, Switzerland and Spain and features a wide range of mesoscopic models for complex fluids and active matter.
As noted in the introduction, the mesoscopic methods are based on conservation of mass, momentum and energy. They differ in the way the conservation laws are translated into a coarse-grained representation, resulting in different degrees of freedom and different equations of motion for each method. This also results in specific limitations and advantages in each case.
Dissipative Particle Dynamics (DPD), the method which is probably closest to traditional molecular dynamics, is an off-lattice method which has the capability to resolve hydrodynamic space and time scales significantly larger than those in traditional MD. The particles in DPD represent whole atoms or regions of solvent atoms and interact via effective forces which conserve momentum locally and deliver the correct hydrodynamic behaviour even for relatively low particle numbers. DPD integration algorithms differ slightly from conventional MD integrators and take into account specific symmetry requirements in order to fulfil local momentum or global energy conservation. DPD is particularly well suited for specific physicochemical interactions on the particle level, e.g., for systematic coarse-graining of solvation effects, heat conduction and convection in nano-materials, or the simulation of biological membranes, macromolecules or multiphase systems subject to different flow conditions and geometries.
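The effective pair forces just described can be sketched in a few lines. The following is a minimal illustration, not a production implementation: it uses the common linear weight function and standard literature values for the parameters, and the symmetry requirement mentioned above appears as the shared random number `theta` for each pair, which keeps the random force antisymmetric so that momentum is conserved locally.

```python
import numpy as np

def dpd_pair_force(r_i, r_j, v_i, v_j, a=25.0, gamma=4.5, kT=1.0,
                   r_c=1.0, dt=0.01, theta=0.0):
    """Total DPD force on particle i due to particle j: the sum of a
    conservative, a dissipative and a random contribution. `theta` is the
    Gaussian random number shared symmetrically by the pair (theta_ij =
    theta_ji), so that Newton's third law holds for the random force."""
    r_ij = r_i - r_j
    r = np.linalg.norm(r_ij)
    if r >= r_c:                         # forces vanish beyond the cutoff
        return np.zeros(3)
    e_ij = r_ij / r                      # unit vector along the pair axis
    w = 1.0 - r / r_c                    # common linear weight function
    sigma = np.sqrt(2.0 * gamma * kT)    # fluctuation-dissipation relation
    f_c = a * w * e_ij                                     # conservative
    f_d = -gamma * w**2 * np.dot(e_ij, v_i - v_j) * e_ij   # dissipative
    f_r = sigma * w * theta / np.sqrt(dt) * e_ij           # random
    return f_c + f_d + f_r
```

Note that all three contributions act along the connecting line e_ij, which is precisely what makes the total force antisymmetric under exchange of i and j and hence locally momentum-conserving.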
Multi-particle collision dynamics (MPC) implements discrete streaming and collision steps to update the positions and velocities of an ensemble of fluid particles. The molecular interactions are coarse-grained by sorting the particles into collision cells and applying a simplified collision rule to exchange momentum. The collision cells also function as averaging volumes to calculate local hydrodynamic fields. This makes MPC amenable to a kinetic description from which the hydrodynamic Navier–Stokes equations emerge. In this sense, MPC bridges between the particle view and the kinetic view of the fluid. The possibility to tweak the collision rule such that different thermodynamic ensembles can be reproduced makes MPC well suited to study transport phenomena under different ambient conditions. Heat transport due to temperature gradients and other non-equilibrium effects are important examples of the use of MPC.
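The streaming and collision steps can be made concrete with a short sketch of the stochastic rotation dynamics (SRD) variant of MPC in two dimensions. This is an illustrative reduction (all names and parameter values are ours, and the random grid shift needed for Galilean invariance is omitted for brevity): particles stream ballistically, are sorted into square collision cells, and their velocities relative to the cell mean are rotated by a fixed angle with random sense.

```python
import numpy as np

def srd_step(pos, vel, dt=0.1, box=10.0, a=1.0, alpha=np.pi / 2, rng=None):
    """One streaming + collision step of 2D SRD (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    pos = (pos + vel * dt) % box                 # streaming, periodic box
    ncell = int(box / a)
    cells = (pos // a).astype(int)               # sort into collision cells
    cell_id = cells[:, 0] * ncell + cells[:, 1]
    vel = vel.copy()
    for cid in np.unique(cell_id):
        idx = np.flatnonzero(cell_id == cid)
        u = vel[idx].mean(axis=0)                # cell-averaged velocity
        s = rng.choice([-1.0, 1.0])              # random rotation sense
        ca, sa = np.cos(alpha), s * np.sin(alpha)
        R = np.array([[ca, -sa], [sa, ca]])
        # Rotating only the velocities *relative* to the cell mean
        # conserves both momentum and kinetic energy in each cell.
        vel[idx] = u + (vel[idx] - u) @ R.T
    return pos, vel
```

The cell-averaged velocity `u` in the loop is exactly the local hydrodynamic field mentioned above, which is what makes the kinetic, Navier–Stokes-level description accessible.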
The lattice Boltzmann method (LBM) is a lattice-based scheme that discretises time, space and velocity space. The commonly chosen discretisation leads to a simple and nearly local algorithm that naturally lends itself to parallel simulations. Due to its kinetic nature, complex and moving boundary conditions in the LBM are relatively straightforward to implement, when compared to conventional Navier–Stokes solvers. Since LBM operates with averaged particle distributions, it is especially suited for non-Brownian problems, e.g., non-colloidal particle suspensions, although fluctuations can be re-introduced. LBM is a popular method for multi-phase or multi-component problems, e.g., droplet dynamics or water/oil flow in porous media. Furthermore, it is straightforward to couple additional physics with the LBM algorithm, which enables applications such as liquid crystal dynamics or electrophoresis.
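The simplicity and near-locality of the algorithm can be seen in a minimal sketch of a D2Q9 BGK scheme on a periodic grid (our own illustrative reduction, with a relaxation time chosen arbitrarily): collision is a purely local relaxation towards the discrete equilibrium, and streaming only touches nearest neighbours, which is what makes the method so amenable to parallelisation.

```python
import numpy as np

# D2Q9 lattice: discrete velocity set and the corresponding weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, u):
    """Second-order truncated Maxwell-Boltzmann equilibrium distribution."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w * rho[..., None] * (1 + 3 * cu + 4.5 * cu**2
                                 - 1.5 * usq[..., None])

def lbm_step(f, tau=0.8):
    """One BGK collision + streaming step on a fully periodic grid."""
    rho = f.sum(axis=-1)                                  # local density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]   # local velocity
    f = f + (equilibrium(rho, u) - f) / tau               # local collision
    for q in range(9):                                    # nearest-neighbour
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))  # streaming
    return f
```

Because collision conserves mass and momentum by construction and streaming merely moves populations between neighbouring sites, the total mass and momentum of the periodic system are conserved at every step.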
Interestingly, direct comparisons between the three above-mentioned methods are scarce—at least we are not aware of any. This is perhaps a knowledge gap that the computational soft matter community may want to address in the future. Nevertheless, it is possible to compare some more general aspects of each method.
A generic property of DPD is that both solute and solvent are modelled through coarse-grained particles that resemble each other closely. This characteristic feature makes the inclusion of specific solute–solvent interactions simple. These can also be modelled via effective coarse-grained force fields that describe the averaged atomistic or molecular interactions on a larger length and time scale. Both DPD and MPC are fully thermalised methods and connect natively to typical thermostatting algorithms like the Nosé–Hoover thermostat or Langevin dynamics. Whilst this is usually seen as an advantage, it allows only for fluctuating solutions of hydrodynamic problems. While fluctuations can also be incorporated into LBM, arguably in a slightly more complicated manner, LBM primarily provides quasi-noise-free, deterministic solutions. In the form of the Chapman–Enskog expansion, LBM offers a tool which allows all specific modifications of the microscopic dynamics to be related directly to a macroscopic target equation, providing quasi- or near-analytical insight into physical problems.
Mesoscopic modelling continues to be an area of active research, and multiphase fluids and non-equilibrium soft matter offer a range of interesting research problems. Moreover, we anticipate that mesoscale methods will play a central role in enabling extreme-scale applications on future HPC systems. We hope that this tutorial review will help the reader to choose the appropriate methods to address the mesoscale physics that is relevant in their research problem.
This journal is © The Royal Society of Chemistry 2018