
Modeling the potential energy surface by force fields for heterogeneous catalysis: classification, applications, and challenges

Chenglong Qiu a, Tore Brinck *b and Jiacheng Wang *a
aZhejiang Key Laboratory for Island Green Energy and New Materials, School of Materials Science and Engineering, Taizhou University, China. E-mail: jiacheng.wang@tzc.edu.cn
bDepartment of Chemistry, CBH, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden. E-mail: tore@kth.se

Received 12th April 2025, Accepted 20th October 2025

First published on 21st October 2025


Abstract

The concept of the potential energy surface (PES) in computational simulations is essential for studying material properties and heterogeneous catalytic processes. However, constructing the PES using quantum mechanical methods is computationally expensive and typically limited to small systems. Force field methods, which rely on quantum mechanical data, use simple functional relationships to establish a mapping between system energy and atomic positions or charges. Force field methods are more efficient for handling large-scale systems, such as catalyst structures, adsorption and diffusion of reaction molecules, and heterogeneous catalytic processes. To further promote in-depth research in this field, this review introduces the classification, development, and characteristics of various force field methods including classical force fields, reactive force fields, and machine learning force fields. It summarizes the forms, fitting methods, and distinct periods of these force field methods. Additionally, these force field approaches are compared in terms of their applicability, accuracy, efficiency, and fitting methods. Finally, the optimization and challenges of force field methods in constructing the PES are discussed. It is expected that this review will assist researchers in selecting and applying different force field methods more effectively to promote in-depth understanding of catalytic reaction mechanisms and the efficient design of catalysts.



Chenglong Qiu

Chenglong Qiu received his Bachelor's degree from Zhejiang Normal University and his PhD from Zhejiang University of Technology. He is currently affiliated with Taizhou University. His research focuses on multiscale simulations of catalytic mechanisms and material properties, integrating first-principles calculations with large-scale molecular dynamics simulations.


Tore Brinck

Tore Brinck is a Professor of Physical Chemistry at KTH Royal Institute of Technology, Stockholm. He received his PhD in Chemistry from the University of New Orleans in 1993. He has served as Head of the Department of Chemistry at KTH (2011–2017) and has made significant contributions to theoretical and computational chemistry. His research is focused on computational chemistry, particularly quantum chemistry, and involves method development and applications.


Jiacheng Wang

Jiacheng Wang (FRSC) is currently a full professor at Taizhou University, Zhejiang, China, and head of Zhejiang Key Laboratory for Island Green Energy and New Materials. He obtained his PhD from Shanghai Institute of Ceramics, Chinese Academy of Sciences (SICCAS) in 2007. From 2013 to 2023, he was a professor at SICCAS. His present research focuses on rational design and preparation of advanced functional materials for energy transformation and photoluminescence.


1 Introduction

Computational simulation has emerged as a powerful tool for investigating chemical reaction processes and the physical and chemical properties of materials.1–6 The potential energy surface (PES), based on the assumption that electron and nuclear motions can be separated, is a crucial concept in computational simulations, representing the total energy of a system in a certain state.7,8 The PES is widely applied in fields such as physics and chemistry, particularly in their theoretical subfields. From a geometric perspective, the energy landscape is the plot of the energy function over the configuration space of the system. It is used to explore the properties of atomic structures, such as determining the minimum energy configuration of a molecule or calculating reaction rates.9,10 Additionally, in dynamic simulations based on Newton's laws, the force Fi exerted on each atom must be known at each time step for numerical integration of the equations of motion.11 This force can be derived from the PES using the relation Fi = −∂E/∂ri, i.e., the force is the negative gradient of the potential energy E with respect to the atomic position ri. Forces are also used in geometry optimization to identify the structures of the system that correspond to critical points on the PES.12,13 For instance, a saddle point represents a transition state, the highest-energy point along the most energetically favorable path between reactants and products. The forward and reverse reaction energy barriers can be calculated as the energy differences between the saddle point and the two energy minima it connects. Therefore, the PES is an essential tool for analyzing reaction processes and predicting system evolution. The primary challenge lies in constructing the PES both efficiently and accurately.
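To make the link between the PES, forces, and dynamics concrete, the following minimal Python sketch (not taken from the review) computes the force on a particle as the negative gradient of a one-dimensional model PES and advances it by one velocity-Verlet step; the Morse-type parameters and units are arbitrary illustrative choices.

```python
import numpy as np

# Model PES: a 1D Morse potential E(r) = De*(1 - exp(-a*(r - r0)))**2.
# De, a and r0 are arbitrary illustrative values, not fitted to any real system.
De, a, r0 = 4.5, 2.0, 1.0

def energy(r):
    return De * (1.0 - np.exp(-a * (r - r0)))**2

def force(r, h=1e-5):
    # F = -dE/dr: the force is the negative gradient of the PES,
    # evaluated here by a central finite difference.
    return -(energy(r + h) - energy(r - h)) / (2.0 * h)

def velocity_verlet_step(r, v, m=1.0, dt=1e-3):
    # One integration step of Newton's equations using forces from the PES.
    f = force(r)
    r_new = r + v * dt + 0.5 * (f / m) * dt**2
    v_new = v + 0.5 * (f + force(r_new)) / m * dt
    return r_new, v_new

r, v = 1.2, 0.0          # start slightly stretched, at rest
for _ in range(5):
    r, v = velocity_verlet_step(r, v)
print(f"r = {r:.4f}, v = {v:.4f}, E = {energy(r):.4f}")
```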

Quantum mechanics (QM) and the force field method are the primary methods for constructing PESs (Fig. 1a). For smaller systems, PESs constructed by QM can accurately describe molecular properties, crystal structures and microscopic reactions. In recent years, QM-based simulation methods have gained increasing popularity, partly due to the development of software packages that facilitate the generation of PESs. For example, periodic density functional theory (DFT) codes such as VASP and CP2K are flexible for extended systems, albeit requiring external automation or scripting for reaction-path sampling.14,15 Codes such as Q-Chem and Gaussian offer built-in tools for PES scans along reaction coordinates.16,17 This increased availability has proven particularly useful in material design, where QM often serves as a theoretical guide and screening tool.18,19 However, the computational cost inherent to QM-level calculations severely limits the scalability of simulations.20 In QM, the electronic structure and energy of a system are determined by solving the Schrödinger equation (Fig. 1b), with the analytical solution available only for two-body systems, such as the hydrogen atom. For multi-atom systems, several methods (e.g., semiempirical wavefunction,21 density functional theory,22 and CCSD(T)23) have been developed to approximately solve the Schrödinger equation. Despite these approximations, obtaining a precise numerical solution remains a computationally demanding task. For instance, CCSD(T), a high-precision ab initio method for electron correlation, scales as N^7, with N being the number of atoms;24 doubling the system size therefore increases the cost by roughly 2^7 ≈ 128-fold. Consequently, constructing a PES using QM to model the dynamic evolution of large chemical systems containing diverse molecules is impractical.


Fig. 1 (a) Methods and applications for constructing potential energy surfaces using quantum mechanics and force field methods. (b) Principles of calculating system energy using quantum mechanics and force field methods. In quantum mechanics, the electronic structure and system energy are determined by solving the Schrödinger equation. In the force field method, the system energy is a mapping of atomic positions and charges. (c) Comparison of the accuracy and computational cost of various quantum mechanics and force field methods.

In contrast to QM, the force field method uses a simple functional relationship to establish the mapping between the system's energy and the atomic positions and charges (Fig. 1b). Compared to solving the Schrödinger equation, calculating system energy using the force field method is significantly less complex, allowing it to handle large-scale systems (e.g., polymers, biomolecules, and heterogeneous systems) more efficiently. The force field method dates back to 1924,25 when Jones proposed a molecular model involving a repulsive force λn r^−n and an attractive force λ3 r^−3. Building on this, the Lennard-Jones potential function was developed to describe interactions between non-bonded atoms or molecules.26,27 With the advancement of computational simulations, increasingly accurate forms of potential functions have been introduced. Based on their forms and the types of systems they apply to, current force fields are categorized into three types: classical force fields, reactive force fields, and machine learning (ML) force fields. The construction of a force field-based PES primarily relies on energy values calculated from discrete geometric configurations by QM, followed by fitting a PES to these discrete data points. Thus, the accuracy of the force field method is influenced by the quality of the QM calculations. Additionally, due to errors inherent in the fitting process, the force field method cannot achieve the precision of QM (Fig. 1c). Consequently, force field methods trade some accuracy for a large reduction in computational cost, enabling simulations at scales that are orders of magnitude beyond the reach of QM.28

Various types of force fields and their applications have been extensively reviewed in the literature. For example, Wang et al. summarized the application of different force field methods in mechanism exploration and performance prediction of electrocatalysis.29 Thomas and Han et al. provided an overview of the development and application of the ReaxFF reactive force field.30,31 Oliver et al. presented a detailed mathematical and conceptual framework of ML force fields, along with their applications and the chemical insights they offer.28 Additionally, Cheng et al. elaborated on the principles and application of ML force fields, in conjunction with global optimization algorithms, to identify in situ active sites in heterogeneous catalysis.32 However, most reviews focus on the application of a single force field,28,30,33–38 and lack a comparison of different force fields, particularly in terms of fitting methods and their applicability to various systems.39–41 Furthermore, the principles and development of PESs constructed using force field methods are also inadequately summarized. Therefore, a comprehensive review is necessary. Summarizing the development trends of force field methods and the characteristics of different force fields can not only assist researchers in selecting the most suitable force fields for specific applications and provide valuable theoretical guidance for experimental work, but also foster innovation in new methods to accelerate progress in this field.

To this end, this review systematically summarizes the development and application of these three force field methods (classical force fields, reactive force fields and ML force fields), and compares them in terms of applicable systems, computational accuracy and cost, fitting methods, and other relevant aspects. For each force field, we discuss its general form in detail and outline the corresponding parameterization strategies. Examples of various applications are provided to illustrate the range of systems that can be modelled with the respective force field. Finally, we discuss future directions for improvements and potential challenges in constructing PESs using force field methods.

2 Classification and comparison of force fields (classical force fields, reactive force fields, and machine learning force fields)

2.1 Introduction to various force fields

According to the Born–Oppenheimer approximation, the energy of a molecule can be expressed as a function of the spatial coordinates of the individual atoms, each characterized by distinct atomic properties or parameters.7,42 Consequently, a molecular force field consists of two components: a functional form that describes the interactions between atoms and the force field parameters specific to each atom. Based on their functional forms and applicable systems, current force fields are categorized into three types: classical force fields, reactive force fields and ML force fields.

The classical force fields calculate a system's energy using simplified interatomic potential functions. This approximation is well-suited for modeling nonreactive interactions, such as bond stretching and angle bending (represented by harmonic functions), dispersion forces (represented by the Lennard-Jones potential), and electrostatic interactions (represented by atomic charges). Currently, a variety of classical force fields with different simplified formulae have been developed for different types of molecular systems. Classical force fields typically contain between 10 and 100 parameters, which often possess clear physical meanings and are relatively easy to interpret (Table 1). However, due to the simplicity of their formulae, such descriptions are inadequate for modeling changes in atom connectivity such as bond breaking and formation during reactions, which also leads to a reduction in calculation accuracy. Despite this limitation, this method significantly accelerates computations (Fig. 1c). The simulation length scale can reach 10–100 nm for extended systems, with time scales ranging from tens to hundreds of nanoseconds, and occasionally extending to the microsecond range on modern hardware. Therefore, the classical force fields are particularly suitable for describing the motions of atoms or molecules driven by their interactions. Thermodynamic or kinetic properties such as adsorption, diffusion, dissolution, separation and stress–strain can be further studied through statistical analysis of the motion behavior of particles.43–45 For example, the calculation of the diffusion coefficient from molecular dynamics (MD) simulations involves simulating particle motion, extracting the time series of particle positions, and calculating the mean square displacement (MSD) over time: MSD(t) = ⟨|r_i(t) − r_i(0)|^2⟩. The diffusion coefficient is then extracted using Einstein's relation: D = lim_{t→∞} MSD(t)/(6t).
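As a hedged illustration of this workflow, the short Python sketch below computes the MSD and applies the Einstein relation to a synthetic random-walk trajectory standing in for real MD output; the units, trajectory, and use of a single time origin are simplifications for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_steps, dt = 50, 2000, 1.0e-3     # dt in ns (illustrative units)

# Stand-in for an MD trajectory: a 3D random walk (positions in nm).
steps = rng.normal(scale=0.01, size=(n_steps, n_atoms, 3))
traj = np.cumsum(steps, axis=0)

# Mean square displacement relative to the initial frame, averaged over atoms.
disp = traj - traj[0]
msd = np.mean(np.sum(disp**2, axis=-1), axis=1)       # shape: (n_steps,)

# Einstein relation in 3D: MSD(t) ~ 6 D t at long times; fit the linear regime.
t = np.arange(n_steps) * dt
fit_slice = slice(n_steps // 2, None)                  # discard the early, non-diffusive part
slope = np.polyfit(t[fit_slice], msd[fit_slice], 1)[0]
D = slope / 6.0
print(f"Estimated diffusion coefficient: {D:.4e} nm^2/ns")
```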

Table 1 The parameter space complexity of classical force fields, reactive force fields and machine learning force fields
Force field type | Typical number of parameters | Parameter type diversity | Interpretability | Origin of parameters | Optimization complexity
Classical force fields (e.g. UFF) | 10–100 | Mostly physical (e.g., bond lengths, angles, torsions, LJ terms, charges) | High (each term corresponds to a physical quantity) | QM datasets/experiments; empirical fitting | Low (smooth, low-dimensional search space)
Reactive force fields (e.g. ReaxFF) | 100–500 | Mixed physical and empirical (e.g., bond-order coefficients, valence/overlap terms, van der Waals, charge equilibration) | Moderate (some terms abstracted from physical meaning) | QM datasets; targeted fitting | Moderate (rugged parameter landscape with many cross-couplings)
Machine learning force fields (e.g. neural network potential) | 10^3–10^6 (standard); >10^6 (emerging) | Mathematical representations without fixed physical form (e.g., weights in NN layers, descriptor parameters) | Low (features/descriptors hard to map to physical meaning) | QM datasets; active learning; large-scale pretraining (e.g., OCP, MP) with transfer learning (emerging) | High (non-convex, high-dimensional optimization often requiring large datasets)


The limitations of the classical force fields in describing reactive processes motivate the inclusion of connection-dependent terms, resulting in the development of reactive force fields. Bond order is a key concept in reactive force fields, describing the strength and properties of bonds between atoms. The bond order value is calculated using specific equations, and it is dynamically adjusted based on the relative positions of atoms and their local environment. Therefore, reactive force fields can describe the breaking and formation of chemical bonds, as well as the conversion between reactants and products, enabling the modeling of reaction processes on a large scale. Their parameters are derived from a combination of physical principles and empirical insights, often involving some degree of abstraction. The number of parameters typically ranges from 100 to 500. Reactive force fields also incur a relatively high computational cost (approximately 10–100× that of classical force fields). The typical simulation length scale for condensed-phase reactive systems is 5–20 nm, with accessible time scales of 1–10 nanoseconds for large systems. In particular, ReaxFF, the most widely used reactive force field, can simulate reaction events at the interfaces of solid, liquid, and gas phases because the parameters for each element in the force field can be transferred across different phases. For example, in the simulation of an oxygen evolution reaction (OER) catalyzed by a metal oxide, the oxygen atoms use the same parameters, whether they are in the gas phase as O2, in the liquid phase within an H2O molecule, or bound in a solid metal oxide. Additionally, since the catalytic process involves not only the reaction but also the migration of molecules, such transferability allows ReaxFF to simulate the influence of dynamic factors, such as solubility and diffusivity, on the catalytic process. As a result, ReaxFF can simulate systems involving multiphase complex processes.
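The following Python sketch illustrates the distance-dependent bond-order idea with a ReaxFF-style sigma bond-order term; the functional form follows the published ReaxFF expression, but the parameter values are purely illustrative and not taken from any fitted parameter set.

```python
import numpy as np

def sigma_bond_order(r, r0=1.4, p_bo1=-0.08, p_bo2=6.0):
    """Uncorrected sigma bond order of ReaxFF form: BO' = exp[p_bo1 * (r/r0)**p_bo2].

    p_bo1 < 0 and p_bo2 > 1 make BO' close to 1 near the equilibrium distance r0
    and decay smoothly to 0 as the bond stretches, so bond breaking requires no
    predefined connectivity. Parameter values here are illustrative only.
    """
    return np.exp(p_bo1 * (r / r0)**p_bo2)

for r in (1.2, 1.4, 1.8, 2.5, 3.5):
    print(f"r = {r:.1f} A  ->  BO' = {sigma_bond_order(r):.3f}")
```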

Classical force fields and reactive force fields rely on predefined mathematical functions to describe atomic interactions, often lacking the flexibility and accuracy to model complex chemical environments. In contrast, ML force fields represent an emerging class of computational models that use ML algorithms to construct a system's PES. These data-driven approaches model the PES directly from data, enabling them to capture intricate interactions and chemical behaviors with high accuracy. The number of parameters typically ranges from 10^3 to 10^6 (for the emerging foundational/universal ML force fields, the number of parameters can even exceed 10^6), and they are represented through mathematical or ML models without fixed physical forms. ML force fields can achieve quantum-level accuracy within the training domain, but their performance is constrained by extrapolation limitations and computational scaling (often 10–100× the cost of reactive force fields). This high precision comes at the cost of greater computational expense compared to the other two force fields. To reduce the fitting cost, researchers apply transfer learning to emerging foundational or universal ML force fields by leveraging pre-trained models, such as those from the Open Catalyst Project (OCP) and Materials Project (MP). The simulation length scale depends strongly on descriptor efficiency, generally reaching 2–10 nm in current implementations. The typical time scale is 0.1–1 nanoseconds for extended systems, although linear-scaling ML force fields may extend this range. Nevertheless, ML force fields are more efficient than QM and are thus applicable to large-scale systems (Fig. 1c). For example, water and aqueous systems often involve complex interactions, such as hydrogen bonding, that classical force fields may not accurately capture. As an efficient alternative to QM, DeepMD employs neural networks to model complex atomic interactions and construct PESs to predict hydrogen bond networks, solvation effects, and the diffusion behavior of water.46,47 Due to their accuracy and efficiency, ML algorithms like neural networks, Gaussian processes, and ridge regression are increasingly applied in areas such as catalysis, materials science, and drug design.48–50 However, ML force fields also have limitations, including high fitting costs, poor model generalization, and lack of interpretability.
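As a rough, untrained sketch of how such a mapping can be built, the Python snippet below computes a radial symmetry-function-like descriptor for each atom and feeds it through a tiny neural network whose atomic energies sum to the total energy (a Behler–Parrinello-style construction); the weights are random placeholders, whereas a real ML force field would fit them to QM energies and forces.

```python
import numpy as np

rng = np.random.default_rng(42)

def descriptor(positions, i, etas=(0.5, 1.0, 2.0), cutoff=5.0):
    """Radial symmetry-function-like descriptor for atom i (simplified G2 with Rs = 0)."""
    rij = np.linalg.norm(positions - positions[i], axis=1)
    rij = rij[(rij > 1e-8) & (rij < cutoff)]              # neighbors within the cutoff
    fc = 0.5 * (np.cos(np.pi * rij / cutoff) + 1.0)       # smooth cutoff function
    return np.array([np.sum(np.exp(-eta * rij**2) * fc) for eta in etas])

# Tiny multilayer perceptron mapping descriptor -> atomic energy (random, untrained weights).
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def atomic_energy(g):
    h = np.tanh(g @ W1 + b1)
    return float(h @ W2 + b2)

def total_energy(positions):
    # The total energy is a sum of environment-dependent atomic contributions.
    return sum(atomic_energy(descriptor(positions, i)) for i in range(len(positions)))

positions = rng.uniform(0.0, 4.0, size=(6, 3))             # 6 atoms in a toy box
print(f"Total energy (arbitrary units): {total_energy(positions):.4f}")
```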

Considering the current state of force field development, we believe that no single method currently bridges all relevant catalytic scales (from atomic events at ∼ fs, Å to mesoscale ones at ∼ ms, μm). Therefore, researchers often adopt hybrid simulation strategies. For example, the QM/MM approach applies ML force fields or reactive force fields to the key reactive region, while describing the remaining parts with classical force fields.51 Another strategy is coarse-graining,41 in which coarse-grained parameters are derived from atomic-scale ML force fields or reactive force field trajectories, thereby extending simulations to the μs–ms regime or beyond. However, ML force fields augmented with active learning and hybrid coupling show the most promise for unifying length- and timescales in heterogeneous catalysis simulations.33,52

2.2 Development periods for force fields

After establishing a fundamental understanding of force field concepts and classifications, we will further explore the development of force field methods. This involves reviewing their evolution, analyzing key advancements and breakthroughs across different periods, and identifying the underlying driving forces behind their continuous development, which helps us better understand the characteristics and applicability of various current force fields. Lennard-Jones, ReaxFF, and neural networks are selected as representatives of classical force fields, reactive force fields, and ML force fields, respectively. Fig. 2 shows the number of publications in molecular simulations that have utilized these approaches. Based on the trends in the number of publications over time, three key turning points (2003, 2011 and 2017) can be identified in the development of force fields.
Fig. 2 Publications in molecular simulations that have utilized Lennard-Jones (typical form in the classical force field), ReaxFF (typical form in the reactive force field) and neural networks (typical form in the machine learning force field).

Before the 21st century, force fields were typically based on simplified potential energy functions to describe atomic interactions. The classical force fields, represented by the Lennard-Jones potential, were introduced early on and proved suitable for most non-bonded interactions due to their simplicity. Building on this foundation, force fields were further developed to include additional physical details, such as bond stretching and bond angle bending, making them more applicable to molecular systems. Several classical force field models were developed in this period, such as AMBER,53 CHARMM,54 OPLS,55 etc. However, during this early stage, the body of related research was limited. With advancements in computational power and molecular simulation methods in the 21st century, classical force fields began to account for more complex multi-body effects and environmental factors, leading to their broader adoption. These force fields were further refined to include polarization effects between atoms, enhancing both their accuracy and applicability. Additionally, to better describe reactive processes, van Duin introduced the ReaxFF force field, based on the concept of bond order, in 2001.56 Since then, with ongoing improvements in ReaxFF parameters, its transferability has become increasingly evident, making it widely used in fields such as heterogeneous catalysis, atomic deposition, organic molecule combustion, and liquid-phase chemistry after 2011.

The advent of powerful ML algorithms significantly enhanced the accuracy of PES construction. Although Blank et al. proposed the use of neural networks to represent PESs for molecule-surface scattering in 1995,57 the development of ML force fields progressed slowly due to computing power limitations. However, in recent years, with advances in technology, neural network potentials and other ML force fields have seen rapid growth in their application across chemistry, physics, and materials science. The evolution of these force fields reflects a shift from simplified models to more precise, comprehensive, and adaptable approaches. These advancements have not only driven research in computational chemistry and molecular simulations but also facilitated the application of these methods in fields such as chemical engineering and materials science.

2.3 Fitting process of force field parameters

The development and evolution of force fields are closely tied to the parameter fitting method, and accurate fitting techniques are crucial for the precision of the PES constructed using the force field method. Currently, many universal force field parameters exist, particularly in the classical force field, where various combinations of simplified formulae and corresponding parameters have been developed to suit different systems. However, for new materials and complex systems, the accuracy of universal parameters is often limited. In such cases, it becomes necessary to derive the force field parameters specific to the system through a fitting approach. The process of fitting force field parameters generally follows a standard workflow, which can be summarized in three main steps: (1) construction of the initial database, (2) selection of the force field form, and (3) training of the regression model, as shown in Fig. 3. By following these steps, the force field method can be applied to construct the PES for systems ranging from small to large scales.
Fig. 3 General process for fitting force field parameters. Based on the initial database, a suitable regression model (gradient descent method, genetic algorithm, or neural network) is accordingly selected to derive the desired force field parameters of different force fields (classical force fields, reactive force fields, or machine learning force fields).

In the database construction phase, the structures and corresponding energies of the systems are obtained using an appropriate QM method. Since the simulation of PESs is both high-dimensional and complex, and the cost of acquiring high-precision data from QM is substantial, it is important to build the database with broad coverage while keeping the data set as representative as possible. Sometimes, experimental data are also included, such as vibrational frequency data obtained from infrared spectroscopy, which can be used to fit bond and angle parameters within the molecule. Additionally, experimentally measured system properties, such as solubility, elastic modulus, and crystallographic data, can be employed to calibrate the force field parameters. Regarding data volume, ML force fields typically require the largest amount of data, while classical force fields require the least. In addition to manually constructing databases, various training databases have been established during the development of force fields, providing high-quality reference data for accurately fitting the PES. Widely used examples include the QM9 dataset for small organic molecules, ANI datasets (ANI-1, ANI-2x) for organic chemistry coverage, and MD17 for molecular dynamics trajectories of small molecules. In materials science and catalysis, databases such as the MP, OCP, and NOMAD repository offer large-scale atomistic data for diverse material classes and catalytic systems. Additionally, specialized datasets such as the COMP6 benchmark set and the Alexandria database provide rigorous testbeds for evaluating model performance. These curated datasets form the foundation for developing transferable, data-driven force fields capable of capturing complex chemical and physical interactions across broad domains.

The choice of force field form depends on the simulation objectives: reactive force fields are preferred for simulating reaction processes, ML force fields are selected for high structural accuracy, and classical force fields are chosen for efficiency in simulations. The regression model training step aims to minimize the difference between the predicted and true values of energy and atomic forces, which is crucial for accurately fitting the force field parameters. The fitting quality is commonly assessed using metrics such as mean absolute error (MAE) or root mean square error (RMSE).
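For reference, the two metrics can be computed as in the minimal Python example below, using made-up force components (units assumed to be eV Å−1).

```python
import numpy as np

# Reference (QM) and predicted force components; the values are made up for illustration.
f_ref  = np.array([-0.52, 0.31, 1.10, -0.08, 0.75])
f_pred = np.array([-0.47, 0.40, 1.02, -0.15, 0.81])

mae  = np.mean(np.abs(f_pred - f_ref))                 # mean absolute error
rmse = np.sqrt(np.mean((f_pred - f_ref)**2))           # root mean square error
print(f"MAE  = {mae:.3f} eV/A")
print(f"RMSE = {rmse:.3f} eV/A")
```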

A growing number of benchmark studies have quantitatively compared classical, reactive, and ML force fields in terms of accuracy, generalizability, and computational efficiency. These studies reveal a clear trade-off between accuracy and interpretability, as well as between computational cost and flexibility. Classical force fields, such as Lennard-Jones or harmonic-bond models, typically yield MAEs of 0.5–2.0 eV in energies and >1 eV Å−1 in forces when benchmarked against QM references for reactive systems or surface chemistry.58 A notable exception is the recently developed GFN force field, which achieves energy MAEs below 0.2 eV.59 Reactive force fields, such as ReaxFF, generally outperform most classical models for reactive processes, with reported force MAEs in the range of 0.3–1.0 eV Å−1 and energy MAEs around 0.1–0.5 eV, depending on the quality and scope of training.56 However, their accuracy varies significantly across chemical systems, and parameter transfer is nontrivial. For example, benchmark studies such as that of Senftle et al. highlight ReaxFF's success in hydrocarbon combustion but also point to its failure in accurately capturing certain transition-metal surface reactions.30 ML force fields demonstrate superior performance across a range of systems. Widely benchmarked ANI-1x60 and sGDML61 models report energy MAEs below 20 meV per atom and force MAEs below 0.1 eV Å−1 for equilibrium and near-equilibrium configurations. More recent equivariant models like NequIP62 and Allegro63 have reduced force MAEs to below 0.03 eV Å−1 and energy MAEs <10 meV per atom on diverse datasets, including bulk phases, surfaces, and transition states. These models often approach DFT-level accuracy with orders-of-magnitude speedup. Nonetheless, these models depend heavily on the quality and diversity of the training data, and extrapolation to out-of-distribution chemical environments remains a challenge. Additionally, the OCP provides a large-scale, standardized benchmark for catalyst surfaces and reaction intermediates.52 In OCP leaderboard results, ML Force Fields such as GemNet,64 Allegro,63 and MACE65 outperform both classical and reactive models by a substantial margin, achieving force MAEs as low as 0.035 eV Å−1 on surface configurations relevant to catalysis, compared to >0.2 eV Å−1 for traditional force fields.

The accuracy and efficiency of different regression models vary. For classical force fields, which typically involve a small number of parameters, gradient descent methods can quickly fit the parameters. In contrast, reactive force fields, which involve a larger set of parameters, are more suited to genetic algorithms. ML force fields, using algorithms like neural networks, can approximate any continuous function with arbitrary accuracy. Their strong fitting capability ensures high precision in constructing PESs.

Furthermore, each type of force field can, in principle, be trained on out-of-equilibrium configurations to make it more robust for a more global exploration of the PES; however, their ability to incorporate and benefit from out-of-equilibrium configurations differs. For classical force fields, parameters are usually fitted to equilibrium or near-equilibrium data. The functional forms are rigid and chemically specific, limiting the ability to generalize to far-from-equilibrium configurations. Extending coverage requires manual re-parameterization. For reactive force fields, parameters can be optimized against broader training sets, including strained geometries and transition states. However, the complexity of cross-term interactions can introduce competing minima in parameter space, making it difficult to incorporate high-energy configurations without sacrificing accuracy for low-energy ones. Achieving robustness away from equilibrium often requires multi-objective fitting and careful weighting of reactive versus non-reactive configurations. For ML force fields, the flexible functional form enables accurate interpolation across diverse regions of configuration space if trained on representative data. Active learning strategies and on-the-fly uncertainty detection can systematically identify and incorporate under-represented high-energy or rare configurations. The latest ML frameworks (e.g., equivariant graph neural networks62) can incorporate large, heterogeneous datasets without hard-coded chemistry assumptions, making them particularly suited to integrating off-equilibrium data without manual redesign of the potential.

2.4 Applicability and limitations of classical, reactive, and machine learning force fields in heterogeneous catalysis

In heterogeneous catalysis, the accurate modeling of active sites, surface reconstruction, charge transfer, and dynamic electronic effects remains a significant challenge. Different PES modeling approaches (classical, reactive, and ML force fields) offer varying capabilities in addressing these catalysis-specific complexities.

Classical force fields generally rely on fixed atom types, rigid bonding environments, and static charge assignments. While computationally efficient, they are not suited to capturing phenomena such as bond rearrangement, changes in oxidation states, or surface reconstruction. Consequently, their applicability is typically limited to non-reactive processes or systems in which the local electronic structure remains largely invariant.

Reactive force fields, exemplified by ReaxFF, incorporate bond-order-dependent interactions and dynamic charge equilibration schemes (e.g., QEq66 or ACKS2),67 enabling them to describe bond breaking and formation, as well as approximate charge redistribution. These features allow reactive force fields to model catalytic reactions, including proton/electron transfer, albeit with limited accuracy in systems involving subtle electronic effects or complex charge delocalization. Furthermore, their performance is often constrained by the parametrization strategy and the empirical nature of their functional forms.

ML force fields, trained on high-level QM data, exhibit improved flexibility and generality. When trained on sufficiently diverse datasets including charged systems, surface reconstructions, and transition-state geometries, ML force fields can implicitly learn to model charge transfer, hybridization, and active-site-specific reactivity. Architectures such as message-passing neural networks (e.g., DimeNet++,68 Allegro63) or equivariant graph neural networks (e.g., NequIP,62 MACE65) have demonstrated success in modeling chemically complex environments with high fidelity. Some models incorporate explicit charge awareness (e.g., PhysNet,69 SpookyNet70), while others rely on large training sets to implicitly encode charge redistribution. Despite their promise, many ML force fields still face limitations in modeling long-range electrostatics or variable charge states in heterogeneous systems, and ongoing efforts aim to combine ML potentials with electrostatic or embedding schemes to address these gaps.

Collectively, while reactive and ML force fields represent significant progress toward modeling catalytic complexity, no current approach offers a fully transferable solution across all catalytic scenarios. Future developments will likely benefit from hybrid frameworks that combine data-driven potentials with physical constraints and electrostatic corrections, tailored for the unique demands of catalysis.

Furthermore, in heterogeneous catalysis, the explicit modeling of surfaces introduces several unique challenges beyond those encountered in bulk or molecular systems. Long-range effects including dispersion forces, electrostatics, and image charge interactions play a crucial role in determining adsorption geometries, surface reconstructions, and reaction barriers. Accurately capturing dispersion interactions often requires empirical or semi-empirical corrections, such as the DFT-D scheme of Grimme,71 while classical force fields incorporate such effects via parameterized functional forms.72 For ML force fields, recent developments in E(3)-equivariant architectures have enabled more explicit incorporation of long-range information into the learned potential.62 Another layer of complexity arises from open-shell systems, which are common in transition-metal surfaces, magnetic oxides, and adsorbed radical intermediates. These systems require spin-polarized QM references because their electronic and magnetic states can change along reaction coordinates. Mapping spin-dependent interactions into reactive or ML force fields remains challenging; however, recent ML architectures have begun to address electronic degrees of freedom and spin explicitly. For example, SpookyNet introduces electronic-degree-of-freedom and nonlocal corrections that improve the description of charge and spin dependent effects.70 More recent works extend ML potentials to spin and magnetization dynamics, enabling ML modeling of magnetic materials and nonequilibrium spin forces.73,74 Incorporating these developments into ML force field workflows is important for accurate simulations of catalytic surfaces that exhibit open-shell character.

To provide readers with a deeper understanding of force fields, the following sections offer a detailed overview of three types, focusing on their functional forms, parameter fitting methods, and applications.

3 Classical force fields

3.1 Forms of classical force fields

For classical force fields, the functional forms typically include bonded terms (bond stretching energy, angle bending energy, dihedral angle torsion energy, inversion energy) and non-bonded terms (van der Waals term, Coulomb term). An example of a universal force field function is as follows:
 
E_total = Σ_bonds k_b(r − r_0)^2 + Σ_angles k_θ(θ − θ_0)^2 + Σ_dihedrals Σ_n (V_n/2)[1 + cos(nφ − γ)] + Σ_{i<j} 4ε_ij[(σ_ij/r_ij)^12 − (σ_ij/r_ij)^6] + Σ_{i<j} q_i q_j/(4πε_0 r_ij)   (1)

The first term, bond stretching energy, describes the relationship between two bonded atoms. As atoms oscillate around the equilibrium bond length, a repulsive force is generated when they approach each other, and the energy increases as they move apart, eventually leading to bond dissociation when there is no longer any interaction between the atoms. The second term, angle bending energy, refers to the tension in bond angles and represents the relationship between three bonded atoms. Similar to bond stretching, the bond angle oscillates around an equilibrium angle. The third term, dihedral angle torsion energy, describes the interaction between four bonded atoms, specifically the dependence of the energy on the dihedral angle. Unlike bond stretching, dihedral angle torsion cannot be represented by simple harmonic functions in most cases. Instead, these periodic changes are typically modeled using Fourier series, which are combinations of trigonometric functions. The fourth term, the van der Waals energy, describes the interaction between non-bonded atoms, often represented by the Lennard-Jones potential. The 12th power term corresponds to repulsion (positive energy), while the 6th power term represents attraction (negative energy). At large distances, the 6th power term dominates, creating attractive forces, while at short distances, the 12th power term prevails, leading to repulsion. The fifth term, Coulomb energy, models the electrostatic interaction between charged atoms using the classical Coulomb formula, which is inversely proportional to the distance between atoms. For specialized systems, additional terms such as hydrogen bonding, torsional terms, and cooperative terms (such as the Urey–Bradley term in the CHARMM75 force field) may be incorporated to improve the precision of the PES.
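To make eqn (1) concrete, the Python sketch below evaluates each energy contribution for a single toy interaction of each type; the parameters are arbitrary illustrative values, not taken from any published force field.

```python
import numpy as np

def bond_energy(r, k_b, r0):
    return k_b * (r - r0)**2                       # harmonic bond stretching

def angle_energy(theta, k_theta, theta0):
    return k_theta * (theta - theta0)**2           # harmonic angle bending

def dihedral_energy(phi, Vn, n, gamma):
    return 0.5 * Vn * (1.0 + np.cos(n * phi - gamma))    # one Fourier torsion term

def lj_energy(r, eps, sigma):
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)              # 12-6 Lennard-Jones

def coulomb_energy(r, qi, qj, ke=14.3996):         # ke = e^2/(4*pi*eps0) in eV*Angstrom
    return ke * qi * qj / r

# Toy evaluation with arbitrary illustrative parameters (not from any real force field).
E = (bond_energy(1.10, k_b=20.0, r0=1.09)
     + angle_energy(np.deg2rad(108.0), k_theta=5.0, theta0=np.deg2rad(109.5))
     + dihedral_energy(np.deg2rad(60.0), Vn=0.1, n=3, gamma=0.0)
     + lj_energy(3.8, eps=0.01, sigma=3.4)
     + coulomb_energy(3.8, qi=0.4, qj=-0.4))
print(f"Total classical force-field energy: {E:.4f} eV")
```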

Different force fields utilize various function forms but can generally be classified into three main types: periodic function-based, harmonic potential-based, and tabulated potential-based. Over time, numerous force field forms have been developed and applied to different systems (Fig. 4a). Examples include the AMBER76 and CHARMM75 force fields, which are designed for simulating biological systems; the OPLS force field,58 which focuses on simulating condensed phase properties; and the COMPASS force field,77 which can predict properties of both gaseous and condensed phases. The DREIDING78 and UFF72 force fields describe interactions for most elements in the periodic table, although with limited accuracy. Additionally, the newly developed GFN force field strikes a balance between computational efficiency and accuracy, enabling precise large-scale simulations of biomolecular and material systems.59


Fig. 4 (a) Types and applications of classical force fields. (b) Derivation of force field parameters using the Seminario method [reprinted with permission from ref. 80. Copyright 2018 American Chemical Society]. (c) Numerical fitting methods for force field parameters: gradient descent and global optimization methods.

3.2 Fitting methods of classical force fields

There are three main sources of classical force field parameters: (1) vibrational wavenumbers measured experimentally, from which force constants can be calculated using the harmonic relation between wavenumber and force constant; (2) structural, energy, and frequency data obtained through high-precision calculations, which are then used for parameter derivation and fitting; and (3) force constants derived from the first two methods, which can be further refined using Badger's rule and linear regression. This review primarily focuses on the second method for obtaining force field parameters.

The most widely used non-fitting method is the Seminario method (Fig. 4b), developed by Jorge Seminario in 1996. This method aims to derive the intramolecular force field parameters, such as the harmonic functions for bond stretching and angle bending, directly from the Hessian matrix obtained by QM calculations.79 Currently, several auxiliary tools support the use of this method to fit force field parameters, including the AMBER and AuToFF programs. Below is a brief overview of the method.

For a system with N atoms, the 3N-component force δF resulting from a 3N-component displacement δx can be expressed, via a second-order Taylor series expansion of the energy, as:

 
δF = −[k]δx   (2)
where the tensor [k] is the 3N × 3N Hessian matrix. The [k] used here is defined in unweighted Cartesian coordinates, without mass-weighting, because force field fitting aims to reproduce the second derivatives of the PES, which are independent of atomic masses. In contrast, vibrational frequencies and normal modes require mass-weighted and projected Hessians, which are not directly involved in the force field parameterization. Eqn (2) can then be written in component form as
 
δF_i = −Σ_j k_ij δx_j,  with k_ij = ∂^2E/(∂x_i∂x_j),  i, j = 1, …, 3N   (3)

The tensor [k] represents the intramolecular force field up to second order for small displacements δx, and it can be obtained from QM calculations. The eigenvalues λ_i of [k] correspond to the 3N force constants, which are associated with the 3 translational, 3 rotational and 3N − 6 vibrational modes of the molecule. By analogy with eqn (2), the force δF_A = (δF_x^A, δF_y^A, δF_z^A) on atom A due to a displacement δr_B = (δx_B, δy_B, δz_B) of atom B is given by δF_A = −[k_AB]δr_B. The tensor [k_AB] can be written as

 
[k_AB] = −∂^2E/(∂r_A∂r_B) = −⎡∂^2E/∂x_A∂x_B  ∂^2E/∂x_A∂y_B  ∂^2E/∂x_A∂z_B⎤
⎢∂^2E/∂y_A∂x_B  ∂^2E/∂y_A∂y_B  ∂^2E/∂y_A∂z_B⎥   (4)
⎣∂^2E/∂z_A∂x_B  ∂^2E/∂z_A∂y_B  ∂^2E/∂z_A∂z_B⎦

The bond stretching constant can be obtained from the projections of the eigenvectors onto the unit vector, ûAB:

 
k_AB = Σ_{i=1}^{3} λ_i^AB |û_AB · v̂_i^AB|   (5)
where λ_i^AB and v̂_i^AB are the eigenvalues and eigenvectors of [k_AB], respectively.
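A compact numpy sketch of eqns (4) and (5) is given below: the 3 × 3 interatomic block of the QM Hessian is negated (the convention used in practical Seminario implementations, so that bonded pairs yield positive force constants), diagonalized, and its eigenvectors are projected onto the bond unit vector; the Hessian here is a synthetic stand-in for a real QM second-derivative matrix.

```python
import numpy as np

def seminario_bond_constant(hessian, coords, A, B):
    """Bond stretching constant from eqns (4)-(5).

    hessian: full 3N x 3N Cartesian second-derivative matrix (QM output)
    coords:  (N, 3) Cartesian coordinates
    A, B:    indices of the bonded atom pair
    """
    # 3x3 interatomic matrix [k_AB] = -d2E/(dr_A dr_B), eqn (4)
    k_AB = -hessian[3*A:3*A+3, 3*B:3*B+3]
    u_AB = coords[B] - coords[A]
    u_AB /= np.linalg.norm(u_AB)                        # bond unit vector
    eigvals, eigvecs = np.linalg.eig(k_AB)
    # eqn (5): eigenvalues weighted by |projection of each eigenvector on u_AB|
    return float(np.sum(eigvals.real * np.abs(eigvecs.real.T @ u_AB)))

# Synthetic test: a diatomic along x whose bond curvature d2E/dr2 = 0.5 (arbitrary units)
coords = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
k = 0.5
H = np.zeros((6, 6))
H[0, 0] = H[3, 3] = k                                   # diagonal blocks
H[0, 3] = H[3, 0] = -k                                  # interatomic coupling block
print(f"Recovered bond constant: {seminario_bond_constant(H, coords, 0, 1):.3f}")
```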

For the derivation of the angle bending constant k_θ, a unit vector û_N perpendicular to the plane ABC is first defined:

 
û_N = (û_CB × û_AB)/|û_CB × û_AB|   (6)

The unit vectors perpendicular to the bonds AB and CB on the plane ABC can be obtained:

 
û_PA = û_N × û_AB   (7)

û_PC = û_CB × û_N   (8)

The force constants k_PA and k_PC are then obtained by projecting the eigenvectors of the partial Hessian matrices onto these two vectors:

 
k_PA = Σ_{i=1}^{3} λ_i^AB |û_PA · v̂_i^AB|   (9)

k_PC = Σ_{i=1}^{3} λ_i^CB |û_PC · v̂_i^CB|   (10)

And the angle bending constant is then given by

 
1/k_θ = 1/(k_PA r_AB^2) + 1/(k_PC r_CB^2)   (11)
where r_AB and r_CB are the two bond lengths.

Consider the dihedral defined by the atoms A, B, C, and D which are linked by bonds AB, BC, and CD. The approach used to determine the dihedral force constant is similar to that for bond and angle. The dihedral force constant is given by

 
image file: d5sc02715b-t11.tif(12)

The premise of the Seminario method for deriving force field parameters is that the change in energy associated with the displacement of atom A along the direction ûPA will only affect the A–B–C angle. However, in a complex system, neighboring angles may also be altered due to changes of the A–B–C angle. This often leads to an overestimation of bond angle parameters. To address this issue, Allen et al. modified the Seminario method by rescaling the value of kPA with a factor that accounts for the geometry of the molecule.80 In addition to the Seminario method, Hirao et al. proposed other rapid force field parameterization schemes, such as partial Hessian fitting (PHF), full Hessian fitting (FHF) and internal Hessian fitting (IHF). These methods are based on the idea of minimizing the Hessian matrix of molecular force fields and quantifying the differences in the Hessian matrix.81,82

Numerical fitting methods are commonly applied to fit van der Waals parameters. The process begins by scanning the PES generated from changes in the interatomic distance based on QM calculations. An appropriate van der Waals interaction expression (e.g., Lennard-Jones or Morse) is then selected and used to fit the PES. The fitting process involves iteratively adjusting parameters such as the potential well depth and zero-energy distance until the sum of the squared differences between the two PESs is minimized (eqn (13)). Once the optimization is complete, the van der Waals interaction parameters can be obtained.

 
J(θ) = Σ_i [E_i^QM − E_i^FF(θ)]^2   (13)

Various methods can be used to adjust the force field parameters to minimize J(θ), with one of the most straightforward approaches being gradient descent. The process involves two primary steps: first, an initial set of force field parameters is provided, which may be randomly assigned; second, the parameters are iteratively adjusted in the direction of the negative gradient until the loss function converges to a predefined value. As illustrated in Fig. 4c, the red areas represent higher values of J(θ), and as the parameters are updated, the value of J(θ) decreases, eventually reaching the deep blue regions. The core idea behind gradient descent is to move in the direction of the negative gradient from the current position. To further improve convergence, the method can be extended to the steepest descent method, in which the step size for parameter adjustment shrinks as the gradient diminishes; this leads to slower progress as the optimization nears the target value.

However, the final point reached by gradient descent may not always correspond to the global minimum, but rather to a local minimum, especially when dealing with force fields that involve a large number of parameters. To overcome this challenge, a common strategy is to initialize multiple sets of parameters and perform gradient descent along several different paths. The optimal parameters are then selected from these paths, allowing for a more robust approach to global optimization (Fig. 4c).
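The sketch below illustrates this multi-start strategy in Python for the objective of eqn (13), fitting the well depth and size parameter of a Lennard-Jones potential to a synthetic reference scan that stands in for QM data; the learning rate, starting ranges, and reference parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def lj(r, eps, sigma):
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

# Synthetic "QM" reference scan: an LJ curve (eps = 0.15, sigma = 3.0) plus small noise.
r_scan = np.linspace(2.8, 6.0, 40)
e_ref = lj(r_scan, 0.15, 3.0) + rng.normal(scale=1e-3, size=r_scan.size)

def loss(theta):
    eps, sigma = theta
    return np.sum((lj(r_scan, eps, sigma) - e_ref)**2)     # objective of eqn (13)

def grad(theta, h=1e-6):
    # Numerical gradient of the loss with respect to the force field parameters.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        g[i] = (loss(tp) - loss(tm)) / (2.0 * h)
    return g

best = None
for _ in range(5):                                          # several random starting points
    theta = np.array([rng.uniform(0.05, 0.3), rng.uniform(2.8, 3.2)])
    for _ in range(10000):                                  # plain gradient descent
        theta -= 1e-4 * grad(theta)
    if best is None or loss(theta) < loss(best):
        best = theta
print(f"Fitted eps = {best[0]:.3f}, sigma = {best[1]:.3f}")
```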

Atomic charge is a critical component of classical force field parameters, and various methods have been developed to fit atomic charges, including Merz-Kollman,83 CHELPG,84 and RESP.85 Among these, the RESP method, proposed by Kollman et al., effectively addresses issues such as conformational dependence, numerical instability, and atomic equivalence of internal atomic charges, making RESP charges widely used. The principle behind RESP involves iteratively adjusting atomic charges using the least squares method until the error between the classical electrostatic potential and the QM potential is minimized. To avoid unreasonable charge distributions, the RESP method introduces constraints, such as the total-charge constraint (which ensures the atomic charges sum to the net molecular charge) and a penalty term that controls the charge distribution. Programs such as AmberTools86 and Multiwfn87 make it easy to fit RESP charges.
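As a simplified, hedged sketch of the idea behind ESP/RESP-type charge fitting, the Python snippet below performs a constrained least-squares fit of atomic charges to a synthetic electrostatic potential sampled on grid points, enforcing the total-charge constraint with a Lagrange multiplier; the hyperbolic restraint of the full RESP scheme is omitted and the "QM" potential is generated from hidden reference charges rather than a real calculation.

```python
import numpy as np

rng = np.random.default_rng(3)
ke = 14.3996                                    # Coulomb constant in eV*Angstrom/e^2

# Toy "molecule": 3 atomic sites; grid points where the electrostatic potential is sampled.
atoms = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [2.4, 0.0, 0.0]])
grid = rng.uniform(-4.0, 6.0, size=(200, 3))
grid = grid[np.min(np.linalg.norm(grid[:, None] - atoms[None], axis=2), axis=1) > 1.5]

# Synthetic "QM" potential generated from hidden reference charges (stand-in for real data).
q_true = np.array([-0.4, 0.8, -0.4])
A = ke / np.linalg.norm(grid[:, None] - atoms[None], axis=2)    # (n_grid, n_atoms) design matrix
v_qm = A @ q_true

# Constrained least squares: minimize |A q - v_qm|^2 subject to sum(q) = Q_total,
# solved via the Lagrange (KKT) linear system.
Q_total = 0.0
n = atoms.shape[0]
kkt = np.zeros((n + 1, n + 1))
kkt[:n, :n] = 2.0 * A.T @ A
kkt[:n, n] = 1.0
kkt[n, :n] = 1.0
rhs = np.concatenate([2.0 * A.T @ v_qm, [Q_total]])
q_fit = np.linalg.solve(kkt, rhs)[:n]
print("Fitted charges:", np.round(q_fit, 3))
```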

3.3 Applications of classical force fields

In this section, we highlight selected studies that utilize classical force field methods, focusing on their application in heterogeneous catalysis rather than providing a comprehensive literature review. Specifically, we explore how classical force fields are used to study catalyst interface properties, particularly in areas like metal particle morphology and molecular diffusion. These applications demonstrate the strengths of classical force fields in efficiently simulating the dynamics of heterogeneous interfaces.

The morphology of metal catalysts (specifically the size, shape, and distribution of metal particles) plays a crucial role in determining catalytic performance.20,88–91 The size and shape directly influence the number of active sites on the catalyst surface. For example, Hu et al. investigated the size effect on the atomic structure of amorphous systems originating from a Cu64Zr36 particle (containing 50–5000 atoms) using MD simulations with an embedded atom method (EAM) potential.92 Their findings show that particle size strongly impacts the local atomic structure, with Cu64Zr36 particles exhibiting core–shell structures. The shell component of the particle has a lower average coordination number, shorter bond lengths, higher ordering, and lower packing density compared to the core, due to Cu segregation on the shell. Given that metal nanoparticles often exhibit high surface energy, they tend to aggregate during catalytic processes, which can lead to catalyst deactivation. To mitigate this, metal nanoparticles are commonly dispersed on support surfaces to enhance stability.93 In a study by Wang et al., the effect of metal–support interactions on the stability of metal nanoparticles was examined using a Morse force field for parameterization.94,95 Their simulations revealed that the melting point of Pt nanoparticles supported on a substrate was significantly higher than that of unsupported nanoparticles, and the melting point increased as the metal–support interaction strengthened. In addition to improving stability, the support material also affects the morphology of the metal nanoparticles. The simulations indicated that increasing the metal–support interaction leads to a higher number of unsaturated coordination atoms in the nanoparticle. Notably, when Pt nanoparticles were supported on bare MXene, a film-like structure formed on the substrate surface. Wang et al. further explored this by tuning the nanoparticle structure through variations in the surface functional groups of MXene.96 Their results showed that when the surface functional groups on Nb2C MXene transitioned from –Cl, –Br, and –O to partial –O, the supported Pd561 nanoparticle exhibited a distinct morphological shift from 3D to 2D (Fig. 5a), consistent with electron microscopy observations (Fig. 5b). Initially, a monolayer of Pd preferentially formed on the exposed Nb sites, followed by the creation of a second Pd layer upon encountering oxygen functional groups, ultimately exposing the (111) facet. Based on these findings, they constructed Pd metalenes supported on Nb2C MXenes, which demonstrated efficient selective hydrogenation of phenylacetylene at room temperature.


Fig. 5 (a) The relationship between the interaction energies of Pd and supports and the number of Pd layers in the MD simulations of Pd/MXenes catalysts. (b) Microstructure of Pd supported on functional group-modified Nb2C [reprinted with permission from ref. 94. Copyright 2023 Springer Nature]. (c) Enlarged images of the local surface of Cu-HEX and blank Cu sample. (d) The calculated number of adsorbed N2 and H2O molecules on blank Cu and Cu-HEX surfaces [reprinted with permission from ref. 98. Copyright 2022 Elsevier]. (e) Model of selective gas diffusion in graphene oxide (GO) membranes. (f) Selectivity of CO2 diffusion relative to other gas molecules for graphene and GO membranes [reprinted with permission from ref. 43. Copyright 2015 American Chemical Society].

Classical force fields are also commonly employed to simulate the dynamic behavior of molecular systems, including the diffusion, adsorption, and desorption processes of reactants, intermediates, and products, and their impact on catalyst performance.97–100 To improve the catalytic performance of N2 electroreduction (NRR) on metal surfaces, hexanethiol (HEX) was selected as a modifier to inhibit the competitive adsorption of water molecules in the hydrogen evolution reaction (HER). Chen et al. studied the effects of HEX on the diffusion and adsorption behaviors of H2O and N2 molecules on the Cu surface through MD simulations.101 The study involved 100 nitrogen molecules and 5000 water molecules, based on the solubility of N2 in water. The results revealed that on the HEX-modified Cu surface, N2 molecules could diffuse to the catalyst surface, while H2O molecules were blocked by the HEX layer (Fig. 5c). In contrast, the non-HEX Cu surface is predominantly covered by H2O molecules. After HEX modification, the number of N2 molecules adsorbed on the Cu surface increased significantly from 24 to 48, while the number of H2O molecules adsorbed decreased from 700 to 23 (Fig. 5d). Additionally, the calculated potential of mean force indicated that the presence of HEX notably raised the equilibrium position of H2O (from 0.18 nm to 0.54 nm). After HEX modification, the Faraday efficiency of various metal catalysts (Cu, Au, Pd, Pt, and Ni) prepared by experimental collaborators was significantly enhanced, confirming the hydrophobicity of HEX on the metal surface and its effective role in promoting N2 adsorption.

For the diffusion properties of molecules in porous materials, Xu et al. performed MD simulations of gas diffusion in the interlayer gallery of graphene and graphene oxide (GO) membranes, and elucidated the mechanisms of selective gas separation (Fig. 5e). They found that both the layer spacing and the chemical modification of the membrane surface significantly influenced the selective gas penetration.43 For example, for GO membranes, the He/CO2 selectivity can reach as high as 30, compared to 4.5 for CH4/CO2 and between 2 and 3 for CO/CO2, N2/CO2, and O2/CO2. In contrast, for graphene layers, the selectivities for N2/CO2, CO/CO2, and O2/CO2 are relatively low, ranging from 1 to 3 (Fig. 5f). To further explore the correlation between pore properties and molecular diffusion coefficients, Wang et al. studied the interlayer diffusion of CO in graphene nanoslits with different Pt loadings. They found that the diffusion energy barrier is related to the distribution of CO molecules in the system, which in turn is influenced by factors such as temperature, pressure, interlayer distance, and the properties of the supported metal. High temperature, low pressure, and fewer surface atoms were found to facilitate the diffusion of gas molecules. To quantify the relationship between the diffusion coefficient and the environmental and structural properties of the system, a generalized formula for confined diffusion of CO in the supported system was derived based on the simulation data.97 Further simulations of interlayer diffusion of CO2 were conducted to study the effects of the metal–molecule interaction on the adsorption and diffusion properties of molecules.44 The results showed that the difference in the adsorption amounts of CO and CO2 on Pt nanoparticles correlates with the adsorption site and exhibits a volcano-type trend with respect to temperature. The metal–gas interaction and the surface atomic number of metal nanoparticles have a great influence on the diffusion of gas molecules, especially at low temperatures. The simulation data further validate the accuracy and generality of the generalized formula.

Compared to the above classical force fields, the newly developed GFN-FF force field demonstrates superior generality and accuracy, approaching the precision of QM methods in certain cases. For instance, metal–organic frameworks (MOFs), widely used in heterogeneous catalysis, typically have few available force fields (such as UFF), which are often inadequate for describing conjugated systems and metal coordination. In contrast, GFN-FF enables accurate geometry optimization of MOF structures. For example, for a Goldberg polyhedron composed of 46 Pd2+ ions and 96 organic ligands, with a total of 3888 atoms, the RMSD of the heavy atoms in the GFN-FF optimized structure is only 0.75 Å, in excellent agreement with experimental results. Similar accuracy can be achieved for other metal–organic polyhedra and MOFs.59 Additionally, water, as the most common liquid, is notoriously difficult to describe accurately, leading to the development of various force field models, including rigid, flexible, and polarizable force fields. Molecular dynamics simulations of water using the GFN-FF force field show that, while the calculated self-diffusion coefficient (2.06 × 10−5 cm2 s−1) is remarkably close to the experimental value (2.35 × 10−5 cm2 s−1), the density is significantly overestimated (1.23 g cm−3 compared to the experimental value of about 1.0 g cm−3).102 Therefore, while GFN-FF offers a fast and accurate classical force field, like all universal force fields, it may not always provide the most accurate results.

From the above cases, it is evident that classical force fields remain valuable tools for modeling heterogeneous catalysis, particularly when large system sizes and long simulation times are required. Their computational efficiency enables exploration of phenomena such as adsorbate diffusion, nanoparticle sintering, and solvent-mediated surface restructuring over length and time scales inaccessible to QM methods. In supported catalyst systems, classical force fields have been successfully applied to investigate nanoparticle morphology evolution, interfacial interactions between the support and active phase, and adsorption–desorption equilibria under realistic reaction conditions.

However, the inherent limitations of classical force fields arise from their fixed topologies, predefined interaction functional forms, and parameter sets that are typically tuned for narrow chemical spaces. These constraints hinder their ability to capture bond formation/breaking, charge transfer, and polarization effects, which are central to catalytic processes. Furthermore, the lack of explicit electronic degrees of freedom makes it challenging to describe surface reactions involving variable oxidation states, adsorbate-induced reconstruction, or metal-support charge redistribution. Transferability across different surfaces, phases, and chemical environments is often limited, necessitating reparameterization for each new system.

Future developments should aim to improve the accuracy and transferability of classical force fields for catalysis by integrating more flexible interaction terms (e.g., polarizable force fields), incorporating long-range electrostatics more rigorously, and systematically coupling classical descriptions with reactive or ML force fields in hybrid schemes. Such approaches could preserve the efficiency of classical force fields while extending their applicability to the complex, reactive, and dynamically evolving environments characteristic of heterogeneous catalysis.

4 Reactive force fields

4.1 Forms of reactive force fields

Reactive force fields typically construct the system's PES on the basis of bond order. In the early stages of development, their functional forms were tailored to the bonding situation of a specific reaction system, which made them difficult to generalize. For example, based on Pauling's relationship between bond length and bond order, Johnston developed the BEBO force field to study the reactive PES of the H + H2 system.103 Besides the BEBO force field, several other reactive force fields have been developed to address bond formation and breaking in specific systems. The AIREBO (Adaptive Intermolecular Reactive Empirical Bond Order) potential,104 an extension of the REBO model,105 is designed for hydrocarbons and carbon-based materials and has been widely used in simulations involving graphene, CNTs, and organic reactions. COMB3 (Charge-Optimized Many-Body) potentials106 enable the simulation of metal–ceramic interfaces and allow for dynamic charge transfer using a charge equilibration scheme, making them suitable for oxide-based catalysts and interfacial systems. The Environment-Dependent Interatomic Potential (EDIP)107 is tailored for covalent materials, particularly silicon, and can model defect formation and surface reconstructions with relatively low computational cost. While these models have not yet seen widespread application in complex catalytic reactions, they offer valuable tools for studying specific materials and interfaces under reactive conditions.

Currently, ReaxFF, developed by van Duin et al. in 2001,56 is the most widely used reactive force field due to its parameter transferability. The initial version of ReaxFF (2001) was focused on hydrocarbons and used the same dissociation energy for C–C single, double, and triple bonds, which worked well for hydrocarbons but was limited for more complex systems. In 2003, the ReaxFF functional form was extended to systems containing Si, O, and H, with separate parameters for single, double, and triple bond dissociation energies. This extension also introduced a lone-pair energy term to handle the formation and dissociation of oxygen lone pairs. The 2003 extension was further improved to handle more complex group chemistries, such as the conjugation term for –NO2 group chemistry in nitramines and a triple-bond stabilization term for better describing terminal triple bonds.108 By 2005, the ReaxFF functional form stabilized,109 and the general form now includes terms for:

 
Esystem = Ebond + Eover + Eunder + Eval + Etors + Econj + EvdW + ECoulomb + EH-bond + Erest (14)
where the terms of bonding interactions include bond, valence angle, lone pair, conjugation, and torsion angle, and the terms of nonbonding interactions include van der Waals, Coulomb interaction, and hydrogen bond. In addition, to deal with special systems, additional terms will be introduced into the formula, such as angle bending terms for Mg–Mg–H zero-degree angles110 and double-well angular terms for aqueous transition metal ions.111

As mentioned above, the ReaxFF energy is categorized into bond-order-dependent and bond-order-independent contributions. The bond order (BO) is a continuous function of the interatomic distance rij and is divided into σ-bond (BOσij), π-bond (BOπij), and ππ-bond (BOππij) contributions:

 
BOij = BOσij + BOπij + BOππij = exp[pbo,1(rij/r0,σ)^pbo,2] + exp[pbo,3(rij/r0,π)^pbo,4] + exp[pbo,5(rij/r0,ππ)^pbo,6] (15)

Based on the bond order formula, interactions such as bond and valence angle terms can be derived through a series of transformations; for details, see the work of van Duin et al.109 Additionally, this bond order formula accounts for long-distance covalent interactions in the transition state, enabling the ReaxFF to accurately predict reaction barriers.
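
To make the bond-order concept concrete, the sketch below evaluates an uncorrected ReaxFF-style pairwise bond order of the form in eqn (15); the parameter values (the r0 radii and pbo,1 to pbo,6 exponents) are illustrative placeholders, not any published parameterization.

```python
import numpy as np

def raw_bond_order(r, p):
    """Uncorrected ReaxFF-style bond order as a smooth function of distance r.

    p holds per-bond parameters: equilibrium radii r0_sigma/pi/pipi and the
    exponents p_bo1..p_bo6 (placeholder values below, not fitted constants).
    """
    bo_sigma = np.exp(p["p_bo1"] * (r / p["r0_sigma"]) ** p["p_bo2"])
    bo_pi    = np.exp(p["p_bo3"] * (r / p["r0_pi"])    ** p["p_bo4"])
    bo_pipi  = np.exp(p["p_bo5"] * (r / p["r0_pipi"])  ** p["p_bo6"])
    return bo_sigma + bo_pi + bo_pipi

# Illustrative parameters only (negative prefactors so BO decays smoothly with distance)
params = {"r0_sigma": 1.52, "p_bo1": -0.10, "p_bo2": 6.0,
          "r0_pi":    1.33, "p_bo3": -0.20, "p_bo4": 8.0,
          "r0_pipi":  1.20, "p_bo5": -0.30, "p_bo6": 10.0}

for r in (1.2, 1.5, 2.0, 3.0):
    print(f"r = {r:.1f} Å  ->  BO = {raw_bond_order(r, params):.3f}")
```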

For the Coulomb interaction, the QEq method is used to calculate and adjust the charge distribution of each atom in the system.66 This method achieves charge distribution through electronegativity equilibration, accounting for both atomic interactions and the dynamic changes occurring during chemical reactions. During a reaction, the electronegativity of each atom varies as its local environment changes. Atoms with higher electronegativity tend to attract electrons, while those with lower electronegativity tend to lose electrons.
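
The electronegativity-equalization idea behind QEq can be illustrated with a small linear-algebra example: the charges minimize a quadratic energy in q subject to a total-charge constraint, which reduces to a linear system. The sketch below uses hypothetical two-atom parameters and a bare 1/r Coulomb kernel, whereas production QEq implementations shield the short-range Coulomb interaction.

```python
import numpy as np

def qeq_charges(chi, eta, coords, total_charge=0.0):
    """Solve for charges that equalize electronegativity.

    Minimize E(q) = sum_i (chi_i q_i + 0.5 eta_i q_i^2) + sum_{i<j} q_i q_j J_ij
    subject to sum_i q_i = total_charge, via a Lagrange multiplier.
    chi: electronegativities (eV), eta: hardnesses (eV/e^2), coords in Å.
    NOTE: a bare 1/r kernel is used here; real QEq shields short-range Coulomb terms.
    """
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = eta[i]
        for j in range(n):
            if i != j:
                A[i, j] = 14.4 / np.linalg.norm(coords[i] - coords[j])  # e^2/(4*pi*eps0) in eV*Å
        A[i, n] = 1.0                  # Lagrange-multiplier column (total-charge constraint)
        A[n, i] = 1.0
        b[i] = -chi[i]
    b[n] = total_charge
    solution = np.linalg.solve(A, b)
    return solution[:n]                # drop the multiplier

# Hypothetical diatomic: atom 1 is more electronegative than atom 0
charges = qeq_charges(chi=np.array([4.0, 6.0]), eta=np.array([10.0, 12.0]),
                      coords=np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]))
print(charges)   # expect a small positive charge on atom 0 and an equal negative charge on atom 1
```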

Initially, the ReaxFF parameters were developed exclusively for organic systems.56 Subsequently, the parameters gradually incorporated metallic elements110,112,113 and other non-metallic elements,108,114 further expanding the applicability of ReaxFF to a wider range of systems. Currently, the ReaxFF method has been successfully applied to simulate various reaction dynamics, including hydrocarbon organic small molecule systems,56 polymer systems,115,116 high energy material systems,117,118 metal oxide systems119,120 and transition metal catalyst systems.113,121 Rapid reaction processes, such as explosion and combustion, can also be simulated using ReaxFF parameters.117,118,122 Additionally, the ReaxFF-based Monte Carlo reaction kinetics method has been employed to investigate structures that are difficult to resolve experimentally,119,120 catalytic reactions in fuel cell electrode materials,123,124 and catalytic processes in porous materials.114,125 To date, ReaxFF parameters for various elements in the periodic table have been developed, as shown in Fig. 6a.30 In practice, however, the transferability of these parameters is limited, and simply combining parameter sets from different sources rarely yields satisfactory results. Depending on the O/H atomic and bond parameters used, there are currently two main categories of ReaxFF parameter sets that are intra-transferable with one another: the combustion group and the aqueous group. Additionally, there are several independent groups whose parameters often require more extensive refitting. For specific details on grouping, refer to the summary by Senftle et al.30


image file: d5sc02715b-f6.tif
Fig. 6 (a) Elements described in available ReaxFF parameter sets. (b) General flowchart of ReaxFF parameters fitted using a genetic algorithm. (c) Software architecture of GARFfield [reprinted with permission from ref. 125. Copyright 2014 American Chemical Society].

4.2 Fitting methods of ReaxFF parameters

Compared to the classical force field, the ReaxFF function is more complex and has a greater number of parameters, making it more challenging to develop. During the development of ReaxFF, it is often not possible to fit all parameters. Some validated force field parameters can be obtained from existing parameter libraries or theoretical literature, and relevant parameters can be selected for fitting based on the system under study. Two main methods are commonly used for the fitting process: successive one-parameter search and genetic algorithm.

Successive one-parameter search is the earliest method used for ReaxFF parameter fitting.126 The method assumes a parabolic relationship between a single parameter value and the total fitting error, determining the optimal parameter value by calculating the total error at three distinct parameter values. However, because most parameters in ReaxFF are interdependent, a change in one parameter will cause the optimal values of others to shift. Therefore, the parameters must be optimized iteratively to ensure that the fitting error is minimized. This method offers certain advantages for fitting processes with many parameters, as the fitting process can be interrupted if a parameter appears to have an unrealistic value. However, its fitting efficiency is relatively low.
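
The parabolic step at the heart of this search can be written in a few lines: evaluate the error at three trial values of one parameter, fit a parabola, and jump to its vertex. The objective below is a smooth stand-in for a real ReaxFF error sum, and the outer loop mimics the iterative revisiting required when parameters are coupled.

```python
import numpy as np

def parabolic_step(error_fn, p, delta):
    """One-parameter update: fit a parabola through three error evaluations."""
    xs = np.array([p - delta, p, p + delta])
    ys = np.array([error_fn(x) for x in xs])
    a, b, _ = np.polyfit(xs, ys, 2)          # error ≈ a*x^2 + b*x + c
    if a <= 0:                               # no parabolic minimum: fall back to the best of the three
        return xs[np.argmin(ys)]
    return -b / (2 * a)                      # vertex of the fitted parabola

# Smooth stand-in objective with a minimum near p = 1.7 (illustrative only)
error = lambda x: (x - 1.7) ** 4 + 0.5 * (x - 1.7) ** 2

p = 0.5
for it in range(6):                          # iterate because parameters are interdependent in practice
    p = parabolic_step(error, p, delta=0.1)
    print(f"iteration {it}: p = {p:.4f}, error = {error(p):.4f}")
```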

For ReaxFF with multiple parameters, in addition to the fitting efficiency issue, the fitting process is a high-dimensional, non-separable optimization problem with multiple minima. As a result, the fitting results often converge to local optima. While deterministic global optimization techniques are available, they encounter significant practical challenges in high-dimensional search spaces and with computationally expensive objective functions, both of which are typical of the present problem. Viable alternatives include nondeterministic search heuristics, such as genetic algorithms (GA).127

GA are optimization techniques inspired by the process of natural evolution. The two primary concepts underlying genetic algorithms are natural selection and genetic dynamics. Natural selection involves choosing individuals with greater fitness to pass their traits to the next generation, based on superior performance. The goal of the genetic algorithm is to maximize fitness as the generations progress. A simplified depiction of the genetic algorithm, illustrating the key steps, is shown in Fig. 6b. First, based on parameters from the literature, multiple sets of parameters are generated through mutation operations to form the initial population, with each set representing an individual. The fitness function is then used to assess the quality of each individual, typically defined by the difference between the system energy calculated from the parameters and the system energy obtained from QM calculations. Next, a selection operation is applied to the current population, with methods such as rank-based selection used to retain individuals with higher fitness. These selected high-fitness individuals undergo crossover operations simulating genetic recombination to produce offspring. To enhance population diversity and prevent premature convergence to a local optimum, mutation operations are employed to randomly alter specific force field parameters, thereby generating new individuals. By applying selection, crossover, and mutation operations, a new generation is created to replace the old one, followed by a fitness evaluation of the new population. After each generation, a check is made to determine whether the termination criteria have been met. Common termination criteria include reaching the maximum number of iterations or achieving a predefined fitness threshold. If the termination criteria are satisfied, the individual with the best fitness in the current population is selected as the final solution.
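
A minimal GA loop of this kind is sketched below for a two-parameter toy model whose reference data are generated synthetically; the population size, mutation width, elitism, and blend-type crossover are arbitrary illustrative choices and do not reproduce any published ReaxFF fitting protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "force field": a Lennard-Jones-like pair energy with two fittable parameters
def model_energy(r, params):
    depth, r0 = params
    return depth * ((r0 / r) ** 12 - 2 * (r0 / r) ** 6)

r_ref = np.linspace(0.9, 2.5, 30)
e_ref = model_energy(r_ref, (1.0, 1.2))                     # synthetic "QM" reference data

def fitness(params):                                         # higher is better
    return -np.mean((model_energy(r_ref, params) - e_ref) ** 2)

def evolve(pop, n_gen=60, elite=4, mut_sigma=0.05):
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]                     # rank-based selection
        parents = pop[order[:len(pop) // 2]]
        children = []
        while len(children) < len(pop) - elite:
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random()
            child = alpha * p1 + (1 - alpha) * p2            # crossover (blend of two parents)
            child += rng.normal(0, mut_sigma, size=child.shape)   # mutation
            children.append(child)
        pop = np.vstack([pop[order[:elite]], children])      # keep elites, add offspring
    return pop[np.argmax([fitness(ind) for ind in pop])]

population = rng.uniform([0.2, 0.8], [3.0, 2.0], size=(40, 2))
best = evolve(population)
print("best (depth, r0):", best)                             # should approach (1.0, 1.2)
```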

To enable efficient and rapid parallel optimization of parameters, Goddard et al. developed GARFfield (genetic algorithm-based reactive force field optimizer method), a hybrid multi-objective Pareto-optimal parameter development scheme combining genetic algorithms, hill-climbing routines and conjugate-gradient minimization.128 The software architecture is illustrated in Fig. 6c. The core of the GA search mechanism is stochastic gradient search, which requires the evaluation of the objective function's fitness for each generation. A drawback of this method is the significant time cost associated with the repetitive search and fitness evaluation. To address this, a hybrid algorithm combining artificial neural networks (ANN) and GA has been proposed and adopted for fitting the ReaxFF parameters.129,130 The ANN reads and analyzes the data and total error values generated during the GA process, thereby enhancing the efficiency and reducing the time cost of the heuristic search.

4.3 Applications of ReaxFF

Modeling catalytic processes requires consideration of bond breaking and formation, yet the time and length scales inherent in nanoscale interface reactions cannot be addressed with QM. The bond order concept, coupled with the low computational cost, enables ReaxFF to bridge the gap between QM and non-reactive force fields. In this context, we focus on the application of ReaxFF to metal catalysts and their oxides to highlight the methodological strength of ReaxFF: modeling reactive chemistry at heterogeneous interfaces, rather than providing a comprehensive review of the literature.

Transition metals are widely used in heterogeneous catalytic reactions due to their unique electronic structures, with their geometric structures closely influencing their properties.131–134 The simulation scale achieved by ReaxFF is well-suited for modeling catalytic processes on metal surfaces and clusters, particularly at defect sites or unsaturated sites. Iron-based catalysts play a significant role in the water–gas shift and Fischer–Tropsch synthesis reactions. Using a ReaxFF parameter set fitted with a genetic algorithm, Wen et al. studied the structure–activity relationship of Fe nanoparticles in CO activation.135 The results showed that CO dissociation can be effectively promoted by introducing line dislocations and vacancies on Fe nanoparticles. Furthermore, four mechanisms of CO2 formation catalyzed by Fe nanoparticles were analyzed through the adsorption and activation of surface carbon in MD simulation trajectories (Fig. 7a–d). They found that, at the initial stage of the reaction, CO molecules adsorb on the surface of Fe nanoparticles and dissociate. At this point, the oxygen concentration on the surface is low, and the probability of CO2 formation via the Langmuir–Hinshelwood (LH) mechanism (in which adsorbed O atoms react with co-adsorbed CO molecules) is minimal. However, some CO molecules adsorbed on the Fe surface without dissociation are more likely to react with gaseous CO molecules, forming CO2 through the Eley–Rideal (ER) mechanism. As the reaction progresses, O atoms accumulate on the surface of the nanoparticles, and the surface also contains undissociated CO molecules, which can form CO2 through the LH mechanism. Additionally, adsorbed undissociated CO molecules can also react with each other to form CO2. According to the above analysis, it can be concluded that both the ER and LH mechanisms play crucial roles at different stages of CO2 formation catalyzed by Fe, with the ER mechanism primarily dominating in the early reaction stages, while the LH mechanism becomes more prominent in the later stages. For metal-catalyzed carbon nanotube (CNT) growth, the process involves the dissolution and migration of carbon atoms within the metal. CNT growth is initiated by the adsorption and dissociation of hydrocarbons on the metal surface. To investigate the factors influencing this process, Mueller et al. developed a ReaxFF reactive force field to describe the interaction between hydrocarbons and the Ni surface.121 They found that surface defects play a key role in the decomposition rate of CH3, particularly in the final step, where CH decomposes into C and H. Furthermore, due to the transferability of ReaxFF parameters, the Ni/C/H parameter set was utilized by Neyts et al. to elucidate the influence of ion bombardment on CNT formation.136 The simulation results showed that the energy of the impacting ion can be adjusted to break low-coordination C–C bonds, leading to the formation of new bonds in the network, which facilitates guided growth.


image file: d5sc02715b-f7.tif
Fig. 7 (a) E–R mechanism 1: adsorption of a molecule leading to CO2 formation. (b) E–R mechanism 2: adsorption of an atom leading to CO2 formation. (c) L–H mechanism 1: adsorption of two molecules leading to CO2 formation. (d) L–H mechanism 2: adsorption of one atom and one molecule leading to CO2 formation [reprinted with permission from ref. 128. Copyright 2019 Elsevier]. (e) Reactive MD simulation of methane light-off over an embedded PdOx cluster. (f) Simulated methane light-off curves comparing supported and embedded cluster models. (g) Reactive MD snapshot at methane activation light-off (pink arrow) over the embedded PdOx cluster model in panel (e) [reprinted with permission from ref. 131. Copyright 2016 American Chemical Society].

The ReaxFF energy expression combines bond-order and charge-transfer formalisms, making it particularly well-suited for describing the evolution of reaction processes in metal oxide materials, which exhibit both covalent and ionic interactions. For example, Chenoweth et al. developed V/O/C/H ReaxFF parameters aimed at describing the interaction of hydrocarbons with vanadium oxide.119,120,137 Based on MD simulations with this ReaxFF parameter set, they investigated the dissociation process of methanol on the surface of V2O5. On a defect-free surface, C–H dissociates more easily than O–H. On the defective surface, the oxygen atom in methanol binds to the reduced vanadium defect site, lowering the energy barrier of O–H dissociation. Additionally, they observed the desorption of a water molecule formed on the hydroxylated V surface, which leads to interlayer bonding between two metal atoms of different oxidation states, converting a V(V) site and a V(III) site into two V(IV) sites. Metal oxide materials typically exhibit partial, mixed, and irregular metal occupations at various crystallographic sites. To obtain the lowest-energy structure, researchers employ Monte Carlo methods to explore the possible metal oxide configurations. For example, Janik et al. used ReaxFF-based Grand Canonical Monte Carlo (GCMC) simulations to construct thermodynamically stable Pd/CeO2 surface models under reaction conditions.138 Based on the equilibrium configuration, they further simulated methane activation using reactive MD to evaluate the catalytic performance of different interface morphologies. By counting the number of reactive species, it was found that a sharp decrease in the number of gaseous CH4 molecules was accompanied by an increase in adsorbed H atoms at approximately 1440 K, indicating that methane was activated (Fig. 7e). By comparing methane activation on Pd and PdOx clusters that are both supported on and embedded in the CeO2 lattice (Fig. 7f), they found that the supported metal clusters facilitate methane activation more efficiently, with C–H bond breaking occurring at the unsaturated coordination sites on the cluster edges. In the embedded model, by contrast, methane activation is slower because CH4 has limited access to these sites. However, PdOx clusters in the embedded structure are more likely to activate methane. This can be attributed to the reduced exposure of edge sites in the embedded clusters, while PdOx/CeO2 interface mixing generates unique active sites (Fig. 7g). These studies demonstrate the utility of ReaxFF in modeling catalytic processes on metals and metal oxides, as it can simulate large-scale reaction processes that are not achievable through QM.

In summary, reactive force fields, such as ReaxFF, provide a powerful framework for simulating bond-breaking and bond-forming events at scales relevant to heterogeneous catalysis. They have been widely applied to investigate surface reaction mechanisms, catalyst activation and deactivation, support–metal interfacial chemistry, and degradation pathways under realistic temperature and pressure conditions. By explicitly accounting for variable bond orders and dynamic charge equilibration, reactive force fields enable the exploration of catalytic cycles beyond the reach of classical force fields, while maintaining computational tractability for systems containing thousands of atoms.

Despite these strengths, several limitations hinder their predictive reliability. The accuracy of reactive force fields strongly depends on the breadth and quality of the parameterization dataset, which must adequately sample relevant reactive configurations, surface reconstructions, and intermediates. Transferability remains a challenge as parameters optimized for one catalytic material or reaction type may not generalize to others without significant re-fitting. Current charge equilibration schemes (e.g., QEq) often fail to capture non-local charge transfer, polarization under strong electric fields, or complex redox processes, limiting their applicability in electrocatalysis and photocatalysis. Furthermore, the high dimensionality of parameter space makes systematic optimization difficult, and the lack of rigorous error estimation complicates the assessment of model reliability.

Future directions for reactive force fields in catalysis include the development of next-generation reactive potentials with improved electrostatic and polarization models, the incorporation of machine learning-assisted parameter optimization to accelerate and improve transferability, and the construction of large, diverse, and surface-specific training datasets. Hybrid simulation schemes that couple reactive force fields with on-the-fly QM calculations or ML force fields may offer a promising strategy to combine chemical reactivity with long-timescale dynamics, thereby bridging the accuracy-efficiency gap for realistic catalytic environments.

5 Machine learning force fields

5.1 Forms of machine learning force fields

The application of ML methods to the construction of PESs dates back to 1992, when Sumpter et al. introduced neural networks to model and predict the energy of molecular systems, thereby describing the vibrational behavior of polymer molecules.139 Subsequently, in 1995, Blank et al. used neural networks to approximate the PES for hydrogen formation reactions on silicon surfaces.57 This work marked a significant milestone in PES modeling and paved the way for the application of ML methods to model complex chemical reactions.140–142

Unlike classical force fields and reactive force fields, which rely on specific formulae, ML force fields do not have a predefined functional form. Instead, ML force fields use atomic local environments as descriptors and are trained using ML algorithms, such as neural networks or Gaussian process regression, to construct the PES of a system. By moving away from intuitive, physics-based expressions, ML force fields can offer a more accurate PES.

Designing appropriate descriptors to represent the local environment of atoms is crucial for the effectiveness of ML force fields. The local chemical environment refers to the interactions between an atom and its neighboring atoms within a specified cutoff radius rcut, as illustrated in Fig. 8a. Behler et al. introduced the concept of locality approximation,33,143 which asserts that atomic interactions are primarily determined by the local chemical environment of the central atom, rather than by the influences of all other atoms in the system. This approximation reduces computational costs while maintaining the model's transferability.


image file: d5sc02715b-f8.tif
Fig. 8 (a) Workflow for fitting a force field using machine learning algorithms. (b) Framework of the neural network algorithm used in force field construction. (c) Timeline of the development of machine learning force field models.

Building upon the locality approximation, the ML force field further assumes that the total potential energy E of the system can be expressed as the sum of the energies Êi of individual central atoms, as shown in eqn (16). The energy Êi is determined by the local chemical environment of atom i.

 
E = Σi Êi (16)

In a neural network force field, each atom is represented by a neural network (Fig. 8b), with the input corresponding to the information about its local chemical environment and the output representing the atomic energy term Êi. The total potential energy E of the system is obtained by summing the outputs of the neural networks for all atoms. This approach constructs the model at the atomic level rather than for the entire system, ensuring its scalability. It is important to emphasize that the atomic potential energy Êi is not a predefined label but rather an intermediate variable introduced based on the potential energy decomposition assumption. The total potential energy E is the direct target of model training.
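
In code, the decomposition of eqn (16) amounts to applying one small feed-forward network to each atom's descriptor vector and summing the outputs. The PyTorch sketch below illustrates only the architecture, using random descriptors as placeholders; element-specific networks, descriptor construction, and training are omitted.

```python
import torch
import torch.nn as nn

class AtomicEnergyNet(nn.Module):
    """Maps a per-atom descriptor vector G_i to an atomic energy contribution Ê_i."""
    def __init__(self, n_descriptors=8, n_hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_descriptors, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, descriptors):                            # descriptors: (n_atoms, n_descriptors)
        atomic_energies = self.net(descriptors).squeeze(-1)    # Ê_i, shape (n_atoms,)
        return atomic_energies.sum()                           # E = Σ_i Ê_i, eqn (16)

# Random descriptors standing in for symmetry-function vectors of a 20-atom structure
model = AtomicEnergyNet()
G = torch.randn(20, 8)
energy = model(G)
print("total energy (arbitrary units):", float(energy))
```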

Although the types and coordinates of atoms in the system can fully describe the PES, directly using these as inputs to the ML force fields is not appropriate. On the one hand, atoms do not have absolute coordinates, and the overall translation and rotation of the system, while altering the coordinates of individual atoms, do not affect the system's potential energy. On the other hand, regarding atom types, exchanging positions of identical atoms in the system has no effect on its properties, meaning that changing the atomic index order does not influence the system's behavior. Therefore, it is necessary to transform the atomic coordinates and type information into suitable descriptors for ML, which can then be used as inputs to the ML force field model.

Therefore, the descriptors should adhere to the principle of symmetry, meaning they must be invariant under translation, rotation, and permutation of atomic indices. More precisely, the descriptors should have a bijective relationship with the atomic structure and type information, ensuring that for any given system configuration, there is a unique corresponding descriptor, and different structures have distinct descriptors. In addition to fulfilling these requirements, a good descriptor should also exhibit continuity, low computational cost, and high representational efficiency. Several methods for constructing descriptors have been proposed, including the Smooth Overlap of Atomic Positions (SOAP),144 Coulomb Matrix (CM),145 Atom-Centered Symmetry Functions (ACSF),143 Spherical Harmonics,146 Many-Body Tensor Representation,147 Bispectrum features,148,149 and others.
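
As a concrete example of such a descriptor, the sketch below evaluates Behler-type radial symmetry functions (Gaussians of the pair distance damped by a cosine cutoff) for a single central atom; the η and Rs grids and the toy geometry are arbitrary illustrative choices.

```python
import numpy as np

def cutoff(r, r_cut):
    """Smooth cosine cutoff f_c(r) used in Behler-type symmetry functions."""
    return np.where(r < r_cut, 0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)

def radial_acsf(center, neighbors, etas, r_shifts, r_cut=6.0):
    """Radial symmetry functions G2 for one central atom.

    G2(eta, Rs) = sum_j exp(-eta*(r_ij - Rs)^2) * f_c(r_ij)
    Returns a descriptor vector of length len(etas)*len(r_shifts) that is
    invariant to translation, rotation, and permutation of the neighbors.
    """
    r_ij = np.linalg.norm(neighbors - center, axis=1)
    fc = cutoff(r_ij, r_cut)
    G = [np.sum(np.exp(-eta * (r_ij - rs) ** 2) * fc)
         for eta in etas for rs in r_shifts]
    return np.array(G)

# Toy geometry: one central atom and four neighbors (coordinates in Å)
center = np.zeros(3)
neighbors = np.array([[1.0, 0, 0], [0, 1.5, 0], [0, 0, 2.0], [2.5, 2.5, 0]])
descriptor = radial_acsf(center, neighbors,
                         etas=[0.5, 4.0], r_shifts=[0.0, 2.0, 4.0])
print(descriptor)      # 6-component descriptor vector for the central atom
```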

In addition to atomic environment descriptors, another crucial aspect of the ML force fields is the ML algorithms themselves. With advancements in ML techniques and the widespread use of GPU computing, numerous ML force field models have been developed, as shown in Fig. 8c. Early ML force fields primarily relied on traditional ML techniques, such as Gaussian Approximation Potentials (GAP)148,149 and fully connected neural network architectures based on Atom-Centered Symmetry Functions (ACSF).143 As algorithms have advanced, more sophisticated ML force fields have emerged, including GDML,61 ANI,150,151 DPMD46 and others.152–158 Recently, ML force fields based on graph neural network (GNN) architectures have gained traction, with models like NequIP62 and Allegro.63 These models do not require explicit descriptor construction; instead, they represent the entire system as a graph and incorporate the effects of the chemical environment through a message-passing mechanism. Given the wide variety of ML force fields, this review focuses on introducing the more widely used neural network force field. This emphasis does not imply that other models are less effective. For example, GAP based on Gaussian process regression, sGDML rooted in kernel ridge regression, and other linear and kernel-based methods such as Moment Tensor Potential (MTP)159 and Atomic Cluster Expansion (ACE),160 offer interpretable and systematically improvable representations of the PES. These approaches typically require less training data than deep neural networks and permit rigorous error estimation, making them particularly appealing for catalytic studies where generating high-fidelity training datasets is computationally expensive.

5.2 Fitting methods of neural network force fields

Neural networks are mathematical models that mimic the structure and functioning of neurons in the human brain. Typically, these models consist of an input layer, hidden layers, and an output layer, with each hidden layer containing a set of interconnected neurons. Fig. 8b illustrates a simple neural network model, where each input is progressively transformed through neurons in the hidden layers before reaching the output. The conversion between neurons is achieved by the following formula:
 
y = f(Σj ωjxj + b) (17)
where x represents the input from the preceding layer, ω denotes the weights, b is the bias term, and f signifies the activation function.

In the neural network force field, the input consists of the localized chemical environment information of each atom, and the corresponding atomic energy is obtained through the mapping between neurons, as shown in eqn (18)–(20).

 
image file: d5sc02715b-t16.tif(18)
 
image file: d5sc02715b-t17.tif(19)
 
image file: d5sc02715b-t18.tif(20)

A notable approximation in this method is the restriction of atomic interactions to within a cutoff sphere. The resulting short-range potential is well-suited for describing local bonding, even in complex atomic environments. However, for many systems, long-range interactions (e.g., electrostatic and dispersion forces) are also important. To address this issue, the atomic charges, like the environment-dependent atomic energies, can be made to depend on vectors of ACSFs that describe the atomic environments and are obtained as outputs of dedicated atomic charge neural networks. These charges are then used to calculate the long-range electrostatic energy using standard methods, such as Coulomb's law or the Ewald summation. PhysNet69 is a prototypical example of this class of methods. Although this method incorporates electrostatic interaction calculations, it relies on atomic charges determined solely by the local chemical environment, which limits its accuracy in describing non-local effects. To overcome this limitation, methods such as the charge equilibration neural network technique (CENT)161 have been developed to determine atomic charges based on the global environment, effectively addressing non-local interaction challenges. This method also uses ACSF vectors to describe the atomic environment as input to the neural network. The key difference lies in its output: atomic electronegativity values that are subsequently used in a charge equilibration scheme to determine the charges.

Compared to traditional neural network force fields that rely on manually designed fixed descriptors, the graph neural network force fields developed recently demonstrate superior generalization capabilities and dynamically adapt to varying topological structures. NequIP exemplifies this approach by employing an E(3)-equivariant architecture to accurately model atomic interactions.62 In this framework, atoms serve as nodes, while edges represent interactions between a central atom and its neighbors within a cutoff radius, forming an atomic graph. NequIP operates directly on these graphs, with features that transform equivariantly under three-dimensional rotations, translations, and reflections. This geometric equivariance allows the network to inherently respect physical symmetries, enhancing data efficiency and prediction accuracy. Through multiple layers of equivariant message passing that update node and edge features, NequIP effectively captures complex many-body interactions and anisotropic effects essential for realistic molecular and materials simulations. Similar to NequIP, methods such as MACE65 and Allegro63 are also graph-based architectures that model interatomic interactions through message passing schemes, enabling the explicit treatment of many-body effects.

During the training process of a neural network, the weights Wi and biases bi are initialized randomly and optimized to minimize a loss function that quantifies the difference between the predicted values and the reference data. The primary focus is typically on two indicators: the system's potential energy and the atomic forces. Commonly used loss functions include the MAE and RMSE. The smaller the MAE or RMSE on the test set, the more accurate the model's predictions.
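
A typical combined loss used in such training weights per-atom energy errors against force errors, as in the sketch below; the weights, the per-atom normalization, and the dummy batch are illustrative choices rather than a prescription from any specific code.

```python
import torch

def ef_loss(pred_energy, ref_energy, pred_forces, ref_forces,
            n_atoms, w_energy=1.0, w_force=10.0):
    """Weighted energy + force loss commonly used when training NN potentials.

    Energies are compared per atom so that structures of different sizes
    contribute on an equal footing; the weights are tunable hyperparameters.
    """
    e_term = torch.mean(((pred_energy - ref_energy) / n_atoms) ** 2)
    f_term = torch.mean((pred_forces - ref_forces) ** 2)
    return w_energy * e_term + w_force * f_term

# Dummy batch of two structures with 20 atoms each (numbers are placeholders)
n_atoms = torch.tensor([20.0, 20.0])
pred_E, ref_E = torch.tensor([-101.2, -98.7]), torch.tensor([-101.0, -99.0])
pred_F, ref_F = torch.randn(2, 20, 3), torch.randn(2, 20, 3)
print(float(ef_loss(pred_E, ref_E, pred_F, ref_F, n_atoms)))
```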

In some cases, additional terms are included based on the system's properties; for example, for solid material systems, a stress-related loss function term may be added. In addition to the weights Wi and biases bi that are determined during training for a given data set, model training also involves selecting hyperparameters, such as the number of hidden layers and neurons per layer. In theory, if the number of hidden layers and neurons per layer is sufficiently large, the model can approximate the energy with arbitrary precision. However, too many hidden layers and neurons can significantly increase the computational cost. Therefore, selecting appropriate hyperparameters is crucial for the accuracy and efficiency of the model. Hyperparameters are typically optimized using exhaustive search schemes like grid search or random search, often combined with informed guesses for suitable search ranges. For many hyperparameters, model performance remains fairly robust to small changes, and default values perform well across different data sets.

Before optimizing any hyperparameters, the test set must be separated from the available reference data, and the remaining data are then divided into training and validation sets. This separation is essential because, in force field applications, the model is routinely applied to structures that are not included in the training data, so it must be able to predict unseen data reliably. Therefore, for each trial combination of hyperparameters, the model is trained on the training data, and its performance is evaluated on the validation set to estimate the generalization error. The best-performing model is then selected. To enhance the model's generalization ability, a k-fold cross-validation scheme can also be used. For more complex neural network models, additional considerations are required, and further details can be found in the summary by Tokita et al.162
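
A minimal version of this splitting protocol, with a held-out test set and k rotating training/validation folds, might look as follows; the fold count and split fractions are arbitrary illustrative values.

```python
import numpy as np

def kfold_indices(n_samples, k=5, test_fraction=0.1, seed=0):
    """Hold out a test set once, then yield (train, validation) index splits for k folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(test_fraction * n_samples)
    test_idx, rest = idx[:n_test], idx[n_test:]
    folds = np.array_split(rest, k)
    splits = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        splits.append((train_idx, val_idx))
    return test_idx, splits

test, cv_splits = kfold_indices(n_samples=1000)
print(len(test), [len(v) for _, v in cv_splits])   # e.g. 100 test structures, 5 folds of ~180
```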

Training ML force fields often relies on large amounts of high-quality data, which typically demands expensive QM calculations. To reduce training costs, several methods for optimizing data acquisition have been developed. One approach is to generate initial data using less expensive classical molecular dynamics simulations, followed by selecting a minimal subset for more computationally expensive ab initio calculations.163 Additionally, strategies such as active learning and enhanced sampling can further improve efficiency.164 Active learning significantly reduces data requirements by intelligently selecting the most informative data points for labeling, while enhanced sampling techniques (e.g., metadynamics, umbrella sampling) can effectively capture rare events and improve the coverage of reaction paths. By combining these methods, data-efficient ML force fields can be developed, enabling the construction of high-precision PESs under limited-data conditions.
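
One common realization of active learning is query by committee: several independently trained models predict energies for unlabeled candidate structures, and only the structures with the largest ensemble disagreement are sent to QM for labeling. The sketch below shows just this selection step on synthetic predictions; the committee size, the candidate pool, and the uncertainty measure are illustrative assumptions.

```python
import numpy as np

def select_for_labeling(candidate_energies, n_select=5):
    """Pick the candidate structures on which an ensemble of models disagrees most.

    candidate_energies : (n_models, n_candidates) energies predicted by a
    committee of independently trained force fields for unlabeled structures.
    Returns the indices of the n_select structures with the largest ensemble
    standard deviation, i.e. the most informative ones to compute with QM.
    """
    uncertainty = np.std(candidate_energies, axis=0)
    return np.argsort(uncertainty)[::-1][:n_select]

# Synthetic predictions from a committee of 4 models for 100 candidate structures
rng = np.random.default_rng(2)
preds = rng.normal(loc=-50.0, scale=1.0, size=(4, 100))
preds[:, [7, 42, 81]] += rng.normal(0.0, 5.0, size=(4, 3))   # make a few structures controversial
print("structures to send to QM:", select_for_labeling(preds))
```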

ML force field models typically rely on local structural information, such as interatomic distances and angles, to predict the PES. However, traditional models tend to overlook long-range electrostatic interactions between charged particles in the system. As a result, traditional models based on local structural features still face challenges when studying systems (such as clusters, interfaces, and gas-phase systems) that are significantly influenced by long-range electrostatic interactions. To address this, researchers have incorporated a long-range electrostatic interaction term into the traditional model, learning hidden variables from the local atomic descriptors and applying the Ewald summation to account for these interactions.165–167 This approach has been successfully validated in systems such as charged polar molecules and biological macromolecules. Another challenge with ML force fields is that the system's structural features can change under different environmental conditions. For example, variations in coordinating atoms, temperature, pressure, or solvents can significantly affect the structure and interactions of molecules, making it challenging for a model trained under one set of conditions to transfer efficiently to another. In contrast to the parameter transferability observed in classical force fields and ReaxFF for similar systems, ML force fields, particularly those based on neural networks, generally lack parameter transferability. As a result, the model needs to be retrained for new systems.

To improve learning efficiency and accuracy on new systems or tasks while reducing reliance on expensive high-quality training data, transfer learning strategies have proven effective. This process mainly includes pretraining followed by fine-tuning, incremental learning, and active learning–assisted transfer. Typically, foundation models, such as MACE-MP-0,168 are first pretrained on large-scale general datasets, such as the Open Catalyst Project (OCP), to capture universal physicochemical features. They are then fine-tuned to rapidly adapt to specific systems, thereby reducing the demand for high-quality training data. Additionally, incremental learning enables continuous model updates as new data become available, preventing the forgetting of previously learned knowledge. Active learning incorporates uncertainty evaluation to selectively sample high-value data, enhancing training efficiency.169 These strategies, when combined, effectively facilitate the rapid deployment and efficient application of ML force fields across diverse catalytic and materials systems.

5.3 Applications of machine learning force fields

The advantage of ML force fields lies in their ability to achieve accuracy comparable to that of QM, while being significantly faster than QM methods. These simulations have been widely applied to a variety of systems, including molecular clusters,170 solid materials,171,172 solutions,173,174 and biomolecules.69 Additionally, ML force fields are particularly well-suited for simulating reactive dynamic processes. To illustrate the potential of ML force fields in catalysis, this paper will focus on three primary application cases: the stability of supported metals, catalytic reaction processes, and material structure prediction.

The surface energy of metal nanoparticles in the heterogeneous catalytic reaction is high, making sintering more likely, which subsequently leads to catalyst deactivation. A common solution is to disperse the nanoparticles on the surface of a substrate. The construction of PESs by neural networks offers comparable accuracy to QM but is several orders of magnitude faster, making it suitable for long-term simulations of nanoparticle sintering processes. To explore the influence of substrate on sintering behavior, Li et al. simulated the agglomeration process of gold nanoparticles on silica and ceria surfaces using neural network-based deep potential molecular dynamics (DPMD).175 The DPMD simulation results (Fig. 9a) showed that small gold nanoparticles are more likely to migrate on the surface of silica and rapidly merge with large nanoparticles. Similarly, on the flat CeO2−x (111) surface, small gold nanoparticles also migrate and agglomerate. In contrast, a gold nanoparticle at the step site of CeO2−x (111) is highly stable, with no significant migration during the simulation period. Further analysis revealed that the migration of the small particles is affected by the interaction between the metal and the support. Similar to hydrophilic behavior, gold nanoparticles on ceria exhibit smaller contact angles compared to silica, especially on the step site of the surface. These simulation results were also verified by experimental characterization. The rapid migration of gold nanoparticles on the surface of silica was observed using sequential high-resolution TEM (HRTEM), where all nanoparticles merged into a single particle. Conversely, Au nanoparticles on the surface of ceria were stabilized at the step site.


image file: d5sc02715b-f9.tif
Fig. 9 (a) Snapshot images from DPMD simulations of Au128 + Au32 + Au32/SiO2 at 500 ps and Au128 + Au32 + Au32/CeO2−x-step at 10 ns. (b) Time-resolved HRTEM images showing the Au/CeO2(111) image at 120 s and Au/SiO2 at 416 s [reprinted with permission from ref. 168. Copyright 2022 American Chemical Society]. (c) Evolution of atomic oxygen positions along representative trajectories of post-dissociation dynamics of O2 on Pd(111) and Pd(100) at 160 K, leading to different equilibrated distances [reprinted with permission from ref. 169. Copyright 2023 American Chemical Society]. (d) Ternary Zn–Cr–O phase diagram. The green region indicates compositions with the spinel-type skeleton structure as the global minimum; the blue circles labelled by numbers represent the composition. Only the spinel ZnCrO phases in the red dashed triangle are thermodynamically favored (see f). (e) Structure motifs of the spinel ZnCr2O4 and Zn3Cr3O8 bulk phases. (f) Convex hulls for all the ZnCrO structures indicated by the blue line. The blue triangles and black circles represent structures with negative and positive formation energy relative to the ZnO and CrO2 phases, respectively [reprinted with permission from ref. 171. Copyright 2019 Springer Nature].

The precision and speed with which PESs are constructed using neural networks enable the simulation of many more MD trajectories for the reaction process. For example, Jiang et al. investigated the equilibration dynamics of hot oxygen atoms following the dissociation of O2 on Pd(100) and Pd(111) surfaces using MD simulations based on a scalable neural network force field.176 By analyzing hundreds of trajectories, they found that, on both surfaces, oxygen atoms produced by O2 dissociation tend to hop to neighboring sites and perform a random-walk-type motion. This mechanism results in a finite distribution of equilibrated atomic distances, which is consistent with experimental observations. The initial molecular orientation and surface thermal fluctuations also have significant effects on the overall dissociation kinetics. Similarly, a study on the decomposition dynamics of formate (HCO2) on Cu surfaces demonstrated the application of neural network force fields to catalytic processes.177 Based on globally accurate high-dimensional PESs fitted with density functional theory data, the study predicted the mean translational energy distribution and angular distribution of desorbed CO2 on Cu(111) and Cu(100) surfaces. Additionally, the decomposition of HCOOH on different Cu surfaces is structurally sensitive due to different surface repulsions. These studies represent a significant advancement in modeling surface reactions using ML force fields, providing a more comprehensive understanding of the state of reactive species on catalyst surfaces and improving theory-experiment agreement.

ML force fields can also be used to predict the composition and structure of materials. For example, the thermodynamic phase diagram of metal oxide alloys remains largely unknown due to their compositional variability and associated atomic structural complexity, as does the catalytic kinetics on surfaces of different compositions. By combining a global neural network potential with stochastic surface walking global optimization, Liu et al. investigated the relationship between the structure of ternary zinc–chromium oxide (ZnCrO) catalysts and the performance of syngas (CO/H2) conversion.178 They explored the PES for different compositions (ZnxCryO) and constructed the ternary Zn–Cr–O phase diagram (Fig. 9d). Further calculation of the formation energy revealed the presence of a small, stable composition island in the phase diagram, where the oxide alloy crystallizes primarily in a spinel phase (Fig. 9e and f). Two representative crystal phases (ZnCr2O4 and Zn3Cr3O8) were selected for further analysis of the syngas conversion mechanism. Reaction kinetics results indicated that a planar [CrO4] site, dynamically formed under reaction conditions, is the active center for methanol production. This planar [CrO4] is only present when the Zn : Cr ratio exceeds 1 : 2, after the appearance of [ZnO6], demonstrating that ZnCrO catalysis is highly sensitive to the Zn : Cr ratio. This work highlights the role of ML force fields in predicting the composition and structure of materials, providing valuable insights into the connection between atomic structure and properties.

Overall, ML force fields have rapidly emerged as a transformative tool for heterogeneous catalysis, offering near-quantum accuracy at a lower computational cost. By learning from large and diverse QM datasets, ML force fields can capture complex chemical environments, including surface reconstructions, multi-element active sites, adsorbate-induced electronic effects, and reaction pathways, while enabling simulations at scales inaccessible to direct QM methods. Recent advances in equivariant graph neural networks (e.g., MACE, NequIP, GemNet) and databases (e.g., MP, OCP) have further improved data efficiency, accuracy, and transferability across different catalyst surfaces and reaction intermediates. Notably, MACE-OFF179 represents a significant advancement by extending the MACE architecture to a large-scale pretraining paradigm, analogous to foundation models in natural language processing. MACE-OFF is trained on millions of atomic environments spanning diverse elements, bonding motifs, and structural phases. By leveraging this pretraining, the model can be adapted to new catalytic systems with minimal fine-tuning or, in some cases, without any additional retraining. And MACE-OFF has demonstrated near-DFT accuracy across a range of downstream tasks, including molecular dynamics, adsorption energy predictions, and surface reaction energetics, with orders-of-magnitude lower data requirements compared to conventional task-specific ML force fields. Moreover, SO3LR180 further pushes the frontier by combining SO(3)-equivariance with explicit long-range interaction modeling. This architecture allows the model to capture electrostatics, polarization, and other nonlocal effects that are critical in catalytic environments but often neglected in standard ML force fields. Unlike most equivariant graph neural networks, which focus primarily on short-range many-body interactions, SO3LR incorporates efficient formulations of long-range physics directly into the network, enabling accurate treatment of charged surfaces, polar adsorbates, and extended catalytic interfaces. Importantly, SO3LR achieves these capabilities while remaining computationally scalable to large systems, positioning it as a promising candidate for modeling realistic catalytic reactors or electrochemical interfaces.

Despite their promise, ML force fields face significant challenges. Their predictive power is fundamentally limited by the quality, diversity, and representativeness of the training data, making out-of-distribution generalization a key bottleneck. Transferability across phases, surface terminations, and adsorbate coverages often requires retraining or fine-tuning, and constructing high-quality datasets for reactive, charged, or open-shell catalytic systems can be computationally prohibitive. Moreover, most current ML force fields lack explicit treatment of long-range electrostatics, charge transfer, and excited-state effects, which are critical in electrocatalysis, photocatalysis, and plasmon-assisted catalysis. Benchmarking across relevant catalyst classes and establishing rigorous uncertainty quantification protocols remain underdeveloped.

Looking forward, the integration of active learning, transfer learning, and foundation models (e.g., those developed in the OCP) offers a promising route to enhance data efficiency and generalization. Hybrid simulations that couple ML force fields with reactive force fields or on-the-fly quantum refinement can provide a balanced description of reactivity and long-timescale dynamics. Furthermore, expanding benchmark datasets to include realistic catalytic conditions such as solvent effects, applied potentials, and high coverage regimes will be essential to closing the gap between ML force field predictions and experimental observables, thereby enabling their widespread adoption in predictive catalyst design.

6 Summary and outlook

In summary, this review provides an introduction to the principles of force field-based PESs, highlighting three main types of force fields, their corresponding fitting methods, and their applications. The evolution of force field-based PESs reflects a progression from simplified models to more precise, comprehensive, and adaptable approaches. Furthermore, this review offers a detailed comparison of the strengths and limitations of each type, alongside a discussion of their applicability to different systems, with a particular focus on heterogeneous catalysis. These advancements encompass studies on supported metal nanoparticle morphology, interfacial molecule distribution and diffusion, catalytic reaction processes, and the prediction of catalytic materials, though they do not represent an exhaustive list of all possible examples. By leveraging the cost-effectiveness of force field methods, researchers have overcome the limitations of QM, enabling simulations of previously inaccessible phenomena and significantly advancing scientific understanding.

Although force field methods have seen widespread application, they still face several challenges that lead to inconsistencies between experimental results and computational simulations. The key to solving this issue lies in improving the precision of constructing PESs using force fields. The first challenge is in database construction. Since experimental data are often costly to obtain, force field fitting typically relies on more readily available computational data. However, computational data often involve considerable simplification, which can compromise the accuracy of the PES. To address this, a hybrid dataset can be built, where computational data are used to fit the force field, and the resulting model is validated and fine-tuned using experimental data, thereby reducing the gap between simulation and experiment. The second challenge concerns the architecture of the force field model. QM methods achieve accuracy through electronic-level descriptions, while force field methods still have shortcomings in electrostatic interactions. For example, the QEq method, which is primarily used for static charge distribution estimation, cannot deal well with the redistribution and transfer of electrons in non-equilibrium systems such as chemical reactions. Therefore, future force field models should aim to improve charge equilibration methods, so as to explicitly consider electrostatic interactions and non-local effects. Furthermore, the tradeoff between accuracy and computational cost in force field fitting needs to be considered. Force field fitting typically requires significant computational resources and data, especially for high-precision fitting. Traditional fitting methods may require thousands of simulations to adjust and optimize parameters, making the process time-consuming. As a solution, the concept of meta-learning could be introduced to fit force fields for new systems. By selecting optimal algorithm and parameter configurations based on knowledge acquired from historical fitting tasks, a meta-learning model can provide effective fits and strong generalization capabilities, even in the case of sparse data.

Beyond the general limitations discussed above, several catalysis-specific challenges remain urgent. First, the transferability of force fields, particularly ML force fields, across different phases, chemical compositions, and surface states is often limited by the diversity of their training data. Addressing this requires multi-fidelity and transfer learning strategies that leverage both large general-purpose datasets (e.g., MP, OCP, MD17) and system-specific fine-tuning. Second, long-range electrostatics and charge transfer remain difficult to capture accurately in heterogeneous environments, especially under dynamic or reactive conditions. Extensions such as polarizable models and explicit charge equilibration schemes within ML frameworks show promise. Third, electrochemical environments, involving applied bias or constant potential conditions, introduce additional complexity for reactive force fields and ML force fields, necessitating integration with constant potential molecular dynamics and implicit/explicit solvation models. Fourth, fitting procedures and data generation for high-dimensional potentials remain computationally expensive, and active learning, data selection, and meta-learning approaches offer routes to reduce cost without sacrificing accuracy. Finally, benchmark datasets such as the OCP, MD17, and MP are critical not only for training but also for enabling reproducible model comparisons, helping guide community progress toward more robust and transferable force fields.

Author contributions

Chenglong Qiu, Tore Brinck and Jiacheng Wang made substantial contributions to discussions of the content. Chenglong Qiu prepared the draft manuscript. Tore Brinck and Jiacheng Wang revised the manuscript before submission.

Conflicts of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

No primary research results, software or code have been included and no new data were generated or analysed as part of this review.

Acknowledgements

The authors are grateful for the financial support from the National Natural Science Foundation of China (52472231, 52311530113, W2521017) and the Central Guidance on Science and Technology Development Fund of Zhejiang Province (2024ZY01011). T. Brinck acknowledges support by the Swedish Research Council (VR) (2021-05881).

References

  1. Y. Gao and B. Zhu, Simulating Structural Dynamics of Metal Catalysts under Operative Conditions, J. Phys. Chem. Lett., 2024, 15, 8351–8359 CrossRef CAS PubMed.
  2. J. D. Johnson and A. V. Zhukhovitskiy, Hetero-ene Metathesis, ACS Catal., 2024, 14, 21–33 CrossRef CAS.
  3. H. Zhao, X. Lv and Y.-G. Wang, Realistic Modeling of the Electrocatalytic Process at Complex Solid-Liquid Interface, Adv. Sci., 2023, 10, 2303677 CrossRef CAS PubMed.
  4. S. Ma and Z.-P. Liu, Machine Learning for Atomic Simulation and Activity Prediction in Heterogeneous Catalysis: Current Status and Future, ACS Catal., 2020, 10, 13213–13226 CrossRef CAS.
  5. N. Yao, X. Chen, Z.-H. Fu and Q. Zhang, Applying Classical, Ab Initio, and Machine-Learning Molecular Dynamics Simulations to the Liquid Electrolyte for Rechargeable Batteries, Chem. Rev., 2022, 122, 10970–11021 CrossRef CAS PubMed.
  6. Y. He, M. Wang, H. Ji, Q. Cheng, S. Liu, Y. Huan, T. Qian and C. Yan, Molecular Dynamics Simulations for Electrocatalytic CO2 Reduction: Bridging Macroscopic Experimental Observations and Microscopic Explanatory Mechanisms, Adv. Funct. Mater., 2025, 35, 2413703 CrossRef CAS.
  7. M. Born and R. Oppenheimer, Zur Quantentheorie der Molekeln, Ann. Phys., 1927, 389, 457–484 CrossRef.
  8. E. Kocer, T. W. Ko and J. Behler, Neural Network Potentials: A Concise Overview of Methods, Annu. Rev. Phys. Chem., 2022, 73, 163–186 CrossRef CAS PubMed.
  9. G. A. Marchant, M. A. Caro, B. Karasulu and L. B. Pártay, Exploring the configuration space of elemental carbon with empirical and machine learned interatomic potentials, npj Comput. Mater., 2023, 9, 131 CrossRef.
  10. C. Li, Y. Li and B. Jiang, First-principles surface reaction rates by ring polymer molecular dynamics and neural network potential: role of anharmonicity and lattice motion, Chem. Sci., 2023, 14, 5087–5098 RSC.
  11. L. Verlet, Computer “Experiments” on Classical Fluids. I. Thermodynamical Properties of Lennard-Jones Molecules, Phys. Rev., 1967, 159, 98–103 CrossRef CAS.
  12. E. C. Y. Yuan, A. Kumar, X. Guan, E. D. Hermes, A. S. Rosen, J. Zádor, T. Head-Gordon and S. M. Blau, Analytical ab initio hessian from a deep learning potential for transition state optimization, Nat. Commun., 2024, 15, 8865 CrossRef CAS PubMed.
  13. P. Yu, T. Q. Chen, Z. Yang, C. Q. He, A. Patel, Y.-h. Lam, C.-Y. Liu and K. N. Houk, Mechanisms and Origins of Periselectivity of the Ambimodal [6 + 4] Cycloadditions of Tropone to Dimethylfulvene, J. Am. Chem. Soc., 2017, 139, 8251–8258 CrossRef CAS PubMed.
  14. G. Kresse and J. Furthmüller, Efficiency of ab initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci., 1996, 6, 15–50 CrossRef CAS.
  15. T. D. Kühne, M. Iannuzzi, M. Del Ben, V. V. Rybkin, P. Seewald, F. Stein, T. Laino, R. Z. Khaliullin, O. Schütt, F. Schiffmann, D. Golze, J. Wilhelm, S. Chulkov, M. H. Bani-Hashemian, V. Weber, U. Borštnik, M. Taillefumier, A. S. Jakobovits, A. Lazzaro, H. Pabst, T. Müller, R. Schade, M. Guidon, S. Andermatt, N. Holmberg, G. K. Schenter, A. Hehn, A. Bussy, F. Belleflamme, G. Tabacchi, A. Glöβ, M. Lass, I. Bethune, C. J. Mundy, C. Plessl, M. Watkins, J. VandeVondele, M. Krack and J. Hutter, CP2K: An electronic structure and molecular dynamics software package - Quickstep: Efficient and accurate electronic structure calculations, J. Chem. Phys., 2020, 152, 194103 CrossRef PubMed.
  16. A. I. Krylov and P. M. W. Gill, Q-Chem: an engine for innovation, Wiley Interdiscip. Rev.: Comput. Mol. Sci., 2013, 3, 317–326 CAS.
  17. M. J. Frisch, G. W. Trucks, H. B. Schlegel, G. E. Scuseria, M. A. Robb, J. R. Cheeseman, G. Scalmani, V. Barone, G. A. Petersson, H. Nakatsuji, X. Li, M. Caricato, A. V. Marenich, J. Bloino, B. G. Janesko, R. Gomperts, B. Mennucci, H. P. Hratchian, J. V. Ortiz, A. F. Izmaylov, J. L. Sonnenberg, D. Williams-Young, F. Ding, F. Lipparini, F. Egidi, J. Goings, B. Peng, A. Petrone, T. Henderson, D. Ranasinghe, V. G. Zakrzewski, J. Gao, N. Rega, G. Zheng, W. Liang, M. Hada, M. Ehara, K. Toyota, R. Fukuda, J. Hasegawa, M. Ishida, T. Nakajima, Y. Honda, O. Kitao, H. Nakai, T. Vreven, K. Throssell, J. A. Montgomery Jr., J. E. Peralta, F. Ogliaro, M. J. Bearpark, J. J. Heyd, E. N. Brothers, K. N. Kudin, V. N. Staroverov, T. A. Keith, R. Kobayashi, J. Normand, K. Raghavachari, A. P. Rendell, J. C. Burant, S. S. Iyengar, J. Tomasi, M. Cossi, J. M. Millam, M. Klene, C. Adamo, R. Cammi, J. W. Ochterski, R. L. Martin, K. Morokuma, O. Farkas, J. B. Foresman and D. J. Fox, Gaussian 16 Rev. C.01, 2016 Search PubMed.
  18. J. Wang, H. Xu, Y. Zhang, J. Wu, H. Ma, X. Zhan, J. Zhu and D. Cheng, Discovery of Alloy Catalysts Beyond Pd for Selective Hydrogenation of Reformate via First-Principle Screening with Consideration of H-Coverage, Angew. Chem., Int. Ed., 2024, 63, e202317592 CrossRef CAS PubMed.
  19. J. Liu, S. Wang, Y. Tian, H. Guo, X. Chen, W. Lei, Y. Yu and C. Wang, Screening of Silver-Based Single-Atom Alloy Catalysts for NO Electroreduction to NH3 by DFT Calculations and Machine Learning, Angew. Chem., Int. Ed., 2025, 64, e202414314 CrossRef CAS PubMed.
  20. S. Deng, C. Qiu, Z. Yao, X. Sun, Z. Wei, G. Zhuang, X. Zhong and J.-g. Wang, Multiscale simulation on thermal stability of supported metal nanocatalysts, Wiley Interdiscip. Rev.: Comput. Mol. Sci., 2019, 9, e1405 Search PubMed.
  21. J. J. P. Stewart, Optimization of parameters for semiempirical methods VI: more modifications to the NDDO approximations and re-optimization of parameters, J. Mol. Model., 2013, 19, 1–32 CrossRef CAS PubMed.
  22. J. P. Perdew, K. Burke and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett., 1997, 78, 1396 CrossRef CAS.
  23. D. G. Liakos and F. Neese, Is It Possible To Obtain Coupled Cluster Quality Energies at near Density Functional Theory Cost? Domain-Based Local Pair Natural Orbital Coupled Cluster vs. Modern Density Functional Theory, J. Chem. Theory Comput., 2015, 11, 4054–4063 CrossRef CAS PubMed.
  24. M. S. Gordon and M. W. Schmidt, Advances in electronic structure theory, in Theory and Applications of Computational Chemistry, 2005, pp. 1167–1189 Search PubMed.
  25. J. E. Jones and S. Chapman, On the determination of molecular fields.—II. From the equation of state of a gas, Proc. R. Soc. London, 1924, 106, 463–477 Search PubMed.
  26. J. E. Lennard-Jones, Processes of adsorption and diffusion on solid surfaces, Trans. Faraday Soc., 1932, 28, 333–359 RSC.
  27. J. D. Weeks, D. Chandler and H. C. Andersen, Role of Repulsive Forces in Determining the Equilibrium Structure of Simple Liquids, J. Chem. Phys., 1971, 54, 5237–5247 CrossRef CAS.
  28. O. T. Unke, S. Chmiela, H. E. Sauceda, M. Gastegger, I. Poltavsky, K. T. Schütt, A. Tkatchenko and K.-R. Müller, Machine Learning Force Fields, Chem. Rev., 2021, 121, 10142–10186 CrossRef CAS PubMed.
  29. Y. Wang, H. Shao, C. Zhang, F. Liu, J. Zhao, S. Zhu, M. K. H. Leung and J. Hu, Molecular dynamics for electrocatalysis: Mechanism explanation and performance prediction, Energy Rev., 2023, 2, 100028 CrossRef.
  30. T. P. Senftle, S. Hong, M. M. Islam, S. B. Kylasa, Y. Zheng, Y. K. Shin, C. Junkermeier, R. Engel-Herbert, M. J. Janik, H. M. Aktulga, T. Verstraelen, A. Grama and A. C. T. van Duin, The ReaxFF reactive force-field: development, applications and future directions, npj Comput. Mater., 2016, 2, 15011 CrossRef CAS.
  31. Y. Han, D. Jiang, J. Zhang, W. Li, Z. Gan and J. Gu, Development, applications and challenges of ReaxFF reactive force field in molecular simulations, Front. Chem. Sci. Eng., 2016, 10, 16–38 CrossRef CAS.
  32. X. Cheng, C. Wu, J. Xu, Y. Han, W. Xie and P. Hu, Leveraging Machine Learning Potentials for In-Situ Searching of Active sites in Heterogeneous Catalysis, Precis. Chem., 2024, 2, 570–586 CrossRef CAS PubMed.
  33. J. Behler, Four Generations of High-Dimensional Neural Network Potentials, Chem. Rev., 2021, 121, 10037–10072 CrossRef CAS PubMed.
  34. P. Schlexer Lamoureux, K. T. Winther, J. A. Garrido Torres, V. Streibel, M. Zhao, M. Bajdich, F. Abild-Pedersen and T. Bligaard, Machine Learning for Computational Heterogeneous Catalysis, ChemCatChem, 2019, 11, 3581–3601 CrossRef CAS.
  35. T. Mou, H. S. Pillai, S. Wang, M. Wan, X. Han, N. M. Schweitzer, F. Che and H. Xin, Bridging the complexity gap in computational heterogeneous catalysis with machine learning, Nat. Catal., 2023, 6, 122–136 CrossRef.
  36. W. Yang, T. T. Fidelis and W.-H. Sun, Machine Learning in Catalysis, From Proposal to Practicing, ACS Omega, 2020, 5, 83–88 CrossRef CAS PubMed.
  37. S. Singh and R. B. Sunoj, Molecular Machine Learning for Chemical Catalysis: Prospects and Challenges, Acc. Chem. Res., 2023, 56, 402–412 CrossRef CAS PubMed.
  38. B. W. J. Chen, L. Xu and M. Mavrikakis, Computational Methods in Heterogeneous Catalysis, Chem. Rev., 2021, 121, 1007–1048 CrossRef CAS PubMed.
  39. O. M. Shambhawi, T. S. Choksi and A. A. Lapkin, The design and optimization of heterogeneous catalysts using computational methods, Catal. Sci. Technol., 2024, 14, 515–532 RSC.
  40. J. A. Harrison, J. D. Schall, S. Maskey, P. T. Mikulski, M. T. Knippenberg and B. H. Morrow, Review of force fields and intermolecular potentials used in atomistic computational materials research, Appl. Phys. Rev., 2018, 5, 031104 Search PubMed.
  41. Q. Mao, M. Feng, X. Z. Jiang, Y. Ren, K. H. Luo and A. C. T. van Duin, Classical and reactive molecular dynamics: Principles and applications in combustion and energy systems, Prog. Energy Combust. Sci., 2023, 97, 101084 CrossRef.
  42. H. Essén, The physics of the Born-Oppenheimer approximation, Int. J. Quantum Chem., 1977, 12, 721–735 CrossRef.
  43. S. Jiao and Z. Xu, Selective Gas Diffusion in Graphene Oxides Membranes: A Molecular Dynamics Simulations Study, ACS Appl. Mater. Interfaces, 2015, 7, 9052–9059 CrossRef CAS PubMed.
  44. C. Qiu, S. Deng, F. Huan, Y. Sun, Z. Yao, S. Wang, Z. Pan and J. Wang, Molecular-Level Insights into Adsorption and Diffusion Properties of CO and CO2 on Pt-Supported Graphene, J. Phys. Chem. C, 2023, 127, 16117–16124 CrossRef CAS.
  45. J. Wu, S. Yang, M. Williamson, H. S. Wong, T. Bhudia, H. Pu, Q. Yin, D. Ma and W. Chen, Microscopic mechanism of cellulose nanofibers modified cemented gangue backfill materials, Adv. Compos. Hybrid Mater., 2025, 8, 177 CrossRef CAS.
  46. H. Wang, L. Zhang, J. Han and W. E, DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics, Comput. Phys. Commun., 2018, 228, 178–184 CrossRef CAS.
  47. L. Zhang, H. Wang, R. Car and W. E, Phase Diagram of a Deep Potential Water Model, Phys. Rev. Lett., 2021, 126, 236001 CrossRef CAS PubMed.
  48. M. Chen, X. Jiang, L. Zhang, X. Chen, Y. Wen, Z. Gu, X. Li and M. Zheng, The emergence of machine learning force fields in drug design, Med. Res. Rev., 2024, 44, 1147–1182 CrossRef PubMed.
  49. D. Zhang, P. Yi, X. Lai, L. Peng and H. Li, Active machine learning model for the dynamic simulation and growth mechanisms of carbon on metal surface, Nat. Commun., 2024, 15, 344 CrossRef CAS PubMed.
  50. Y. Yue, S. A. Mohamed, N. D. Loh and J. Jiang, Toward a Generalizable Machine-Learned Potential for Metal-Organic Frameworks, ACS Nano, 2025, 19, 933–949 CrossRef CAS PubMed.
  51. G. A. Bramley, O. T. Beynon, P. V. Stishenko and A. J. Logsdail, The application of QM/MM simulations in heterogeneous catalysis, Phys. Chem. Chem. Phys., 2023, 25, 6562–6585 RSC.
  52. L. Chanussot, A. Das, S. Goyal, T. Lavril, M. Shuaibi, M. Riviere, K. Tran, J. Heras-Domingo, C. Ho, W. Hu, A. Palizhati, A. Sriram, B. Wood, J. Yoon, D. Parikh, C. L. Zitnick and Z. Ulissi, Open Catalyst 2020 (OC20) Dataset and Community Challenges, ACS Catal., 2021, 11, 6059–6072 CrossRef CAS.
  53. S. J. Weiner, P. A. Kollman, D. A. Case, U. C. Singh, C. Ghio, G. Alagona, S. Profeta and P. Weiner, A new force field for molecular mechanical simulation of nucleic acids and proteins, J. Am. Chem. Soc., 1984, 106, 765–784 CrossRef CAS.
  54. B. R. Brooks, R. E. Bruccoleri, B. D. Olafson, D. J. States, S. Swaminathan and M. Karplus, CHARMM: A program for macromolecular energy, minimization, and dynamics calculations, J. Comput. Chem., 1983, 4, 187–217 CrossRef CAS.
  55. W. L. Jorgensen and J. Tirado-Rives, The OPLS [optimized potentials for liquid simulations] potential functions for proteins, energy minimizations for crystals of cyclic peptides and crambin, J. Am. Chem. Soc., 1988, 110, 1657–1666 CrossRef CAS PubMed.
  56. A. C. T. van Duin, S. Dasgupta, F. Lorant and W. A. Goddard, ReaxFF: A Reactive Force Field for Hydrocarbons, J. Phys. Chem. A, 2001, 105, 9396–9409 CrossRef CAS.
  57. T. B. Blank, S. D. Brown, A. W. Calhoun and D. J. Doren, Neural network models of potential energy surfaces, J. Chem. Phys., 1995, 103, 4129–4137 CrossRef CAS.
  58. W. L. Jorgensen, D. S. Maxwell and J. Tirado-Rives, Development and Testing of the OPLS All-Atom Force Field on Conformational Energetics and Properties of Organic Liquids, J. Am. Chem. Soc., 1996, 118, 11225–11236 CrossRef CAS.
  59. S. Spicher and S. Grimme, Robust Atomistic Modeling of Materials, Organometallic, and Biochemical Systems, Angew. Chem., Int. Ed., 2020, 59, 15665–15673 CrossRef CAS PubMed.
  60. J. S. Smith, B. T. Nebgen, R. Zubatyuk, N. Lubbers, C. Devereux, K. Barros, S. Tretiak, O. Isayev and A. E. Roitberg, Approaching coupled cluster accuracy with a general-purpose neural network potential through transfer learning, Nat. Commun., 2019, 10, 2903 CrossRef PubMed.
  61. S. Chmiela, H. E. Sauceda, I. Poltavsky, K.-R. Müller and A. Tkatchenko, sGDML: Constructing accurate and data efficient molecular force fields using machine learning, Comput. Phys. Commun., 2019, 240, 38–45 CrossRef CAS.
  62. S. Batzner, A. Musaelian, L. Sun, M. Geiger, J. P. Mailoa, M. Kornbluth, N. Molinari, T. E. Smidt and B. Kozinsky, E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials, Nat. Commun., 2022, 13, 2453 CrossRef CAS PubMed.
  63. A. Musaelian, S. Batzner, A. Johansson, L. Sun, C. J. Owen, M. Kornbluth and B. Kozinsky, Learning local equivariant representations for large-scale atomistic dynamics, Nat. Commun., 2023, 14, 579 CrossRef CAS PubMed.
  64. J. Gasteiger, F. Becker and S. Günnemann, GemNet: Universal Directional Graph Neural Networks for Molecules, arXiv, 2021, preprint, arxiv.2106.08903,  DOI:10.48550/arXiv.2106.08903, https://arxiv.org/abs/2106.08903.
  65. I. Batatia, D. P. Kovács, G. N. C. Simm, C. Ortner and G. Csányi, MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields, arXiv, 2022, preprint, arxiv.2206.07697,  DOI:10.48550/arXiv.2206.07697, https://arxiv.org/abs/2206.07697.
  66. A. K. Rappe and W. A. Goddard, III, Charge equilibration for molecular dynamics simulations, J. Phys. Chem., 1991, 95, 3358–3363 CrossRef CAS.
  67. M. J. Hossain, P. Gaikwad, Y. K. Shin, J. A. Schulze, K. A. Penrod, M. Li, Y. Lin, G. Pawar and A. C. T. van Duin, eReaxFF force field development for BaZr0.8Y0.2O3-δ solid oxide electrolysis cells applications, npj Comput. Mater., 2024, 10, 136 CrossRef CAS.
  68. Z. Song, L. Fan, S. Lu, C. Ling, Q. Zhou and J. Wang, Inverse design of promising electrocatalysts for CO2 reduction via generative models and bird swarm algorithm, Nat. Commun., 2025, 16, 1053 CrossRef CAS PubMed.
  69. O. T. Unke and M. Meuwly, PhysNet: A Neural Network for Predicting Energies, Forces, Dipole Moments, and Partial Charges, J. Chem. Theory Comput., 2019, 15, 3678–3693 CrossRef CAS PubMed.
  70. O. T. Unke, S. Chmiela, M. Gastegger, K. T. Schütt, H. E. Sauceda and K.-R. Müller, SpookyNet: Learning force fields with electronic degrees of freedom and nonlocal effects, Nat. Commun., 2021, 12, 7273 CrossRef CAS.
  71. S. Grimme, Semiempirical GGA-type density functional constructed with a long-range dispersion correction, J. Comput. Chem., 2006, 27, 1787–1799 CrossRef CAS PubMed.
  72. A. K. Rappe, C. J. Casewit, K. S. Colwell, W. A. Goddard, III and W. M. Skiff, UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations, J. Am. Chem. Soc., 1992, 114, 10024–10035 CrossRef CAS.
  73. P. Zhang and G.-W. Chern, Machine learning nonequilibrium electron forces for spin dynamics of itinerant magnets, npj Comput. Mater., 2023, 9, 32 CrossRef.
  74. H. Yu, Y. Zhong, L. Hong, C. Xu, W. Ren, X. Gong and H. Xiang, Spin-dependent graph neural network potential for magnetic materials, Phys. Rev. B, 2024, 109, 144426 CrossRef CAS.
  75. M. F. Crowley, M. J. Williamson and R. C. Walker, CHAMBER: Comprehensive support for CHARMM force fields within the AMBER software, Int. J. Quantum Chem., 2009, 109, 3767–3772 CrossRef CAS.
  76. G. A. Khoury, J. P. Thompson, J. Smadbeck, C. A. Kieslich and C. A. Floudas, Forcefield_PTM: Ab Initio Charge and AMBER Forcefield Parameters for Frequently Occurring Post-Translational Modifications, J. Chem. Theory Comput., 2013, 9, 5653–5674 CrossRef CAS PubMed.
  77. H. Sun, P. Ren and J. R. Fried, The COMPASS force field: parameterization and validation for phosphazenes, Comput. Theor. Polym. Sci., 1998, 8, 229–246 CrossRef CAS.
  78. S. L. Mayo, B. D. Olafson and W. A. Goddard, DREIDING: a generic force field for molecular simulations, J. Phys. Chem., 1990, 94, 8897–8909 CrossRef CAS.
  79. J. M. Seminario, Calculation of intramolecular force fields from second-derivative tensors, Int. J. Quantum Chem., 1996, 60, 1271–1277 CrossRef.
  80. A. E. A. Allen, M. C. Payne and D. J. Cole, Harmonic Force Constants for Molecular Mechanics Force Fields via Hessian Matrix Projection, J. Chem. Theory Comput., 2018, 14, 274–281 CrossRef CAS PubMed.
  81. R. Wang, M. Ozhgibesov and H. Hirao, Analytical hessian fitting schemes for efficient determination of force-constant parameters in molecular mechanics, J. Comput. Chem., 2018, 39, 307–318 CrossRef CAS PubMed.
  82. R. Wang, M. Ozhgibesov and H. Hirao, Partial hessian fitting for determining force constant parameters in molecular mechanics, J. Comput. Chem., 2016, 37, 2349–2359 CrossRef CAS.
  83. U. C. Singh and P. A. Kollman, An approach to computing electrostatic charges for molecules, J. Comput. Chem., 1984, 5, 129–145 CrossRef CAS.
  84. B. H. Besler, K. M. Merz Jr and P. A. Kollman, Atomic charges derived from semiempirical methods, J. Comput. Chem., 1990, 11, 431–439 CrossRef CAS.
  85. C. I. Bayly, P. Cieplak, W. Cornell and P. A. Kollman, A well-behaved electrostatic potential based method using charge restraints for deriving atomic charges: the RESP model, J. Phys. Chem., 1993, 97, 10269–10280 CrossRef CAS.
  86. D. A. Case, H. M. Aktulga, K. Belfon, D. S. Cerutti, G. A. Cisneros, V. W. D. Cruzeiro, N. Forouzesh, T. J. Giese, A. W. Götz, H. Gohlke, S. Izadi, K. Kasavajhala, M. C. Kaymak, E. King, T. Kurtzman, T.-S. Lee, P. Li, J. Liu, T. Luchko, R. Luo, M. Manathunga, M. R. Machado, H. M. Nguyen, K. A. O'Hearn, A. V. Onufriev, F. Pan, S. Pantano, R. Qi, A. Rahnamoun, A. Risheh, S. Schott-Verdugo, A. Shajan, J. Swails, J. Wang, H. Wei, X. Wu, Y. Wu, S. Zhang, S. Zhao, Q. Zhu, T. E. Cheatham, III, D. R. Roe, A. Roitberg, C. Simmerling, D. M. York, M. C. Nagan and K. M. Merz, Jr., AmberTools, J. Chem. Inf. Model., 2023, 63, 6183–6191 CrossRef CAS.
  87. T. Lu, A comprehensive electron wavefunction analysis toolbox for chemists, Multiwfn, J. Chem. Phys., 2024, 161, 082503 CrossRef CAS PubMed.
  88. C. Xie, Z. Niu, D. Kim, M. Li and P. Yang, Surface and Interface Control in Nanoparticle Catalysis, Chem. Rev., 2020, 120, 1184–1249 CrossRef CAS PubMed.
  89. S. Wang, C. Liu, W. Hao, Y. Zhuang, J. Chen, X. Zhu, L. Wang, X. Niu, J. Mao, D. Ma and Q. Zhao, Structural evolution of metal single-atoms and clusters in catalysis: Which are the active sites under operative conditions?, Chem. Sci., 2025, 16, 6203–6218 RSC.
  90. J. Wang, J. Ma, H. Du, R. Ma and J. Wang, Electrifying nitrate conversion: Dual-metal-site catalysts as a game-changer for sustainable NH3 production, Nano Res., 2025 DOI:10.26599/NR.2025.94907798.
  91. X. Zhang, X. Wang, T. Sun, M. Wang, J. Zhu and J. Wang, Design of alternative oxidation processes for hybrid water electrolysis systems: recent progress and perspective, Chem. Commun., 2025, 61, 11353–11363 RSC.
  92. W. B. Zhang, J. Liu, S. H. Lu, H. Zhang, H. Wang, X. D. Wang, Q. P. Cao, D. X. Zhang and J. Z. Jiang, Size effect on atomic structure in low-dimensional Cu-Zr amorphous systems, Sci. Rep., 2017, 7, 7291 CrossRef CAS PubMed.
  93. Z. Luo, G. Zhao, H. Pan and W. Sun, Strong Metal-Support Interaction in Heterogeneous Catalysts, Adv. Energy Mater., 2022, 12, 2201395 CrossRef CAS.
  94. C. Qiu, C. Zhao, X. Sun, S. Deng, G. Zhuang, X. Zhong, Z. Wei, Z. Yao and J.-G. Wang, Multiscale Simulation of Morphology Evolution of Supported Pt Nanoparticles via Interfacial Control, Langmuir, 2019, 35, 6393–6402 CrossRef CAS PubMed.
  95. C. Zhao, C. Qiu, S. Deng, X. Sun, Y. Gao, Y. Cao, H. Zhuo, G. Zhuang, X. Zhong, Z. Wei, Z. Yao and J.-g. Wang, 2D-3D transformation of palladium and gold nanoparticles on functionalized Mo2C by multiscale simulation, Appl. Surf. Sci., 2019, 481, 554–563 CrossRef CAS.
  96. Z. Wei, Z. Zhao, C. Qiu, S. Huang, Z. Yao, M. Wang, Y. Chen, Y. Lin, X. Zhong, X. Li and J. Wang, Tripodal Pd metallenes mediated by Nb2C MXenes for boosting alkynes semihydrogenation, Nat. Commun., 2023, 14, 661 CrossRef CAS.
  97. C. Qiu, Y. Wang, Y. Li, X. Sun, G. Zhuang, Z. Yao, S. Deng and J. Wang, A generalized formula for two-dimensional diffusion of CO in graphene nanoslits with different Pt loadings, Green Energy Environ., 2020, 5, 322–332 CrossRef.
  98. J. Ding, S. Zhou, C. Qiu, Y. Wu, Y. Li, C. Fan, F. Shao, S. Deng and J. Wang, In-situ water separation enhanced methyl glycolate oxidation to methyl glyoxylate by catalytic membrane reactor, J. Membr. Sci., 2025, 714, 123403 CrossRef CAS.
  99. C. Qiu, S. Deng, X. Sun, Y. Gao, Z. Yao, G. Zhuang, S. Wang and J.-g. Wang, Meso-scale simulation on mechanism of Na+-gated water-conducting nanochannels in zeolite NaA, J. Membr. Sci., 2021, 635, 119462 CrossRef CAS.
  100. Z. Cheng, Z. Yao, S. Kong, C. Qiu, A. B. Wong and J. Wang, Advance in Novel Device Design and Microenvironment Modulation for Bismuth-Based CO2-to-Formic Acid Electrocatalysis, Adv. Energy Mater., 2025, e02767 CrossRef CAS.
  101. C. Du, C. Qiu, Z. Fang, P. Li, Y. Gao, J. Wang and W. Chen, Interface hydrophobic tunnel engineering: A general strategy to boost electrochemical conversion of N2 to NH3, Nano Energy, 2022, 92, 106784 CrossRef CAS.
  102. J. D. Gale, L. M. LeBlanc, P. R. Spackman, A. Silvestri and P. Raiteri, A Universal Force Field for Materials, Periodic GFN-FF: Implementation and Examination, J. Chem. Theory Comput., 2021, 17, 7827–7849 CrossRef CAS PubMed.
  103. H. S. Johnston and C. Parr, Activation Energies from Bond Energies. I. Hydrogen Transfer Reactions, J. Am. Chem. Soc., 1963, 85, 2544–2551 CrossRef CAS.
  104. S. J. Stuart, A. B. Tutein and J. A. Harrison, A reactive potential for hydrocarbons with intermolecular interactions, J. Chem. Phys., 2000, 112, 6472–6486 CrossRef CAS.
  105. D. W. Brenner, Empirical potential for hydrocarbons for use in simulating the chemical vapor deposition of diamond films, Phys. Rev. B: Condens. Matter Mater. Phys., 1990, 42, 9458–9471 CrossRef CAS PubMed.
  106. T.-R. Shan, B. D. Devine, J. M. Hawkins, A. Asthagiri, S. R. Phillpot and S. B. Sinnott, Second-generation charge-optimized many-body potential for Si/SiO2 and amorphous silica, Phys. Rev. B: Condens. Matter Mater. Phys., 2010, 82, 235302 CrossRef.
  107. M. Z. Bazant, E. Kaxiras and J. F. Justo, Environment-dependent interatomic potential for bulk silicon, Phys. Rev. B: Condens. Matter Mater. Phys., 1997, 56, 8542–8552 CrossRef CAS.
  108. A. C. T. van Duin, A. Strachan, S. Stewman, Q. Zhang, X. Xu and W. A. Goddard, ReaxFFSiO Reactive Force Field for Silicon and Silicon Oxide Systems, J. Phys. Chem. A, 2003, 107, 3803–3811 CrossRef CAS.
  109. M. F. Russo and A. C. T. van Duin, Atomistic-scale simulations of chemical reactions: Bridging from quantum chemistry to engineering, Nucl. Instrum. Methods Phys. Res., Sect. B, 2011, 269, 1549–1554 CrossRef CAS.
  110. S. Cheung, W.-Q. Deng, A. C. T. van Duin and W. A. Goddard, ReaxFFMgH Reactive Force Field for Magnesium Hydride Systems, J. Phys. Chem. A, 2005, 109, 851–859 CrossRef CAS PubMed.
  111. A. C. T. van Duin, V. S. Bryantsev, M. S. Diallo, W. A. Goddard, O. Rahaman, D. J. Doren, D. Raymand and K. Hermansson, Development and Validation of a ReaxFF Reactive Force Field for Cu Cation/Water Interactions and Copper Metal/Metal Oxide/Metal Hydroxide Condensed Phases, J. Phys. Chem. A, 2010, 114, 9507–9514 CrossRef CAS PubMed.
  112. Q. Zhang, T. Çaǧın, A. van Duin, W. A. Goddard, Y. Qi and L. G. Hector, Adhesion and nonwetting-wetting transition in the Al/α-Al2O3 interface, Phys. Rev. B: Condens. Matter Mater. Phys., 2004, 69, 045423 CrossRef.
  113. K. D. Nielson, A. C. T. van Duin, J. Oxgaard, W.-Q. Deng and W. A. Goddard, Development of the ReaxFF Reactive Force Field for Describing Transition Metal Catalyzed Reactions, with Application to the Initial Stages of the Catalytic Formation of Carbon Nanotubes, J. Phys. Chem. A, 2005, 109, 493–499 CrossRef CAS PubMed.
  114. S. S. Han, J. K. Kang, H. M. Lee, A. C. T. van Duin and W. A. Goddard, III, The theoretical study on interaction of hydrogen with single-walled boron nitride nanotubes. I. The reactive force field ReaxFFHBN development, J. Chem. Phys., 2005, 123, 114703 CrossRef PubMed.
  115. K. Chenoweth, S. Cheung, A. C. T. van Duin, W. A. Goddard and E. M. Kober, Simulations on the Thermal Decomposition of a Poly(dimethylsiloxane) Polymer Using the ReaxFF Reactive Force Field, J. Am. Chem. Soc., 2005, 127, 7192–7202 CrossRef CAS.
  116. C. Qiu, J. Chen, F. Huan, S. Deng, Z. Yao, S. Wang and J. Wang, Curing and Cross-Linking Processes in the Poly(3,3-bis-azidomethyl oxetane)-tetrahydrofuran/Toluene Diisocyanate/Trimethylolpropane System: A Density Functional Theory and Accelerated ReaxFF Molecular Dynamics Investigation, ACS Omega, 2024, 9, 33153–33161 CrossRef CAS PubMed.
  117. K.-i. Nomura, R. K. Kalia, A. Nakano, P. Vashishta, A. C. T. van Duin and W. A. Goddard, Dynamic Transition in the Structure of an Energetic Crystal during Chemical Reactions at Shock Front Prior to Detonation, Phys. Rev. Lett., 2007, 99, 148303 CrossRef PubMed.
  118. A. Strachan, A. C. T. van Duin, D. Chakraborty, S. Dasgupta and W. A. Goddard, Shock Waves in High-Energy Materials: The Initial Chemical Events in Nitramine RDX, Phys. Rev. Lett., 2003, 91, 098301 CrossRef PubMed.
  119. K. Chenoweth, A. C. T. van Duin and W. A. Goddard, III, The ReaxFF Monte Carlo Reactive Dynamics Method for Predicting Atomistic Structures of Disordered Ceramics: Application to the Mo3VO Catalyst, Angew. Chem., Int. Ed., 2009, 48, 7630–7634 CrossRef CAS PubMed.
  120. K. Chenoweth, A. C. T. van Duin, P. Persson, M.-J. Cheng, J. Oxgaard and W. A. Goddard, III, Development and Application of a ReaxFF Reactive Force Field for Oxidative Dehydrogenation on Vanadium Oxide Catalysts, J. Phys. Chem. C, 2008, 112, 14645–14654 CrossRef CAS.
  121. J. E. Mueller, A. C. T. van Duin and W. A. Goddard, III, Development and Validation of ReaxFF Reactive Force Field for Hydrocarbon Chemistry Catalyzed by Nickel, J. Phys. Chem. C, 2010, 114, 4939–4949 CrossRef CAS.
  122. L. Liu, C. Bai, H. Sun and W. A. Goddard, III, Mechanism and Kinetics for the Initial Steps of Pyrolysis and Combustion of 1,6-Dicyclopropane-2,4-hexyne from ReaxFF Reactive Dynamics, J. Phys. Chem. A, 2011, 115, 4941–4950 CrossRef CAS PubMed.
  123. A. C. T. van Duin, B. V. Merinov, S. S. Jang and W. A. Goddard, ReaxFF Reactive Force Field for Solid Oxide Fuel Cell Systems with Application to Oxygen Ion Transport in Yttria-Stabilized Zirconia, J. Phys. Chem. A, 2008, 112, 3133–3140 CrossRef CAS PubMed.
  124. A. C. T. van Duin, B. V. Merinov, S. S. Han, C. O. Dorso and W. A. Goddard, III, ReaxFF Reactive Force Field for the Y-Doped BaZrO3 Proton Conductor with Applications to Diffusion Rates for Multigranular Systems, J. Phys. Chem. A, 2008, 112, 11414–11422 CrossRef CAS PubMed.
  125. S. S. Han, S.-H. Choi and A. C. T. van Duin, Molecular dynamics simulations of stability of metal-organic frameworks against H2O using the ReaxFF reactive force field, Chem. Commun., 2010, 46, 5713–5715 RSC.
  126. A. C. T. van Duin, J. M. A. Baas and B. van de Graaf, Delft molecular mechanics: a new approach to hydrocarbon force fields. Inclusion of a geometry-dependent charge calculation, J. Chem. Soc., Faraday Trans., 1994, 90, 2881–2895 RSC.
  127. H. R. Larsson, A. C. T. van Duin and B. Hartke, Global optimization of parameters in the reactive force field ReaxFF for SiOH, J. Comput. Chem., 2013, 34, 2178–2189 CrossRef CAS PubMed.
  128. A. Jaramillo-Botero, S. Naserifar and W. A. Goddard, General Multiobjective Force Field Optimization Framework, with Application to Reactive Force Fields for Silicon Carbide, J. Chem. Theory Comput., 2014, 10, 1426–1439 CrossRef CAS PubMed.
  129. C. M. Daksha, J. Yeon, S. C. Chowdhury and J. W. Gillespie Jr, Automated ReaxFF parametrization using machine learning, Comput. Mater. Sci., 2021, 187, 110107 CrossRef CAS.
  130. J. Yeon, S. C. Chowdhury, C. M. Daksha and J. W. Gillespie, Development of Mg/Al/Si/O ReaxFF Parameters for Magnesium Aluminosilicate Glass Using an Artificial Neural Network-Assisted Genetic Algorithm, J. Phys. Chem. C, 2021, 125, 18380–18394 CrossRef CAS.
  131. X. Zhang, J. Song, T. Sun, M. Wang, J. Zhu, Y. Yu and J. Wang, Constructing nanoneedle arrays of heterostructured RuO2–Co3O4 with tip-effect-induced enrichment of reactants for enhanced water oxidation, Chem. Commun., 2025, 61, 8723–8726 RSC.
  132. Y. Feng, X. Wang, J. Ma, N. Wang, Q. Liu, K. Suenaga, W. Chen, J. Zhang, Y. Zhou and J. Wang, A Solid-Solution with Asymmetric Ni-O-Cr Sites for Boosting Protonation toward Anodic Oxidation, Adv. Energy Mater., 2024, 14, 2401501 CrossRef CAS.
  133. J. Wang, W. He, Y. Zong, Y. Tang, J. Wang and R. Ma, Electronic redistribution induced by interaction between ruthenium nanoparticles and Ni–N(O)–C sites boosts alkaline water electrolysis, Chem. Commun., 2024, 60, 9444–9447 RSC.
  134. H. Zhu, J. J. Wang, Z. Xu, Y. Tan and J. Wang, Pd Nanoparticle Size-Dependent H* Coverage for Cu-Catalyzed Nitrate Electro-Reduction to Ammonia in Neutral Electrolyte, Small, 2024, 20, 2404919 CrossRef CAS PubMed.
  135. K. Lu, C.-F. Huo, Y. He, W.-P. Guo, Q. Peng, Y. Yang, Y.-W. Li and X.-D. Wen, The structure-activity relationship of Fe nanoparticles in CO adsorption and dissociation by reactive molecular dynamics simulations, J. Catal., 2019, 374, 150–160 CrossRef CAS.
  136. E. C. Neyts, K. Ostrikov, Z. J. Han, S. Kumar, A. C. T. van Duin and A. Bogaerts, Defect Healing and Enhanced Nucleation of Carbon Nanotubes by Low-Energy Ion Bombardment, Phys. Rev. Lett., 2013, 110, 065501 CrossRef CAS PubMed.
  137. K. Chenoweth, A. C. T. van Duin and W. A. Goddard, ReaxFF Reactive Force Field for Molecular Dynamics Simulations of Hydrocarbon Oxidation, J. Phys. Chem. A, 2008, 112, 1040–1053 CrossRef CAS PubMed.
  138. T. P. Senftle, A. C. T. van Duin and M. J. Janik, Methane Activation at the Pd/CeO2 Interface, ACS Catal., 2016, 7, 327–332 CrossRef.
  139. B. G. Sumpter and D. W. Noid, Potential energy surfaces for macromolecules. A neural network technique, Chem. Phys. Lett., 1992, 192, 455–462 CrossRef CAS.
  140. D. F. R. Brown, M. N. Gibbs and D. C. Clary, Combining ab initio computations, neural networks, and diffusion Monte Carlo: An efficient method to treat weakly bound molecules, J. Chem. Phys., 1996, 105, 7597–7604 CrossRef CAS.
  141. F. V. Prudente, P. H. Acioli and J. J. S. Neto, The fitting of potential energy surfaces using neural networks: Application to the study of vibrational levels of H3+, J. Chem. Phys., 1998, 109, 8801–8808 CrossRef CAS.
  142. S. Lorenz, A. Groß and M. Scheffler, Representing high-dimensional potential-energy surfaces for reactions at surfaces by neural networks, Chem. Phys. Lett., 2004, 395, 210–215 CrossRef CAS.
  143. J. Behler and M. Parrinello, Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces, Phys. Rev. Lett., 2007, 98, 146401 CrossRef PubMed.
  144. A. P. Bartók, R. Kondor and G. Csányi, On representing chemical environments, Phys. Rev. B, 2013, 87, 184115 CrossRef.
  145. M. Rupp, A. Tkatchenko, K.-R. Müller and O. A. von Lilienfeld, Fast and Accurate Modeling of Molecular Atomization Energies with Machine Learning, Phys. Rev. Lett., 2012, 108, 058301 CrossRef PubMed.
  146. S. Jindal, S. Chiriki and S. S. Bulusu, Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster, J. Chem. Phys., 2017, 146, 204301 CrossRef PubMed.
  147. H. Huo and M. Rupp, Unified representation of molecules and crystals for machine learning, Mach. Learn.: Sci. Technol., 2022, 3, 045017 Search PubMed.
  148. A. P. Bartók, M. C. Payne, R. Kondor and G. Csányi, Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, without the Electrons, Phys. Rev. Lett., 2010, 104, 136403 CrossRef PubMed.
  149. A. P. Bartók and G. Csányi, Gaussian approximation potentials: A brief tutorial introduction, Int. J. Quantum Chem., 2015, 115, 1051–1057 CrossRef.
  150. J. S. Smith, O. Isayev and A. E. Roitberg, ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost, Chem. Sci., 2017, 8, 3192–3203 RSC.
  151. X. Gao, F. Ramezanghorbani, O. Isayev, J. S. Smith and A. E. Roitberg, TorchANI: A Free and Open Source PyTorch-Based Deep Learning Implementation of the ANI Neural Network Potentials, J. Chem. Inf. Model., 2020, 60, 3408–3415 CrossRef CAS PubMed.
  152. M. A. Wood and A. P. Thompson, Extending the accuracy of the SNAP interatomic potential form, J. Chem. Phys., 2018, 148, 241721 CrossRef PubMed.
  153. C. W. Rosenbrock, K. Gubaev, A. V. Shapeev, L. B. Pártay, N. Bernstein, G. Csányi and G. L. W. Hart, Machine-learned interatomic potentials for alloys and alloy phase diagrams, npj Comput. Mater., 2021, 7, 24 CrossRef CAS.
  154. K. Yao, J. E. Herr, D. W. Toth, R. McKintyre and J. Parkhill, The TensorMol-0.1 model chemistry: a neural network augmented with long-range physics, Chem. Sci., 2018, 9, 2261–2269 RSC.
  155. F. A. Faber, A. S. Christensen, B. Huang and O. A. von Lilienfeld, Alchemical and structural distribution based representation for universal quantum machine learning, J. Chem. Phys., 2018, 148, 241717 CrossRef PubMed.
  156. T. W. Ko, J. A. Finkler, S. Goedecker and J. Behler, A fourth-generation high-dimensional neural network potential with accurate electrostatics including non-local charge transfer, Nat. Commun., 2021, 12, 398 CrossRef CAS.
  157. A. P. Bartók, J. Kermode, N. Bernstein and G. Csányi, Machine Learning a General-Purpose Interatomic Potential for Silicon, Phys. Rev. X, 2018, 8, 041048 Search PubMed.
  158. J. Vandermause, S. B. Torrisi, S. Batzner, Y. Xie, L. Sun, A. M. Kolpak and B. Kozinsky, On-the-fly active learning of interpretable Bayesian force fields for atomistic rare events, npj Comput. Mater., 2020, 6, 20 CrossRef.
  159. A. V. Shapeev, Moment Tensor Potentials: A Class of Systematically Improvable Interatomic Potentials, Multiscale Model. Simul., 2016, 14, 1153–1173 CrossRef.
  160. R. Drautz, Atomic cluster expansion for accurate and transferable interatomic potentials, Phys. Rev. B, 2019, 99, 014104 CrossRef CAS.
  161. S. A. Ghasemi, A. Hofstetter, S. Saha and S. Goedecker, Interatomic potentials for ionic systems with density functional accuracy based on charge densities obtained by a neural network, Phys. Rev. B: Condens. Matter Mater. Phys., 2015, 92, 045131 CrossRef.
  162. A. M. Tokita and J. Behler, How to train a neural network potential, J. Chem. Phys., 2023, 159, 121501 CrossRef CAS.
  163. J. Finkbeiner, S. Tovey and C. Holm, Generating Minimal Training Sets for Machine Learned Potentials, Phys. Rev. Lett., 2024, 132, 167301 CrossRef CAS PubMed.
  164. Y. Zhang, H. Wang, W. Chen, J. Zeng, L. Zhang, H. Wang and W. E, DP-GEN: A concurrent learning platform for the generation of reliable deep learning based potential energy models, Comput. Phys. Commun., 2020, 253, 107206 CrossRef CAS.
  165. B. Cheng, Latent Ewald summation for machine learning of long-range interactions, npj Comput. Mater., 2025, 11, 80 CrossRef.
  166. C. Zhang, M. F. Calegari Andrade, Z. K. Goldsmith, A. S. Raman, Y. Li, P. M. Piaggi, X. Wu, R. Car and A. Selloni, Molecular-scale insights into the electrical double layer at oxide-electrolyte interfaces, Nat. Commun., 2024, 15, 10270 CrossRef CAS PubMed.
  167. O. T. Unke, M. Stöhr, S. Ganscha, T. Unterthiner, H. Maennel, S. Kashubin, D. Ahlin, M. Gastegger, L. Medrano Sandonas, J. T. Berryman, A. Tkatchenko and K.-R. Müller, Biomolecular dynamics with machine-learned quantum-mechanical force fields trained on diverse chemical fragments, Sci. Adv., 2024, 10, eadn4397 CrossRef CAS PubMed.
  168. I. Batatia, P. Benner, Y. Chiang, A. M. Elena, D. P. Kovács, J. Riebesell, X. R. Advincula, M. Asta, M. Avaylon, W. J. Baldwin, F. Berger, N. Bernstein, A. Bhowmik, S. M. Blau, V. Cărare, J. P. Darby, S. De, F. D. Pia, V. L. Deringer, R. Elijošius, Z. El-Machachi, F. Falcioni, E. Fako, A. C. Ferrari, A. Genreith-Schriever, J. George, R. E. A. Goodall, C. P. Grey, P. Grigorev, S. Han, W. Handley, H. H. Heenen, K. Hermansson, C. Holm, J. Jaafar, S. Hofmann, K. S. Jakob, H. Jung, V. Kapil, A. D. Kaplan, N. Karimitari, J. R. Kermode, N. Kroupa, J. Kullgren, M. C. Kuner, D. Kuryla, G. Liepuoniute, J. T. Margraf, I.-B. Magdău, A. Michaelides, J. H. Moore, A. A. Naik, S. P. Niblett, S. W. Norwood, N. O'Neill, C. Ortner, K. A. Persson, K. Reuter, A. S. Rosen, L. L. Schaaf, C. Schran, B. X. Shi, E. Sivonxay, T. K. Stenczel, V. Svahn, C. Sutton, T. D. Swinburne, J. Tilly, C. Oord, E. Varga-Umbrich, T. Vegge, M. Vondrák, Y. Wang, W. C. Witt, F. Zills and G. Csányi, A foundation model for atomistic materials chemistry, arXiv, 2023, preprint, arxiv.2401.00096,  DOI:10.48550/arXiv.2401.00096.
  169. M. Zhong, K. Tran, Y. Min, C. Wang, Z. Wang, C.-T. Dinh, P. De Luna, Z. Yu, A. S. Rasouli, P. Brodersen, S. Sun, O. Voznyy, C.-S. Tan, M. Askerka, F. Che, M. Liu, A. Seifitokaldani, Y. Pang, S.-C. Lo, A. Ip, Z. Ulissi and E. H. Sargent, Accelerated discovery of CO2 electrocatalysts using active machine learning, Nature, 2020, 581, 178–183 CrossRef CAS PubMed.
  170. V. L. Deringer and G. Csányi, Machine learning based interatomic potential for amorphous carbon, Phys. Rev. B, 2017, 95, 094203 CrossRef.
  171. K. V. J. Jose, N. Artrith and J. Behler, Construction of high-dimensional neural network potentials using environment-dependent atom pairs, J. Chem. Phys., 2012, 136, 194111 CrossRef PubMed.
  172. N. Artrith and J. Behler, High-dimensional neural network potentials for metal surfaces: A prototype study for copper, Phys. Rev. B: Condens. Matter Mater. Phys., 2012, 85, 045439 CrossRef.
  173. M. Yang, L. Bonati, D. Polino and M. Parrinello, Using metadynamics to build neural network potentials for reactive events: the case of urea decomposition in water, Catal. Today, 2022, 387, 143–149 CrossRef CAS.
  174. J. Liu, R. Liu, Y. Cao and M. Chen, Solvation structures of calcium and magnesium ions in water with the presence of hydroxide: a study by deep potential molecular dynamics, Phys. Chem. Chem. Phys., 2023, 25, 983–993 RSC.
  175. J.-C. Liu, L. Luo, H. Xiao, J. Zhu, Y. He and J. Li, Metal Affinity of Support Dictates Sintering of Gold Catalysts, J. Am. Chem. Soc., 2022, 144, 20601–20609 CrossRef CAS.
  176. Q. Lin and B. Jiang, Modeling Equilibration Dynamics of Hyperthermal Products of Surface Reactions Using Scalable Neural Network Potential with First-Principles Accuracy, J. Phys. Chem. Lett., 2023, 14, 7513–7518 CrossRef CAS PubMed.
  177. R. Yin, J. Xia, B. Jiang and H. Guo, Theoretical Insights into Structure Sensitivity in Formate Decomposition Dynamics on Cu Surfaces, ACS Catal., 2023, 13, 14103–14111 CrossRef CAS.
  178. S. Ma, S.-D. Huang and Z.-P. Liu, Dynamic coordination of cations and catalytic selectivity on zinc-chromium oxide alloys during syngas conversion, Nat. Catal., 2019, 2, 671–677 CrossRef CAS.
  179. D. P. Kovács, J. H. Moore, N. J. Browning, I. Batatia, J. T. Horton, Y. Pu, V. Kapil, W. C. Witt, I.-B. Magdău, D. J. Cole and G. Csányi, MACE-OFF: Short-Range Transferable Machine Learning Force Fields for Organic Molecules, J. Am. Chem. Soc., 2025, 147, 17598–17611 CrossRef PubMed.
  180. A. Kabylda, J. T. Frank, S. Suárez-Dou, A. Khabibrakhmanov, L. Medrano Sandonas, O. T. Unke, S. Chmiela, K.-R. Müller and A. Tkatchenko, Molecular Simulations with a Pretrained Neural Network and Universal Pairwise Force Fields, J. Am. Chem. Soc., 2025, 147, 33723–33734 CrossRef CAS PubMed.

This journal is © The Royal Society of Chemistry 2025