Satyen Dhamankar,‡ Shengli Jiang‡ and Michael A. Webb*
Department of Chemical and Biological Engineering, Princeton University, Princeton, NJ 08544, USA. E-mail: mawebb@princeton.edu
First published on 24th December 2024
Phase separation in multicomponent mixtures is of significant interest in both fundamental research and technology. Although the thermodynamic principles governing phase equilibria are straightforward, practical determination of equilibrium phases and constituent compositions for multicomponent systems is often laborious and computationally intensive. Here, we present a machine-learning workflow that simplifies and accelerates phase-coexistence calculations. We specifically analyze the capabilities of neural networks to predict the number, composition, and relative abundance of equilibrium phases of systems described by Flory–Huggins theory. We find that incorporating physics-informed material constraints into the neural network architecture enhances the prediction of equilibrium compositions compared to standard neural networks, with only minor errors remaining along the boundaries of the stable region. However, introducing additional physics-informed losses does not lead to significant further improvement. These errors can be virtually eliminated by using machine-learning predictions as a warm start for a subsequent optimization routine. This work provides a promising pathway to efficiently characterize multicomponent phase coexistence.
Design, System, Application
Accurate phase coexistence characterization is critical for designing and optimizing systems and processes involving multiple components, yet traditional methods are often slow and computationally expensive. To overcome this, we developed a machine learning workflow grounded in physical principles to streamline and speed up these calculations. Using Flory–Huggins theory, we generated ternary phase diagrams and trained a theory-aware machine learning algorithm to predict equilibrium phases, compositions, and abundances. These predictions serve as an initial guess for numerical optimization, enabling fast and accurate determination of equilibrium states. This approach can be extended beyond ternary systems or applied to other free-energy models to describe a variety of chemical and biological processes. Ultimately, this method offers a promising way to accelerate chemical process simulations and drive innovations in multi-phase separations, as well as other system design workflows.
At equilibrium, species distribute across phases based on the extremization of an appropriate thermodynamic potential. For example, minimization of the Gibbs energy dictates equilibrium for a system at specified temperature T, pressure p, and global composition {x_i}. Equilibrium phase coexistence arises when, driven by this extremization, species partition into distinct phases with equal chemical potentials rather than forming a homogeneous mixture. For a system at fixed T and p, this yields
μ_i^α(T, p, {x_j}^α) = μ_i^β(T, p, {x_j}^β) ∀ i    (1)
Determining the conditions, expected phases, and the chemical nature of species usually depends on appropriately parameterized equations-of-state or available free-energy models. For condensed phases and binary mixtures, there are several simple free-energy models, such as the Margules equations,17 the van Laar model,18 and the Guggenheim–Scatchard/Redlich–Kister equation.19 More complex models, such as the Wilson models, non-random two-liquid (NRTL) models,20 universal quasi-chemical theory (UNIQUAC),21 UNIQUAC Functional-group Activity Coefficients (UNIFAC) models,13,22,23 and Flory–Huggins theory,24 can treat multicomponent systems. Although increasing complexity of the free-energy model or equation-of-state can facilitate more accurate representation of physical systems, the underlying calculations and theoretical principles for phase behavior remain the same for simple and complex models alike.
Given a thermodynamic model, calculating phase stability and equilibrium compositions can be approached in various ways. Simple models and binary mixtures may yield algebraic relationships that can be handled analytically or resolved using simple numerical schemes, such as self-consistent iteration or Newton's method. However, characterizing multicomponent phase coexistence typically requires dedicated software and more sophisticated numerical algorithms. Many algorithms are designed to work for only a specific set or number of phases.25 Direct solution methods based on Newton's root-finding algorithm can be effective but are computationally intensive and sensitive to the initial seed. Jindrová et al. refined Newton's algorithm and a successive substitution strategy to locate roots. Additionally, Nichita,26–28 Jindrová,14,16 and Castier29 independently performed volume stability analysis to obtain better initial guesses for the substitution strategy. There has been significant development in generating phase diagrams using constrained backmapping search algorithms.15,30–33
Indirect solution methods, based on thermodynamic principles and geometric criteria established via stability analysis, offer alternative approaches. Examples include Korteweg's tangent construction34 and Binous and Bellagi's arc extension method.35 Michelsen's multi-phase flash algorithm36 minimizes the distance between the tangent plane and the free energy surface to identify coexisting phases. Homotopy methods have also been used to calculate critical and saturation properties of mixtures.37,38 Additionally, Mao et al.39 generalized phase-diagram construction to multicomponent systems using a convex-hull construction40 applied to a discretized free-energy manifold, although accuracy and memory requirements depend on the mesh size. Overall, there is a need for simple, generalizable, and efficient methods for phase-coexistence calculations.
Machine learning (ML) techniques can facilitate phase-coexistence calculations, offering prospective advantages in time- and memory-efficiency relative to more traditional optimization strategies.41–50 However, many efforts only address the issue of phase stability and neglect consideration of phase composition.42–46 Others have been restricted to binary systems, with limited demonstration on more complex mixtures.47–50 Recently, Flory–Huggins (FH) theory has been combined with ML to improve the interpretability and accuracy of mixture-behavior predictions, but such approaches remain limited in their ability to handle complex interactions and multicomponent systems beyond binary mixtures.49,50 Nevertheless, such works highlight the potential of ML as part of a generalizable, accurate, efficient, and extensible framework for characterizing multicomponent phase behavior.
Here, we describe a data-driven workflow to characterize the phase behavior of multicomponent systems. Fig. 1 illustrates the overall approach in the context of ternary systems described by Flory–Huggins (FH) theory. Using FH theory as a representative free-energy model, we construct a series of phase diagrams across the model parameter space using labor-intensive methods. This data is then used to develop an ML surrogate model, based on neural network architectures, to predict the number, composition, and relative abundance of equilibrium phases from model parameters and total system composition. Surrogate models optimized with and without physics-informed architectures and loss functions are compared. Errors are assessed for classification (number of equilibrium phases) and regression (composition and abundance of phases). Predictions from the surrogate model, which are computationally efficient and improvable, are then used to warm-start a simple optimization to precisely and accurately characterize the system's phase behavior. This procedure exemplifies an efficient, accurate, and extensible approach to phase-coexistence calculations.
Fig. 1 Strategy for multi-component phase-coexistence prediction using machine learning. 1036 ternary phase diagrams are generated using the arc continuation algorithm (eqn (6)–(8)) and the convex hull construction algorithm,39 and are used as training data for a physics-informed machine learning (ML) model to classify phase regions and predict equilibrium phase compositions. The ML predictions serve as initial guesses for the Newton-CG method to obtain equilibrium composition predictions.
The dimensionless, intensive (per lattice site) Helmholtz energy of mixing follows as
f({ϕ_i}) = Σ_i (ϕ_i/v_i) ln ϕ_i + Σ_i Σ_{j>i} χ_ij ϕ_i ϕ_j    (2)
Up to a constant, chemical potentials are obtained by partial differentiation of the Helmholtz energy of mixing:
[eqn (3): not reproduced]
[eqn (4): not reproduced]
The thermodynamic stability of a mixture is assessed by considering the determinant of the Hessian matrix, H, of the Helmholtz energy with respect to the independent compositions.
[eqn (5): not reproduced]
[eqn (6): not reproduced; gives the stability criterion |H| > 0]
[eqn (7): not reproduced]
[eqn (8): not reproduced]
For fixed total particle density (ρ = n/V) and constant temperature, the Gibbs phase rule indicates there can be at most three coexisting phases. To characterize three-phase coexistence, there are 12 variables. Nine correspond to the volume fractions in each phase, ϕ^α, ϕ^β, and ϕ^γ, where each ϕ^π = (ϕ_A^π, ϕ_B^π, ϕ_C^π). Three correspond to the fractional abundances of each phase: w_α, w_β, and w_γ. Criteria for chemical equilibrium, applied to each species across each phase, require
μ_i^α(T, p, ϕ^α) = μ_i^β(T, p, ϕ^β) = μ_i^γ(T, p, ϕ^γ)    (9)
[eqn (10): not reproduced]
[eqn (11): not reproduced]
Δμ_i^{αβ}(T, p, ϕ^α, ϕ^β) ≡ μ_i^β − μ_i^α = 0 for i ∈ {A, B, C}.    (12)
[eqn (13): not reproduced]
1. Define tolerance parameters δ∅ and δ0.
2. Identify and set the critical point to be ϕ*.
3. Generate a random, small perturbation on the composition δϕ, yielding two new compositions: ϕ′ = ϕ* + δϕ and ϕ′′ = ϕ* − δϕ.
4. Use the compositions ϕ′ and ϕ′′ as initial guesses to solve eqn (12), producing coexisting compositions ϕ^α_new and ϕ^β_new that are distinct from ϕ*.
5. Set ϕ^π_old ← ϕ^π_new and use in eqn (13). Set one of the δϕ_i^π (e.g., δϕ_B^β) to a small perturbation and solve for the remaining δϕ_i^π to produce δϕ^α′ and δϕ^β′.
6. Set ϕ′ = ϕ^α_old + δϕ^α′ and ϕ′′ = ϕ^β_old + δϕ^β′ and use these as initial guesses to solve eqn (12), producing new coexisting compositions ϕ^α_new and ϕ^β_new that are distinct from those prior.
7. Repeat steps 5 and 6 until either ‖ϕ^α_new − ϕ^β_new‖ < δ∅, which indicates closure of the coexistence curve, or any ϕ_i^π < δ0, which indicates termination at a composition boundary.
8. Verify validity of compositions by checking that all have |H| > 0.
For the calculations described in this paper, δ∅ = δ0 = 10⁻⁹. Initial trials for the random composition perturbations are set to have a magnitude of 10⁻⁶. Equations are solved numerically using fsolve from Python's SciPy module. Occasionally, the trial perturbations resulted in solutions that collapsed back to the critical point or other previously generated points, in which case new perturbations would be attempted, possibly with different magnitudes.
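A minimal sketch of one continuation step (cf. steps 4 and 6) is shown below, assuming the standard multicomponent Flory–Huggins free energy of eqn (2) and expressing the equal-chemical-potential conditions of eqn (12) through the equivalent common-tangent-plane conditions. Pinning ϕ_B of phase β to obtain a square system, and the helper names, are illustrative choices rather than the authors' implementation.

```python
import numpy as np
import autograd.numpy as anp
from autograd import grad
from scipy.optimize import fsolve

def f_fh(phi, params):
    """Flory-Huggins free energy per lattice site (cf. eqn (2));
    phi = (phi_A, phi_B) with phi_C = 1 - phi_A - phi_B."""
    chiAB, chiBC, chiAC, vA, vB, vC = params
    pA, pB = phi[0], phi[1]
    pC = 1.0 - pA - pB
    return (pA / vA * anp.log(pA) + pB / vB * anp.log(pB) + pC / vC * anp.log(pC)
            + chiAB * pA * pB + chiAC * pA * pC + chiBC * pB * pC)

df = grad(f_fh)   # derivatives of f with respect to (phi_A, phi_B)

def solve_tie_line(phi_a0, phi_b0, params):
    """One continuation step: from trial compositions phi_a0 and phi_b0 on
    either side of the binodal, solve for coexisting compositions. phi_B of
    phase beta is held fixed so that three tangent-plane conditions determine
    the three remaining unknowns (an illustrative closure, cf. step 5)."""
    pinned_B = phi_b0[1]

    def residual(z):
        phi_a = anp.array([z[0], z[1]])
        phi_b = anp.array([z[2], pinned_B])
        ga, gb = df(phi_a, params), df(phi_b, params)
        # equal slopes and equal tangent-plane intercepts <=> equal mu_i (eqn (12))
        intercept_a = f_fh(phi_a, params) - phi_a @ ga
        intercept_b = f_fh(phi_b, params) - phi_b @ gb
        return [ga[0] - gb[0], ga[1] - gb[1], intercept_a - intercept_b]

    z = fsolve(residual, [phi_a0[0], phi_a0[1], phi_b0[0]], xtol=1e-9)
    return np.array([z[0], z[1]]), np.array([z[2], pinned_B])
```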
The composition space (ϕ_A, ϕ_B) is discretized into a mesh of equilateral triangles, or two-dimensional simplices. Using a finer mesh results in more accurate calculations but also increases computational cost and memory requirements; this work uses a simplex edge-length of 0.0002. After generation of the mesh, the free energy surface (FES) is also discretized into points defined by the tuple (ϕ_A, ϕ_B, f(ϕ_A, ϕ_B)). The convex hull (ϕ_A^CH, ϕ_B^CH, f_CH(ϕ_A^CH, ϕ_B^CH)) of the FES is calculated using the Quickhull algorithm.53 The convex hull of a non-convex FES will necessarily deform the original simplices and facilitate the identification of cotangent points on the FES. If one of the projected simplices has three unstretched sides (maximum edge length within five times the initial mesh size39), the system is homogeneous (no phase separation). If two sides are stretched (side length greater than five times the initial mesh size), the two farthest vertices are cotangent, indicating two coexisting phases. If all three sides are stretched, the three vertices of the simplex are cotangent, indicating three coexisting phases. With graph-theoretic techniques, the number of equilibrium phases and their compositions can be determined.
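A compact sketch of this convex-hull classification is given below, assuming SciPy's Quickhull wrapper (scipy.spatial.ConvexHull) and the five-times-mesh-size edge criterion described above; it is schematic rather than the authors' code, and the handling of facets with a single stretched edge is an assumption.

```python
import numpy as np
from scipy.spatial import ConvexHull

def classify_hull_facets(phi_A, phi_B, f_vals, mesh_size=2e-4):
    """Classify lower-convex-hull facets of the discretized free-energy surface
    by the number of edges stretched beyond 5x the mesh size (cf. ref. 39).
    phi_A, phi_B, f_vals are 1D arrays over the mesh points."""
    pts = np.column_stack([phi_A, phi_B, f_vals])
    hull = ConvexHull(pts)                        # Quickhull
    facets = []
    for simplex, eq in zip(hull.simplices, hull.equations):
        if eq[2] >= 0:                            # keep only downward-facing facets
            continue
        verts = pts[simplex, :2]                  # project onto composition space
        edges = [np.linalg.norm(verts[i] - verts[j])
                 for i, j in ((0, 1), (1, 2), (0, 2))]
        n_stretched = sum(e > 5 * mesh_size for e in edges)
        if n_stretched == 0:
            n_phases = 1                          # unstretched simplex: homogeneous
        elif n_stretched == 2:
            n_phases = 2                          # two stretched edges: two phases
        elif n_stretched == 3:
            n_phases = 3                          # all edges stretched: three phases
        else:
            n_phases = 1                          # one stretched edge: treated as a
                                                  # boundary case (assumption)
        facets.append((simplex, n_phases))
    return facets
```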
Parameters for the models are each selected from the ranges v_i ∈ [1, 3] and χ_ij ∈ [1, 3], where values for both ranges are discretized with a resolution of 0.1. Let s denote a parameter set and U denote the set of all possible parameter sets. With the given discretization, the total membership of U is then |U| = 21⁶. Initially, 750 possible parameter sets are randomly selected from U with uniform probability to form S ⊂ U; care is taken to ensure that all parameter sets from this sampling are unique. From this initial sampling, only around 6.6% (≈50) of the selected parameter sets yielded three-phase coexistence. Using these parameter sets to define T ⊂ S, the representation of such rare systems is augmented by generating six additional parameter sets for each parameter set t ∈ T. Each new parameter set t′ is generated from t by adding a Gaussian random vector X. In particular, we use t′ = t + X with X ∼ 𝒩(0, σ²I), where σ = 0.005. All t′ that yielded three-phase coexistence are collected and added to S, resulting in a final membership of |S| = 1036 parameter sets.
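The following sketch illustrates this sampling and augmentation procedure; the function names and the use of NumPy's random generator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 21 grid values in [1, 3] for each of the six parameters, so |U| = 21**6
GRID_VALUES = np.round(np.arange(1.0, 3.0 + 1e-9, 0.1), 1)

def sample_parameter_sets(n_sets=750):
    """Draw unique parameter sets s = (chi_AB, chi_BC, chi_AC, v_A, v_B, v_C)
    uniformly from the discretized space U."""
    seen = set()
    while len(seen) < n_sets:
        seen.add(tuple(rng.choice(GRID_VALUES, size=6)))
    return [np.array(s) for s in seen]

def augment_three_phase(t, n_new=6, sigma=0.005):
    """Generate perturbed parameter sets t' = t + X with X ~ N(0, sigma^2 I)
    around a rare three-phase parameter set t; whether each t' also exhibits
    three-phase coexistence must be verified by the phase-diagram calculation."""
    return [t + rng.normal(0.0, sigma, size=t.shape) for _ in range(n_new)]
```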
Input and output labels for the dataset are then generated as follows. First, the composition space of the mixture is discretized into a uniform mesh with resolution 10⁻⁴. For each parameter set, if there are more than 1000 single-phase simplices, the centroid of each of 1000 randomly chosen simplices is added to the database; otherwise, the centroids of all single-phase simplices are added. For two-phase simplices, if there are more than 1000, a random point between the two coexisting compositions (i.e., along the tie line) is generated for each of 1000 randomly chosen simplices; otherwise, such a point is added for every two-phase simplex. For multiple three-phase regions, a uniform number of points is generated in each region, ensuring a total of 1000 three-phase points in the database. Since the number of single- and two-phase simplices is determined by the size of the discretization mesh, the number of data points per parameter set can vary.
For each tuple (ϕ_A, ϕ_B), the number of equilibrium phases, their compositions, and their abundances are recorded. In this fashion, we define an input vector x = (χ_AB, χ_BC, χ_AC, v_A, v_B, v_C, ϕ_A, ϕ_B) ∈ ℝ⁸ that is linked to two outputs. The first output is a one-hot encoded classification vector y_c ∈ ℝ³, for which a nonzero entry indicates the presence of one, two, or three phases at equilibrium. The second output is a vector y_r ≡ (ϕ_A^α, ϕ_B^α, ϕ_A^β, ϕ_B^β, ϕ_A^γ, ϕ_B^γ, w_α, w_β, w_γ) ∈ ℝ⁹, which describes the compositions and abundances of the equilibrium phases. The phases are ordered such that ϕ_A^α has the minimum value among all ϕ_A (ϕ_A^α ≤ ϕ_A^β ≤ ϕ_A^γ). If two phases have the same ϕ_A, they are further ordered according to ϕ_B. Such an ordering ensures a consistent representation of the equilibrium phases.
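A small sketch of how such a regression label can be assembled, using the phase ordering described above and the padding convention described in the following paragraph; the function name is hypothetical.

```python
import numpy as np

def build_regression_label(phases, weights):
    """Assemble y_r = (phi_A^alpha, phi_B^alpha, ..., w_alpha, w_beta, w_gamma)
    from 1-3 equilibrium phases given as (phi_A, phi_B) tuples with fractional
    abundances `weights`. Phases are sorted by phi_A (ties broken by phi_B);
    missing phases are padded with composition 1/3 and abundance 0."""
    order = sorted(range(len(phases)), key=lambda k: (phases[k][0], phases[k][1]))
    comps, wts = [], []
    for k in order:
        comps.extend(phases[k])
        wts.append(weights[k])
    for _ in range(3 - len(phases)):          # pad non-existent phases
        comps.extend([1.0 / 3.0, 1.0 / 3.0])
        wts.append(0.0)
    return np.array(comps + wts)
```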
For systems with a single phase, ϕ_A^α and ϕ_B^α match the inputs ϕ_A and ϕ_B, and w_α is set to unity; the abundance entries for phases β and γ are set to zero. However, the composition entries for phases β and γ are assigned a value of 1/3. The value 1/3 is chosen to distribute errors uniformly across species; the absolute amount of each species at equilibrium is ultimately determined by the abundance of the respective phases. For systems with two equilibrium phases, the composition entries for the third phase (i.e., ϕ_A^γ, ϕ_B^γ) are set to 1/3, and the abundance w_γ is set to zero.
The basic model architecture consists of three fully connected hidden layers, each with m (a tunable hyperparameter) hidden units; this yields a hidden vector h ∈ ℝ^m. This hidden vector is passed through a “classification layer” with softmax activation to yield ŷc. The same vector is also fed into separate “regression layers” to predict the compositions (ϕ^α, ϕ^β, ϕ^γ) and abundances (w). Each regression layer consists of three hidden units, representing the compositions of A, B, and C for each phase, or the abundances of the α, β, and γ phases. Sigmoid activation is applied to limit predicted values of compositions and abundances to be between zero and unity, which avoids obviously unphysical values; however, overall composition and abundance constraints are not enforced. Since the composition of C depends on A and B, only the predictions for A and B compositions are kept and combined with the abundance predictions to form ŷr.
We also consider a variation on the basic model architecture that enforces consistency between ŷr and the majority class featured in ŷc. This is achieved using a mask-layer that sets abundance entries in ŷr to zero based on the plurality class indicated in ŷc (see dashed box in Fig. 2). For example, if one equilibrium phase is predicted, then abundance entries associated with the β and γ phase are set to zero. If two equilibrium phases are predicted, then entries associated with the γ phase are set to zero. If three equilibrium phases are predicted, then ŷr is preserved from the regression layer. Compositions of species for non-existent phases are set to 1/3, as described earlier. To enforce constraints on overall composition and abundance, the physics-informed (PI) model incorporates softmax activation functions, ensuring that predicted phase compositions and abundances sum to unity. As an alternative approach, following the masking, softmax normalization is applied to the abundances to ensure their sum equals unity.
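A schematic implementation of the physics-informed architecture is sketched below. The deep-learning framework (PyTorch), the exact layer wiring, and the renormalization of abundances after masking are assumptions made for illustration; the paper specifies only three hidden layers of m units, softmax activations enforcing the unity constraints, and the masking behavior described above.

```python
import torch
import torch.nn as nn

class PhaseNet(nn.Module):
    """Sketch of the physics-informed (PI) architecture; layer sizes and the
    post-mask renormalization are illustrative assumptions."""

    def __init__(self, m=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(8, m), nn.ReLU(),
            nn.Linear(m, m), nn.ReLU(),
            nn.Linear(m, m), nn.ReLU(),
        )
        self.classifier = nn.Linear(m, 3)                        # 1, 2, or 3 phases
        self.comp_heads = nn.ModuleList([nn.Linear(m, 3) for _ in range(3)])
        self.abundance_head = nn.Linear(m, 3)

    def forward(self, x):
        h = self.backbone(x)
        y_c = torch.softmax(self.classifier(h), dim=-1)
        # softmax over species enforces sum_i phi_i = 1 within each phase
        comps = [torch.softmax(head(h), dim=-1) for head in self.comp_heads]
        w = torch.softmax(self.abundance_head(h), dim=-1)
        # mask: zero the abundances of phases beyond the predicted phase count
        n_phase = y_c.argmax(dim=-1, keepdim=True) + 1           # in {1, 2, 3}
        idx = torch.arange(3, device=x.device).expand_as(w)
        w = torch.where(idx < n_phase, w, torch.zeros_like(w))
        w = w / w.sum(dim=-1, keepdim=True)                      # re-normalize
        # keep phi_A and phi_B of each phase (phi_C is the dependent species)
        y_r = torch.cat([c[..., :2] for c in comps] + [w], dim=-1)
        return y_c, y_r
```

The mask ties the regression output to the predicted number of phases, mirroring the dashed box in Fig. 2; omitting it and the softmax constraints recovers the baseline model with sigmoid-bounded outputs.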
[eqn (14): not reproduced]
[eqn (15): not reproduced]
In eqn (15), the additional loss terms aim to satisfy different constraints on the thermodynamics of physical systems. In particular, the split loss relates to constraints on the total composition of a given species distributed across phases:
[eqn (16): not reproduced; defines the split loss]
[eqn (17): not reproduced]
[eqn (18): not reproduced]
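Because eqn (16)–(18) are not reproduced here, the snippet below shows one plausible implementation of the split constraint as described in the text (the abundance-weighted per-phase compositions should recover the global composition of each species); the exact expression used in the paper may differ.

```python
import torch

def split_loss(y_r, phi_global):
    """Assumed form of the split constraint (cf. eqn (16)). y_r packs
    (phi_A^a, phi_B^a, ..., phi_A^g, phi_B^g, w_a, w_b, w_g); phi_global
    holds the input composition (phi_A, phi_B)."""
    comps = y_r[..., :6].reshape(-1, 3, 2)        # (batch, phase, species A/B)
    w = y_r[..., 6:9].unsqueeze(-1)               # (batch, phase, 1)
    recovered = (w * comps).sum(dim=1)            # (batch, 2)
    return torch.mean(torch.abs(recovered - phi_global))
```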
The overall training set is further divided into training and validation sets, using a five-fold CV approach (inner CV) similar to the outer CV process. Each fold of the inner CV is trained with 10% of the training data for efficient hyperparameter optimization. Tunable hyperparameters include batch sizes of {5000, 10,000, 20,000}, learning rates of {0.001, 0.005, 0.01}, the presence (or absence) of a mask, and the number of neurons per hidden layer, selected from {64, 128, 256}. The optimal hyperparameter setting for each fold is identified by the highest average validation composite score across the five sub-folds, calculated as the sum of the F1 score for classification and the average R² score.
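An illustrative grid search over these hyperparameters using the composite score (F1 plus average R²) might look as follows; train_and_validate is a hypothetical routine standing in for one inner-CV training run.

```python
from itertools import product
import numpy as np

# Hyperparameter grid from the text; `train_and_validate` is a hypothetical
# routine that trains on one inner-CV sub-fold and returns (F1, mean R^2).
GRID = {
    "batch_size": [5000, 10000, 20000],
    "learning_rate": [0.001, 0.005, 0.01],
    "use_mask": [True, False],
    "hidden_units": [64, 128, 256],
}

def select_hyperparameters(inner_folds):
    """Pick the setting with the highest composite score (F1 + average R^2)
    averaged over the five inner-CV sub-folds."""
    best_score, best_setting = -np.inf, None
    for values in product(*GRID.values()):
        setting = dict(zip(GRID.keys(), values))
        scores = [sum(train_and_validate(fold, **setting)) for fold in inner_folds]
        if np.mean(scores) > best_score:
            best_score, best_setting = np.mean(scores), setting
    return best_setting
```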
Each fold of the outer CV uses the optimal hyperparameter settings identified from its corresponding five-fold inner CV and retrains the model for up to 500 epochs. The retraining involves selecting 80% of the overall training diagrams as the training set and 20% as the validation set, using the same stratified splitting. During the retraining process, the impact of training data size on model performance is assessed by using 1%, 5%, 10%, 20%, 30%, and up to 100% of the training data. Because each diagram contains a different number of data points, the numbers of training, validation, and test set data points range from 1,447,511 to 1,458,241, from 358,987 to 366,470, and from 452,308 to 460,074, respectively.
The nested CV strategy yields a mean and standard deviation of the F1 and R² scores as determined from the five-fold outer CV test sets. Given the imbalanced phase distribution in the dataset, the F1 score evaluates classification performance, while the R² score assesses regression accuracy for the variables in yr.
[eqn (19): not reproduced]
[eqn (20): not reproduced]
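For reference, the classification and regression metrics can be computed with scikit-learn as sketched below; the averaging conventions are assumptions, since the paper does not state them explicitly.

```python
import numpy as np
from sklearn.metrics import f1_score, r2_score

def evaluate_fold(y_c_true, y_c_pred, y_r_true, y_r_pred):
    """Test-set metrics for one outer-CV fold. The choice of 'macro' F1 and
    uniform averaging of R^2 over the nine regression targets are assumptions."""
    f1 = f1_score(np.argmax(y_c_true, axis=1), np.argmax(y_c_pred, axis=1),
                  average="macro")
    r2 = r2_score(y_r_true, y_r_pred, multioutput="uniform_average")
    return f1, r2
```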
The final optimization employs the Newton-CG optimizer from Python's SciPy module. The Jacobian and Hessian matrices of the objective function are computed using the autograd package through automatic differentiation. The maximum number of iterations for the optimization is limited to 10,000. If newly optimized compositions are within a tolerance of 10⁻⁷ of the ideal value of the objective function, these values replace the predictions proffered by the ML model. Optimizations are only considered successful if they satisfy the stability criterion |H| > 0 (see eqn (6)).
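A minimal sketch of the warm-started refinement, assuming the objective of eqn (19)/(20) is supplied as a differentiable Python function written with autograd operations; the acceptance criterion assumes the ideal objective value is zero.

```python
import autograd.numpy as anp          # objectives should be written with anp ops
from autograd import grad, hessian
from scipy.optimize import minimize

def refine_prediction(z0, objective, tol=1e-7):
    """Warm-started Newton-CG refinement of an ML-predicted phase split.

    z0        : flattened ML prediction of the free variables (equilibrium
                compositions and abundances)
    objective : differentiable scalar function of z implementing the paper's
                eqn (19)/(20); its exact form is not reproduced here, so any
                autograd-compatible implementation may be supplied
    """
    res = minimize(objective, z0, method="Newton-CG",
                   jac=grad(objective), hess=hessian(objective),
                   options={"maxiter": 10000, "xtol": 1e-7})
    # replace the ML prediction only when the optimum is near the ideal value
    return res.x if res.success and res.fun < tol else z0
```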
Fig. 3 Performance summary of the baseline model. a) Classification of the number of coexisting phases. The background color in all phase diagrams denotes the ground-truth phase: gray (one-phase), blue (two-phase), and red (three-phase). Scatter points represent the predicted phase splits for a given initial composition, and the legend colors indicate the types of predicted splits. The parameters of the phase diagrams are detailed in ESI† Tables S1 and S2. b) Predicted equilibrium compositions. Blue and orange scatter points represent two-phase equilibrium compositions. The yellow dashed line is a tie line for the two-phase split. Red scatter points depict compositions that split into three phases. The red dashed triangle connects the three compositions at equilibrium. c) Confusion matrix for the predicted number of equilibrium phases. Diagonal entries represent correctly classified instances, while off-diagonal entries represent misclassifications. d) Parity plot for predicted equilibrium compositions. The diagonal dashed line represents perfect performance.
The ML model capably predicts the number of phases at equilibrium with an overall accuracy rate of about 97% (Fig. 3c). The primary source of error (4.6%) stems from misclassifying three-phase points as two-phase, which is attributed to the relative paucity of three-phase splits (only 17% of the total data). Additionally, a portion of two-phase points (2.2%) are misclassified as one-phase. A closer inspection of predicted phase diagrams suggests that misclassifications mostly occur near binodals.
The model also performs well in predicting phase abundances and their compositions (Fig. 3d). By inspection, there are vertical error regions in the predicted abundance for all phases at true abundances of 0 and 1. These errors stem from inaccurate predictions of equilibrium abundance for non-existent phases, such as phases β and γ in a one-phase region and phase γ in a two-phase region. This misprediction also leads to similar error regions for equilibrium compositions around 1/3.
Table 1 provides the baseline expectations for a standard ML model. It highlights nuances in regression performance across different phase regions. The single-phase region has the lowest average MAE (0.006), followed by the two-phase (0.023) and three-phase (0.037) regions. This trend suggests increasing difficulty in predicting compositions as the number of coexisting phases increases. Notably, all R² values remain high across all phases, with the three-phase region exhibiting a value above 0.88.
Table 1 MAE and R² of predicted equilibrium compositions by phase region for the baseline (Base), PI, and PI+ models (standard deviations in parentheses)

| | MAE (Base) | MAE (PI) | MAE (PI+) | R² (Base) | R² (PI) | R² (PI+) |
|---|---|---|---|---|---|---|
| One-phase | 0.006 (0.001) | 0.005 (0.001) | 0.005 (0.001) | 0.982 (0.005) | 0.987 (0.004) | 0.988 (0.003) |
| Two-phase | 0.023 (0.003) | 0.022 (0.001) | 0.023 (0.003) | 0.912 (0.015) | 0.915 (0.009) | 0.913 (0.015) |
| Three-phase | 0.037 (0.006) | 0.038 (0.003) | 0.038 (0.008) | 0.884 (0.038) | 0.883 (0.023) | 0.889 (0.041) |
The baseline model exhibits lower accuracy compared to the PI and PI+ models in one-phase and two-phase regions, while its performance is comparable to the other models in three-phase scenarios (Table 1). The PI and PI+ models show similar performance across all scenarios under these metrics. The coexistence curve predictions of both the PI and PI+ models are similar (Fig. 5a and S6†), producing smooth and physically sensible two- and three-phase coexistence curves. In contrast, the baseline model generates erratic two-phase coexistence curves that significantly deviate from the true curves. This is also evident from the distribution of the MAE loss in Fig. 5b, where the baseline model (red) has a higher average MAE than the PI (blue) and PI+ (green) models, which perform similarly. The unphysical coexistence curves of the baseline model highlight the limitations of using broad performance metrics to assess improvements in predictive accuracy. Errors are better resolved by examining deviations from chemical equilibrium (Δμ) and the split loss, where the baseline model shows significantly larger errors than both the PI and PI+ models.
Fig. 5 Comparison of performance metrics and physical constraints among the baseline, PI, and PI+ models. a) Predicted phase coexistence curves for the baseline (left), PI (middle), and PI+ (right) models. Arrows indicate that both predictions are for the same phase diagram. The background color in all phase diagrams denotes the ground-truth phase: gray (one-phase), blue (two-phase), and red (three-phase). Blue and orange scatter points represent two-phase equilibrium compositions. The yellow dashed line is a tie line for the two-phase split. Red scatter points depict compositions that split into three phases. The red dashed triangle connects the three compositions at equilibrium. The parameters of the phase diagrams are detailed in ESI† Tables S1 and S2. b) Data distribution (shaded bars) and kernel density estimation fits (lines) for performance metrics and physical constraints. Vertical dashed lines indicate mean values.
To further examine the impact on composition and abundance constraints, we analyze two additional metrics: the unity metric, which relates to the volume fractions of each species within a given phase, and the weight metric, which measures overall material conservation. These metrics are defined as:
[eqn (21): not reproduced; defines the unity metric]
[eqn (22): not reproduced; defines the weight metric]
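Since eqn (21) and (22) are not reproduced, the following assumed forms implement the verbal definitions above (per-phase volume fractions summing to unity, and phase abundances conserving the total material); the actual definitions in the paper may differ.

```python
import numpy as np

def unity_metric(phase_comps):
    """Assumed form of eqn (21): mean deviation of the per-phase volume
    fractions from summing to unity. phase_comps has shape
    (batch, 3 phases, 3 species)."""
    return np.mean(np.abs(phase_comps.sum(axis=-1) - 1.0))

def weight_metric(w):
    """Assumed form of eqn (22): mean deviation of the predicted phase
    abundances from summing to unity (overall conservation of material).
    w has shape (batch, 3 phases)."""
    return np.mean(np.abs(w.sum(axis=-1) - 1.0))
```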
Fig. 6 Refinement of PI model predictions. a) Classification of the number of coexisting phases. The background color in all phase diagrams denotes the true phase: gray (one-phase), blue (two-phase), and red (three-phase). The scatter points indicate the predicted phase splits for a given initial composition. Colors in the legend denote the types of predicted splits. The parameters of the phase diagrams are detailed in ESI† Tables S1 and S2. b) Predicted coexistence curves. Blue and orange scatter points indicate two-phase coexistence curves, with the yellow dashed line denoting an example tie line. The vertices of the red triangle indicate three-phase coexistence points. c) Coexistence curves produced with the post-ML optimization strategy. The results are obtained using ML inference to warm-start Newton-CG optimization. |
Errors for models trained on fold 1 data were analysed across a random sample of 187 two-phase and 76 three-phase equilibrium phase diagrams to better characterize post-ML optimization errors (Table 2 and S1†). The results indicate that Newton-CG optimization, initialized with predictions from ML models, achieves near-perfect success rates and significantly reduces deviations from true equilibrium compositions compared to individual ML model predictions. After post-ML optimization, the PI model trained on the full dataset outperforms both the baseline and PI+ models in predicting two-phase and three-phase coexistence. The relative ranking of performances post-optimization aligns well with the relative ranking of the ML predictions alone, underscoring the importance of initial ML prediction accuracy in determining the effectiveness of the post-ML optimization process. With limited training data, all models perform similarly, with PI+ showing slightly better overall performance. Errors remain comparable to those observed with the full dataset, although three-phase coexistence errors consistently exceed those for two-phase coexistence. This disparity likely stems from the relative scarcity of three-phase coexistence in the training set, which increases complexity and complicates precise prediction.
Table 2 Errors (×10⁻²) in predicted equilibrium compositions from ML prediction alone and after post-ML optimization, for models trained on 100% and 10% of the data

| Model | Data size | Two-phase, ML prediction | Two-phase, post-optimization | Three-phase, ML prediction | Three-phase, post-optimization |
|---|---|---|---|---|---|
| Base | 100% | 2.40 (0.01) | 0.88 (0.02) | 3.66 (0.03) | 2.76 (0.03) |
| PI | 100% | 2.39 (0.01) | 0.87 (0.02) | 3.19 (0.02) | 2.45 (0.03) |
| PI+ | 100% | 2.94 (0.01) | 0.95 (0.02) | 4.26 (0.03) | 3.32 (0.03) |
| Base | 10% | 2.92 (0.01) | 0.96 (0.02) | 3.93 (0.02) | 2.94 (0.06) |
| PI | 10% | 2.95 (0.01) | 1.03 (0.02) | 4.58 (0.03) | 3.56 (0.03) |
| PI+ | 10% | 2.36 (0.01) | 0.83 (0.02) | 3.98 (0.03) | 3.00 (0.03) |
The post-ML optimization process is also efficient and parallelizable, taking less than 1 second to converge to the optimal solution (Table S1†). The ML model requires less than 1 MB for its 140,000 parameters, a substantial reduction in memory usage compared to the arc continuation or convex hull methods, which demand approximately 1 GB of storage and 50 GB of memory per run. Overall, the accurate ML predictions of equilibrium compositions enable rapid convergence to highly accurate solutions, offering significant advantages in both memory- and time-efficiency.
This work motivates several areas of future inquiry. Extensions to systems with more components would increase utility for complex industrial and biological processes. Expanding beyond the Flory–Huggins theory by incorporating other free energy models or data from molecular simulation, perhaps in a single framework, would further enhance its generalizability across diverse chemical systems. Additionally, exploring more advanced physics-informed learning strategies, incorporating uncertainty quantification, and refining neural network architectures could boost prediction efficiency and reliability. Collectively, these directions could enhance both the theoretical and practical impact of leveraging ML for phase coexistence calculations.
Footnotes
† Electronic supplementary information (ESI) available: Additional optimized phase diagrams; phase classification confusion matrices; equilibrium composition prediction parity plots; post-ML optimization performance. See DOI: https://doi.org/10.1039/d4me00168k |
‡ Equally contributing authors. |