Open Access Article
Aiden Sable,a Bander Linjawi,a Kyle Bradbury,b Jordan Malofc and Olivier Delaire*ade
aDepartment of Mechanical Engineering and Materials Science, Duke University, Durham, NC, USA. E-mail: olivier.delaire@duke.edu
bDepartment of Electrical and Computer Engineering, Duke University, Durham, NC, USA
cDepartment of Electrical Engineering and Computer Science, University of Missouri–Columbia, Columbia, MO, USA
dDepartment of Chemistry, Duke University, Durham, NC, USA
eDepartment of Physics, Duke University, Durham, NC, USA
First published on 17th March 2026
Atomic vibrations govern many macroscopic properties of materials, but experiments to comprehensively probe them remain challenging. Inelastic neutron scattering (INS) is a powerful technique to map phonon dispersions in crystals, especially when leveraging modern time-of-flight (ToF) spectrometers with large detectors. However, efficiently and robustly extracting interatomic force constants (FCs) parameterizing phonon dynamics from experimental spectra remains a bottleneck due to the complexity and high dimensionality of ToF INS datasets. Here, we present a machine learning approach for the direct inversion of FCs from single-crystal INS measurements. The framework leverages synthetic training data generated using universal machine-learned force fields and an efficient physics-based forward model. We benchmark two neural architectures–one emphasizing structured latent representation learning and the other direct, supervised spectral regression–across simulated datasets for two materials under idealized and noisy conditions. The latent-representation model is subsequently applied to experimental single-crystal INS data on germanium. The model is shown to reproduce FCs derived from both first-principles simulations and from iterative optimization, and furthermore achieves reliable inference even from sparse, single-orientation measurements representing short data acquisitions. Analysis of the learned latent space reveals semantically continuous and physically interpretable encodings that support strong cross-domain generalization. By bridging theoretical and experimental domains, we establish a path toward rapid inversion of experimental spectra and data-driven interpretation of temperature-dependent lattice dynamics.
However, recovering interatomic FCs directly from experimental spectra remains a significant challenge, especially given the information-rich, high-dimensional, and anisotropic nature of modern single-crystal measurements. Because solving this inversion problem provides a powerful route to enable studies of temperature-dependent lattice dynamics, several approaches have been proposed to estimate interatomic FCs using physics-based or probabilistic frameworks. Among these, Bayesian inference schemes based on replica-exchange Markov chain Monte Carlo have been used to recover FCs from synthetic phonon dispersion data.28,29 Although these methods achieved good performance on a simplified phonon dispersion model, they were computationally intensive, required significant statistical expertise, and lacked validation on experimental data. Beyond probabilistic inference, in prior work,30 a physics-informed hierarchical optimization approach was developed to extract FCs from single-crystal INS spectra by iteratively minimizing the error between measured and simulated S(Q, E) by optimizing a reduced set of symmetry-unique FC parameters. This method successfully reproduced experimental phonon dispersions in silicon, but its overall scalability and automation were limited: it required manual data selection and processing, full forward S(Q, E) simulations at each optimization step, and a priori DFT calculations that may not be readily available in experimental settings or feasible for certain material systems. More broadly, current practices for extracting interatomic FCs from large experimental INS datasets are hampered by the lack of robust workflows to connect the measured S(Q, E) intensities in large four-dimensional (Q, E) domains with a real-space model of lattice dynamics. 
This need motivates the development of quantitative approaches that can efficiently and robustly extract deeper physical insights from complex experiments, and that could be leveraged to better optimize measurements.
Machine learning (ML) methods have been increasingly adopted across condensed matter physics and the scattering sciences, offering new pathways for automation, acceleration, and physical model extraction.31–34 Further, recent studies have demonstrated that neural networks can accurately predict phonon spectra and densities of states from crystal structures,35–38 underscoring the maturity of ML-based forward modeling. In contrast, ML-based approaches for inverting scattering data to recover physically meaningful parameters have received comparatively less attention. Chang et al.39 demonstrated that neural networks can extract effective interactions from small-angle neutron scattering data, supporting inverse modeling in soft matter systems. In single-crystal diffuse scattering, autoencoders have been employed to infer magnetic Hamiltonian parameters in spin-ice systems by learning compressed representations of three-dimensional scattering patterns.40,41 Within phonon dynamics, Su and Li42 trained a variational autoencoder (VAE) exclusively on DFT-based powder INS simulations to recover interatomic FCs in aluminum, demonstrating the promise of self-supervised ML for inverse phonon problems. Despite this progress, existing efforts remain confined to diffraction or powder data, which are inherently lower-dimensional and lack energy-resolved information and directional momentum resolution, respectively. By contrast, single-crystal INS comes with substantial experimental and computational challenges: data reduction and processing are complex, information content and signal quality vary strongly across the data volume, and analogous forward modeling is computationally demanding. These factors have so far precluded the development of ML-based inversion methods capable of capitalizing on modern experimental single-crystal INS data.
In this work, we present an ML approach for the inversion of interatomic FCs from single-crystal INS data. Our dataset generation procedure relies on uMLFFs to define the bounds of the FC domain, and is made computationally inexpensive due to an efficient physics-based forward model. For the inversion task, we evaluate both a dual model architecture, consisting of a VAE and a separate FC regressor, and a transfer-learned ResNet-18 architecture, benchmarking their performance on two material systems, Ge and Nb. During inference with realistic spectral noise augmentation, we find that the VAE-based approach exhibits superior robustness and domain adaptability. We validate our framework on experimental single-crystal INS measurements of Ge, demonstrating close agreement with FCs obtained from both DFT and traditional iterative optimization. We further extend the model to sparse inputs from single-orientation scans representing short acquisitions, where it maintains high predictive accuracy. Finally, a detailed analysis of the VAE latent space behavior reveals semantically continuous, physically aligned encodings that underpin its generalizability across both simulated and experimental domains. Once trained, the VAE-based model enables rapid inversion of experimental spectra with no additional forward simulations. Overall, this framework provides a potential path toward real-time inversion and experimental steering in neutron scattering workflows, and provides a promising route to directly extract physically meaningful information about a material's potential energy surface from temperature-dependent measurements of its collective vibrational dynamics.
The forward model is grounded in Born–von Kármán lattice dynamics and is designed for computational efficiency. Simulations use symmetry-reduced FCs–obtained by consolidating tensor elements invariant under the crystal's space group operations–thus significantly reducing the number of parameters. Additionally, the model omits computationally intensive steps such as phonon eigenvector evaluations, Debye–Waller (DW) factors, and polarization effects, based on the assumption that these contributions are effectively averaged through Brillouin zone folding of S(Q, E) data (see SI Fig. 11). In the present work, model validation and experimental inversion were performed at low temperature, where DW attenuation and related thermal broadening effects are minimal. Under such conditions, omission of explicit DW factors introduces negligible distortion in the folded I(q, E) spectra used for training and inference. This approximation enables faster simulations of unpolarized I(q, E) spectra while maintaining sufficient fidelity for ML model training. For applications at elevated temperatures, these effects can be incorporated by performing full S(Q, E) simulations over the experimentally sampled Q, E volume—including temperature-dependent DW factors and polarization terms—prior to folding into I(q, E). Such calculations are readily implemented within existing scattering simulation frameworks (e.g., pathSQE51), and would allow extension of the present data generation procedure to more detailed temperature-dependent effects. From these simulations using perturbed FC values, only dynamically stable spectra are retained, yielding a curated training dataset of 10 000 [ϕ, I(q, E)] pairs. This full uMLFF-based procedure was applied to both Ge and Nb to generate the corresponding training datasets (see SI Section 1).
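The dynamical-stability filter applied to the perturbed FC sets can be illustrated with a minimal sketch. The toy below uses a one-dimensional monatomic Born–von Kármán chain (the function name and neighbor-shell parameterization are ours for illustration, not the paper's implementation): a candidate FC set is retained only if the resulting dispersion ω²(q) is non-negative everywhere, i.e. contains no imaginary phonon modes.

```python
import numpy as np

def is_dynamically_stable(fc, mass=1.0, nq=64):
    """Toy 1D Born-von Karman check: keep a candidate force-constant set
    fc[n-1] (coupling to the n-th neighbor shell) only if omega^2(q) >= 0
    for all q, i.e. the spectrum contains no imaginary phonon modes."""
    q = np.linspace(-np.pi, np.pi, nq)
    w2 = np.zeros_like(q)
    for n, phi in enumerate(fc, start=1):
        # Monatomic-chain dispersion: omega^2(q) = (2/m) sum_n phi_n (1 - cos(n q))
        w2 += (2.0 / mass) * phi * (1.0 - np.cos(n * q))
    return bool(np.all(w2 >= -1e-12))

assert is_dynamically_stable([1.0, 0.1])        # stable: retained for training
assert not is_dynamically_stable([-1.0, 0.05])  # imaginary modes: discarded
```

In a real three-dimensional crystal the analogous test diagonalizes the full dynamical matrix over a Brillouin zone mesh, but the filtering logic is the same.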
Herein, we investigate two different model architectures for the FC inversion task. As shown in Stage II(a) of Fig. 1, in the dual-model approach, a VAE and a separate feedforward FC regressor are trained sequentially. The VAE compresses each high-dimensional I(q, E) spectrum into a low-dimensional latent representation and reconstructs it via a decoder. The model is trained by minimizing the evidence lower bound, a dual-objective loss that combines reconstruction error with a scaled Kullback–Leibler divergence. This formulation encourages the latent space to be smooth and well-structured, such that nearby points correspond to similar spectra and samples drawn from the prior distribution decode into physically plausible reconstructions.52 In a VAE, the encoder outputs a mean and variance for each latent dimension, defining the latent activations (µ, σ2) that characterize both the compressed representation and its associated uncertainty. The variational bottleneck is particularly well suited for this task, as the input spectra are sparse and high-dimensional (∼20 000 pixels), while the underlying physical degrees of freedom–symmetry-reduced FCs–are comparatively few. After training, the encoder is frozen, and the FC regressor, a multi-layer feedforward neural network, is trained to predict FC values from the latent encodings. In the direct inversion approach shown in Stage II(b) of Fig. 1, we employ ResNet-18 (ref. 53) and apply transfer learning to train the network to predict FCs directly from input spectra. Unlike the VAE-based model, which provides probabilistic latent representations, the ResNet-18 functions as a deterministic regressor without explicit uncertainty estimation. Owing to its much deeper architecture and residual connections, ResNet-18 has substantially greater representational capacity than the dual-model approach. Both the VAE-based and ResNet-based models were trained on the same simulated datasets of 10 000 I(q, E) spectra, as described in detail in SI Section 2. Note, however, that we reserve experimental inference for the VAE-based approach, as ResNet-18 proved more sensitive to spectral noise and underperformed in noisy or domain-shifted settings (see benchmarking results).
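The dual-objective loss can be sketched in a few lines. This is a schematic NumPy illustration of the evidence lower bound and the reparameterization step, not the actual training code: the weighting factor `beta` and the mean-reduced KL term are illustrative choices, and a real implementation would use an autodiff framework.

```python
import numpy as np

def elbo_loss(x, x_recon, mu, log_var, beta=1e-3):
    """Dual-objective VAE loss: pixel-wise reconstruction error plus a
    scaled Kullback-Leibler divergence of N(mu, sigma^2) against the
    N(0, I) prior (mean-reduced here for simplicity)."""
    recon = np.mean((x - x_recon) ** 2)
    kl = -0.5 * np.mean(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + beta * kl

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps; in a real framework this keeps the
    draw differentiable with respect to mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# A perfectly reconstructed spectrum with a prior-matched latent incurs zero loss
x = np.linspace(0.0, 1.0, 10)
assert elbo_loss(x, x, np.zeros(4), np.zeros(4)) == 0.0
```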
The final stage of the framework involves experimental INS data processing and FC inference, as illustrated in Stage III of Fig. 1. Recall the measured INS signal constitutes a four-dimensional volume of scattering intensity as a function of momentum and energy transfer. After standard event-mode data reduction and transformation into S(Q, E),54,55 the data are processed using the pathSQE software to perform automated Brillouin zone folding.51 This produces a lower-dimensional, symmetrized representation ∼ I(q, E) and improves statistical quality by aggregating signal from symmetrically equivalent regions of reciprocal space. Following Min–Max scaling to match the intensity range of the training data, the folded experimental spectrum is passed through the frozen, pre-trained encoder to obtain its latent distribution. Multiple latent samples are drawn and propagated through the frozen FC regressor to generate predicted FCs. This procedure yields both the mean predicted FC values and associated uncertainties based on sample variation. Notably, because the first two stages–training data generation and model training–can be completed in advance, Stage III could support fast, fully automated inference on experimental datasets. In practice, this enables near real-time application of the framework during an experiment, offering the potential for adaptive data collection strategies, on-the-fly assessment of measurement sufficiency, and future integration with experimental steering and optimization workflows.
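The sampling-based inference step can be sketched as follows. Here `toy_encoder` and `toy_regressor` are hypothetical stand-ins for the frozen networks; the min–max scaling, latent sampling, and mean/spread reporting mirror the procedure described above under those stand-in assumptions.

```python
import numpy as np

def infer_fcs(spectrum, encoder, regressor, n_samples=100, rng=None):
    """Monte-Carlo FC inference: min-max scale the folded spectrum, encode
    it, draw latent samples from N(mu, sigma^2), and propagate each sample
    through the frozen regressor; report the mean prediction and its
    sample spread as an uncertainty estimate."""
    rng = rng if rng is not None else np.random.default_rng(0)
    s = (spectrum - spectrum.min()) / (spectrum.max() - spectrum.min())
    mu, log_var = encoder(s)
    eps = rng.standard_normal((n_samples, mu.size))
    z = mu + np.exp(0.5 * log_var) * eps
    preds = np.array([regressor(zi) for zi in z])
    return preds.mean(axis=0), preds.std(axis=0)

def toy_encoder(s):
    # Hypothetical stand-in: two latent dimensions with near-zero variance
    return np.array([s.mean(), s.std()]), np.full(2, -20.0)

def toy_regressor(z):
    # Hypothetical linear map from latent encodings to "FCs"
    return 2.0 * z

mean_fc, fc_spread = infer_fcs(np.arange(5.0), toy_encoder, toy_regressor)
```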
We begin by evaluating model performance on clean simulated spectra. To visualize reconstruction fidelity across different regions of the FC domains, test spectra with diverse FC configurations were selected. As shown in Fig. 2a and 3a, the VAE produces high-fidelity reconstructions that preserve overall dispersion structure and finer spectral details, indicating that the VAE effectively captures and compresses the complex spectral information across the FC phase space. We next evaluated the accuracy of the FC predictions generated by each of the inversion approaches. The test datasets of 1500 simulated spectra were passed through each of the frozen models to predict their corresponding symmetry-reduced FC values. SI Fig. 16 and 18 show parity plots for all of the FC parameters across both models, along with their associated R2 and mean absolute error metrics. As summarized in the bar plots in Fig. 2b and 3b, the ResNet model, which maps directly from spectra to FCs without intermediate reconstruction, achieves very high predictive accuracy on the clean test sets for both Ge and Nb, with R2 scores near unity across all FCs. The VAE-based approach, in contrast, exhibits lower, more selective accuracy: it reliably recovers the most influential FCs (e.g., FC0–FC3 for Ge and all except FC5 for Nb) with R2 ≥ 0.9, while its performance declines for weaker terms that are known to have a more limited impact on the spectral features. This distinction highlights a key difference in model behavior. ResNet appears to rely more heavily on fine-grained spectral features to fit all FCs equally well under ideal conditions, while the VAE tends to recover the FCs with the strongest influence on the spectra, since the compressed latent representations preferentially retain high-impact spectral variations. These results are consistent with expectations from the FC sensitivity analysis and underscore the interpretability of the VAE-based approach.
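The per-parameter parity metrics used throughout the benchmarking reduce to a short computation; the following sketch evaluates column-wise R2 and mean absolute error over arrays of true and predicted FC vectors (function name ours).

```python
import numpy as np

def fc_metrics(y_true, y_pred):
    """Column-wise parity metrics for (n_samples, n_fcs) arrays of
    symmetry-reduced FC values: returns (R2, MAE) per parameter."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err), axis=0)
    ss_res = np.sum(err**2, axis=0)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2, axis=0)
    return 1.0 - ss_res / ss_tot, mae

y_true = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]])
r2, mae = fc_metrics(y_true, y_true)  # perfect parity: R2 = 1, MAE = 0
```

Note that a regressor emitting a near-constant output for an uninformative FC yields R2 near (or below) zero under this definition, which is the behavior reported for the weakest terms.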
To further assess the generalizability of the models, we evaluated their robustness to realistic spectral noise augmentation applied to the simulated test datasets, for which ground-truth labels remain available. The noise model was designed to reflect the physical and statistical processes of INS, incorporating a tunable constant background component (B) and scalable Poisson counting statistics (P) (see SI Section 4). Fig. 2c and 3c show example test spectra under varied noise levels, along with the corresponding signal-to-noise ratios (SNRs). To quantify predictive stability, we applied 25 combinations of background and Poisson noise (spanning a 5 × 5 parameter grid, shown in SI Fig. 21–24) across the test datasets and performed FC inference. For each configuration, we computed R2 scores for the predicted FC values relative to ground truth, tracking the degradation in accuracy with increasing noise. The results for FCs 0–3 are shown in Fig. 2d and 3d, while less impactful FCs 4–7 are displayed in SI Fig. 17 and 19. From this analysis, we find the VAE model maintains good performance over a wide range of SNRs, with graceful degradation that more heavily affects the less sensitive FCs. By contrast, ResNet exhibits a sharper decline in accuracy, with noticeable R2 drops for several FCs even under mild noise levels. This suggests that ResNet may overfit to subtle features of the clean forward model spectra, resulting in poor generalization when spectral quality deteriorates. For both materials, VAE-based predictions show greater resilience, consistent with the ability of the latent encoding to filter noise and focus on the dominant phonon features. Importantly, although the VAE model was trained exclusively on clean simulated data, these results indicate it generalizes well to realistic noisy inputs.
This behavior supports the domain robustness of the model and, by leveraging known ground-truth labels, offers quantitative validation of its potential for experimental application.
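A minimal version of such a background-plus-Poisson augmentation might read as follows; the parameter names (`background`, `counts`) and the global count scale are illustrative assumptions, with the actual noise model detailed in SI Section 4.

```python
import numpy as np

def add_ins_noise(spectrum, background=0.05, counts=500.0, rng=None):
    """Add a constant background B, then resample each pixel from a
    Poisson law whose rate is set by a global count scale; smaller
    `counts` values mimic shorter acquisitions and noisier spectra."""
    rng = rng if rng is not None else np.random.default_rng(0)
    rate = counts * (spectrum + background)
    return rng.poisson(rate) / counts
```

In the high-count limit the noisy spectrum converges to the clean signal plus background, while low count scales reproduce the grainy statistics of short acquisitions.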
These findings underscore a key distinction: while both models achieve high accuracy under ideal conditions, the VAE-based architecture exhibits superior robustness and domain adaptability, attributes essential for real-world deployment. Its consistent performance across Ge and Nb highlights material generality and the ability to learn transferable representations of vibrational dynamics for different material systems. Given this demonstrated stability, we focus exclusively on the VAE-based model for subsequent experimental inference. Together, these benchmarking results establish a solid quantitative foundation for further applying the framework to noisy, sparse, and domain-shifted experimental data.
We first evaluate reconstruction and FC inversion performance using the full experimental dataset, as shown in Fig. 4a. The VAE-generated reconstruction closely replicates the overall phonon dispersion structure and mode energies present in the measured INS spectrum. Some fine spectral features–such as small energy splittings between closely spaced modes, particularly in the transverse acoustic branches between K–Γ and L–W–are partially blurred in the reconstruction, consistent with the smoothing behavior expected from a VAE. To assess the accuracy of the inferred FCs, we perform a forward phonon dispersion calculation using the ML-predicted FCs and overlay the simulated dispersion on the experimental spectrum, as shown in the right-most column of Fig. 4a. The close visual agreement across the Brillouin zone suggests that the inverted FCs successfully capture the underlying lattice dynamics measured in the experiment. For a more quantitative comparison, we examine the predicted FC values directly against two independent references: those obtained from DFT and those derived via a traditional iterative optimization procedure (see SI Section 6). As shown in Fig. 4e, the ML-inferred FCs are in close agreement with both references, with minor deviations for the least influential terms. Taken together, these results demonstrate that the model provides very accurate experimental inference under full dataset conditions and establishes a high-fidelity baseline for evaluating performance in more data-limited regimes.
We now turn to inference on the more challenging and unconventional single-scan data, which represent one-minute measurements at fixed sample orientations. Because the full dataset is constructed by aggregating many such scans in a set of step-wise sample rotations, we can treat each orientation as a separate test input, yielding approximately 120 independent opportunities to evaluate inference performance. For clarity and balance, we focus first on three representative cases: those with the best, median, and worst FC prediction accuracy relative to the full-dataset result, as shown in Fig. 4b–d. Despite the limited q–E coverage and presence of dispersion discontinuities, the model effectively reconstructs signals present in the sparse data. The reconstruction in Fig. 4b (best-performing scan) exhibits striking similarity to the full-dataset reconstruction in Fig. 4a, demonstrating that the model can recover meaningful structure from minimal input. In contrast, the reconstruction in Fig. 4d (worst-performing scan) appears blurrier, particularly in the optical region where the input lacks usable signal, reflecting the model's uncertainty in the absence of informative features. When evaluating FC inversion, the forward-simulated dispersions based on predicted FCs align well with the experimental data across all three cases. For both the best and median orientations, the calculated dispersions show close agreement with the observed spectrum, with only minor discrepancies in regions of incomplete coverage. Even in the worst case, the model still captures key features in the acoustic region, despite the lack of intensity in the optical branches, which limits inversion accuracy. These results demonstrate that the combined VAE-based approach retains strong inference capabilities even under highly sparse measurement conditions.
Finally, we evaluate the FC predictions across all 120 individual single-scan measurements. A summary of these results is shown in Fig. 4e, where the predicted FCs are plotted as scatter points and colored by sample orientation. For the most influential parameters, FC0 and FC1, the predictions are well-constrained within a narrow subregion of the training domain and generally align with those obtained from the full dataset, DFT, and traditional iterative optimization methods. FCs 2–4, which are less dominant but still influential, show greater variation across orientations–suggesting that additional spectral detail may be required for their accurate recovery, which is not always present in sparse single-scan inputs. In contrast, predictions for the least influential terms, FC6 and FC7, remain narrowly distributed but do not reflect improved accuracy. Instead, this behavior mirrors what was observed during testing on clean simulated data, where the FC regressor failed to capture meaningful trends in the latent encodings for these less impactful parameters, yielding near-constant outputs and R2 scores near zero (see SI Fig. 16). To quantify prediction quality across orientations, we compute the MSE of each scan's predicted FC vector relative to the full-dataset prediction and plot the results as a function of scan angle in Fig. 4f. This analysis shows that the model produces near-perfect predictions for about 10 scan orientations. It also reveals that the worst case (Fig. 4d) is a substantial outlier, exhibiting nearly double the MSE of the next-worst prediction. Interestingly, the MSE distribution shows some angular clustering, suggesting that certain sample orientations consistently capture more informative phonon features and highlighting the directional dependence of information content in INS experiments. Together, these results underscore the model's high accuracy under full-dataset conditions and its good performance even from a single one-minute scan. 
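The per-scan ranking described above reduces to a simple computation. This sketch scores each single-scan FC vector against the full-dataset prediction and flags the outlier orientation (the variable names are ours, and the values are illustrative, not the paper's data).

```python
import numpy as np

def scan_mse(scan_fcs, full_fcs):
    """MSE of each single-scan FC vector (rows of scan_fcs) against the
    full-dataset prediction; used to rank orientations and flag outliers."""
    return np.mean((np.asarray(scan_fcs) - np.asarray(full_fcs)) ** 2, axis=1)

full = np.array([1.0, 2.0])                              # full-dataset FC prediction
scans = np.array([[1.0, 2.0], [1.1, 2.1], [3.0, 5.0]])   # per-orientation predictions
errors = scan_mse(scans, full)
worst = int(np.argmax(errors))  # orientation index of the outlier scan
```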
These findings point toward the potential for real-time experimental integration, where on-the-fly FC inversion could assist data acquisition or help prioritize measurements toward more informative orientations. In practice, preliminary spectra could be rapidly analyzed to compare inversion consistency across orientations, suggesting a pathway toward adaptive measurement workflows. This type of approach would allow experiments to dynamically allocate measurement time to the most informative regions of reciprocal space, thereby improving overall efficiency and the scientific return of limited beamtimes. Realizing a fully closed-loop optimization scheme would require additional strategies for quantifying orientation-dependent information content, which we leave for future work.
In this way, the trained encoder compresses the ∼20 000-dimensional INS spectra into just 14 active and well-behaved latent dimensions.
Although the dimensionality of the spectra has already been significantly reduced through latent encoding, applying additional nonlinear dimensionality reduction can yield further insight into the structure and organization of the learned latent space. To this end, we apply Uniform Manifold Approximation and Projection (UMAP),57 a widely used unsupervised ML method known for preserving both local and global structure in high-dimensional data. Specifically, we extract the encoded latent means µ from the simulated test set (as shown in Fig. 5a) and pass them to UMAP to produce two-dimensional embeddings that retain the meaningful relationships from the original latent space. The resulting 2D representations are plotted in Fig. 5c–f, with each point corresponding to a single spectrum and colored by one of the symmetry-reduced FC value labels. This analysis reveals a key result regarding the interpretability of the latent space: not only is it semantically organized, as encouraged by the variational loss, but its structure also reflects the underlying physical interactions encoded by the FCs. Most notably, distinct global gradients are observed in the UMAP embeddings for the dominant FCs (FC0 and FC1), with clear directional trends that span the entire latent manifold. Even for less influential FCs (FC2 and FC3), meaningful clustering and local variation persist as shown in the insets, despite their reduced sensitivity and the further compression to only two UMAP dimensions. Similar behavior is also observed in the Nb case, as shown in SI Fig. 20, further highlighting the generality across material systems. To further probe the semantic consistency of the latent space, we linearly interpolated along a latent ray in the UMAP-projected space, shown in Fig. 5h, and decoded the resulting sampled latent vectors. As shown in Fig. 5g, the corresponding reconstructions vary smoothly and capture physically plausible spectral transitions, suggesting that the latent space supports a coherent and continuous spectral manifold. Together, these results demonstrate that the latent space not only encodes semantically relevant structure, but does so in a manner that is coherent, interpretable, and physically aligned.
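The latent-ray probe amounts to a linear walk between two latent vectors, each intermediate point then being passed through the frozen decoder; a minimal sketch (function name ours) is:

```python
import numpy as np

def interpolate_latents(z_start, z_end, n_steps=8):
    """Linear walk along a latent ray between two latent vectors; each
    intermediate vector can be decoded to check that the reconstructed
    spectra vary smoothly along the ray."""
    t = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - t) * np.asarray(z_start) + t * np.asarray(z_end)
```

Smoothly varying decodings along such rays are the signature of the continuous spectral manifold described above.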
Finally, as shown in Fig. 6, we examine the domain adaptability of the model–an essential factor underpinning its generalization across data sources. In this work, inference was performed across four distinct domains: clean simulated spectra, noise-augmented simulated spectra, full experimental spectra, and sparse single-scan spectra. To investigate how the encoder treats these varied inputs, we passed the experimental spectra through the frozen encoder to obtain latent embeddings, which were then projected into two dimensions using the pre-fitted UMAP model. For reference and context, we also include the latent embedding of the simulated test spectrum most similar (in MSE) to the full experimental input, along with its noise-augmented variants across the full range of noise settings. The resulting projections are shown in Fig. 6a, where grey points denote the full simulated test set (as in Fig. 5h). Notably, all four data domains–despite differences in noise level, spectral smoothness, and Q–E coverage–are mapped to a compact, shared region of the latent space. This tight clustering suggests that the model effectively identifies the underlying dynamical structure common to these spectra and encodes them consistently, facilitating robust FC inversion across domains. To further illustrate this behavior, Fig. 6b displays input spectra from the four representative cases: the full experimental spectrum, its most similar clean simulation, the most closely embedded noise-augmented spectrum, and the most similar single-scan measurement. Despite substantial visual differences between the spectra, their proximity in the latent space highlights the ability of the model to selectively extract physically meaningful features while ignoring irrelevant variations. 
This domain-consistent encoding is a key contributor to the model's strong real-world performance and suggests that variational latent spaces can provide a powerful foundation for generalizable, physics-informed spectral inference.
Building on this foundation, we next consider the assumptions and scope of the present framework, as well as avenues for its generalization and adaptive extension to more complex material systems. The framework proceeds under the assumption that the crystal structure is known, constraining the inversion task to the determination of interatomic FCs and excluding the problem of structural identification. This assumption reflects the experimental context of single-crystal INS, where crystallographic descriptors are typically determined independently prior to measurement, and enables the trained regressors to focus solely on mapping spectral features to interatomic interactions. However, highly automated methods exist to determine or refine a crystal structure based on reciprocal space mapping and could be incorporated as a preliminary step. With these structural assumptions in place, we next consider the simulation strategy used to generate training data. An ensemble of uMLFFs is employed to generate data without direct reliance on DFT, thereby reducing computational cost and avoiding dependence on a single DFT functional, which may exhibit known limitations for certain material classes. The present demonstrations on Ge and Nb–systems for which DFT calculations remain tractable–serve as controlled benchmarks to validate model performance and to verify that synthetic data derived from ML surrogate (i.e., non-first-principles) models can yield regressors that accurately capture the relevant physics observed in an experiment. Although the approach is conceptually general, full transferability across atomic species and structural motifs remains non-trivial. In particular, the efficiency of Brillouin zone folding and FC symmetry reduction as dimensionality-lowering strategies depends on crystal symmetry, and lower-symmetry systems may offer reduced redundancy, increasing the effective dimensionality of the inversion problem. 
Phonon spectra encode system-specific fingerprints that depend nonlinearly on bonding geometry, atomic masses, and crystal symmetry, making universal mapping inherently difficult. Furthermore, extending the ML architectures to incorporate chemical descriptors or symmetry-aware representations, and employing strategies such as active learning or uncertainty-aware retraining, represent promising routes toward improved transferability and adaptive domain coverage. Such mechanisms would allow the model to accommodate experimental spectra that lie outside the nominal training manifold while progressively reducing the need for extensive pre-training.
Beyond improvements to the data generation and ML architectures, physical considerations also point toward several extensions beyond the idealized harmonic case. Although disordered systems pose challenges for conventional Born–von Kármán approaches, virtual-crystal approximations could provide a pathway for mapping effective FCs. The direct inversion of measured intensities (I(q, E) or S(Q, E)) avoids the need for explicit dispersion fitting and extraction, which is particularly advantageous for such materials where band-tracking is intractable. Employing temperature-dependent effective harmonic approaches (TDEP) and incorporating resolution effects, as well as developing deconvolution schemes to recover higher-order anharmonic interactions from spectral linewidths, represent promising next steps toward a more complete experimental inversion toolkit. These considerations delineate the present scope and identify clear directions for extending ML-based inversion approaches to increasingly complex material systems. The framework ultimately points toward real-time, uMLFF-driven characterization of phonon behavior, wherein learned models adaptively refine themselves from experimental data to reveal microscopic lattice dynamics with minimal prior input and human intervention.
Supplementary information (SI): additional figures and text descriptions of training data generation, machine learning model training, inference using trained models, spectral noise implementation, and iterative force constant optimization. See DOI: https://doi.org/10.1039/d6dd00008h.
This journal is © The Royal Society of Chemistry 2026