Open Access Article
Yulia Pimonova,*a Michael G. Taylor,b Alice Allen,bcd Ping Yang*b and Nicholas Lubbers*a
aComputing and Artificial Intelligence Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA. E-mail: ypimonova@lanl.gov; nlubbers@lanl.gov
bTheoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA. E-mail: pyang@lanl.gov
cCenter for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
dMax Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany
First published on 13th May 2026
Chemists in search of structure–property relationships face great challenges due to the scarcity of high-quality, concordant datasets. Machine learning (ML) has significantly advanced predictive capabilities in chemical sciences, but these modern data-driven approaches have increased the demand for data. In response to the growing demand for explainable AI (XAI) and to bridge the gap between predictive accuracy and human comprehensibility, we introduce LAMeL—a Linear Algorithm for Meta-Learning that preserves interpretability while improving prediction accuracy across multiple properties. While most approaches treat each chemical prediction task in isolation, LAMeL leverages a meta-learning framework to identify shared model parameters across related tasks, even if those tasks do not share data, allowing it to learn a common functional manifold that serves as a more informed starting point for new, unseen tasks. Our method delivers a 60–96% reduction in MAE over standard ridge regression, depending on the domain of the dataset. While the degree of performance enhancement varies across tasks, LAMeL consistently outperforms or matches traditional linear methods, making it a reliable tool for chemical property prediction where both accuracy and interpretability are critical.
Modern chemical machine learning faces a persistent tension between predictive power and interpretability. Deep neural networks, graph neural networks, and other complex architectures have achieved state-of-the-art performance across numerous chemical prediction tasks.12–14 Nevertheless, these models function largely as “black boxes,” making their decision-making processes opaque to human understanding.15 This interpretability challenge is particularly acute in chemistry, where understanding the underlying structure–property relationships is essential to continuous scientific innovation. Chemists have traditionally relied on transparent and mechanistically meaningful models that reveal how specific structural features influence molecular properties.16
On the other hand, linear models are inherently interpretable, a property that stems from their explicit parameter weights. The coefficients in linear models directly quantify the contribution of each feature, allowing for direct interpretation of prediction results. Although linear models often lag behind neural networks in terms of performance, their transparency and ease of interpretation are compelling incentives to use them, even when they are less accurate.17,18 A recent contribution by Allen and Tkatchenko demonstrates that, with an appropriate featurization scheme, multi-linear regression can achieve performance comparable to more advanced deep learning architectures in predicting materials properties.19 Moreover, linear regression models are faster than neural networks in terms of both training speed and computational resource requirements due to their much simpler design.
The widespread application of ML in the physical sciences meets a major challenge in the pervasive data scarcity in experimental studies. Acquiring chemical data—and, more broadly, any experimental data—is resource-intensive, time-consuming, and expensive. The problem is especially pronounced in drug design20–22 but extends across many areas of chemistry.23,24 When experimental data are scarce, combining low-fidelity simulation with limited high-fidelity experiments can improve accuracy and robustness, as shown by Nevolianis et al. for toluene–water partition coefficients.25 Similarly, Eraqi et al. have demonstrated that multi-task learning over multiple sustainable aviation fuel properties provided benefits in the ultra-low data regime with as few as 29 samples.26 The low-data problem becomes particularly critical when the demand for high-accuracy prediction is high.22,27
Meta-learning has emerged as a powerful framework to address data efficiency challenges across diverse machine learning domains. Unlike methods that treat each task independently, meta-learning seeks to “learn to learn” by leveraging shared structure across related tasks.28,29 This paradigm enables models to acquire transferable knowledge that facilitates rapid adaptation to new tasks, even in low-data regimes. Meta-learning distinguishes itself from other knowledge transfer frameworks such as transfer learning and multitask learning.28 While multitask learning focuses on simultaneously learning multiple tasks to perform well on those same tasks,30 meta-learning is designed to “learn how to learn,” enabling models to quickly adapt to entirely new tasks with minimal examples. This contrasts with transfer learning, which leverages knowledge from previously learned source tasks to enhance performance on a different target task through fine-tuning.31,32 The key distinction of meta-learning lies in emphasis on rapid adaptation to new tasks rather than just applying existing knowledge (transfer learning) or handling multiple known tasks concurrently (multitask learning). This distinction is particularly relevant when tasks are conceptually related but may not share the same datapoints, motivating approaches that leverage cross-task structure without requiring aligned samples. Meta-learning develops a learning capability that allows models to efficiently learn new information with few training examples. Recent studies have demonstrated the promise of meta-learning in chemistry-related applications. For instance, Allen et al.33 showed that incorporating multiple levels of quantum chemical theory within a unified training process can enhance prediction accuracy. 
Building on this promise, Wang et al.34 integrated meta-learning into the design of a foundation model for chemical reactors, while Singh and Hernández-Lobato applied prototypical networks35 to improve selectivity predictions along organic reaction pathways.27
Despite these advances, most existing meta-learning approaches emphasize deep learning architectures that prioritize predictive performance at the expense of interpretability. Qian et al.36 specifically highlight the lack of interpretability as a major limitation in their few-shot molecular property prediction model. In response, several efforts have aimed to improve interpretability. One strategy involves developing interpretable models that replicate the performance of deep networks, such as the approach proposed by Fabra-Boluda et al.37 More commonly, post-hoc interpretability techniques are employed. These include symbolic metamodels layered on top of neural networks,38 analyses of specific hidden layers,39 regression models based on architectural meta-features,40 and variance decomposition methods such as Meta-ANOVA.41
This limitation highlights a critical knowledge gap: the absence of application-oriented meta-learning algorithms specifically designed for linear models. While there is growing academic interest in this area, most existing efforts remain theoretical and lack practical application to real-world problems. For instance, Tripuraneni et al. introduced a provably sample-efficient algorithm for multi-task linear regression, focusing on learning shared low-dimensional representations across tasks.42 While their contribution offers strong theoretical proof, it does not address practical deployment challenges. Similarly, Denevi et al. proposed a conditional meta-learning approach that tailors representations to individual tasks using side information, offering improved adaptation in clustered task environments, yet their method has not been tested in applied settings.43 Toso et al. extended meta-learning to linear quadratic regulators using a model-agnostic approach, demonstrating theoretical guarantees for controller stability and adaptation, but their focus remains on control theory without broader application.44 These studies underscore the need for developing meta-learning algorithms for linear models that are not only theoretically sound but also practically applicable across diverse real-world domains.
To bridge the gap in applying meta-learning to linear models for the chemical domain, we introduce LAMeL—a novel algorithm that reshapes meta-learning principles specifically for linear architectures. LAMeL learns shared parameters across related support tasks, identifying a common functional manifold that serves as an informed initialization for new, unseen tasks. This informed starting point enables the meta-model to adapt to new tasks with only a few data points. The presented method is motivated by recent theoretical work on shared low-dimensional structure in linear regression across tasks,42,43 but is designed for applied low-data settings, where heterogeneous datasets, limited data overlap, and missing labels are common. Fig. 1 illustrates the LAMeL workflow, showcasing how meta-learning is applied to linear models for chemical property prediction by leveraging support tasks to enhance performance on a target task. In this work we provide an applied meta-learning algorithm for interpretable linear chemistry models under task structures with minimal sample overlap and missing labels. A practical advantage of LAMeL is that it can operate directly on sparse, non-aligned task data, which is common in molecular property prediction.
The primary contributions of this work include:
• The development of LAMeL, the first meta-learning algorithm specifically designed for linear models in chemistry applications.
• A comprehensive validation of LAMeL across multiple chemical domains, demonstrating performance improvements ranging from 1.1- to 25-fold over classical ridge regression.
• An investigation into the role and importance of task similarity across support data.

By providing an ML tool that preserves interpretability while working in the low-data regime, LAMeL contributes to the broader goal of making ML-acquired results more valuable for chemistry.
S ± 0.7 demonstrates state-of-the-art performance for physics-informed solubility models.
944 solubility values from 1595 studies, creating one of the largest repositories for non-aqueous solubility prediction. The dataset spans 1448 unique organic solutes and 213 solvents, with temperature-dependent measurements in the range of 243–425 K. Each entry contains the structures of solutes and solvents (as SMILES strings), experimental solubility values (as log-values of molarity), temperature, and bibliographic information for the originating study. The breadth and diversity of BigSolDB 2.0 make it a valuable source for benchmarking ML models of solubility.

Graphlets operate on the molecular graph, in which atoms serve as nodes and bonds as edges. Graphlets are formed from the isomorphism classes of connected subgraphs in the molecular graph; a one-node graphlet constitutes a single atom, a two-node graphlet constitutes two bonded atoms and the associated bond type, and so on for larger molecular fragments. Using graphlet representations in molecular property prediction builds upon the many-body expansion principle in quantum chemistry,55 where properties are approximated as sums of contributions from increasingly complex atomic clusters. The fingerprinting process involves systematically enumerating all graphlets within a molecule up to a predefined maximum graphlet size. Fig. 2 illustrates this process for acetone, where all graphlets up to size 5 are extracted from the molecular graph. Unlike path-based56 or radial fingerprints,57 graphlets capture every kind of substructure, providing a more complete encoding of molecular topology, as they identify every possible subgraph. A fast, recursive hashing procedure identifies the isomorphism class of each graphlet.
The set of all graphlets in a given dataset can then be assembled as a feature matrix, giving counts of each type of substructure in each molecule in the dataset. This preserves an interpretable relationship between molecular components and their contributions to predicted properties. In our meta-learning framework, model coefficients correspond directly to specific graphlet substructures, and as a result, meta-learned models preserve the interpretability of the graphlet featurization approach. It stands to reason that the structured organization of the features might facilitate knowledge transfer across tasks as it mimics the structure that human chemists use to build chemical intuition.
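The enumerate-and-classify procedure can be sketched in a few lines of Python. This is an illustrative brute-force version, not the paper's fast recursive-hashing implementation: connected subgraphs are grown node by node, and each is assigned an isomorphism class by minimizing over all atom orderings (tractable only for the small graphlet sizes used here). The acetone graph below is hand-coded and omits hydrogens and bond orders for brevity.

```python
from collections import Counter
from itertools import permutations

def connected_subgraphs(adj, max_size):
    """Yield every connected node subset (as a frozenset) up to max_size.

    adj maps each node to the set of its neighbours. Subsets are grown only
    through neighbours of nodes already inside them, so every yielded subset
    induces a connected subgraph.
    """
    found = set()

    def grow(subset, frontier):
        key = frozenset(subset)
        if key in found:
            return
        found.add(key)
        yield key
        if len(subset) < max_size:
            for node in list(frontier):
                new = subset | {node}
                yield from grow(new, (frontier | adj[node]) - new)

    for start in adj:
        yield from grow({start}, set(adj[start]))

def canonical_label(nodes, adj, atom):
    """Brute-force canonical form of an induced subgraph (fine for <= 5 nodes)."""
    s = set(nodes)
    best = None
    for perm in permutations(s):
        pos = {n: i for i, n in enumerate(perm)}
        atoms = tuple(atom[n] for n in perm)
        edges = tuple(sorted((pos[a], pos[b]) for a in s for b in adj[a]
                             if b in s and pos[a] < pos[b]))
        if best is None or (atoms, edges) < best:
            best = (atoms, edges)
    return best

# Acetone, CC(=O)C, as a labelled graph (hydrogens and bond orders omitted)
adj = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1}}
atom = {0: 'C', 1: 'C', 2: 'O', 3: 'C'}
fingerprint = Counter(canonical_label(g, adj, atom)
                      for g in connected_subgraphs(adj, 4))
print(sum(fingerprint.values()))  # 11 connected subgraphs in total
```

Counting each canonical label over the molecules of a dataset then yields exactly the feature matrix described above, with one interpretable column per graphlet class.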
Each task Ti is characterized by its own dataset, consisting of task features XTi and corresponding labels yTi. Meta-learning is advantageous over other knowledge-transfer approaches because it does not require the same data point to appear across multiple tasks; rather, each task can have its own unique data distribution and labeling. This flexibility enables meta-learning to handle a diverse range of tasks with sparsely distributed data.
The meta-learning algorithm presented in this work can be considered an optimization-based approach, where the meta-learner aims to find model or initialization parameters that facilitate rapid adaptation to new tasks with minimal data. During meta-training, the model is exposed to multiple support tasks, learning to optimize its parameters such that, when presented with a target task, it can quickly fine-tune to achieve the best performance with minimal data. This approach is particularly effective in scenarios characterized by limited labeled data for new tasks. In such cases, the target task is inherently data-constrained, with only a small number of data points available for training. We refer to these data points as shots, following the convention established in the few-shot learning literature,58,59 where the objective is to achieve robust generalization from a minimal number of training examples. Implementation details, hyperparameter settings, and AutoML tuning budgets for the nonlinear (LightGBM60) and task-conditioned pooled ridge baselines are provided in the SI.
The model is decomposed into parallel and perpendicular components, that is, β* = β⊥ + β‖. The component β‖ lies within the T-dimensional subspace W‖ spanned by the support task coefficients βτ, τ ∈ {T1, …, TT}, whereas β⊥ is perpendicular to this subspace. We assume that, because tasks may be related to one another, they can be approximated by a lower-rank manifold.42,61 As such, we bias the specialization coefficients β* towards the manifold defined by the models built on support tasks, allowing knowledge distillation from previous learning experiences.
While many forms of bias are possible, we use sequential fitting, which has the advantage of separating out hyperparameter searches. First, we fit within the subspace W‖, and subsequently use this start space to find a residual (intuitively, smaller) component β⊥. We use a ridge loss function as a base regressor.
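The decomposition itself can be made concrete in a short NumPy sketch. This is purely illustrative: the matrix `B` of support coefficients and the vector `beta_star` are random placeholders, and the projection is computed from an explicit QR basis of the support span rather than by the sequential ridge fits the method actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 20, 4
B = rng.normal(size=(T, d))      # support coefficient vectors beta_tau (rows)
beta_star = rng.normal(size=d)   # a specialization coefficient vector

# Orthonormal basis Q for W_par, the subspace spanned by the support models
Q, _ = np.linalg.qr(B.T)
beta_par = Q @ (Q.T @ beta_star)   # in-plane component of beta_star
beta_perp = beta_star - beta_par   # residual component

assert np.allclose(beta_par + beta_perp, beta_star)  # beta* = beta_perp + beta_par
assert np.allclose(B @ beta_perp, 0.0)               # orthogonal to every beta_tau
```

Any model vector thus splits uniquely into a part expressible as a combination of support models and a remainder that no support model can capture.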
The first step is to build individual support models by minimizing the support task loss
$$\mathcal{L}_\tau(\beta_\tau) = \sum_{i \in \tau} \left( y_i - x_i \cdot \beta_\tau \right)^2 + \lambda \lVert \beta_\tau \rVert_2^2 \qquad (1)$$

where:
• $\lVert \beta_\tau \rVert_2^2$ is the squared L2 norm of the regression parameter vector.
• λ ≥ 0 is the regularization parameter controlling the strength of the penalty.
• τ is each of the support tasks in {T1, …, TT}.

Ridge regression encourages smaller coefficient values, improving model stability and generalization.62
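In practice the support stage amounts to solving one regularized least-squares problem per task. The sketch below uses the closed-form ridge solution matching eq. (1) (no intercept); the three support tasks are synthetic placeholders that share a hypothetical common trend, which is an assumption of this illustration rather than anything from the paper's data.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form minimizer of eq. (1): ||y - X b||^2 + lam ||b||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Toy support tasks: related but distinct linear problems with no shared rows
rng = np.random.default_rng(0)
common = rng.normal(size=8)                    # hypothetical shared trend
tasks = {}
for name in ("tau1", "tau2", "tau3"):
    X = rng.normal(size=(40, 8))
    beta = common + 0.1 * rng.normal(size=8)   # task-specific deviation
    tasks[name] = (X, X @ beta + 0.01 * rng.normal(size=40))

# One independent ridge model per support task, stacked row-wise
B = np.vstack([ridge_fit(X, y, 1.0) for X, y in tasks.values()])
print(B.shape)  # (3, 8): one coefficient vector per support task
```

Because the tasks need not share any rows, each fit sees only its own data; the stacked coefficient matrix is all that later stages consume.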
Next, building features which explore W‖ is a matter of dotting the specialization feature matrix X* with the model matrices, yielding new meta-features, X*·βτ, which are the predictions of the support-task models applied to the data in the target task. Thus we see explicitly how meta-learning can operate on disjoint support and target tasks: the models from a support task can still be applied to the target task, and the results of these models can be used as features for forming a new prediction. We build the meta-features using the average support task as the origin for fitting β‖. Setting an origin for fitting encourages the learned coefficients β‖ to stay close to the origin (prior) vector, effectively embedding information from previous learning experiences into the model (Fig. 3). This adjustment aligns with the principles of meta-learning, where knowledge from support tasks informs the learning process for a new task. By incorporating a task-specific or meta-learned prior βprior, this approach strengthens the adaptability of linear models in scenarios with limited available data, as the prior knowledge can mitigate over-fitting and improve generalization and transferability to new tasks.
We center these features, χiτ = (βτ − β̄)·xi, using the average support model β̄ = (1/T)Στβτ as the origin for residual fitting. We then build β‖ by finding c ∈ ℝ^T minimizing the ridge loss function

$$\mathcal{L}(c) = \sum_i \left( y_i - x_i \cdot \bar{\beta} - \sum_{\tau} c_\tau \chi_{i\tau} \right)^2 + \lambda \lVert c \rVert_2^2 \qquad (2)$$

This yields the parallel component of the model,

$$\beta_\parallel = \bar{\beta} + \sum_{\tau} c_\tau \left( \beta_\tau - \bar{\beta} \right) \qquad (3)$$

(The fit for β‖ is formally degenerate, as there are T coefficients but only T − 1 independent features; however, the resulting β‖ is well-defined.) Finally, the residual coefficient β⊥ is found by minimizing the ridge loss function of the residuals εi = yi − xi·β‖, given by

$$\mathcal{L}(\beta_\perp) = \sum_i \left( \varepsilon_i - x_i \cdot \beta_\perp \right)^2 + \lambda \lVert \beta_\perp \rVert_2^2 \qquad (4)$$
1. Determine support coefficients βτ using the support task data, and construct meta-features χi by applying these models to the target task features.
2. Determine parallel coefficients β‖ using the meta-features χi.
3. Determine perpendicular coefficients β⊥ using the ordinary features xi.

The final parameter vector for the specialization task after few-shot learning is:
β* = β⊥ + β‖ (5)
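The three steps condense into a short NumPy sketch. This is an illustrative reimplementation under simplifying assumptions, not the authors' code: closed-form ridge without intercepts, a single shared regularization strength `lam` in place of the per-step hyperparameter searches, and synthetic task data whose coefficients cluster around a common vector.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge: argmin ||y - X b||^2 + lam ||b||^2 (no intercept)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def lamel_fit(support_tasks, X_star, y_star, lam=1.0):
    """Sketch of LAMeL's three steps for a single target task."""
    # Step 1: independent support models (eq. 1) and their average
    B = np.vstack([ridge_fit(X, y, lam) for X, y in support_tasks])
    beta_bar = B.mean(axis=0)

    # Step 2: meta-features chi_{i,tau} = (beta_tau - beta_bar) . x_i, then
    # fit c on the shots' residuals about the average model (eqs. 2 and 3)
    chi = X_star @ (B - beta_bar).T
    c = ridge_fit(chi, y_star - X_star @ beta_bar, lam)
    beta_par = beta_bar + (B - beta_bar).T @ c

    # Step 3: fit the remaining residuals on the raw features (eq. 4)
    beta_perp = ridge_fit(X_star, y_star - X_star @ beta_par, lam)
    return beta_par + beta_perp                   # eq. (5)

# Few-shot demo on synthetic related tasks
rng = np.random.default_rng(1)
d = 30
base = rng.normal(size=d)
support = []
for _ in range(5):
    X = rng.normal(size=(200, d))
    support.append((X, X @ (base + 0.05 * rng.normal(size=d))))
b_target = base + 0.05 * rng.normal(size=d)
X_shots = rng.normal(size=(10, d)); y_shots = X_shots @ b_target
X_test = rng.normal(size=(500, d)); y_test = X_test @ b_target

mae = lambda b: float(np.mean(np.abs(X_test @ b - y_test)))
print(mae(lamel_fit(support, X_shots, y_shots)) < mae(ridge_fit(X_shots, y_shots, 1.0)))
```

In this synthetic demo the few-shot LAMeL fit beats plain ridge by a wide margin, because the target coefficients were constructed to lie close to the support-task manifold; this mirrors the task-similarity dependence reported in the results.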
We evaluated a lightweight nonlinear baseline (LightGBM60) and a pooled linear baseline (joint ridge regression with task conditioning) to contextualize LAMeL among other popular methods. The selected baselines represent the closest direct comparisons that can be trained on the observed data without introducing a separate missing-data model. LightGBM does not improve over LAMeL in the low-data N-shot regime, while joint task-conditioned ridge shows mixed behavior. In our experiments, the joint regression outperforms LAMeL when the target task is closely aligned with the pooled training data, as seen for acetone and methyl acetate. LAMeL is more useful when the target task is low-data or less typical of the pooled distribution. Practically, we recommend joint regression for interpolation within a family of closely related tasks, and LAMeL for few-shot transfer to new or chemically atypical tasks, especially when coefficient-level interpretability is desired. We note that joint task conditioning can shift predictive signal from chemically meaningful substructure features to task-identity features, reducing interpretability. Additionally, extending to unseen tasks requires refitting the pooled model. Full implementation details, hyperparameters, and per-task curves are reported in the SI.
The results of our experiments reveal moderate improvements in prediction accuracy when employing meta-learning, with gains diminishing as the number of shots increases across all target tasks (Fig. 4). Notably, predictions for water solubility showed no improvement across all shot sizes. We primarily attribute this lack of improvement to the chemical distinctiveness of water compared to the other solvents (ethanol, acetone, and benzene), which serve as support tasks in this case. Water has the highest dielectric constant (ε = 80.1) of the set, compared with ethanol (ε = 25.3), acetone (ε = 21.0), and benzene (ε = 2.27). Water's stronger polarity and hydrogen-bonding network likely reduce its similarity to the other solvents, limiting meta-learning's ability to transfer knowledge from support to target tasks. This observation underscores a key limitation of meta-learning: task similarity among support tasks plays an important role in effective knowledge transfer. When support tasks are chemically or structurally dissimilar to the target task, meta-learning exhibits limited ability to exploit common patterns across tasks.
As before, in this experimental setup all tasks other than the target task were used as support tasks in the meta-training stage. We observed performance improvement with meta-learning for all but one solvent, water. To quantify the effect we calculated the relative improvement
$$\mathrm{RI} = \frac{\mathrm{MAE}_{\mathrm{ridge}} - \mathrm{MAE}_{\mathrm{meta}}}{\mathrm{MAE}_{\mathrm{ridge}}} \times 100\% \qquad (6)$$
In the true few-shot regime (10–30 training datapoints for the target task), solvents exhibit substantial relative improvements, achieving up to a 60% MAE reduction. This high-variance, high-reward region demonstrates typical few-shot learning behavior, where limited data can yield significant performance gains for well-suited systems. A convergence pattern emerges as relative improvements gradually decrease and stabilize with an increasing number of training points. The high variability observed in the low-shot regime diminishes, with more consistent model performance across different solvents. Most solvents converge to a plateau region with relative improvements within 15–30%. As in the Boobier et al. dataset, water exhibits consistently negative relative improvement (−10% to 0%) across all shot counts, suggesting fundamental incompatibility with the underlying support tasks. To investigate the individual roles of the parallel and perpendicular components of the parameter vector for the specialization task, we performed ablations which removed either the perpendicular (β* = β‖) or the parallel (β* = β⊥) contribution. Across solvents, β‖ captures most of the gain, while β⊥ becomes important when the target task is weakly aligned with the available support tasks. Full ablation details and per-solvent curves are provided in the SI.
Additionally, we investigated the effect of the number of tasks alone by randomly selecting a fixed number of support tasks from a consistent pool; details are provided in the SI. Across different numbers (3, 5, or 10) of randomly chosen tasks, LAMeL performance is very similar, suggesting that the meta-learning gains are not strongly tied to a specific choice of support-set size.
The stark contrast between the two similarity matrices—particularly water's extremely low similarity with other solvents in regression space (average of 0.07 with all support tasks)—provides quantitative support for our hypothesis regarding the role of task similarity for the effectiveness of meta-learning.
However, for the Boobier et al. dataset, it is important to consider dataset size alongside chemical similarity of solvents. The total number of data points per solvent varies (Table 2), with water being the solvent with the largest number of observations. Using water as the target reduces the total points available in its support set compared to other targets. For example, when ethanol is the target, its support tasks contribute 2348 data points, whereas for water the corresponding total is 1611. While 1611 is substantial in many low-data settings, the smaller support set for water—together with water's chemical dissimilarity to the other solvents—likely contributes to its substantially weaker meta-learning performance. Meta-learning relies primarily on task similarity between the target and support tasks; however, when the support tasks contribute fewer datapoints, transferable common knowledge may be harder to exploit.
| Max size | Boobier et al. | BigSolDB 2.0 | QM9-MultiXC |
|---|---|---|---|
| 3 | 319 | 380 | 125 |
| 5 | 4992 | 5194 | 3280 |
| 7 | 57 346 | 58 365 | 82 942 |
| Tasks | N datapoints |
|---|---|
| Water | 1432 |
| Ethanol | 695 |
| Benzene | 464 |
| Acetone | 452 |
This dual explanation—chemical distinctiveness and limited support dataset size—provides a more nuanced understanding of why water solubility predictions fail to improve with meta-learning. It highlights two critical caveats in applying meta-learning to small-task datasets:
1. Task similarity remains a prerequisite for effective knowledge transfer.
2. Small support dataset sizes can negatively affect meta-learning performance.
Fitting a model with 82 942 parameters on a dataset containing only 50 datapoints leads to significant increases in test error as feature vector complexity grows. In overparameterized scenarios, the model has the capacity to capture noise in the training data rather than only meaningful patterns, leading to overfitting. While ridge regression mitigates overfitting by shrinking coefficients, it does not eliminate the problem entirely. This reflects the bias–variance trade-off: with more parameters, bias decreases but variance increases substantially, especially in high-dimensional spaces where small perturbations in the data can lead to large changes in predictions. Empirical studies have shown that standard regularization techniques may become less effective in these scenarios unless paired with additional strategies such as dimensionality reduction or adaptive regularization.62,64
The results presented in Fig. 9 highlight the interplay between feature complexity and predictive performance in few-shot meta-learning scenarios. In contrast to the non-meta results, the meta-learning framework maintains relatively stable error levels. For the illustrated TZ target, where high-fidelity predictions are inherently more challenging given their absence from the support tasks, meta-learning achieves comparable MAEs across different substructure depths and demonstrates some resilience against overparameterization, avoiding the sharp drop in performance observed in non-meta models. The comparative stability of error across substructure sizes in meta-learning highlights its effectiveness in balancing the bias–variance trade-off, even in few-shot regimes with high-dimensional representations.
055 per task), providing a comprehensive basis for predictions. To further investigate the effect of support data size, we created subsets of 10, 106, 1064, 5322, 10 644, 21 288, 42 577, and 85 115 datapoints sampled from the total available support data (106 444 datapoints). Interestingly, across all three target tasks (M06-L_SZ, TPSSH_DZP, and MPBE0KCIS_TZP), the relationship between meta-assisted accuracy improvement and support data size remained consistent. As illustrated in Fig. 10, the meta-learning error metrics remained relatively stable for support dataset sizes ranging from 1064 to 106 444 datapoints. This observation suggests that even a small fraction of the large QM9-MultiXC dataset (approximately 1%) is sufficient to maintain meta-learning efficiency improvements. Nevertheless, when the support sample size dropped below 1%, error metrics consistently increased across all tested shot sizes for the target task. While meta-learning is robust to reductions in support data size within reasonable limits, extremely small datasets compromise its ability to extract knowledge. Notably, these findings differ from our observations on the solubility datasets, where task similarity played a significant role in determining meta-learning performance. Here, the abundance of data in QM9-MultiXC mitigates some of the challenges posed by task dissimilarity, enabling effective knowledge transfer even with limited support task similarity.
The meta-assisted accuracy improvement depends more on the number of shots of the target task than on the size of the support data. An interesting case arises with the M06-L_SZ target task, where accuracy improvements from meta-learning show minimal sensitivity to shot size variations (apart from NS = 5, which performs noticeably worse). This behavior aligns with our hypothesis regarding task similarity: since all results for this target task were generated using five random SZ-based functionals as support tasks, their inherent similarity facilitates efficient knowledge transfer regardless of shot size.
These results highlight meta-learning's potential for enabling high-fidelity molecular energy predictions using lower-fidelity tasks as support—even under constrained data scenarios—while also emphasizing critical limitations when datasets become extremely sparse.
For the solubility datasets, meta-learning yielded up to a 60% increase in accuracy compared to conventional ridge regression. The magnitude of improvement was closely tied to the degree of similarity among support tasks. In both solubility datasets, water—being the most chemically distinct solvent—stood out as the sole case where meta-learning did not surpass baseline accuracy, highlighting the critical role of support task similarity for successful knowledge transfer. Our results demonstrate that the linear meta-learning framework achieves solubility prediction errors that are on par with those reported for deep learning models. For the nine most popular solvents in the BigSolDB 2.0 dataset (Fig. 6), the mean absolute error (MAE) can be consistently reduced to below 0.800 LogS units, with the lowest MAE observed at 0.683 ± 0.007 for n-propanol. This level of accuracy is comparable with the literature: Ulrich et al. report an experimental uncertainty of 0.5–0.6 log units and an ML model with RMSE of 0.657 for aqueous solubility,65 MolMerger achieves an average MAE of 0.79 LogS units across solute–solvent pairs,66 AttentiveFP67 and MoGAT,68 both limited to aqueous systems, report RMSE values of 0.61 and 0.478 log units, respectively, while SolPredictor69 reaches an average RMSE of 1.09 log units for aqueous solubility. The ability of our linear meta-learning approach to deliver comparable predictive performance across a chemically diverse set of solvents supports its practical utility in real-world solubility prediction tasks with minimal available data.
For the atomization energy dataset, which involves highly localized electronic properties, linear meta-learning provided the largest relative gains, further supporting the applicability of the method to various tasks and settings. Our study demonstrates the data efficiency achieved by the meta-learning framework: accurate predictions were obtained using as little as 1% (i.e. 1064 datapoints per support task) of the full training data in the QM9-MultiXC dataset, demonstrating the potential of this method for scenarios where data collection is expensive or time-consuming. These findings suggest that meta-learning not only interpolates between tasks but also captures underlying physical and chemical principles, enabling interpretative extrapolation even in low-data regimes.
While the linear nature of the model constrains its capacity to capture complex relationships, its simplicity allows for robust and interpretable performance. LAMeL complements recent neural and symbolic meta-learners by providing a lightweight and interpretable method for small regression datasets. Future work should explore the integration of nonlinear meta-learners while retaining interpretability, the extension of the linear meta-learning approach to more chemically diverse and challenging systems, and the incorporation of active learning strategies to further enhance data efficiency and predictive power. Our method can serve as a linear baseline for benchmarking and can be combined with nonlinear learning algorithms, for example, as an interpretable linear head on learned embeddings.70,71
Overall, our results establish linear meta-learning as a powerful and computationally efficient paradigm for molecular property prediction. Beyond improving predictive performance, LAMeL provides a coefficient-level mapping that for graphlet fingerprints highlights which substructural motifs contribute most strongly to a prediction for a given task. The obtained knowledge can guide experimental prioritization by selecting candidate molecules enriched in beneficial motifs for a property of interest, and it can be used to support fragment-based molecular design.72–74 By enabling significant accuracy gains with minimal data, the presented method holds promise for accelerating high-throughput screening and materials discovery, particularly in domains where experimental resources are limited and quick adaptation is essential.
| This journal is © The Royal Society of Chemistry 2026 |