Open Access Article

Haizhou Yang,a Adam Wold,b Junlin Ou,c Jeromy J. Rech,b Wei You*d and Yi Wang*e

aDepartment of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
bDepartment of Chemistry, University of North Carolina at Asheville, Asheville, NC 28804, USA
cDepartment of Engineering Technology, Middle Tennessee State University, Murfreesboro, TN 37132, USA
dDepartment of Chemistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA. E-mail: wyou@unc.edu
eDepartment of Mechanical Engineering, University of South Carolina, Columbia, SC 29208, USA. E-mail: yiwang@cec.sc.edu
First published on 4th November 2025
Organic solar cells (OSCs) have emerged as a promising renewable energy technology, offering advantages such as lightweight design, semitransparency, flexibility, and cost-effectiveness. Power conversion efficiency (PCE) is a key device performance parameter for OSCs, defined as the ratio of the electrical power output generated by the device to the incident solar power input. Despite significant advances, the development of high-performance OSCs remains a labor-intensive process, heavily dependent on expert experience, involving extensive synthesis, characterization, and iterative optimization. Data-driven methods offer a promising alternative for accelerating material discovery, but their effectiveness is often limited by the scarcity of high-quality experimental data. To overcome this challenge, we propose OSC-Net, a multi-fidelity machine learning framework that integrates a large volume of computational data with a smaller set of high-accuracy experimental measurements. This approach enables accurate prediction of key device performance parameters, including PCE, while simultaneously tackling the challenges associated with experimental data scarcity and uncertainty quantification, enabling efficient screening of OSC materials. Importantly, the predictive capability of OSC-Net was verified against published experimental data, confirming its accuracy and reliability. By leveraging both data sources, OSC-Net achieves superior predictive performance compared to conventional single-fidelity models. Furthermore, the uncertainty quantification captures variability in the model, enhancing the reliability of predictions. Finally, OSC-Net was employed for large-scale high-throughput screening, successfully identifying promising candidates with high predicted PCEs that were validated against literature-reported experimental data. 
Thus, OSC-Net presents a feasible approach for rapid and accurate inference of device performance parameters with limited experimental datasets, enabling efficient OSC material discovery.
In recent years, advanced machine learning methods have emerged as powerful tools to identify relationships between materials properties and OSC device performance for accelerating OSC research.8,13–15 By leveraging either computational data, such as density functional theory (DFT) calculations and the Scharber model,16 experimental data, or both, machine learning techniques enable the discovery of hidden patterns and critical trends governing OSC performance. For example, Aspuru-Guzik et al.17 employed a Gaussian process model trained on 51 000 computational data points from the Harvard Clean Energy Project database,18 which comprises data derived from DFT calculations and the Scharber model. This model successfully predicted PCE and identified 838 potential high-performing candidate molecules. Similarly, Sun et al. used a convolutional neural network to extract features from molecular structures for PCE prediction,19 also leveraging computational data from the same database. However, computational data, particularly PCE values derived from the Scharber model,16 often suffer from limited accuracy due to simplified assumptions and descriptors. This restricts the predictive accuracy of machine learning models and the reliability of the candidates they identify for achieving high efficiency. To overcome these limitations, recent efforts have focused on training machine learning models using experimental data. Several studies have explored the use of machine learning methods to predict device performance based on single-component properties, either donor or acceptor, by learning their relationships to experimental outcomes.20–23 However, since OSC device performance depends critically on the interaction between donor and acceptor materials, models must account for both materials simultaneously in relation to device performance; indeed, some studies have explored this direction.24–31 For example, Sun et al. collected a dataset of 1719 donor materials and evaluated different molecular representations as inputs to ML models, finding that fingerprints exceeding 1000 bits achieved the highest prediction accuracy.24 Wu et al. collected 565 donor–acceptor combinations and developed five machine-learning models for PCE inference, finding that boosted regression trees and random forests outperformed the others in prediction accuracy.25 Similarly, Zhang et al. curated a dataset of 2078 D/A combinations and trained a graph neural network using a newly designed polymer fingerprint representation as input features.31 Despite this progress in leveraging machine learning for material discovery, challenges remain, including limited datasets, low prediction accuracy, and insufficient uncertainty quantification. These issues are particularly critical given the complex relationship between material properties and device performance in an OSC, and the substantial uncertainties inherent in the material synthesis process.
In this study, we established a comprehensive database comprising 47 329 computational entries from ref. 17 and 1782 experimental data points collected from the literature and our laboratory database. Building on this resource, we introduced a multi-fidelity machine learning framework (OSC-Net) to predict device performance parameters, including PCE, with uncertainty quantification, thereby enabling high-throughput screening of OSC materials. Different from previous efforts, this work presents several novelties: (1) a multi-fidelity approach is implemented using a two-step training strategy to integrate data of different fidelity levels. Specifically, the proposed model is pre-trained on a large volume of computational data, which is relatively low in accuracy, and fine-tuned with a smaller but more accurate set of experimental data. Importantly, the predictive capability of OSC-Net is verified against published experimental data, confirming its reliability and practical relevance for OSC material discovery. This represents a pioneering study to investigate the feasibility of a multi-fidelity approach for screening high performance OSC materials. (2) Uncertainty associated with the machine learning model prediction is quantified to provide confidence intervals for PCE predictions. These confidence intervals offer valuable guidance for material discovery and design. (3) Our curated experimental dataset includes both fullerene and non-fullerene acceptors, paired with a broad range of conjugated polymer donors in binary blends, including many of the champion devices throughout history to date. The donor selection spans various levels of synthetic complexity and includes specialized subgroups with subtle alterations to the chemical structures, such as the PBnDT-TAZ series, facilitating analysis of structure–property–performance relationships.
Additionally, to account for batch-to-batch variability, a common challenge in OSC materials, we incorporate replicate data from different laboratories, enhancing the ability of the model to quantify experimental uncertainty.
The rest of the paper is organized as follows. Section 2 introduces the methodology of the proposed multi-fidelity machine learning model, including database, model architecture, and uncertainty quantification methods. The results and discussion are presented in Sections 3 and 4, respectively. Finally, conclusions and future work are presented in Section 5.
The PCE follows from the three predicted parameters as

$$\mathrm{PCE} = \frac{V_{\mathrm{OC}} \times J_{\mathrm{SC}} \times \mathrm{FF}}{P_{\mathrm{in}}} \quad (1)$$

where $P_{\mathrm{in}}$ is the incident solar power density.
The OSC-Net structure features a combination of an encoder as the feature extractor and a multilayer perceptron (MLP) as the predictor head, as shown in the blue block in Fig. 1. The model first encodes the fingerprint features of the donor and acceptor materials. The resulting encoded features, along with the donor/acceptor ratio, are passed into the MLP that predicts the three target performance parameters and their corresponding uncertainties.
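The encoder-plus-MLP structure described above can be sketched in a few lines of NumPy. The layer widths, the random weights, and the choice of a Gaussian head (three means plus three log-variances) are illustrative assumptions, not the configuration reported for OSC-Net:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes; the paper does not report the exact widths.
FP_BITS, ENC_DIM, HID = 4096, 128, 64

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(0, 0.01, (2 * FP_BITS, ENC_DIM))   # shared fingerprint encoder
W1 = rng.normal(0, 0.1, (ENC_DIM + 1, HID))           # +1 input for the D/A ratio
W2 = rng.normal(0, 0.1, (HID, 6))                     # 3 means + 3 log-variances

def osc_net_forward(donor_fp, acceptor_fp, da_ratio):
    """Sketch of the forward pass: encode both fingerprints, append the
    D/A ratio, and predict (means, sigmas) for VOC, JSC, FF."""
    z = relu(np.concatenate([donor_fp, acceptor_fp]) @ W_enc)
    h = relu(np.concatenate([z, [da_ratio]]) @ W1)
    out = h @ W2
    means, log_var = out[:3], out[3:]
    return means, np.exp(0.5 * log_var)   # standard deviations are positive

donor = rng.integers(0, 2, FP_BITS).astype(float)
acceptor = rng.integers(0, 2, FP_BITS).astype(float)
means, sigmas = osc_net_forward(donor, acceptor, 0.4)
print(means.shape, sigmas.shape)
```

Predicting log-variances and exponentiating keeps the uncertainty outputs strictly positive, which is a common design choice for heads of this kind.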
The development of the OSC-Net framework involves two main stages: training and testing, as illustrated in Fig. 1. In the training stage shown in Fig. 1(a), a two-step training strategy is employed to incorporate data with different fidelity levels, accuracies, and associated costs, specifically, computational and experimental data in this study. Initially, a large amount of computational data, derived from DFT calculations and the Scharber model, was used to pre-train the OSC-Net. This step allows the OSC-Net to capture the general trend of the input–output relationships present in low-fidelity computational data. A relatively smaller but highly accurate set of high-fidelity experimental data was then utilized for fine-tuning the model above, allowing the model to progressively align with the response surface of the experimental data. This sequential strategy, inspired by transfer learning,32 effectively enables the OSC-Net to leverage data from different fidelities, accuracies and costs to create a unified, robust multi-fidelity machine learning model for predicting device performance.33–36
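The pre-train/fine-tune sequence can be illustrated on a toy one-dimensional problem. The functions, feature map, and learning rates below are invented for illustration and bear no relation to the actual OSC data; the point is only that fitting the cheap low-fidelity source first leaves the model much closer to the scarce high-fidelity data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the two fidelity levels: the low-fidelity source captures
# the overall trend, while the scarce high-fidelity data is scaled and shifted.
x_lo = rng.uniform(-1, 1, 500)
y_lo = np.sin(3 * x_lo)                      # plentiful, low fidelity
x_hi = rng.uniform(-1, 1, 20)
y_hi = 1.2 * np.sin(3 * x_hi) + 0.3          # scarce, high fidelity

def features(x):
    # Small fixed basis: a constant term plus tanh ridge features.
    return np.column_stack([np.ones_like(x),
                            np.tanh(np.outer(x, np.linspace(-3, 3, 16)))])

def fit(X, y, w, lr, steps):
    # Plain gradient descent on the mean squared error.
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

w = np.zeros(17)
w = fit(features(x_lo), y_lo, w, 0.02, 2000)       # step 1: pre-train (low fidelity)
err_before = np.mean((features(x_hi) @ w - y_hi) ** 2)
w = fit(features(x_hi), y_hi, w, 0.02, 2000)       # step 2: fine-tune (high fidelity)
err_after = np.mean((features(x_hi) @ w - y_hi) ** 2)
print(err_before, err_after)
```

Fine-tuning from the pre-trained weights drives the high-fidelity error below what the pre-trained model alone achieves, mirroring the two-step strategy at toy scale.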
In the testing stage, shown in Fig. 1(b), the fine-tuned model can rapidly predict the performance parameters for any input materials, typically in under a second per prediction, and can therefore be used for high-throughput material screening.
The database comprises 47 329 computational entries from ref. 17 and 1782 experimental data points collected from the literature and our laboratory database. All computational data used PCDTBT as the donor and various non-fullerene acceptors constructed from a library of 107 fragments—13 cores, 49 spacers, and 45 terminal groups (see ref. 17). For each acceptor, DFT was used to compute the HOMO energy, LUMO energy, and HOMO–LUMO gap, and the PCE was then predicted via the Scharber model. Note that a fixed donor/acceptor weight ratio of 0.4 (corresponding to 1 : 1.5) is applied to every computational sample. Fig. 2 illustrates the distribution of PCE values for both the computational and experimental datasets. The data are categorized into three groups: low (0–5%), moderate (5–10%), and high (above 10%) PCE values. For the computational dataset, the median PCE value is 1.06%, and the average is 1.85%, with the data split as follows: 45 990 in the low category, 4779 in the moderate category, and 98 in the high category.
In the experimental dataset, the median PCE is 4.88%, and the average is 5.56%, with 914 data points in the low category, 663 in the moderate category, and 205 in the high category. The experimental dataset includes a wide selection of conjugated polymers as donors, with both fullerene and non-fullerene small molecule acceptors. Instances involving the same donor and acceptor material, reported with varying device performance values from different data sources, are all included, allowing for uncertainty quantification.
For the computational dataset, 90% was randomly selected for pretraining, 5% for validation during pretraining, and the remaining 5% for testing. Similarly, for the experimental dataset, 80% was randomly chosen for fine-tuning, 10% for validation, and the remaining 10% for testing. Various representations of materials have been proposed for machine learning-based analysis, including images, SMILES, fingerprints, energy levels, and more.8 In this study, fingerprints are used to represent molecules, serving as the input for the machine learning model to enhance prediction accuracy.24 Specifically, each donor or acceptor material is converted into a 4096-bit binary array encoding its Morgan fingerprint.
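To make the fingerprint idea concrete, the sketch below hashes SMILES substrings into a fixed 4096-bit array. This is a toy stand-in for illustration only: an actual pipeline would compute Morgan fingerprints from the parsed molecular graph (e.g. with RDKit), not from raw SMILES text:

```python
import hashlib
import numpy as np

N_BITS = 4096  # matches the 4096-bit arrays used as model input

def hashed_fingerprint(smiles: str, max_size: int = 3) -> np.ndarray:
    """Toy substructure fingerprint: hash every SMILES substring of length
    1..max_size and set the corresponding bit. Illustrative only -- real
    Morgan fingerprints hash circular atom environments, not text."""
    bits = np.zeros(N_BITS, dtype=np.uint8)
    for size in range(1, max_size + 1):
        for i in range(len(smiles) - size + 1):
            fragment = smiles[i:i + size].encode()
            h = int.from_bytes(hashlib.sha1(fragment).digest()[:4], "big")
            bits[h % N_BITS] = 1
    return bits

fp = hashed_fingerprint("c1ccccc1CC(=O)O")  # a small example SMILES string
print(fp.shape, int(fp.sum()))
```

The folding step (`h % N_BITS`) is why bit width matters: wider arrays such as the 4096-bit ones used here suffer fewer hash collisions, consistent with the observation that fingerprints exceeding 1000 bits gave the best accuracy.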
The model was pre-trained on the computational dataset for 500 epochs and subsequently fine-tuned on the experimental dataset for 7000 epochs. The number of epochs for both stages was determined through iterative experimentation, guided by the convergence behavior of the training and validation losses. The large-scale, low-fidelity computational dataset was primarily used to establish the overall trend of the response surface and thus required fewer training epochs. In contrast, the smaller but higher-fidelity experimental dataset necessitated a longer fine-tuning phase to allow the model to adapt gradually and fully exploit the available data. The Adam optimizer was used with an initial learning rate of 1 × 10−3, and a decaying schedule reduced the learning rate every 10 epochs if validation performance stagnated. The batch size was set to 20 for pre-training and 5 for fine-tuning, chosen empirically to balance memory constraints and convergence speed. Training used mixed precision via Automatic Mixed Precision (AMP), with gradient scaling to mitigate underflow in the lower-precision computations. Validation was performed periodically throughout training on a separate validation set, and based on these evaluations, the learning rate scheduler dynamically adjusted the learning rate. The loss curves for both training and validation sets were monitored across epochs to track convergence. Training employed the mean squared error (MSE) loss, which is defined as:
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2 \quad (2)$$

where $N$ is the number of samples, $y_i$ denotes the ground-truth values, and $\hat{y}_i$ denotes their predicted values. The training was performed on an NVIDIA GeForce RTX 3080 GPU, utilizing PyTorch 1.13.0 for model implementation. The model training took approximately 11 hours for completion.
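The "reduce the learning rate if validation stagnates" schedule described above can be sketched as a small stateful class. The patience of 10 matches the text; the decay factor of 0.5 is an illustrative assumption, since the paper does not report it:

```python
class ReduceOnPlateau:
    """Minimal sketch of the schedule described above: if the validation loss
    has not improved for `patience` consecutive checks, scale the learning
    rate by `factor`. The factor value is an assumption for illustration."""

    def __init__(self, lr=1e-3, factor=0.5, patience=10):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:            # improvement: reset the counter
            self.best, self.bad_epochs = val_loss, 0
        else:                               # stagnation: count it
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor      # decay and start counting again
                self.bad_epochs = 0
        return self.lr

sched = ReduceOnPlateau()
for epoch in range(12):
    lr = sched.step(1.0)                    # validation loss stuck at 1.0
print(lr)
```

PyTorch's built-in `torch.optim.lr_scheduler.ReduceLROnPlateau` implements the same idea and would be the natural choice in the actual training code.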
Epistemic uncertainty, on the other hand, arises from a lack of knowledge or information about the underlying processes, models, or parameters.42 It reflects uncertainty that could potentially be reduced with additional data or improved modeling. In this study, the relationship between input features and OSC device performance is highly complex and may not be fully captured by the current model structure, highlighting the importance of quantifying epistemic uncertainty to account for potential model-driven variability. Several techniques have been developed to quantify epistemic uncertainty in machine learning models, including Bayesian neural networks, Monte Carlo dropout, ensemble methods, and Gaussian processes.37,43–45 In this work, a deep ensemble approach is utilized to estimate the epistemic uncertainty in OSC-Net. Specifically, an ensemble of M = 5 models is explicitly constructed, and a statistical variance is computed from their predictions ŷj, where j = 1, …, M. Mathematically, the epistemic uncertainty, denoted as σe2 = [σe−VOC2, σe−JSC2, σe−FF2] is defined as:
$$\sigma_e^2 = \frac{1}{M}\sum_{j=1}^{M}\left(\hat{y}_j - \bar{y}\right)^2, \qquad \bar{y} = \frac{1}{M}\sum_{j=1}^{M}\hat{y}_j \quad (3)$$
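The ensemble variance described above is straightforward to compute. In the snippet below, the five predictions are hypothetical numbers for a single device (ordered VOC, JSC, FF), and the 1.96σ interval assumes a Gaussian spread — both are illustrative assumptions:

```python
import numpy as np

# Hypothetical predictions from an ensemble of M = 5 models for one device,
# ordered [VOC, JSC, FF]; the numbers are illustrative, not from the paper.
preds = np.array([
    [0.85, 26.5, 75.9],
    [0.86, 26.9, 76.4],
    [0.84, 26.2, 75.1],
    [0.87, 27.1, 76.8],
    [0.85, 26.6, 75.7],
])

y_bar = preds.mean(axis=0)        # ensemble mean prediction
sigma_e2 = preds.var(axis=0)      # epistemic variance, one value per parameter
sigma_e = np.sqrt(sigma_e2)
# Assuming a Gaussian spread, a ~95% confidence interval is mean +/- 1.96*sigma.
ci = np.stack([y_bar - 1.96 * sigma_e, y_bar + 1.96 * sigma_e])
print(y_bar.round(3), sigma_e.round(4))
```

Because the five models disagree more where the data is sparse, this spread naturally grows for out-of-distribution inputs, which is the behavior exploited later when comparing computational and experimental predictions.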
| Dataset | Parameter | r value | MSE | σe | CI accuracy (%) |
|---|---|---|---|---|---|
| Computational data | VOC | 0.987 | 0.007 | 0.041 | 73.976 |
| | JSC | 0.985 | 0.256 | 0.272 | 86.523 |
| | FF | N/A | 0.000 | 0.016 | 100.000 |
| | PCE | 0.981 | 0.129 | 0.249 | 89.565 |
| Experimental data | VOC | −0.162 | 1.396 | 0.212 | 0.000 |
| | JSC | 0.625 | 112.872 | 0.764 | 4.444 |
| | FF | −0.241 | 256.956 | 0.009 | 0.000 |
| | PCE | 0.437 | 25.285 | 0.973 | 42.222 |
For the computational dataset, the pre-trained OSC-Net demonstrates strong predictive performance across four device performance parameters (VOC, JSC, FF, and PCE). The model achieves low MSE values of 0.007, 0.256, 0, and 0.129, respectively, and high Pearson correlation coefficients of 0.987, 0.985, N/A, and 0.981, respectively. It is worth noting that, in the computational dataset, FF values are constant at 65%, making prediction trivial and resulting in an MSE of 0. Consequently, the correlation coefficient for FF cannot be computed due to zero standard deviation. For the experimental dataset, the model performs less effectively, yielding higher MSE values of 1.396, 112.872, 256.956, and 25.285 for VOC, JSC, FF, and PCE, respectively, and lower correlation coefficients of −0.162, 0.625, −0.241, and 0.437, respectively. The epistemic uncertainty in experimental datasets is substantially greater than that in computational datasets. Furthermore, the CI accuracy is substantially higher for the computational dataset.
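The CI accuracy metric reported in the tables can be computed in a few lines. The operational definition below — the percentage of ground-truth values falling inside the predicted 95% interval — is an assumption for illustration, as is the sample data:

```python
import numpy as np

def ci_accuracy(y_true, y_pred, sigma, z=1.96):
    """Assumed definition of CI accuracy: the percentage of ground-truth
    values inside the predicted interval y_pred +/- z*sigma (z = 1.96 for a
    ~95% Gaussian interval). The paper's exact definition may differ."""
    inside = np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= z * np.asarray(sigma)
    return 100.0 * inside.mean()

y_true = np.array([5.0, 7.5, 10.2, 3.1])   # illustrative PCE values (%)
y_pred = np.array([5.2, 7.0, 9.0, 3.0])
sigma  = np.array([0.3, 0.4, 0.5, 0.2])
print(ci_accuracy(y_true, y_pred, sigma))  # three of the four points fall inside
```

Under this definition, a well-calibrated model should score near the nominal coverage (95% here); large gaps in either direction signal over- or under-confident uncertainty estimates.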
| Model | Parameter | r value | MSE | σe | CI accuracy (%) |
|---|---|---|---|---|---|
| Fine-tuned OSC-Net | VOC | 0.740 | 0.005 | 0.031 | 61.111 |
| | JSC | 0.907 | 6.726 | 0.824 | 47.778 |
| | FF | 0.774 | 69.386 | 2.107 | 40.000 |
| | PCE | 0.921 | 2.680 | 0.465 | 38.889 |
| SF-OSC-Net | VOC | 0.729 | 0.005 | 0.023 | 54.444 |
| | JSC | 0.901 | 7.185 | 0.601 | 34.444 |
| | FF | 0.769 | 70.111 | 1.590 | 31.111 |
| | PCE | 0.920 | 2.711 | 0.338 | 31.111 |
Both OSC-Net and SF-OSC-Net share nearly identical architectures and training procedures, except that OSC-Net was trained on both computational and experimental data, whereas SF-OSC-Net used only experimental data. Across all device parameters (VOC, JSC, FF, and PCE), OSC-Net consistently outperforms SF-OSC-Net, exhibiting an average 2.5% reduction in MSE and higher correlation coefficients. OSC-Net exhibits slightly higher uncertainties than SF-OSC-Net, yet achieves better CI accuracy, with an average improvement of approximately 9.2%.
As mentioned previously, the PCE data were divided into three groups, low (0–5%), moderate (5–10%), and high (above 10%). Classification performance was evaluated by assigning class labels based on the predicted PCE values and comparing them with the corresponding ground truth labels. Tables 3 and 4 present the confusion matrices for predicting the PCE groups using OSC-Net and SF-OSC-Net, respectively. The diagonal elements of each matrix (highlighted in green) indicate correctly classified instances. Overall, OSC-Net demonstrated a higher classification accuracy compared to SF-OSC-Net, achieving 73.3% versus 71.1%. These results underscore that incorporating multi-fidelity data through OSC-Net leads to more accurate and robust predictions than relying solely on experimental data, validating the effectiveness of the multi-fidelity approach.
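Binning predicted PCEs into the three groups and tallying a confusion matrix can be sketched as follows; the sample values are illustrative and are not the paper's test set:

```python
import numpy as np

def pce_group(pce):
    # 0 = low (<5%), 1 = moderate (5 to <10%), 2 = high (>=10%)
    return np.digitize(pce, [5.0, 10.0])

# Illustrative predicted/true PCE values, not the paper's actual test data.
y_true = np.array([3.2, 4.8, 6.1, 9.7, 11.5, 12.3])
y_pred = np.array([2.9, 5.3, 6.5, 9.1, 10.8, 9.6])

t, p = pce_group(y_true), pce_group(y_pred)
conf = np.zeros((3, 3), dtype=int)
for ti, pi in zip(t, p):
    conf[ti, pi] += 1          # rows: true group, columns: predicted group
accuracy = 100.0 * np.trace(conf) / conf.sum()
print(conf)
print(round(accuracy, 1))
```

The diagonal of `conf` holds the correctly classified instances, so overall classification accuracy is simply the trace divided by the total count, matching the 73.3% versus 71.1% comparison above.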
385 unique acceptor materials, and 11 different donor-to-acceptor ratios ranging from 0.10 to 0.60 in increments of 0.05, resulting in a total of 728 587 915 potential device configurations. The distribution of the predicted PCE is presented in Fig. 6. Among the combinations evaluated, 768 664 achieved a PCE exceeding 10%, and 4870 achieved a PCE above 15%.
Table 5 compares the key device performance parameters (VOC, JSC, FF, and PCE) of several top-performing blends identified by OSC-Net with corresponding experimental data reported in recent literature published within the last three years.46–50 None of these blends were included in the training databases, and the predicted PCEs closely agree with the experimental values, with percentage differences (PDs) within 3%. Minor discrepancies can be attributed to variations in processing conditions, film morphology, additives, active-layer thickness, polymer molecular weight, and other factors not currently considered by OSC-Net. Finally, Table 6 lists five high-performance donor–acceptor configurations predicted by OSC-Net to exhibit the highest PCEs across all tested configurations. To the best of our knowledge, these combinations have not yet been reported in the literature. The chemical structures of these top-performing donor–acceptor pairs are summarized in Fig. S3 and S4 in the SI.
| # | Donor | Acceptor | D/A ratio | VOC (V) | JSC (mA cm−2) | FF (%) | PCE (%) | Source | PCE PD (%) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | D18 | DTY-6 | 1 : 1.2 (0.45) | 0.860 | 26.766 | 76.419 | 17.591 | OSC-Net | 2.85 |
| | | | | 0.876 | 26.2 | 78.5 | 18.1 | Ref. 46 | |
| 2 | D18-Cl | N3 | 1 : 1.9 (0.35) | 0.850 | 26.778 | 76.968 | 17.526 | OSC-Net | 0.78 |
| | | | | 0.848 | 27.18 | 75.45 | 17.39 | Ref. 47 | |
| 3 | D18 | AQx-2 | 1 : 1.2 (0.45) | 0.856 | 27.379 | 74.895 | 17.553 | OSC-Net | 1.92 |
| | | | | 0.868 | 26.1 | 76.0 | 17.22 | Ref. 48 | |
| 4 | D18 | N3 | 1 : 1.9 (0.35) | 0.848 | 26.140 | 77.67 | 17.217 | OSC-Net | 1.27 |
| | | | | 0.83 | 27.2 | 75.3 | 17.0 | Ref. 49 | |
| 5 | D18 | Y6-BO | 1 : 1.9 (0.35) | 0.859 | 26.317 | 75.971 | 17.174 | OSC-Net | 1.55 |
| | | | | 0.876 | 26.2 | 73.7 | 16.91 | Ref. 50 | |
| # | Donor | Acceptor | D/A ratio | VOC (V) | JSC (mA cm−2) | FF (%) | PCE (%) |
|---|---|---|---|---|---|---|---|
| 1 | PL1 | A-WSSe-Cl | 1 : 1.2 (0.45) | 0.860 | 26.536 | 77.811 | 17.762 |
| 2 | D18-Cl | SY1 | 1 : 1.5 (0.4) | 0.871 | 27.141 | 74.420 | 17.600 |
| 3 | PM6-Ir1 | SY2 | 1 : 1.2 (0.45) | 0.856 | 26.329 | 77.851 | 17.544 |
| 4 | D18-Cl | BP4T-4F | 1 : 1.5 (0.4) | 0.842 | 27.413 | 75.771 | 17.483 |
| 5 | D18 | Bu-OD-4F | 1 : 1.2 (0.45) | 0.856 | 26.366 | 76.410 | 17.236 |
During inference with the pre-trained OSC-Net, the epistemic uncertainty is notably higher for the experimental dataset than for the computational dataset. This is because the experimental dataset contains more complex materials and falls outside the range of the computational dataset, effectively making inference for the experimental data an extrapolation task. Additionally, larger gaps between the predicted values and the ground truth are associated with higher uncertainty levels. This observation validates the effectiveness of the uncertainty quantification framework, which accurately captures uncertainties associated with each model prediction. When the model is less confident in its predictions, it assigns relatively large uncertainty values to reflect this lack of confidence, indicating the robustness of the framework in handling uncertain predictions.
The pre-trained OSC-Net achieves higher CI accuracy on computational data compared to experimental data. This difference highlights the challenges posed by extrapolation, which impacts both the model prediction and its uncertainty quantification. Despite the CI accuracy for experimental data being lower than that for the computational dataset, the pretrained model still provides a valuable foundation that benefits the subsequent fine-tuning process.
The enhanced performance of OSC-Net over SF-OSC-Net can be attributed to its pretraining on computational datasets, which serves as a bridge to the ultimate goal, capturing the experimental response surface. This pretraining makes fine-tuning more efficient compared to training a model from scratch. However, the performance enhancement is constrained by the limitations of the computational dataset, including use of a single donor material, outdated acceptor material properties and a relatively simple computational approach for evaluating device performance parameters, which results in lower accuracy relative to the experimental data. The slightly higher uncertainties observed in OSC-Net, compared to SF-OSC-Net, result in higher CI accuracy, suggesting that larger and more precise uncertainty estimates accurately capture the variability in the model.
The higher CI accuracy and classification accuracy of OSC-Net over SF-OSC-Net aligns with the CI accuracy achieved during the pretraining stage. These findings demonstrate that OSC-Net is a superior modeling approach in both predictive accuracy and uncertainty quantification for inferring device performance parameters in OSCs.
It is important to note that device performance is influenced by multiple factors beyond donor and acceptor materials and their ratios, including solvents, additives, processing conditions, annealing temperature, layer thicknesses, electrode materials, interfacial layers, etc. In the current OSC-Net framework, only the most significant contributors, donor, acceptor, and D/A ratio, are explicitly modeled, while the other factors are treated as sources of uncertainty. We employ uncertainty quantification to implicitly capture the effect of these unaccounted factors, providing confidence intervals for predictions.
Beyond material synthesis, device optimization (e.g. donor–acceptor pairing, solvent selection, D : A ratio, additive choice) remains one of the most time-consuming tasks in developing ideal OSC devices. Leveraging the fine-tuned OSC-Net model, we conducted a high-throughput screening of 728 587 915 potential donor–acceptor combinations, the vast majority of which have not been previously reported. Several of the top-performing blends predicted by OSC-Net were validated against experimental data from the literature, with percentage differences (PDs) within 3%, confirming the accuracy and robustness of the model. In addition, a list of previously unreported high-efficiency configurations was provided to facilitate future experimental investigation by the community. Among the identified candidates, donor materials such as D18 and PM6, along with their derivatives, consistently ranked among the top performers, demonstrating strong robustness and adaptability across diverse pairings. Similarly, many asymmetric Y6 derivatives, such as Bu-OD-4F, BP4T-4F, A-WSSe-Cl, and SY1, also exhibited outstanding performance. Although these asymmetric acceptors often present greater synthetic complexity than their symmetric counterparts, their superior predicted efficiencies justify further synthesis and experimental investigation. These findings highlight the potential of OSC-Net to accelerate the identification of high-efficiency material pairings, thereby reducing the experimental burden and material costs associated with device optimization. As new donor and acceptor materials are synthesized, they can be readily integrated into the OSC-Net framework to guide the selection of promising pairings for optimal performance.
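The screening loop itself — enumerating every donor, acceptor, and ratio, predicting the PCE, and ranking — can be sketched as below. The material lists are hypothetical short stand-ins for the full libraries, and `predict_pce` is a placeholder for the fine-tuned OSC-Net forward pass:

```python
import itertools
import numpy as np

# Hypothetical short material lists standing in for the full screening
# libraries; the names are examples drawn from the text.
donors = ["D18", "D18-Cl", "PM6"]
acceptors = ["Y6", "N3", "BP4T-4F"]
ratios = np.round(np.arange(0.10, 0.601, 0.05), 2)   # the 11 D/A ratios screened

def predict_pce(donor, acceptor, ratio):
    """Placeholder for the fine-tuned OSC-Net forward pass; returns a dummy
    PCE so the enumeration logic can run end to end."""
    seed = hash((donor, acceptor, float(ratio))) % 2**32
    return np.random.default_rng(seed).uniform(2.0, 18.0)

results = [(d, a, r, predict_pce(d, a, r))
           for d, a, r in itertools.product(donors, acceptors, ratios)]
top5 = sorted(results, key=lambda row: row[3], reverse=True)[:5]
for d, a, r, pce in top5:
    print(f"{d} : {a}  ratio={r:.2f}  predicted PCE={pce:.2f}%")
print(len(results))   # 3 donors x 3 acceptors x 11 ratios = 99 configurations
```

At the full scale reported above, the same Cartesian product yields hundreds of millions of configurations, which is tractable only because each model inference takes a fraction of a second and batches trivially.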
In summary, we established a comprehensive database comprising 47 329 computational data points and 1782 experimental data points collected from existing literature. OSC-Net was then developed to incorporate datasets of varying fidelity. The model uses fingerprints of donor and acceptor materials, along with their relative ratios, as inputs to predict device performance parameters and their associated uncertainties, facilitating efficient screening of OSC materials. Compared with previous efforts, the main contributions of this study are: (1) multi-fidelity framework – a two-step pre-training and fine-tuning strategy was implemented to integrate datasets of different fidelities effectively, (2) uncertainty quantification – epistemic uncertainties are quantified, providing confidence intervals for model predictions, and (3) curated experimental dataset – both fullerene and non-fullerene acceptors, paired with a broad range of conjugated polymer donors in binary blends, are included.
Our results demonstrate that (1) the pre-trained OSC-Net exhibits strong predictive performance for computational datasets, achieving low MSE values of 0.007, 0.256, 0, and 0.129, and high Pearson correlation coefficients of 0.987, 0.985, N/A, and 0.981 for VOC, JSC, FF, and PCE, respectively. (2) For experimental datasets, the pre-trained OSC-Net successfully captures the general input–output trends, demonstrating moderate correlation coefficients and CI accuracy, which enhance the subsequent fine-tuning process. (3) The fine-tuned OSC-Net consistently outperforms SF-OSC-Net, achieving lower MSE values (an average reduction of 2.5%), higher correlation coefficients, higher CI accuracy (an average increase of 9.2%), and higher classification accuracy (an increase of 2.2%). These results confirm that pretraining improves the efficiency of the fine-tuning process, leading to a more accurate model. (4) The fine-tuned OSC-Net provides more accurate uncertainty quantification compared to SF-OSC-Net, as evidenced by improved CI accuracy. This indicates that OSC-Net better captures the variability in the model, leading to more trustworthy predictions. (5) The fine-tuned OSC-Net was applied to a large-scale high-throughput screening, successfully identifying promising candidates with predicted PCEs exceeding 15%. Several of these candidates agree well with reported experimental data, and a list of previously unreported high-efficiency configurations is provided for future investigation. These findings confirm that OSC-Net is a robust computational tool for accurately predicting OSC device performance with uncertainty quantification, enabling the discovery of high-performance OSC materials in scenarios with limited high quality experimental data.
Future research will focus on generating higher-quality computational data for pretraining and expanding experimental datasets for fine-tuning. Subsequently, OSC-Net will be applied to the material design process to identify optimal donor and acceptor materials for OSC devices, followed by experimental validation.
This journal is © The Royal Society of Chemistry 2026