Jinpeng Li†a, Chuxuan Ding†a, Daobin Liu†*a, Linjiang Chena and Jun Jiang*ab
aState Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China, Hefei, Anhui 230026, China. E-mail: ldbin@ustc.edu.cn; jiangj1@ustc.edu.cn
bHefei National Laboratory, University of Science and Technology of China, Hefei, Anhui 230026, China
First published on 13th June 2025
The emergence of autonomous laboratories—automated robotic platforms integrated with rapidly advancing artificial intelligence (AI)—is poised to transform research by shifting traditional trial-and-error approaches toward accelerated chemical discovery. These platforms combine AI models, hardware, and software to execute experiments, interact with robotic systems, and manage data, thereby closing the predict-make-measure discovery loop. However, key challenges remain, including how to efficiently achieve autonomous high-throughput experimentation and integrate diverse technologies into cohesive systems. In this perspective, we identify the fundamental elements required for closed-loop autonomous experimentation: chemical science databases, large-scale intelligent models, automated experimental platforms, and integrated management/decision-making systems. Furthermore, with the advancement of AI models, we emphasize the progress from simple iterative-algorithm-driven systems to comprehensive intelligent autonomous systems powered by large-scale models in China, which enable self-driving chemical discovery within individual laboratories. Looking ahead, the development of intelligent autonomous laboratories into a distributed network holds great promise for further accelerating chemical discoveries and fostering innovation on a broader scale.
Since the term ‘artificial intelligence (AI)’ was first coined by McCarthy in 1956, it has become a key driver of transformative developments in science.1,2 The 2024 Nobel Prizes in physics3,4 and chemistry5,6 both highlighted advancements in AI, recognizing its transformative role in modeling complex physical systems and predicting biochemical structures. A landmark in the application of AI is its ability to efficiently handle heterogeneous data, enabling the interpretation and understanding of complex datasets in genomics, proteomics, metabolomics2,7 and spectroscopy.8,9 By deciphering high-dimensional correlations within these datasets, AI can accelerate high-precision simulations to elucidate structure–property relationships,10,11 further enabling more efficient predictions of highly anticipated targets.5,6,12,13 AlphaFold 2 (ref. 5 and 13) represents a groundbreaking advancement in protein structure prediction, utilizing deep neural networks and self-attention mechanisms to achieve high-precision results. The updated AlphaFold 3 (ref. 6) enables joint structure prediction of complexes, significantly enhancing the accuracy of biomolecular interaction modeling and offering transformative potential for drug design and disease diagnosis. In parallel, DeepMind developed the GNoME intelligent model14,15 for crystal structure prediction, which has expanded the number of known stable materials nearly tenfold to 421,000. Beyond predicting material properties, the recommendation of synthesis strategies for targeted materials is also in high demand. For example, several AI-assisted tools for molecular synthesis have been developed to optimize experimental workflows, including AiZynthFinder,16 AIDDISON17 and Chematica (now known as SYNTHIA™).18 Cernak et al.19 conducted retrosynthetic studies on 12 COVID-19 antiviral drugs using SYNTHIA™, which identified simpler and more efficient synthesis routes for 11 of them, significantly alleviating pressure on existing supply chains. To date, the number of research fields integrating AI continues to grow rapidly, and these fields hold immense untapped potential.
An effective AI-driven approach relies on large amounts of high-quality, structured data as the foundation for developing robust prediction models. However, the majority of available data, particularly experimental data, suffer from significant issues such as non-standardization, fragmentation, and poor reproducibility.20,21 In this context, automated robotic platforms are being rapidly developed to generate high-quality experimental data in a standardized and high-throughput manner while minimizing manual effort.22–25 More importantly, these platforms can fully leverage their advantages when integrated with AI algorithms. Such integration not only automates routine tasks but also enables complex decision-making, optimization of synthesis methods, and even planning of experimental workflows.15,25–31 A pioneering study was conducted by Cooper et al.,27 who developed a mobile robotic chemist capable of autonomously conducting high-throughput photocatalyst screening, outperforming humans through the application of Bayesian optimization. They further designed a fully autonomous solid-state workflow involving three multipurpose robots for powder X-ray diffraction (PXRD) experiments.28 The “Chemputer” system, developed by Cronin et al., integrates literature analysis, protocol customization, organic synthesis, and characterization, demonstrating extraordinary capability in automated synthesis.29 The closed-loop self-driving laboratory developed by the Aspuru-Guzik group implements a design-make-test-analyze cycle to accelerate the discovery of new organic semiconductor laser materials.30 The A-Lab, developed at Lawrence Berkeley National Laboratory,15,31 utilizes computational tools, literature data, machine learning, and active learning to plan and interpret the outcomes of experiments performed by robots, addressing the challenges associated with handling and characterizing solid inorganic powders. Therefore, autonomous laboratories that integrate automated robotic platforms with AI are capable of conducting experiments once deemed unfeasible and may thus expand the frontiers of scientific exploration.
In this perspective, we first summarize the fundamental elements required for autonomous laboratories to satisfy the complex demands of autonomous experimentation. The discussion primarily focuses on the current state of autonomous laboratories in China, where development has progressed from simple iterative-algorithm-driven systems to comprehensive intelligent autonomous systems powered by large-scale models. It is worth noting that most autonomous laboratories are established to tackle specific challenges and operate in isolation, with limited inter-lab communication and data sharing. To this end, we explore the future prospects of these distributed autonomous laboratories, emphasizing the adoption of coordinated strategies, such as cloud-based systems, to achieve seamless data and resource integration across laboratories.
Multimodal data form the backbone of chemical science databases, encompassing information ranging from synthesis planning to property prediction. These data resources include structured entries from proprietary databases (e.g., Reaxys and SciFinder) and open-access platforms (e.g., ChEMBL33 and PubChem34), as well as unstructured data extracted from scientific literature, patents, and experimental reports. Extraction from unstructured sources is largely achieved using Natural Language Processing (NLP) techniques.35 Toolkits such as ChemDataExtractor,36 ChemicalTagger,37 and OSCAR4,38 which leverage named entity recognition (NER), have accordingly been developed to extract chemical reactions, compounds, and properties from textual documents. Image recognition further enhances machine understanding of chemical diagrams and molecular structures.39 Together, these methods represent complementary approaches to converting unstructured data into formats directly usable by robotic systems.
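As a lightweight illustration of NER-based text mining, the sketch below uses ChemDataExtractor to pull chemical entity mentions from a toy paragraph; the example text is invented, and the snippet assumes the ChemDataExtractor 1.x API.

```python
from chemdataextractor import Document

# Hedged sketch: extract chemical named entities from an unstructured paragraph
# (assumes the ChemDataExtractor 1.x API; the example sentence is invented).
text = ("The MOF was synthesized from ZrCl4 and terephthalic acid in DMF "
        "at 120 °C for 24 h, then washed with methanol.")

doc = Document(text)
for cem in doc.cems:  # chemical entity mentions found by the NER model
    print(cem.text, cem.start, cem.end)
```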
Following data mining, databases are constructed using intelligent methods to efficiently store, manage, and facilitate the retrieval of processed data for subsequent analysis and decision-making. The processed data can be further organized and represented as knowledge graphs (KGs), which provide a structured representation of data and have been widely applied across domains. Canonical methods for KG construction primarily focus on extracting logical rules based on semantic patterns.40 With the advancement of AI, KG construction methods based on large language models (LLMs) have recently gained widespread adoption, demonstrating superior performance and enhanced interpretability.41,42 Furthermore, to address issues such as contextual noise and knowledge hallucination,43 a general KG construction framework named SAC-KG has recently been proposed, which leverages LLMs as skilled automatic constructors of domain KGs.44
Beyond genetic algorithms (GAs), the SNOBFIT algorithm49 improves search efficiency by combining local and global search strategies; it has been successfully applied to optimizing chemical reactions in continuous flow reactors.50 Another widely used method in autonomous laboratories is Bayesian optimization, which minimizes the number of trials needed to achieve convergence.27,31,51–53 The performance of Bayesian optimization depends strongly on the choice of surrogate model, with Gaussian processes (GPs) and random forests (RFs) being the most common for regression tasks.20,54 The Phoenics algorithm, based on a Bayesian neural network (BNN), achieves faster convergence than GPs and RFs.55 It has been integrated into ChemOS (a versatile software package) for several automated platforms, including the Ada self-driving laboratory for thin-film materials53 and the mobile robotic chemist of Burger et al.27 for optimizing aqueous photocatalysts.
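To make the Bayesian-optimization loop concrete, here is a minimal sketch (our illustration, not code from any cited platform) in which a Gaussian-process surrogate and an expected-improvement acquisition propose the next reaction temperature; `measure_yield` is a synthetic stand-in for a robotic experiment.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best):
    # EI acquisition: expected amount by which each candidate beats the incumbent.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)

def measure_yield(temperature):
    # Stand-in for a robotic experiment returning a noisy reaction yield.
    return float(np.exp(-((temperature - 70.0) / 25.0) ** 2)
                 + 0.01 * np.random.randn())

X = np.array([[25.0], [100.0]])  # two seed experiments (temperature, °C)
y = np.array([measure_yield(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
candidates = np.linspace(20, 150, 261).reshape(-1, 1)

for _ in range(10):  # closed loop: propose -> run -> update surrogate
    gp.fit(X, y)
    x_next = candidates[np.argmax(expected_improvement(candidates, gp, y.max()))]
    X = np.vstack([X, [x_next]])
    y = np.append(y, measure_yield(x_next[0]))

print(f"best condition: {X[np.argmax(y)][0]:.1f} °C, yield score {y.max():.3f}")
```

Swapping the GP for a random forest or BNN surrogate, as discussed above, changes only the `gp` object while the propose-run-update loop stays the same.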
Alongside advanced algorithms, another critical aspect of optimizing workflows for intelligent models is the iterative experiment-theory feedback loop. Automated theoretical calculations, such as density functional theory (DFT),56,57 provide valuable prior knowledge and bridge the gap between theory and experiment. This data fusion enhances adaptive learning capabilities, allowing models to continuously update and refine their predictions. As a result, these intelligent models drive the development of closed-loop iterative automation processes.51,52
Once experimental protocols are received via an API, robotic systems execute the required actions with high precision. Autonomous experimental robots are equipped with advanced capabilities to perform complex tasks independently. They typically feature dexterous robotic arms (single27,52 or dual24,59) with a high degree of freedom for precise manipulation, mobile platforms for enhanced versatility, and sensing systems such as IR projectors (for depth sensing),51 laser scanners (for point cloud generation),27 and RGB sensors (for object recognition)51,52 to achieve accurate perception. To navigate and operate efficiently in dynamic environments, these robots employ high-precision localization and mapping methods, including SLAM,27,51 six-point localization (for pose determination)27 and ArUco markers (for visual marker tracking).52
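As an illustration of the fiducial-marker tracking mentioned above, the hedged sketch below detects an ArUco marker and estimates its pose with OpenCV; it assumes OpenCV ≥ 4.7, and the marker size and camera intrinsics are invented so the example runs without real hardware.

```python
import cv2
import numpy as np

# Hedged sketch of ArUco marker detection and pose estimation
# (assumes OpenCV >= 4.7; marker size and intrinsics are invented).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Render a synthetic marker so the example runs without a camera.
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
frame = cv2.copyMakeBorder(marker, 50, 50, 50, 50,
                           cv2.BORDER_CONSTANT, value=255)

corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    s = 0.04  # assumed marker edge length in metres
    obj_pts = np.array([[-s/2,  s/2, 0], [ s/2,  s/2, 0],
                        [ s/2, -s/2, 0], [-s/2, -s/2, 0]], dtype=np.float32)
    K = np.array([[900, 0, 150], [0, 900, 150], [0, 0, 1]],
                 dtype=np.float32)  # assumed camera intrinsics
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0][0], K, None)
    print("marker", ids[0][0], "translation (m):", tvec.ravel())
```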
Robotic systems are seamlessly integrated with automation software and real-time feedback mechanisms, enabling the optimization of experimental workflows and significantly enhancing reproducibility.60,61 When combined with automated workstations, these systems can execute complex, dexterous experiments and manage entire workflows—spanning synthesis, characterization, and testing—with minimal human intervention. For instance, Lunt et al.28 designed a solid-state workflow incorporating a PXRD instrument, two grinding stations, and a Chemspeed liquid dispensing platform to conduct PXRD experiments efficiently. Similarly, Cooper et al.27 developed a robotic system with eight workstations to identify photocatalyst mixtures for hydrogen production, ultimately achieving formulations six times more active than the initial ones. Further advancing the field, Zhu et al.52 implemented a system with fourteen workstations, featuring dedicated regions for auto-synthesis, auto-characterization, and auto-performance testing. This comprehensive system enables fully automated experimental processes, accelerating catalyst discovery and optimization.
Once the instruction sets are transmitted, large-scale models and advanced algorithms can efficiently steer the decision-making system, enabling the optimization and seamless coordination of experimental workflows for greater efficiency and adaptability. After automated experimental platforms generate vast amounts of data, effective data management becomes crucial to ensure both the integrity and usability of the data. Cloud computing infrastructures and big data techniques have emerged as viable solutions for storing and processing large datasets, offering the flexibility and scalability necessary to handle extensive volumes of information.66 Additionally, techniques such as dimensionality reduction and anomaly detection can be applied to reduce the size of datasets while emphasizing valuable data points, thereby facilitating more efficient data analysis.54
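Concretely, the dimensionality-reduction and anomaly-detection step can be as simple as the following sketch (our illustration on synthetic data): PCA compresses high-dimensional experiment records, and an isolation forest flags outlying runs before they enter the training set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic stand-in for 500 experiment records with 200 raw features each.
X = rng.normal(size=(500, 200))
X[:5] += 8.0  # inject a few anomalous runs

X_low = PCA(n_components=10).fit_transform(X)  # compress to 10 latent dimensions
labels = IsolationForest(contamination=0.02,
                         random_state=0).fit_predict(X_low)

clean = X_low[labels == 1]  # keep inliers for downstream model training
print(f"kept {len(clean)} of {len(X_low)} records; "
      f"flagged {np.sum(labels == -1)} anomalies")
```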
The graphical user interface (GUI) provides researchers with a user-friendly, interactive way to control experimental workflows and communicate with the decision system, as seen in ChemIDEs.29 This interface hides the complexity of the underlying processes, offering an intuitive way to visualize data, monitor ongoing experiments, and access real-time analytics.52,64,67 Additionally, the GUI enhances collaboration and reproducibility through features such as experiment logging, protocol sharing, and version control. This enables researchers to quickly assess the status of experiments and make informed decisions without needing to dive into technical details.
Recent advancements in automation technology and the rapid development of AI have significantly accelerated the generation of experimental data. These advances have also enabled the screening and optimization of reaction conditions, facilitating machine learning models that predict reaction yields. Mo et al. constructed an automated system for high-throughput thin-layer chromatography (TLC) analysis (Fig. 2b). Using a large amount of data collected under standardized conditions, they built a machine learning (ML) model that associates the structure of organic compounds with their polarity, as reflected by the retention factor (Rf). This model accurately predicts the polarity of organic compounds in various solvent combinations, providing effective guidance for selecting purification conditions while quickly generating and analyzing high-quality TLC data.70 Xu et al. used a self-built high-throughput automated platform to screen a series of metal catalysts and solvents, discovering that [Ir(COD)Cl]2 achieves the first selective cross-dimerization of sulfonamides with high yield and good stereoselectivity.71 Additionally, through a comprehensive exploration of the reaction space (600 reactions), they developed an ML model (XGB-MAF) that predicts reaction yields, demonstrating the utility and generalizability of this iridium-catalyzed cross-dimerization method. Fang et al. developed a fully automated system that integrates high-throughput catalyst synthesis, online spectral detection, and photocatalytic reaction condition screening. The system uses liquid-core waveguide (LCW) technology in a novel microfluidic photocatalytic microreactor, which completes ultrafast photocatalytic reactions in seconds and achieves ultra-large-scale screening of up to 10,000 reactions per day, providing solid data support for AI applications.72
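A minimal structure-to-polarity model in the spirit of the TLC work might look like the sketch below; this is our illustration rather than the published pipeline, with Morgan fingerprints from RDKit as features, a random forest as the regressor, and invented SMILES/Rf pairs.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles):
    # 2048-bit Morgan fingerprint as a simple structural descriptor.
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy training data: (SMILES, Rf) pairs with invented retention factors.
data = [("c1ccccc1", 0.82), ("CCO", 0.25), ("CC(=O)O", 0.10),
        ("c1ccccc1O", 0.45), ("CCCCCC", 0.95), ("CC(=O)OCC", 0.55)]
X = np.array([featurize(s) for s, _ in data])
y = np.array([rf for _, rf in data])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("predicted Rf for anisole:",
      model.predict([featurize("COc1ccccc1")])[0])
```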
Additionally, the ongoing integration of machine learning models with automated experimental platforms has enabled the controllable synthesis and inverse design of materials. Zhao et al.73,74 developed a robotic platform capable of controllably synthesizing colloidal nanocrystals with unique physicochemical properties (Fig. 2c). The platform automates synthesis, in situ characterization, and external validation, with the initial synthesis parameters determined through data mining of the existing literature. This makes it possible to precisely synthesize nanocrystals with the required morphologies. Furthermore, they achieved inverse design of colloidal nanocrystal morphology by discovering connections between morphology and structure-directing agents through ML models trained on an ever-expanding experimental database. Jiang et al.75 reported an AI-guided robotic chemist capable of independently completing the entire process of constructing, characterizing, and testing chiral films. From experimental absorption spectra and structural/process parameters, they constructed an ML model that accurately predicts chiral optical activity, along with an inverse-design ML model that generates chiral films with target chiral optical properties covering the entire visible spectrum (Fig. 2d). This expands the potential of using an AI-Chemist to discover and optimize new materials. However, for ML-model-driven automated platforms to achieve accurate prediction and inverse design of materials, large amounts of reliable training data are usually required, which to some extent limits the broader application of this approach.
First-principles computational simulations can obtain microscopic information that is difficult to acquire experimentally, such as adsorption energies and electronic structures. Since the macroscopic properties of materials often depend on these microscopic characteristics, combining machine learning models with first-principles calculations can provide pre-trained models and theoretical support for experiments, thereby guiding the experimental process and accelerating materials iteration.

Yin et al.77 established an approach that uses ML-accelerated theoretical calculations, enabling collaboration between experiment and theory for screening small-sized ordered alloy catalysts. By calculating the solubility and chemical ordering of a third metal element in a PtCo ordered alloy system, as well as the adsorption of related intermediates, they found that introducing Cu or Ni into the PtCo alloy enhances the thermodynamic driving force for the disorder-to-order transition, whereas introducing Mn or Fe inhibits it. Moreover, the synthesized PtCoNi and PtCoCu alloys exhibited excellent oxygen reduction reaction (ORR) performance. This makes it possible to quickly discover, from a vast design space, potential ordered alloys with a high thermodynamic driving force and good performance.

Zhang et al.78 used high-throughput DFT calculations to obtain the formation energy Ef and surface stress εsurf of high-entropy intermetallic compounds (HEICs) with different compositions. Based on the resulting 538 DFT data points, they used crystal graph convolutional neural networks to construct an ML model capable of predicting εsurf and Ef with high accuracy. Further calculation of several chemical properties of HEICs showed that the atomic-radius difference and the mixing enthalpy are key chemical characteristics influencing εsurf and Ef, respectively, and are expected to become new descriptors for developing HEICs with excellent ORR performance.

Wang et al.79 employed an interpretable ML model to conduct a comprehensive search of over 30 billion mathematical expressions, based on the adhesion energies of 178 metal-oxide interfaces obtained from experiments and 14 highly independent, important physical features obtained through symbolic regression and cross-validation. This led to a physical model that describes the metal-support interaction (MSI) and accurately predicts the adhesion energy and contact angle of metal-oxide interfaces (Fig. 3a). Furthermore, through extensive experiments involving 10 metals and 16 oxides, they formulated and validated principles for the strong metal–metal interactions that occur during encapsulation. These findings have greatly advanced the design and development of supported metal catalysts.

Li et al.80 obtained a representative set of 323 metal-support pairs based on the linear scaling relationship of energies during the sintering of metal nanoparticles (NPs). By simulating the sintering kinetics of these metal-support pairs, they discovered that the kinetics follow a Sabatier principle with respect to MSI: both excessively strong and excessively weak MSI can lead to sintering of the metal NPs. They also found that, for NPs with appropriate MSI, the sintering onset temperature of typical NPs (∼3 nm) is about half the bulk melting temperature of the metal, consistent with the long-reported empirical Tammann temperature. In addition, based on the revealed Sabatier principle and scaling relationships, they conducted high-throughput screening of support combinations with different energies, identifying supports that raise the sintering temperature. This has greatly advanced the design of ultra-stable supported metal NP catalysts.
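In the spirit of the symbolic-regression search used by Wang et al., the sketch below (ours, on synthetic data, with gplearn standing in for their expression search) recovers a simple closed-form law from two features.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in: two physical features and a target following a hidden law.
X = rng.uniform(0.5, 2.0, size=(200, 2))
y = X[:, 0] / X[:, 1] + 0.01 * rng.normal(size=200)  # mock "adhesion energy"

sr = SymbolicRegressor(population_size=2000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(X, y)
print(sr._program)  # best closed-form expression found, e.g. div(X0, X1)
```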
Fig. 3 Some ML models and autonomous platforms driven by a computational ‘brain’. (a and b) ML models driven by theoretical calculation. (c and d) All-round AI-chemistry laboratory.
In addition to first-principles calculations, material structures and properties obtained through spectroscopic characterization can also serve as prior knowledge in optimization procedures. Wang et al.81 proposed an ML approach that links surface–adsorbate interaction characteristics to spectral signals (Fig. 3b). Using the infrared and Raman signals of carbon monoxide and nitrogen monoxide adsorbed on metal surfaces as descriptors, important characteristics including the adsorption energy and degree of charge transfer were quantitatively determined with good accuracy and transferability. This significantly broadens the application range of traditional in situ spectroscopic techniques in high-throughput screening.

Li et al.82 proposed an ML model that uses infrared spectroscopy to monitor the evolution of adsorbate–surface interactions. Taking the C–C coupling process in catalytic reactions as an example, a convolutional neural network was used to identify and extract spectral features that depict the atomic structure and chemical interactions in the catalytic system. Key energy barriers and the corresponding structural information were thereby obtained, and the predicted promotion trend of CO–CO dimerization closely matched previous literature, demonstrating the ability to accurately track dynamic transformations of metal surfaces and highlighting the practicality and versatility of this ML model for following the evolution of complex structures.

Zhang et al.83 proposed a machine-learning descriptor, the Chemical Information Molecular Graph (CIMG), to represent chemical reactions. The CIMG constructs a structured graph by encoding nuclear magnetic resonance (NMR) chemical shifts as vertex features, bond dissociation energies as edge features, and solvent/catalyst information as global features. CIMG-based methods can effectively predict suitable catalysts/solvents and recommend full synthesis routes, representing a novel data-driven approach to automated retrosynthesis planning that does not rely entirely on historical synthesis data.

Cui et al.84 quantitatively predicted how various electric fields affect catalytic performance, using the vibrational spectral signals of carbon dioxide adsorbed on metal single-atom catalysts as descriptors. Taking metal-doped graphitic C3N4 (g-C3N4) catalysts as an example, the adsorption patterns and energies of CO2 molecules on 27 distinct metal single-atom catalysts were theoretically investigated at varied orientations and field intensities. To quantify the facilitative effect of the electric field on CO2 catalytic conversion, a spectral characteristic model was developed with ML techniques to associate infrared/Raman spectral descriptors with the adsorption energy and charge transfer. Meanwhile, inverse prediction of the electric field strength from spectra was achieved by using an attention mechanism to mine the link between spectra and adsorption patterns. This study introduces a novel quantitative method for controlling electrocatalytic reactions and monitoring them spectroscopically with machine learning.
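To make the CIMG idea concrete, the sketch below (our illustration; the shift and bond-dissociation values are invented) encodes a molecule as a graph with NMR shifts on vertices, bond dissociation energies on edges, and solvent/catalyst context as global attributes.

```python
from rdkit import Chem
import networkx as nx

# Hedged sketch of a CIMG-style graph (feature values are invented):
# vertices carry NMR chemical shifts, edges carry bond dissociation
# energies, and graph-level attributes store solvent/catalyst context.
def build_cimg(smiles, shifts, bdes, solvent, catalyst):
    mol = Chem.MolFromSmiles(smiles)
    g = nx.Graph(solvent=solvent, catalyst=catalyst)  # global features
    for atom in mol.GetAtoms():
        g.add_node(atom.GetIdx(), element=atom.GetSymbol(),
                   nmr_shift=shifts[atom.GetIdx()])  # vertex feature
    for bond in mol.GetBonds():
        i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        g.add_edge(i, j, bde=bdes[(i, j)])  # edge feature
    return g

# Toy usage with made-up shift/BDE values for methanol ("CO").
g = build_cimg("CO", shifts={0: 50.2, 1: 0.0}, bdes={(0, 1): 385.0},
               solvent="DMSO", catalyst="none")
print(g.nodes(data=True), g.graph)
```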
Fully automated systems powered by intelligent ‘brains’ have emerged from the ongoing integration of AI and theoretical computation. A pioneering effort in this field was undertaken by Zhu et al.,52 who built an all-round AI-chemistry laboratory. The architecture of the AI-Chemist consists of three modules: a machine-reading module to extract chemical knowledge from the literature, a mobile robot module to perform experiments, and a computational brain module to generate physics/theory-based predictive models. The system can therefore run a closed-loop iterative process of reading relevant literature, conducting theoretical calculations to form preliminary experimental plans, designing experiments, executing them automatically, analyzing the resulting data, training machine learning models, and making decisions to generate new plans (Fig. 3c). This greatly reduces the time human chemists spend on experiments, changing the way new materials are discovered and manufactured. The same team85 expanded on this work by demonstrating a robotic AI chemist for the intelligent optimization and automated synthesis of oxygen evolution reaction (OER) catalysts made from Martian meteorites. Martian ore pretreatment, synthesis and characterization of the catalytic materials, and iterative tuning of the catalyst formula were all carried out without human intervention. The system identified the optimal catalyst formula from more than three million candidate compositions using an ML model trained on first-principles calculations (nearly 30,000 theoretical data points) and experimental observations (243 experimental data points) (Fig. 3d). The improved catalyst delivered exceptional performance, with a low overpotential of 445.1 mV and stability for more than 550,000 s at a current density of 10 mA cm−2. This work demonstrates the promise of AI-driven systems for automated chemical synthesis and materials discovery, even in extraterrestrial settings.
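The closed-loop logic of such platforms can be caricatured in a few lines; in the runnable toy below (entirely illustrative), a hidden one-dimensional response stands in for the laboratory and a greedy-with-exploration rule stands in for the computational brain.

```python
import random

# Toy closed loop mimicking the plan -> execute -> analyze -> retrain cycle.
# Every component is a stand-in: the "experiment" is a hidden 1-D function.

def run_experiment(x):  # stand-in for the mobile-robot module
    return -(x - 0.6) ** 2 + random.gauss(0, 0.01)

def propose(history):   # stand-in for the decision/brain module:
    # greedy-with-exploration: mostly sample near the best point seen so far.
    if not history or random.random() < 0.3:
        return random.random()
    best_x, _ = max(history, key=lambda h: h[1])
    return min(1.0, max(0.0, best_x + random.gauss(0, 0.1)))

random.seed(0)
history = []  # (condition, measured outcome) pairs
for _ in range(40):  # iterative closed loop
    x = propose(history)
    history.append((x, run_experiment(x)))

best_x, best_y = max(history, key=lambda h: h[1])
print(f"best condition found: {best_x:.2f} (outcome {best_y:.3f})")
```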
Fig. 4 Some intelligent management systems and end-to-end intelligent autonomous platforms. (a) SynAsk. (b) MAOS. (c) LLM-RDF. (d) ChemAgents.
Moreover, a complete chemical experiment workflow typically includes three stages: synthesis, characterization, and performance testing. To create an end-to-end intelligent automation platform capable of handling experimental protocols generated by large-scale models, an intelligent management and decision system for scheduling instruments, analyzing feedback data, and optimizing synthesis plans is essential. Xiao et al.87 published the design, architecture, and hardware/software systems of a robotic AI-chemist platform that combines chemical synthesis, characterization, and performance testing. The robotic AI chemist was trained to perform photocatalysis experiments, demonstrating its ability to substitute for a human chemist in actual experimental operations. Similarly, Li et al.88 created a materials acceleration operating system (MAOS) with a dedicated language and compiler architecture. MAOS combines virtual reality (VR), collaborative robots, and a reinforcement learning (RL) scheme for autonomous materials synthesis, property investigation, and self-optimized quality assurance. After VR training, MAOS can operate independently, saving labor and time (Fig. 4b).
Ultimately, with the continuous development of intelligent large-scale models and decision-making systems, end-to-end intelligent platforms driven by large-scale models are gradually being realized. To illustrate the adaptability and effectiveness of LLM-based agents throughout the whole chemical synthesis process, Ruan et al.89 established a unified LLM-based reaction development framework (LLM-RDF) (Fig. 4c). They demonstrated how LLM agents can support end-to-end synthesis development by using aerobic alcohol oxidation to aldehydes, an emerging sustainable aldehyde synthesis protocol, as a model transformation. Using state-of-the-art LLM technology, this work presents a feasible route toward autonomous end-to-end chemical synthesis. Furthermore, Song et al.90 reported a robotic AI chemist powered by ChemAgents, a hierarchical multi-agent system built on an on-board Llama-3-70B LLM. With minimal human assistance, this system can carry out intricate, multi-step experiments. It operates through a task manager agent that communicates with human researchers and coordinates four specialized agents: a literature reader, which accesses a comprehensive literature database; an experiment designer, which draws on a vast protocol library; a computation performer, which uses a flexible model library; and a robot operator, which controls a cutting-edge automated laboratory (Fig. 4d). The combination of these agents and resources allows the system to plan, execute, and optimize experiments on its own, a major step toward fully automated chemical discovery.
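A skeletal version of such a hierarchical dispatch might look like the toy below; the agent names follow the ChemAgents description, but the dispatch logic and outputs are invented placeholders for LLM- and hardware-backed components.

```python
# Toy sketch of a ChemAgents-style hierarchy (names follow the text; the
# handlers are placeholders for LLM-driven agents and real instruments).
class Agent:
    def __init__(self, name, handler):
        self.name, self.handler = name, handler

    def run(self, task):
        return f"[{self.name}] {self.handler(task)}"

agents = {
    "literature_reader": Agent("literature_reader",
                               lambda t: f"found 3 protocols for {t}"),
    "experiment_designer": Agent("experiment_designer",
                                 lambda t: f"drafted plan for {t}"),
    "computation_performer": Agent("computation_performer",
                                   lambda t: f"ran DFT screen for {t}"),
    "robot_operator": Agent("robot_operator",
                            lambda t: f"executed synthesis of {t}"),
}

def task_manager(request):
    # Fixed dispatch order standing in for LLM-driven planning.
    pipeline = ["literature_reader", "experiment_designer",
                "computation_performer", "robot_operator"]
    return [agents[step].run(request) for step in pipeline]

for line in task_manager("OER catalyst from Fe/Ni oxides"):
    print(line)
```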
Once scientists submit requests for material innovation, advanced scientific large models intelligently recommend research strategies and preparation solutions, including candidate materials and synthesis schemes. Human-machine collaborative systems should be developed to optimize the analysis of scientific problems through cognitive intelligence, enabling scientists to further refine and optimize experimental plans. Guided by these plans, robotic experimental cloud facilities conduct high-throughput experiments, while high-throughput computing platforms perform theoretical simulations. Notably, to standardize robotic experimental systems, it is essential to establish and promote standardized protocols for instruction sets, interface functions,91 experimental templates, and intelligent equipment.92 This process drives robotic experimental systems and computer simulations, generating high-quality, multi-domain, multi-modal, standardized data that are fed back into AI models for optimization and refinement. Driven by multimodal large models, the system iteratively optimizes processes, integrating aligned theoretical and experimental data into comprehensive scientific big data. Based on these data, knowledge- and logic-enhanced models are trained, combining scientific expertise with machine learning techniques to predict global optima and optimize material creation and problem-solving.
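For illustration, a standardized instruction-set entry for a single robotic action could be serialized as below; every field name here is hypothetical rather than a published standard.

```python
import json

# Hypothetical standardized instruction-set entry for one synthesis step;
# field names are illustrative, not an adopted community standard.
step = {
    "step_id": "synth-001",
    "action": "dispense_liquid",
    "instrument": "liquid_handler_A",
    "parameters": {"reagent": "FeCl3 (0.1 M)",
                   "volume_uL": 250,
                   "target_well": "B4"},
    "on_error": "pause_and_notify",
}
print(json.dumps(step, indent=2))  # serialized form exchanged between labs
```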
Additionally, communication and data sharing between laboratories require not only technical compatibility across various software and hardware platforms but also adherence to data privacy, security, and regulatory policies. Platforms such as LabTwin, DigCat (the Digital Catalysis Platform) and the EU's “AI-on-Demand” initiative serve as pioneering examples of secure data sharing in the cloud. Therefore, the development of an intelligent scientist system must incorporate secure data-sharing technologies, such as blockchain or federated learning, to enable lawful and protected resource exchange, thereby reducing barriers to interdisciplinary and cross-domain collaboration. At the same time, specialized domain models targeting specific scientific challenges can be developed from model-training results and shared securely and commercially via cloud platforms, fostering the growth of an intelligent scientist system.
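As a minimal illustration of the federated-learning idea, the sketch below has three “laboratories” fit local models on private data and share only their weights for aggregation; the data and model are synthetic.

```python
import numpy as np

# Minimal federated-averaging sketch (our illustration): three "laboratories"
# fit local linear models on private data and share only model weights.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_fit(n):
    X = rng.normal(size=(n, 2))                # private local data
    y = X @ true_w + 0.1 * rng.normal(size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # local training step
    return w, n

updates = [local_fit(n) for n in (50, 120, 80)]  # three labs, unequal data
total = sum(n for _, n in updates)
global_w = sum(w * (n / total) for w, n in updates)  # weighted weight average

print("aggregated model:", np.round(global_w, 3))  # data never left the labs
```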
The concept of an intelligent scientist system envisions the establishment of centralized platforms that consolidate and analyze extensive datasets, develop advanced intelligent models, and refine scientific methodologies and technologies. These platforms, serving as the intellectual nucleus of the system, will orchestrate a network of distributed innovation facilities, supporting scientists in achieving specific, targeted scientific breakthroughs. This integrated framework will fundamentally transform the practice of scientific research by combining the centralized, resource-intensive development of scientific intelligence with decentralized, localized experimental operations that drive innovation. Such a structure will lower the barriers to interdisciplinary and cross-domain collaboration, enabling researchers and scientists across both academia and industry to engage in highly specialized experimentation and personalized scientific inquiry.
Term | Definition
--- | ---
Sabatier principle | States that the best catalytic activity is obtained when the interaction between catalyst and reactants is neither too strong nor too weak
Symbolic regression | A regression method that searches the space of symbolic mathematical expressions for the best-fitting model
Blockchain | A decentralized data storage and transmission technology
Federated learning | A decentralized machine learning technique that allows multiple parties to collaboratively train a shared model without sharing their local data
Footnote |
† These authors contributed equally: J. L., C. D., and D. L. |
This journal is © The Royal Society of Chemistry 2025 |