Open Access Article
This Open Access Article is licensed under a
Creative Commons Attribution 3.0 Unported Licence

Technological trends in medical robotic sensing with soft electronic skin

Yiru Zhou ab, Yao Tang ab and You Yu *ab
aSchool of Biomedical Engineering, ShanghaiTech University, Shanghai, China. E-mail: yuyou@shanghaitech.edu.cn
bState Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China

Received 25th October 2023 , Accepted 20th December 2023

First published on 22nd December 2023


Abstract

Medical robotic sensing is a developing field that combines mechanical technology with medical engineering. Medical robots can play important roles in healthcare monitoring, prosthetic assistance, early diagnosis, and surgical operations. However, the increasing demand for medical robots comes with the need for greater sensing abilities, accurate control, and low-cost manufacture. Accordingly, this review highlights the recent progress in sensing technologies for medical robotic applications, including physical sensing, chemical sensing, and biological sensing. Subsequently, printing technology and human–machine interfaces are described. Our aim is to highlight the potential of medical sensing robotics and inspire further innovation.



Yiru Zhou

Zhou is currently pursuing her MSc in Information Science and Technology at SHTU. She received her B.S. degree in Information Science and Technology from SHNU in 2022. Her research interests include wearable bioelectronics and soft electronics.


Yao Tang

Tang is currently an undergraduate student in the School of Biomedical Engineering at ShanghaiTech University. His research interests include implantable bioelectronics and human–machine interface.


You Yu

You Yu is an Assistant Professor in the School of Biomedical Engineering at ShanghaiTech University. He received his PhD from the Changchun Institute of Applied Chemistry, Chinese Academy of Sciences. After that, he worked as a Postdoctoral Research Associate in the Department of Medical Engineering at the California Institute of Technology. His research is focused on the development and application of wearable bioelectronics, flexible biofuel cells, and medical robotics.


Introduction

Receiving external information is essential for robotic intelligence. Through integrated sensors, robots can acquire information about the external environment, analyze and process the data, and then produce the corresponding feedback, following a sense-analyze-respond model. In the medical field, doctors can insert a laparoscope into the human body through tiny incisions1 and carry out surgery inside the patient's body with the help of surgical robots, where smaller incisions mean less bleeding and less physical damage, lower surgery cost,2,3 and faster recovery.1 Furthermore, with integrated sensors, the surgeon can observe the situation in vivo and identify abnormal changes during the surgical procedure.2–4 In addition, assistive robots with sensing functions can provide individualized adjustment and feedback that traditional prostheses cannot. Over the last few decades, robotics has further spearheaded the development of research in various medical fields.5,6 However, the majority of robots are still limited to simple sensors inside a joint for force perception only. Alternatively, the incorporation of electronic skins (e-skins) that are informative about not only the robots themselves but also their surroundings opens up opportunities for the next generation of smart robots.

However, integrating sensors on e-skin poses several challenges, given that the sensors must work properly while stretching, bending, and deforming along with the robot's operation. Over the last few decades, soft robotics has emerged, which requires sensors with stretchability and softness.7,8

In contrast with sensors on a hard platform, sensors on soft e-skin suffer from nonlinearities and non-repeatable results when made by traditional fabrication processes. To address these issues, specific fabrication methods have been introduced, including 3D printing, inkjet printing, laser ablation, and textile weaving.

On the other hand, e-skin with multimodal sensors is crucial to enhance a robot's awareness of its circumstances, including pressure,9–11 temperature,12 pH,13 warfare agents, and biohazards. With the help of e-skin, robots can perform more complex tasks in various fields, such as medical surgery, prosthetic equipment, and even pandemic research.14–16 When COVID-19 spread globally, detecting the virus without exposing medical staff became an urgent topic. In this case, the use of a medical robot in place of medical staff not only improves efficiency and reduces work pressure but also greatly reduces the risk of further spread. Similarly, there are often potential threats in agriculture and industrial production,17,18 such as residues of organophosphorus fertilizers and microorganisms in the environment.19,20

This review aims to summarize recent advances in integrating a variety of sensors that can be mounted on robots and soft devices. These advances address critical issues in robotic sensing and human–machine interaction (HMI) for smart robots. In the following sections, we describe in detail the sensing mechanisms, materials, and fabrication technologies for robotic e-skin. First, we introduce the physical sensors on robotic e-skin, which are the most common and diverse, and for which efforts are being devoted to improving sensitivity and accuracy by changing the sensor shape and structure. Subsequently, we review chemical and biological sensors for e-skin, describing the specific detection of hazardous chemicals and biomolecules in the environment. Finally, we present HMI through e-skin with a variety of algorithms, such as machine learning and deep learning, and advanced fabrication technologies for soft e-skin with multimodal sensors (Scheme 1).


Scheme 1 Components and creation of medical robotic sensing.

Physical sensors on robotic e-skin

The sense of touch is a very important part of human perception of the outside world. In fact, touch includes many aspects, such as pressure, surface roughness, temperature, and humidity. Naturally, such physical information is equally important when a robot perceives the outside world.

Pressure sensors are among the most common physical sensors, and various signal-conversion schemes can be used to acquire pressure information. Resistive pressure sensors are widely used; they infer the applied pressure from resistance changes, given that the applied force enlarges the contact area between conductive elements, thereby changing the resistance. This type of sensor has a very fast signal response and stable performance. However, flexible resistive pressure sensors must retain their performance while being stretched and bent. Thus, to enhance their performance, various materials have been applied in their production, such as graphene,21 molecular perovskite (TMFM),22 MXenes,23–25 and polyvinyl alcohol (PVA) fibers.26 For instance, a resistive pressure sensor was fabricated using a polyimide (PI) substrate with spiky-structured polyurethane (PU) combined with spray-deposited MXene nanosheets.23 The composite materials and specially designed structure (Fig. 1B) exhibited high sensitivity (Fig. 1C) and self-healing capability. Sensors are prone to damage during continuous operation; in this case, the sensitive layer could autonomously heal within 18 h after being cut, significantly prolonging its lifespan. Self-healing is thus a significant function for the longevity of sensors.27 Owing to its high sensitivity, a robotic arm with this resistive pressure sensor could grip delicate and fragile objects such as balloons and tofu (Fig. 1A). In addition to resistive pressure detection using the irregular contact surfaces formed on the material, pressure sensors with regular, specially designed structures have been developed to improve sensitivity.28 Moreover, the same resistance change can serve a piezoresistive sensor and a temperature sensor simultaneously.
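To make the piezoresistive readout concrete, the sketch below converts a resistance reading to pressure via a linear calibration. All numbers (baseline resistance, gauge factor, and the linear model itself) are illustrative assumptions, not values from the cited sensor.

```python
# Hypothetical sketch: converting a piezoresistive sensor's resistance
# reading to pressure via a linear calibration (all values illustrative).

R0 = 1200.0   # unloaded resistance, ohms (assumed)
GAUGE = -0.8  # fractional resistance change per kPa (assumed negative:
              # contact area grows under load, so resistance drops)

def pressure_from_resistance(r_measured: float) -> float:
    """Invert the linear model (R - R0)/R0 = GAUGE * P to estimate P in kPa."""
    return (r_measured - R0) / (R0 * GAUGE)

# A reading of 720 ohms corresponds to (720 - 1200)/(1200 * -0.8) = 0.5 kPa
print(round(pressure_from_resistance(720.0), 3))
```

In practice the response of such sensors is nonlinear over wide ranges, so a lookup table or polynomial fit to the measured calibration curve (e.g. Fig. 1C) would replace the single gauge constant.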


Fig. 1 Schematic and sensing property of piezoresistive sensors and capacitive sensors. (A) Fully sprayed MXene-based pressure sensor on the palm is used to catch tofu and a balloon. (B) Schematic diagram of the working mechanism of the fully sprayed MXene-based pressure sensor. (C) Sensing properties of the fully sprayed MXene-based pressure sensor. (D) Photograph of the fabricated sensing device. (E) Schematic diagram describing the operational principle of the INM-based sensor before and after pressure application. The insets show the magnified illustration of the internal processes occurring in the sensing membrane. Initially, ([Li+][TFSI]) ion pairs are confined on the MXene surface by forming H-bonds with the MXene functional groups. Under pressure, these ion pairs are detached from the MXene surface due to ion pumping to produce a thick EDL. (F) ΔC/C0 response plotted as a function of time during ultralow to low-pressure (25, 50, 100, 200, and 400 Pa) loading/unloading cycles. (G) Photograph of a fabricated e-skin and close-up view of the hills and electrodes (inset). (H) Schematic diagram of the working mechanism of a capacitive pressure sensor. The 3D hill structure allows for different deflection capabilities on the top and around the hills, thus differentiating capacitive responses to a pressure event from different directions. Black lines are side views of the electrodes. (I) Schematic diagram of the advantage of the concept of capacitive pressure sensors.

Another common way to sense pressure is through capacitance changes.29–35 A typical capacitive pressure sensor has a three-layer sandwich structure: conductive layers on the top and bottom and an insulating elastomer in the middle. An applied force changes the capacitance, either by deforming the electrodes or by changing the distance between the top and bottom layers. Many materials have been optimized to realize higher sensitivity and a larger detection range. For instance, a highly responsive capacitive pressure sensor used a hybrid ionic nanofibrous membrane as the sensing material, positioned between micro-structured PDMS electrodes coated with gold (Au/M-PDMS).29 This sensor achieved both a wide pressure range and high measurement accuracy (Fig. 1D and E), and its accuracy and durability were verified (Fig. 1F). By assembling multiple pressure sensors into an array, surface pressure changes could be captured in real time and the pressure distribution estimated at both the edge and center of an object. Moreover, biological skin contains a special structure called a spinous process,30,31,36,37 and a new type of capacitive sensor incorporated this spine structure.31 Because forces applied from different directions deform the 3D hill structure differently (Fig. 1H), this sensor showed higher sensitivity as well as a better ability to distinguish among normal, shear, and tilt forces (Fig. 1I). Mounted on a robotic arm, this sensor could be used to interact with, grasp, and place light and easily deformable objects such as ping pong balls and raspberries (Fig. 1G).
Using graphene, another high-performance capacitive pressure sensor was obtained with high sensitivity (3.19 kPa−1), fast response (30 ms), an ultralow detection limit (1 mg), tunable sensitivity, high flexibility, and high stability.38 In addition, higher-performance capacitive pressure sensors can be obtained by changing the dielectric material between the electrodes.39
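The capacitive sensitivity quoted above is defined as S = (ΔC/C0)/ΔP. A minimal sketch of the inverse readout follows; the 3.19 kPa−1 figure is taken from ref. 38 as cited in the text, while the baseline capacitance and the assumption of a linear low-pressure response are illustrative.

```python
# Illustrative capacitive readout: sensitivity S = (dC/C0)/dP.
# S is the 3.19 kPa^-1 value cited for the graphene sensor (ref. 38);
# C0 and the linear-response assumption are hypothetical.

S = 3.19   # sensitivity, kPa^-1
C0 = 10.0  # baseline capacitance, pF (assumed)

def pressure_from_capacitance(c: float) -> float:
    """Estimate pressure (kPa) from measured capacitance, assuming a
    linear response dC/C0 = S * P in the low-pressure regime."""
    return (c - C0) / (C0 * S)

# 11.595 pF -> (1.595/10)/3.19 = 0.05 kPa = 50 Pa
print(round(pressure_from_capacitance(11.595), 4))
```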

In addition to pressure sensing, the detection of friction is an attractive pathway for robotic sensing, and numerous flexible triboelectric nanogenerators (TENGs) have emerged.40–42 Novel e-skin with a TENG can distinguish among objects with indistinguishably smooth surfaces more sensitively than human skin. Devices combining a piezoresistive pressure sensor (PPS) with a TENG have also been prepared.43 The piezoresistive part (Fig. 2A) employed a stabilized PEDOT:PSS thin-film resistor that created additional conductive paths, decomposing the total resistance into four components (Fig. 2B). Owing to its ingenious structure and circuit design, it could sensitively capture pressure changes, and its sensitivity could be tuned by changing a resistance in the circuit to suit different usage scenarios. The TENG part was composed of a doped, electrically conductive PEDOT:PSS layer with a high specific surface area as the charged layer, with ordinary PDMS serving as the friction layer. It could precisely recognize different bending angles (Fig. 2C). This composite sensor was verified to recognize materials at different positions simultaneously in a 3 × 4 array.


Fig. 2 Schematic and sensing property of a TENG, LC inductance sensor and optical waveguide sensor. (A) Schematic diagram of the proposed PTES for material perception. (B) Schematic diagram of the PTES composed of a PPS at the bottom and TENG at the top. (C) Output voltage as a function of applied pressure for three prepared TENGs based on a PEDOT:PSS-EM layer with different PEDOT:PSS concentrations. (D) Photograph of the skin-inspired tactile sensor. (E) Schematic illustration of the sensing mechanism used for tactile sensing. (F) Digital frequency as a function of applied pressure in the range of 0 to ∼1 kPa. (G) Photograph of the hand with waveguides shaking a human hand. (H) Schematic of a soft innervated finger in both unpowered (left) and powered (right) states and its cross section (bottom right corner). (I) Force-curvature curves for different objects detected from the bottom and top waveguides of the index. (J) Photograph of the fabricated STV. Scale bar = 3 cm. (K) Schematic diagrams of a projected STV and finite element analysis (FEA) cross-sectional results when tensile strain is not applied ε = 0 (top) and fully applied ε = εmax (bottom). (L) Chamber pressure Pch plotted against normalized strain ε/εmax for different supplied pressures Ps (n = 2, p = 10 mm, L0 = 80 mm).

Based on electromagnetic sensing technology, a pressure sensor with high sensitivity and a low detection limit was proposed. It consisted of a freestanding polymer magnet membrane and a magnetic sensor integrated in an air-gap structure (Fig. 2D).44 When subjected to an external force, the freestanding membrane deformed, thereby changing the magnetic field. The inductive magnetic sensor detected this change and formed an LC oscillating circuit with a capacitor (Fig. 2E), so the magnitude of the applied force could be calculated from the resulting frequency shift (Fig. 2F). The device utilized a giant magnetoimpedance (GMI) material, a Co-based amorphous wire (CoAW), whose advantages include a high sensitivity of 500% Oe−1. The sensor was flexible in size and shape, making it easy to integrate into prosthetic limbs or smart-robot surfaces of different shapes and sizes. It exhibited a sensitivity of 4.4 kPa−1 (equal to 120 N−1) and a detection limit of 0.3 Pa (equal to 10 μN) in the range of 0–1 kPa. Experiments showed that this sensor could accurately sense the mass of a drop of water and even the movement of a small worm with a mass of 0.8 mg. Moreover, by reflecting pressure changes electromagnetically, it directly transformed force stimuli into digital frequency signals, with the frequency increasing as the external force increased. This mirrors how living organisms encode external forces, indicating the sensor's potential for smart prostheses.
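The LC readout principle described above can be sketched with the standard resonance formula f = 1/(2π√(LC)): membrane deformation changes the inductance L, which shifts the oscillation frequency. The component values below are illustrative assumptions, not those of the cited device.

```python
import math

# Minimal sketch of the LC readout: deformation of the magnetic membrane
# changes the inductance L, shifting the oscillation frequency
# f = 1/(2*pi*sqrt(L*C)). Component values are illustrative only.

C_FIXED = 100e-12  # oscillator capacitance, farads (assumed)

def resonant_frequency(L: float) -> float:
    """LC oscillation frequency in Hz for inductance L (henries)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C_FIXED))

f_rest = resonant_frequency(10e-6)     # membrane at rest (assumed 10 uH)
f_loaded = resonant_frequency(9.5e-6)  # inductance reduced under load
# A smaller inductance raises the frequency, matching the text's
# observation that frequency increases with applied force.
print(f_loaded > f_rest)
```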

When light is transmitted through an elastic optical waveguide, some energy is radiated into the environment, and the more the waveguide is deformed, the more light energy is lost. Losses during conduction are thus an inherent drawback of optical waveguides, but they can be exploited to detect pressure changes through the loss of light.45,46 Specifically, a sensing waveguide is a columnar elastomer with a material of higher refractive index on the inside than on the outside, with a light-emitting diode (LED) and a photodetector (photodiode) located at the two ends. One type of waveguide has a transparent polyurethane rubber core wrapped in a highly absorbent silicone composite (ELASTOSIL 4601 A/B) (Fig. 2H).45 This sensor acquired pressure changes with high sensitivity and a high signal-to-noise ratio, and could be scanned across a surface in a given direction at a suitable pressure (ΔP = 100 kPa) to obtain information about the surface characteristics of an object (Fig. 2I). Thus, a robotic arm integrated with this optical waveguide pressure sensor (Fig. 2G) could reconstruct the shape of a mouse, and could sense and determine the ripeness of a tomato by its softness without damaging it. The pressure signal obtained through the optical waveguide conveyed not only the pressure magnitude but also information about the direction of the force, and the isolation between different sensors was excellent. When three waveguides were placed in one finger (top, middle, and bottom), their functions were not the same: the top and middle waveguides responded immediately to the inflation of the finger, uniquely providing proprioception of the robot itself. In other words, optical waveguides are anisotropic in force sensing.
By cross-stacking the waveguides, normal and shear force data can be obtained more accurately.47 A robotic arm integrated with this type of sensor can reliably perform the task of grasping a key and unlocking a door.
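The deformation-dependent loss described above can be sketched with a simple exponential attenuation model: bending adds loss, so the photodiode reading can be inverted to estimate curvature. The model form and every coefficient below are assumptions for illustration, not the calibration used in the cited work.

```python
import math

# Hedged sketch of the waveguide-loss principle: bending increases optical
# loss, so the detected power can be mapped back to curvature. The
# exponential model and its coefficients are illustrative assumptions.

P_IN = 1.0    # LED power coupled into the waveguide (normalized)
ALPHA = 0.03  # baseline propagation loss per cm (assumed)
BETA = 0.5    # extra loss per unit curvature (assumed)
LENGTH = 8.0  # waveguide length, cm (assumed)

def output_power(curvature: float) -> float:
    """Detected power after propagation with curvature-dependent loss."""
    return P_IN * math.exp(-(ALPHA + BETA * curvature) * LENGTH)

def curvature_from_power(p_out: float) -> float:
    """Invert the loss model to recover curvature from a power reading."""
    return (-math.log(p_out / P_IN) / LENGTH - ALPHA) / BETA

# Round-trip check: a curvature of 0.2 cm^-1 is recovered from its reading
print(round(curvature_from_power(output_power(0.2)), 6))
```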

In the case of soft robotics, self-sensing is a complex and important function given that the state of the robot is unknown when it is controlled by a valve. Accordingly, many methods have been developed, such as the use of a waveguide45 or piezoresistance48 to determine the curvature of the chamber. One of the latest is a soft-material-based self-sensing tensile valve (STV) capable of self-sensing and proportional control of soft pneumatic actuators from a single, constant supply pressure.49 By changing the inlet and outlet channels, the soft robot can be controlled freely, and by measuring the fluid pressure in the inlet and outlet channels simultaneously, the state of the chamber can be calculated (Fig. 2K). Moreover, the chamber pressure curve can be further programmed by changing the STV setup conditions after fabrication. For instance, setting the constant supply pressure, Ps, to different levels adjusts the maximum chamber pressure, Pch, of the STV accordingly at ε = εmax (Fig. 2L). Furthermore, swapping the inlet and outlet of the STV reverses the airflow direction through the channels.

Human skin has four different mechanoreceptors that sense static and dynamic mechanical stimuli. The slow-adapting mechanoreceptors acutely capture low-frequency, static stimuli, whereas the fast-adapting ones capture dynamic pressure or vibration. The former role is mainly realized by pressure sensors, while the latter is realized by friction sensors, which help in recognizing the surface texture of an object and detecting sliding (Fig. 3A). An easily fabricated friction sensor that can be bonded to cotton textiles was prepared using Teflon.46 Teflon has a stronger electron affinity and excellent mechanical strength compared with other common triboelectric materials such as PDMS and PET. This sensor was essentially a single-electrode TENG, which is well suited for tactile sensing (Fig. 3B). When the sensor comes into contact with an object, the Teflon, with its strong electron affinity, becomes negatively charged, whereas the surface of the object becomes positively charged. When the object separates from the sensor, the potential difference between the two friction layers gradually increases, and transient electrons flow from the Cu electrode to ground, generating an output voltage across the external load. When the sensor and the object come into contact again, electrons flow from ground back to the Cu electrode, generating the reverse signal. This sensor can accurately classify materials after artificial neural network (ANN) training (Fig. 3C), with higher accuracy than classification by conventional machine learning methods.43
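The classification step can be illustrated with a much simpler stand-in for the ANN of the cited work: a nearest-centroid classifier on peak-to-peak voltage features, in the spirit of the material comparison in Fig. 3C. The material names and voltage values below are illustrative only.

```python
# Hedged stand-in for the ANN classifier: nearest-centroid classification
# on triboelectric peak-to-peak voltage (Vpp) features. All training
# values are invented for illustration.

TRAINING = {                    # material -> example Vpp readings (V)
    "Ecoflex": [3.9, 4.1, 4.0],
    "PE":      [1.1, 0.9, 1.0],
    "cotton":  [2.0, 2.2, 1.8],
}

# One centroid (mean Vpp) per material
CENTROIDS = {m: sum(v) / len(v) for m, v in TRAINING.items()}

def classify(vpp: float) -> str:
    """Assign a Vpp reading to the material with the nearest centroid."""
    return min(CENTROIDS, key=lambda m: abs(CENTROIDS[m] - vpp))

print(classify(4.05))  # closest to the Ecoflex centroid
```

A real ANN would learn from the full voltage waveform rather than a single scalar feature, which is what gives it the accuracy advantage noted in the text.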


Fig. 3 Schematic and sensing property of friction sensors, synaptic transistors and sensors with compound function. (A) Schematic of the skin-inspired all-textile tactile sensors capable of multifunctional tactile sensing. The basic structure of human skin versus all-textile tactile sensors (bottom), where human skin has slow-adapting (SA) mechanoreceptors [Merkel discs (MD) and Ruffini corpuscles (RE)] for static stimuli, fast-adapting (FA) mechanoreceptors [Meissner corpuscles (MC) and Pacinian corpuscles (PC)] for dynamic stimuli. (B) Detailed structure of the textile tactile sensors. (C) Comparison of the peak-to-peak voltage of the triboelectric sensors in a relative contact-separation motion for different materials. Insets are the voltage signals measured by the triboelectric sensor in a contact-separation mode using Ecoflex (top) and PE (bottom). (D) Schematic diagram of a robotic palm fully covered by a synaptic transistor. (E) Schematic diagram of free-standing, flexible film with devices fabricated on top. (F) Change in the firing rate with respect to the applied force. (G) Photograph of the fabricated device. Scale bar, 5 mm. (H) Illustration of the sensor comprised of two sinuous Pt ribbons covered by a PDMS membrane, which are fabricated on a flexible PI substrate. (I) Pressure outputs from 0 to 50 kPa under different temperatures from 25 °C to 50 °C. The inset shows the relative errors. Loading run, open symbols and unloading run, filled symbols. (J) Photograph of the fabricated device. (K) Detailed structure of the sensor. (L) Simultaneous and independent detection of environment temperature (top) and contact pressure (bottom).

Synaptic transistors are another route to mimicking electronic skin. This type of sensor mimics the synapses found in living organisms, a major step beyond simply recording pressure through transistors alone.50 When a current pulse is applied to a synaptic transistor, charge transfer occurs within it, and the pulse is passed on in steps.51 Recently, large-area, array-distributed synaptic transistor sensors were achieved by printing ZnO NWs into highly uniform (synaptic) transistors (Fig. 3D and E).52 A major problem with applying conventional transistors in flexible smart robots is that bending significantly changes their performance. In testing, however, this ZnO NW-based transistor sensor showed no significant performance change during device bending, demonstrating its stability under mechanical deformation (Fig. 3F). Because this sensor directly emulates the synapses in biological skin, the data it produces can be fed directly into neural-network simulations.

To further improve sensor performance and reduce crosstalk when multiple sensors are stacked in layers, multimodal designs that combine several different sensors have been developed, in addition to physical sensors designed separately and independently. However, the integration of multimodal sensors generally requires complex fabrication, given that different sensing mechanisms are not perfectly compatible. One sensor was cleverly designed to combine pressure and temperature sensing by exploiting the thermoresistive and thermoelectric effects.53 This design enabled the simultaneous, independent detection of pressure and temperature (Fig. 3G); such bimodal sensing plays an important role in artificial electronics and human health monitoring. The sensor combined two temperature-sensitive resistors with a porous elastomer, employing a constant-temperature-difference (CTD) feedback circuit to collect temperature and pressure data independently (Fig. 3H). Both temperature-sensitive resistors were made of platinum (Pt): a smaller central resistor surrounded by a larger peripheral one. The peripheral resistor had a kilo-ohm resistance, so its own Joule heating was negligible and it could be used to detect the ambient temperature. The central resistor, by contrast, had a resistance of only about 100 ohms; electrical heating raised it above the ambient temperature, so that heat was conducted away through the porous PDMS elastomer.
Under external pressure, the porous elastomer compresses and its thermal conductivity increases, which lowers the temperature of the central element; the pressure can thus be converted into a voltage change in the CTD circuit. The size of a single sensing unit can be minimized by reducing the size of the central sensor and the heating temperature, thereby achieving higher spatial resolution (Fig. 3I). Consequently, this design can sense not only the temperature of water but also wind stimuli.
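The ambient-temperature channel of this design rests on the near-linear resistance-temperature relation of platinum, R(T) = R0(1 + αT). A minimal sketch follows; R0 is assumed (the text only says "kilo-ohm"), while α = 0.00385 per °C is the standard temperature coefficient for industrial Pt resistors.

```python
# Sketch of the peripheral Pt resistor's temperature readout:
# R(T) = R0 * (1 + ALPHA_PT * T). R0 is assumed; ALPHA_PT is the
# standard platinum temperature coefficient (IEC 60751).

R0 = 1000.0         # resistance at 0 degC, ohms (assumed kilo-ohm range)
ALPHA_PT = 0.00385  # temperature coefficient of Pt, per degC

def temperature_from_resistance(r: float) -> float:
    """Ambient temperature (degC) inferred from the Pt resistance."""
    return (r / R0 - 1.0) / ALPHA_PT

# 1096.25 ohms -> (1.09625 - 1)/0.00385 = 25 degC
print(round(temperature_from_resistance(1096.25), 2))
```

The pressure channel works differently: the CTD circuit reports the heater power needed to hold the central resistor at a fixed temperature offset, and that power tracks the pressure-dependent thermal conductivity of the porous PDMS.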

Another sensor integrated four different detection channels, consisting of two sensing layers sandwiched in PDMS doped with porous silver nanoparticles (Fig. 3J).54 Each sensing layer included two sensing elements made of concentric rings of chromium/platinum (Cr/Pt). The central element was electrically heated to a higher temperature to act as a hot film, while the peripheral annular element acted as a cold film. Pressure detection was again realized by sensing changes in the thermal conductivity of the PDMS: the upper and lower hot films were held at different temperatures, and under pressure the thermal conductivity of the PDMS changed, lowering the temperature of the bottom hot film more strongly. At the same time, when an object touched the upper sensing layer, the upper hot film could detect the object's thermal conductivity, and the double-layer sensing structure made this measurement more accurate. The upper and lower cold films detected the object temperature and the ambient temperature, respectively (Fig. 3K). Experiments demonstrated that this sensor detects pressure and temperature with high sensitivity (Fig. 3L), and a robot arm integrated with it could discriminate among objects of different shapes, sizes, and materials while gripping them.

Chemical sensors on robotic e-skin

In daily life, people are potentially exposed to hazardous chemicals in many settings, such as industrial production, polluted environments, and agriculture. Moreover, many hazardous-chemical leakage incidents have occurred worldwide in recent years. The development of advanced chemical-detection robots is therefore urgent: such robots can replace human beings in dangerous circumstances, avoiding exposure to hazards. By rationally deploying chemical sensors on robots, so that the detection of chemical concentrations is not limited by spatial location, the usable range of the sensors is effectively increased. Electrochemical chemical sensors convert a chemical signal into an electrical signal detected at electrodes, to which the electrons from redox reactions transfer. Electroanalytical techniques can be miniaturized without any loss in sensitivity, and thus play an important role in robotic sensing. For example, nitroaromatic explosives (NAEs) are hazardous chemicals that threaten human health and safety, requiring sensors that can quickly recognize trace concentrations of NAEs on site. In the agricultural sector, the use of toxic compounds must be strictly regulated, such as organophosphates (OPs), the main components of pesticides. Chemical nerve agents also need to be regulated because they can cause neurological diseases and infertility and even reduce life expectancy. In medicine, chemical sensors are equally important for studying key metabolites in the human body.

Accordingly, for the detection of chemical hazards, an artificial intelligence (AI)-driven human–robot interaction multimodal sensing robot system (M-Bot) was proposed.55 A standard chemical explosive, represented by TNT, or an OP nerve-agent simulant (paraoxon-methyl) present on the surface of an object was contacted and detected by the e-skin on the robot (Fig. 4A). TNT detection was realized by electrodes modified with Pt nanoparticles/graphene (Fig. 4B), which have excellent electrocatalytic properties compared with conventional carbon and bare graphene electrodes. The detection principle is that the conversion of p-NO2 to p-NH2 is catalyzed and monitored by negative differential pulse voltammetry (nDPV). The sensitivity of the sensor reached 0.95 μA cm−2 ppm−1, with a detection limit of 10.0 ppm. When TNT was detected using this electrode integrated into a robotic arm, accurate and stable results were obtained within 3 min through gelatin-based hydrogels. In addition, a gold nanoparticle electrode modified with a Zr-based metal–organic framework (MOF-808) was employed for the detection and analysis of organophosphates. The non-enzymatic reduction of OP was detected by nDPV with a sensitivity of 1.4 μA cm−2 ppm−1 and a detection limit of 4.9 ppm (Fig. 4C). Similar to the TNT detector, the dry-phase OP analysis integrated into the robotic arm could be completed within 3–4 min.
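To connect the calibration figures above to a concentration estimate, the sketch below inverts a linear calibration (current density = sensitivity × concentration) using the TNT values quoted in the text; the linearity assumption and the readings passed in are illustrative.

```python
# Illustrative conversion of an nDPV current density into a TNT
# concentration, using the calibration figures quoted in the text
# (sensitivity 0.95 uA cm^-2 ppm^-1, detection limit 10.0 ppm).
# The linear calibration model is an assumption for illustration.

SENSITIVITY = 0.95  # uA cm^-2 per ppm
LOD_PPM = 10.0      # detection limit, ppm

def tnt_concentration(current_density_uA_cm2: float):
    """Return the estimated TNT concentration in ppm, or None when the
    implied concentration falls below the detection limit."""
    conc = current_density_uA_cm2 / SENSITIVITY
    return conc if conc >= LOD_PPM else None

print(round(tnt_concentration(19.0), 3))  # about 20 ppm
print(tnt_concentration(4.75))            # below the LOD -> None
```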


Fig. 4 Schematic and sensing property of chemical sensors. (A) Photographs of the robotic skin-interfaced e-skin-R consisting of arrays of printed multimodal sensors. Scale bar, 3 cm. (B) Schematic of printed Pt-graphene electrode for the detection of TNT. (C) Dynamics of robotic fingertip detection of dry-phase TNT using a Pt–graphene sensor. (D) Schematic of the soft implant for sensing neurotransmitters in the brain and 3D schematic showing the composite materials made by confining nanoscale graphene/iron oxide nanoparticle networks in an elastomer (SEBS) to construct a soft, sensitive and selective neurochemical sensor. (E) Left, schematic setup for in situ characterization of the graphene mesostructure under strain. Middle, X-ray tomography 3D reconstruction of the graphene–elastomer composite showing the mesostructure of the graphene nanofibre networks at 0% (upper) and 100% (bottom) strain, respectively. Right: Top view of the graphene tomography. The scale bars denote 5 μm. The μ-CT scan results were repeated and reproduced three times. (F) Concentration-dependent calibration response of NeuroString electrode to DA ranging from 10 to 200 nM in PBS buffer (pH 7.4) with a scan rate of 400 V s−1. (G) Steps of pesticide detection with the chemical sensors on the dual-functionality glove. (H) Principal operation mechanisms of the chemical sensor on the dual-functionality glove. (I) Square-wave voltammograms at different MPOx concentrations in 0.1 M phosphate buffer solution, with the background current subtracted from the signals.

Given that exposure to organophosphates is common in agriculture, some organophosphate sensors have been designed as wearable devices that can be attached to the skin56 or worn on the finger57 for real-time alerts. In practice, however, the wearer remains exposed to the hazard. A better measure is therefore to isolate the sensor from human skin58 or even eliminate the possibility of contact altogether. During contact, haptic feedback is also needed to prevent excessive force from damaging the robot or the touched object. Accordingly, another organophosphate detector was integrated into a robotic arm together with a pressure sensor, allowing the system to detect pressure and organophosphorus pesticides simultaneously yet independently.59 The chemical sensor in this system (Fig. 4G and H) was fabricated by immobilizing organophosphate hydrolase (OPH), an enzyme highly specific for organophosphorus compounds. Employing square-wave voltammetry (SWV) or amperometry, organophosphates could be detected after being catalyzed to p-nitrophenol products (Fig. 4I). The choice of SWV further improved the selectivity for field screening of OP threats. In addition, electrochemical detection methods make continuous detection possible.

In addition to converting chemical signals into electrical signals reflecting the content of chemical substances by electrochemical methods, surface-enhanced Raman spectroscopy (SERS) can be utilized for detection through image recognition by converting chemical signals into optical signals. A new multifunctional platform was fabricated by homogeneously coating one-dimensional (1D) silver nanostructures on elastomeric substrates. Cellulose nanocrystals serve as both reducing agent and stabilizer: they not only reduce silver nitrate to silver nanoparticles (AgNPs), but also enable the AgNPs to assemble into 1D nanostructures, thus facilitating the construction of a conductive network.

The detection of neurotransmitters can also be accomplished with chemical sensors. A tissue-mimicking chemical sensor for neurotransmitters named NeuroString was presented for multichannel, multiplexed monoamine sensing in experimental mice and for measuring serotonin kinetics in the gut, while avoiding unwanted stimulation and perturbation of peristaltic movements (Fig. 4D).60 Graphene was chosen as the electrode material; however, because monolayer graphene cracks at less than 5% strain, a laser-induced graphene nanofiber network was embedded in a polystyrene-block-poly(ethylene-ran-butylene)-block-polystyrene (SEBS) elastomer matrix to achieve high softness and stretchability while preserving the unique electrochemical properties of the nanomaterial (Fig. 4E and F).

Biological sensors on robotic e-skin

Biological sensors are devices capable of detecting a target molecule or biological process using biological recognition elements such as enzymes,61 antibodies,55 and bacteria.62,63 Biosensors play a very important role in the detection of pathogenic biohazards, especially infectious microorganisms. Detection with sensing robotics removes human operators from the process, avoiding their exposure and thereby assisting in the prevention and control of infectious diseases such as COVID-19 and monkeypox.64 In addition, biosensors can be used for the real-time monitoring of food processing and transportation to avoid contamination by bacteria, chemicals, or other harmful substances that threaten health, safety, and quality.

Given the importance of monitoring pathogenic biohazards such as SARS-CoV-2, the M-Bot robot system contained not only chemical sensors but also label-free SARS-CoV-2 virus detectors.55 With an antibody against the SARS-CoV-2 spike 1 (S1) protein immobilized on the printed multiwalled carbon nanotube (CNT) electrode (Fig. 5A and B), this sensor could detect SARS-CoV-2 specifically and precisely. Binding of the S1 protein to the antibody blocked the electrode surface, decreasing the signal of the electroactive redox probe (Fe3+/Fe2+). The sensitivity of the assay was on the order of parts per billion (ppb). Although non-specific adsorption may increase the likelihood of detection errors during hydrogel assays, semi-quantitative data acquired in real time can still provide timely and effective feedback to the user and alert them to the presence of biohazards (Fig. 5C).


image file: d3sd00284e-f5.tif
Fig. 5 Schematic and sensing properties of biological sensors. (A) Schematic of the printed CNT electrode for SARS-CoV-2 detection. (B) SEM image of the printed CNT electrode for SARS-CoV-2 detection. (C) Response of a CNT sensor in the presence and absence of dry-phase S1. All error bars represent the SD from three sensors. (D) Photograph of the soft robotic system. (E) Test result of the excitation capabilities of the embedded blue-wavelength LED circuit, demonstrating its ability to elicit a response from the contained cells. Scale bar, 10 mm. (F) Right: sensing capabilities of the device evaluated in an aqueous environment, with chemosensitive cells used to determine the presence of IPTG in submerged hydrogels. Left: the device successfully distinguished between IPTG-infused and standard hydrogels, producing a higher normalized fluorescence ratio between the test and control than the same strains induced in test tubes. Data represent means ± SEM for three separate experiments (*P < 0.05 and **P < 0.01). (G) Schematic illustration of the e-glove with four functions (pressure, temperature, ECG, and humidity). (H) Schematic model with finite element analysis (FEA) of the pressure-sensing unit describing the stress distribution on the meandering patterns of the pressure sensor when stretched horizontally and vertically under pressures up to 225 kPa. (I) ECG signals measured for eight continuous cycles from the chest in unstretched and 15% stretched states, showing reduced amplitude and noise introduction under stretched conditions.

In addition to the specific detection of viruses by antibodies, another effective detection method utilized fluorescence-based bioprobes.65 Nanobiosensors based on fluorescent bioprobes are highly sensitive and have the advantages of being biodegradable, environmentally friendly, less costly, and more integrable than some traditional analytical instrumentation. A biosensor that specifically detected a widely used chemical inducer, isopropyl β-D-1-thiogalactopyranoside (IPTG), was carried in an all-soft biohybrid robot (Fig. 5D).62 The sensor was fabricated by directly integrating the cells into the soft carrier medium used as the robot shell. A porous PDMS–NaHCO3 membrane with a pore size below 0.5 μm, good elasticity, and optical transparency ensured that the cells were confined inside the medium without preventing the entry of chemicals, while maintaining the signal feedback and flexibility of the robotic arm (Fig. 5E). Genetically engineered cells that detect IPTG respond by expressing green fluorescent protein (GFP). The GFP is excited by a light source, the fluorescent signal is converted into an electrical signal by a phototransistor, and the IPTG concentration is inferred from the resulting electrical signal (Fig. 5F).

Bioelectrical signals, such as ECG signals, carry very important information about health status and are critical for the diagnosis and treatment of heart disease. Incorporating ECG acquisition into next-generation healthcare robots can play a very important role in the early detection and diagnosis of heart disease and reduce severe hospitalization. Although most designs integrate ECG detection into wearable devices,66 it should be considered that 24/7 monitoring does not meet everyone's needs. An equally convenient design places the ECG sensor on the surface of a glove (Fig. 5G), a design that can also be easily ported to a robotic arm later.67 Because the deformation imposed on the skin by hand movement is usually less than 15%, one study designed ECG electrodes with a corrugated meandering pattern, which gave a near-uniform distribution of stresses in all tensile directions (Fig. 5H). By keeping the glove in contact with the chest, the dry ECG electrodes on the glove could detect high-quality ECG signals even at 15% deformation (Fig. 5I).

Human–machine interface based on soft e-skin

Robot control is an essential component of human–machine interaction. Traditionally, strain sensors are worn on certain joints of the body to control robotic movement. These sensors collect motion information, including position, velocity, acceleration, and curvature, and even physical signs such as body temperature and breathing. These metrics are relatively easy to obtain and serve auxiliary and reference roles, making human–machine interaction more accurate. Additionally, many attempts have been made to mimic the human senses by equipping robots with sensors analogous to vision, touch, and smell, enabling them to make judgments in unmanned environments, such as toxic, hostile, and remote settings.

In recent years, there has been increasing research on controlling robots using human electrophysiological signals. Electroencephalography (EEG) reflects the overall electrical activity of brain neural tissue at the surface of the cerebral cortex and is the indicator that best reflects brain activity. There are two main methods for collecting EEG, i.e., non-invasive and invasive brain–machine interfaces. The former collects EEG signals from the scalp but often faces challenges such as low signal quality, high noise, and significant motion artifacts. The latter is more precise, with lower noise and a higher signal-to-noise ratio, but requires surgical procedures on the subjects, making it difficult to implement. Electrooculography (EOG) and electromyography (EMG) are electrical signals targeting specific tissue structures. EOG is a bioelectric signal arising from the potential difference between the cornea and retina of the eye, which varies with eye movements. Therefore, eye-controlled systems can be built on EOG, such as spelling systems or wheelchair controllers, to assist individuals with special needs. EMG reflects the superposition of action potentials from multiple muscle fibers and indicates the overall muscle activity within a specific region. In general, hand movements correspond to the contraction of particular skeletal muscles. By collecting EMG signals from specific muscle groups, the subject's muscle activity can be reconstructed and mapped to specific motor actions using artificial intelligence techniques. With advancements in electrode technology and artificial intelligence, small, flexible, rapid, and highly accurate systems will be designed.

Artificial intelligence (AI) is driving a new direction in the field of e-skin, particularly in enhancing human–machine interaction. Traditional e-skin sensing has been limited to sensors with few channels and relatively simple modalities. However, with ongoing advancements in electrode and sensing technologies, the emergence of features such as multi-channel sensing and large-area high-resolution sensing necessitates improved algorithms to handle high-throughput data. Moreover, multimodal sensing is becoming increasingly prevalent, requiring novel algorithms to integrate different types of data. Machine learning algorithms, relying on extensive datasets, demonstrate excellent performance in regression and classification tasks. These two major categories of tasks encompass the majority of human–machine interaction issues. Researchers have effectively addressed challenges such as knowledge transfer, ontology perception, and intelligent decision-making by utilizing and innovating machine learning algorithms. This provides an efficient and viable solution for addressing complex practical problems in the realm of human–machine interaction.68

Strain is an important physical quantity that can reflect the degree of bending at human joints and the magnitude of force applied to a specific body part. By directly measuring the bending of various joints, information about physical movements can be obtained, allowing robots to follow human movements with very short response times. In this field, resistive materials are widely applied.6,69,70 When a material is subjected to strain, its deformation changes its resistance. When the relationship between stretching and resistance is approximately linear, the degree of bending and stretching can be quantified by measuring the change in resistance. Currently, there is a trend towards directly printing conductive polymers on flexible substrates such as PDMS, Ecoflex, and PU, or transferring lithographically patterned conductive polymers to substrates to create electronic tattoos. These sensors exhibit good skin conformity, repeatability, and linearity.
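The quantitative readout described above can be sketched with the standard gauge-factor relation ΔR/R0 = GF·ε, which holds for an ideal linear resistive strain sensor (the function name and numbers below are illustrative, not taken from a specific device):

```python
def strain_from_resistance(r, r0, gauge_factor):
    """Strain of an ideal linear resistive sensor: delta_R / R0 = GF * strain."""
    return (r - r0) / (r0 * gauge_factor)

# Illustrative: a 1 kOhm sensor with GF = 2 reading 1.1 kOhm corresponds to 5% strain
print(strain_from_resistance(1100.0, 1000.0, 2.0))  # 0.05
```

Real devices deviate from this line at large strains, which is why the text emphasizes linearity as a figure of merit.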

In 2014, a transparent and stretchable human–machine interaction system consisting of a person's arm and a robotic arm was proposed.71 A resistive motion sensor and an electrotactile stimulation feedback device were placed on the person's arm, while a pressure sensor was placed on the robotic arm. The movement of the person's arm was recorded to control the robotic arm, and when the robotic arm encountered an obstacle, the system provided the subject with electrical stimulation as closed-loop feedback. However, only one sensor was used in this experiment, so the robotic arm had limited degrees of freedom and restricted mobility. Another recent study used a multilayered electronic transfer tattoo (METT) to accurately sense strain and achieve high-precision control of a robotic hand (Fig. 6A and B).72 METT consisted of two layers of resistive sensors and one heater layer, with a substrate separating each layer. The authors employed a layer-by-layer fabrication technique, placing 11 electronic tattoos in the first layer and 4 in the second, corresponding to 15 degrees of freedom of the hand. During fabrication, the completed METT (with printed metal–polymer conductors) was directly transferred onto the hand or a glove by transfer printing. Through signal analysis, precise real-time control of the multi-degree-of-freedom robotic hand was achieved (Fig. 6C).


image file: d3sd00284e-f6.tif
Fig. 6 Human–machine interface based on e-skin and electrophysiology. (A) Multilayered electronic tattoos, with 15 strain sensors and 1 heater, used for movement monitoring and remote control of robots. (B and C) Strain sensors on the back of the hand show good linearity at small deformations. (D) Key functional layers of the large-area soft electronic interface. (E) Photographs of the soft electronics laminated on the forearm. (F) EMG signals collected from the forearm. (G and H) Flow chart of the hybrid EEG/EOG-based human–machine interface system used to help patients restore independent daily living. (I) EEG signals after surface Laplacian filtering, combined with thresholded EOG signals, control the hand exoskeleton. (J and K) Invasive high-density neural electrodes help tetraplegic patients control a robotic arm by decoding motor cortex activity. (L) Hand gestures can be decomposed into movements along three axes: the left–right axis (dashed blue line), the towards–away axis (purple line) and the up–down axis (green line). The bottom panel shows the crossing events from all units that contributed to decoding the movement.

EMG signals are the summation of action potentials from multiple motor units within muscle fibers. Non-invasive surface electromyography (sEMG) cannot directly resolve the full picture of the activated muscles; sEMG represents the overall electrical activity of the muscles within the electrode sensing area over a period of time. Therefore, sEMG alone cannot serve as the gold standard for measuring muscle movement. However, the electromyographic changes in the same muscle region are distinguishable and repeatable when performing discrete simple gestures, so EMG can be treated as the movement signal of a specific muscle group. With multi-channel EMG acquisition, movement information from different muscles can be obtained simultaneously, which improves the accuracy of classification and recognition tasks. Currently, gesture recognition with EMG typically employs sensors with more than 4 channels, and systems with 8 to 64 channels have been reported in the literature.
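A minimal sketch of the multi-channel gesture classification described above, assuming per-channel RMS amplitudes as features and a simple nearest-centroid classifier; real systems use more channels, richer features, and trained machine-learning models, and all names and numbers here are illustrative:

```python
import math

def rms_features(window):
    """Per-channel RMS over one analysis window: window is a list of channels,
    each channel a list of samples."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def nearest_centroid(features, centroids):
    """Classify a feature vector by Euclidean distance to per-gesture centroids."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy 2-channel example: 'fist' activates channel 0, 'open' activates channel 1
centroids = {"fist": [0.8, 0.1], "open": [0.1, 0.8]}
window = [[0.7, -0.9, 0.8], [0.1, -0.1, 0.1]]   # strong channel-0 activity
print(nearest_centroid(rms_features(window), centroids))  # fist
```

The centroids stand in for what a calibration session would learn from labeled gesture recordings.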

EMG signals are relatively weak, usually below 1 mV, so the amplification circuit must amplify them at least 1000-fold. Because the amplification factor is so large, motion-induced noise significantly affects data quality. Thus, to mitigate common-mode noise, a differential amplifier is used, with the amplifier ground connected to the skin. The energy of EMG signals is concentrated in the 20–250 Hz range, so filtering out components below 20 Hz and above 250 Hz can significantly improve the signal-to-noise ratio.
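The 20–250 Hz band-limiting described above can be sketched as follows. This is a crude pair of first-order single-pole IIR sections for illustration only; practical sEMG front-ends use higher-order filters plus a 50/60 Hz notch:

```python
import math

def bandpass(samples, fs, f_lo=20.0, f_hi=250.0):
    """Crude 20-250 Hz band-pass for sEMG: a first-order low-pass at f_hi
    followed by a first-order high-pass at f_lo."""
    # low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])
    a = (2 * math.pi * f_hi / fs) / (2 * math.pi * f_hi / fs + 1)
    lp, y = [], 0.0
    for x in samples:
        y += a * (x - y)
        lp.append(y)
    # high-pass: subtract a slow low-pass tracking the baseline at f_lo
    b = (2 * math.pi * f_lo / fs) / (2 * math.pi * f_lo / fs + 1)
    out, z = [], 0.0
    for x in lp:
        z += b * (x - z)
        out.append(x - z)
    return out

# A 2 Hz drift (motion artifact) is attenuated far more than a 100 Hz EMG tone
fs = 1000
t = [n / fs for n in range(2000)]
drift = bandpass([math.sin(2 * math.pi * 2 * x) for x in t], fs)
emg = bandpass([math.sin(2 * math.pi * 100 * x) for x in t], fs)
print(max(abs(v) for v in drift[1000:]) < max(abs(v) for v in emg[1000:]))  # True
```

The comparison at the end shows the intended behavior: sub-20 Hz motion artifacts are suppressed while in-band EMG content passes largely unchanged.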

Additionally, researchers are not only focused on gesture classification tasks but are also seeking breakthroughs in studying the force exerted during movements and EMG-based property analysis.73 For example, when performing the same grasping motion, the force used to grasp a piece of paper differs from that used to hold a cup of water. Therefore, deriving force information from EMG signals is an interesting topic. Recent works used the integrated EMG amplitude, obtained through signal processing, to estimate the force exerted during movements. Moreover, some groups utilized deep learning techniques, treating the EMG signals as the input data and the force information as the label, to develop models that predict force intensity.74 In 2010, a large-area epidermal electrode for collecting electrophysiological signals was reported (Fig. 6D and E).75 Photolithography was used to pattern the electrodes, and the epidermal electrode, with eight channels and 16 electrodes, was attached to the subject's forearm (Fig. 6F). Through signal classification algorithms, an average accuracy of 89% was achieved in classifying simple gestures. In another study, patients wore surface electrodes to control a rehabilitation exoskeleton.76 Exoskeleton movements were intuitively driven by classified EMG signals measured in real time from reinnervated muscles, aiding bilateral rehabilitation. In 2019, a study on controlling a robotic arm based on muscle synergies in coordinated movements using the recurrent log-linearized Gaussian mixture network (R-LLGMN) algorithm was reported.77 The algorithm first learned the EMG signals of individual fundamental finger motions and regarded these motions as muscle synergies; various finger motions were then expressed as combinations of these synergies.
The algorithm was tested on its ability to decompose both known and unknown complex gestures into fundamental gestures, and the results showed its high accuracy and universality in handling coordinated movements. Moreover, EMG can also be used to control rehabilitation robotic arms, drones, and other applications.
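The integrated-EMG approach to force estimation mentioned above can be sketched as a rectify-and-smooth envelope followed by a linear map to force. The gain and offset here are placeholders that would come from a per-subject calibration (e.g. regression against a force gauge); all names are our own:

```python
def emg_envelope(samples, window=50):
    """Moving average of the rectified signal: a simple 'integrated EMG' amplitude."""
    env = []
    for i in range(len(samples)):
        w = samples[max(0, i - window + 1): i + 1]
        env.append(sum(abs(s) for s in w) / len(w))
    return env

def force_from_envelope(envelope_mean, gain, offset=0.0):
    """Illustrative linear envelope-to-force map; gain/offset come from calibration."""
    return gain * envelope_mean + offset

weak = [0.1, -0.1] * 100     # low-amplitude EMG burst (light grip)
strong = [0.5, -0.5] * 100   # high-amplitude EMG burst (firm grip)
e_weak = sum(emg_envelope(weak)) / len(weak)
e_strong = sum(emg_envelope(strong)) / len(strong)
print(force_from_envelope(e_strong, gain=100.0) > force_from_envelope(e_weak, gain=100.0))  # True
```

The deep-learning variants cited in the text replace this hand-tuned linear map with a model trained on EMG-force pairs.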

Non-invasive EEG signals are typically collected from the scalp. However, because the cranium is a poor conductor, the signal amplitudes are usually weak, typically less than 100 μV. Furthermore, the signals are strongly affected by noise and require complex filtering to extract meaningful information. Similar to sEMG, surface EEG reflects the overall neural activity of a brain region over a period of time rather than the excitatory state of individual neurons. Brain activity can be inferred by analyzing EEG signals from specific brain regions; for example, motor-related activity is concentrated in the central frontal region, while language-related areas include Broca's area. By analyzing the EEG activity of specific brain regions, the subject's current state of movement can be determined. In contrast, eye movement (EOG) signals are relatively strong, with a maximum amplitude of about 5 mV. EOG recordings are often contaminated by EEG and EMG signals; however, the EMG interference can be removed by filtering out high-frequency components, and the EEG contribution is so weak that it typically falls below the filter threshold and is removed as well.

EEG and EOG signals are commonly collected to assist in patient treatment or to enable individuals with limited mobility to perform actions similar to those of able-bodied individuals. Brain–machine interfaces (BMI) are a popular research topic, in which patients typically wear EEG devices to control external devices such as wheelchairs, exoskeletons, and even virtual games by decoding the subject's brain activity.78–80 EOG signals can quantify vertical and horizontal eye movements, enabling the control of wheelchairs with the eyes. EOG is often used in conjunction with EEG to accomplish specific tasks, such as in the hybrid EEG/EOG-based hand exoskeleton system that helps individuals with limb paralysis independently perform hand movements (Fig. 6G).81,82 In this system, the participants used a non-invasive brain/neural hand exoskeleton (B/NHE) that translated brain electric signals associated with the intention to grasp into exoskeleton-driven hand-closing motions, and EOG signals related to voluntary horizontal eye movements (horizontal oculoversions) into exoskeleton-driven hand-opening motions (Fig. 6H and I). Analyzing the EEG and EOG signals could assist patients in controlling the movements and switching of the robotic hand with high stability and robustness. Additionally, a spelling system controlled entirely by EOG was proposed. This spelling system utilized two eye movements (a single blink or a wink) for target selection, and the average time for detecting a single target was lower than that of most EEG-based systems.83 The average classification accuracy and information transfer rate of this system were 93.6% and 43.8 bits per min, respectively.
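For context, information transfer rates such as the 43.8 bits per min quoted above are conventionally computed with the Wolpaw formula from the number of targets, the classification accuracy, and the time per selection. The parameters below are illustrative, not those of the cited study:

```python
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate for an n-target selection task."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * 60.0 / seconds_per_selection

# Illustrative: a 26-letter speller at 93.6% accuracy, one selection every 2 s
print(round(itr_bits_per_min(26, 0.936, 2.0), 1))  # 121.8
```

Because the formula penalizes errors heavily and rewards fast selections, the same accuracy yields very different ITRs depending on selection time and alphabet size.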

Invasive brain–machine interfaces (BMI) can be implemented at different scales. At the shallowest scale, electrocorticography (ECoG) places invasive electrodes on the cortical surface inside the skull, with signal amplitudes ranging from 0.01–5 mV. Electrodes implanted within the cortex can capture local field potentials (LFPs), which are smaller than 1 mV; LFPs reflect the activity of surrounding neuronal populations rather than individual neurons. Compared with non-invasive EEG, invasive BMIs offer a higher signal-to-noise ratio and more precise localization, allowing more accurate capture of brain potential changes for the control of external devices or language input.

Invasive neural interfaces can assist individuals with long-term limb paralysis in manipulating physical devices. For example, a 96-channel intracortical silicon microelectrode array was implanted in the arm area of a participant's motor cortex (Fig. 6J and K).84 Through five years of training and algorithm updates, the participant was able to control a robotic arm with her thoughts to grasp a cup and bring it to her mouth for drinking. Subsequently, this experiment was validated in a clinical setting (Fig. 6L).85 Similarly, another study used a microelectrode array in the motor cortex and placed high-density percutaneous electrodes in the participant's right upper and lower arm.86 After recording the brain signals, the system translated the electrical activity of the cortex into commands, producing electrical stimulation of the muscles for flexion, extension, and grasping actions. The results showed that the BMI combined with functional electrical stimulation (FES) could assist patients with arm paralysis due to spinal cord injury in engaging in daily activities and improve their quality of life. Similarly, a spelling system based on invasive EEG greatly helped paralyzed patients regain the ability to communicate.87

Low-cost method for the fabrication of soft robotic e-skin

Printing technology is a simple and convenient manufacturing technique. In the fabrication of electrodes and circuits, inkjet printing, screen printing, 3D printing, and transfer printing are commonly used processes. Inkjet printing involves spraying ink onto a substrate to create patterns and can print directly on many flexible substrates to meet specific requirements. One key aspect of inkjet printing is the selection and formulation of the ink. Silver ink and graphene ink are often chosen for printing circuits on flexible materials, with the addition of metal ions, polymer compounds, nanoparticles, and nanotubes to enhance the electrical properties and impart specific chemical properties to the printed material.88,89 The concentration of the ink also directly affects the printing results. Screen printing is a technique that uses a screen to apply specific patterns onto a substrate. It offers faster, more efficient, and more cost-effective production than inkjet printing and is suitable for the fabrication of large-scale arrayed electronic devices.

One application of printing techniques is building soft systems.90,91 A recent study used conductive inkjet printing on a flexible substrate to construct a human–machine interaction system, employing scalable inkjet printing to fabricate flexible electronic circuits (Fig. 7A).55 A nanomaterial-based ink combining silver nanowires with PDMS was developed to fabricate pressure sensors. The authors also printed silver ink on PI to fabricate electrodes for collecting surface electromyography (sEMG) signals from muscle activity. These two sensors were integrated with a robotic component into a closed-loop feedback system. Furthermore, inkjet printing and electrode modification techniques were used to create a biosensor capable of detecting toxic and harmful chemicals. Another study demonstrated a large-area stretchable pattern by screen printing silver nanowire (NW) ink on elastic poly(dimethylsiloxane) (PDMS).92 The pattern exhibited an ultralow sheet resistance of 1.9 Ω sq−1, together with strong stability and mechanical repeatability. It endured over 1000 bending and stretching cycles at 10% strain, showing promising applications. Transfer printing is an interesting method that typically involves inverting patterned electrodes onto a specific interface, such as adhesive PEDOT (Fig. 7B).91 The inverted side is then applied to the human skin, and the PEDOT layer is peeled away, transferring the electronic skin onto the body. A typical transfer printing study proposed a Cartan curve-inspired transfer process on a hemispherical substrate to transfer electronic tattoos to the human body for tasks such as multichannel electrocardiography and sign language recognition.
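For a printed conductor, the quoted sheet resistance translates into trace resistance through the number of "squares" (length divided by width), a quick check when laying out printed circuits (the trace dimensions below are illustrative):

```python
def trace_resistance(sheet_resistance_ohm_sq, length_mm, width_mm):
    """Resistance of a printed rectangular trace: R = R_sheet * (length / width),
    i.e. sheet resistance times the number of squares along the trace."""
    return sheet_resistance_ohm_sq * (length_mm / width_mm)

# With the reported 1.9 ohm/sq AgNW film, a 20 mm x 1 mm trace gives 38 ohms
print(trace_resistance(1.9, 20.0, 1.0))  # 38.0
```

Because the result depends only on the aspect ratio, a wider trace of the same length lowers the resistance proportionally.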


image file: d3sd00284e-f7.tif
Fig. 7 Manufacturing processes of e-skin based on printing, laser, lithography and textile. (A) All-printed human–machine interface system used for EMG control robotic hand and physicochemical feedback. (B) New epidermal electrophysiology monitoring strategy based on soft e-skin and transfer printing. (C) Soft electronic three-dimensional integrated system based on transfer printing and laser ablation technique. (D) Implantable, multilayer bio-optoelectronic flex circuit, in which top and bottom layer were structured via laser ablation. (E) Dynamic three-dimensional metasurface constructed from serpentine beams consisting of thin PI and Au layers processed by spin coating and photolithography technique. (F) Soft and implantable drug delivery device (SID), in which back side is patterned mainly by photolithography. (G) Large-area display textile based on electroluminescent units (EL units) serving to bridge human–machine interface. (H) Acoustic fabrics were used to weave a shirt working as a sound emitter and receiver.

In the photolithography method, the selective exposure of a polymer coating surface to light followed by the use of specific solvents to dissolve the selected areas is employed to generate patterns. Photolithography is a complex physical and chemical process that allows for high-resolution production and the creation of various intricate flat patterns. Besides its crucial role in the semiconductor industry, photolithography also has important applications in diverse fields such as micro-capacitor fabrication, biosensing devices, optical device production, microneedle manufacturing, and LED production.93,94

In the field of micro-supercapacitor (MSC) fabrication, a method utilizing photolithography was reported. First, a composite of a high-density single-wall carbon nanotube (SWNT) network and photoresist was patterned on a substrate. Subsequently, a carbonization process converted the photoresist into amorphous carbon, forming SWNT/carbon current collectors.95 The MSCs demonstrated good conductivity and superior electrochemical performance. In another study, photolithography and oxygen plasma etching were used to create a three-layer PI–Au–PI structure by repeatedly spin-coating PI (Fig. 7E).96 By patterning the surface contours through photolithography and etching, a dynamically programmable interface driven by the Lorentz force generated by a magnetic field was achieved; the interface could be rapidly transformed into a desired shape. For a soft implantable drug delivery device (SID) (Fig. 7F), photolithography was employed to fabricate the front and back ends: photolithography and wet etching were used to pattern a Cu layer and an epoxy layer on the front end, and an Al layer on the back end was patterned by the same method.97

The basic principle of laser ablation is to focus a small, low-power laser beam with high beam quality onto a tiny spot, creating a high power density at the focal point. This intense energy causes the material to evaporate instantly, forming small structures such as holes and grooves. Unlike photolithography, laser ablation does not require complex processing steps and offers advantages such as high precision and high speed. Furthermore, laser ablation can not only pattern the substrate material but also induce modifications in graphene films to enhance their properties. In practical ablation processes, parameters such as wavelength, power, and pulse duration are adjustable and important: the wavelength should be chosen for high absorption in the target material, while the power and peak intensity largely determine the patterning throughput. Laser operation can be categorized in the time domain as continuous wave (CW) or pulsed: CW lasers emit a constant intensity, while pulsed lasers emit pulses at a fixed repetition rate. Pulse durations can reach the femtosecond range, and the use of short-pulse lasers enables higher spatial resolution in ablation.
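The power-density argument above can be made concrete: focusing a pulse of energy E and duration τ onto a spot of area A gives a peak intensity of roughly E/(τA), which is why femtosecond pulses reach enormous intensities at modest pulse energies (the pulse parameters below are illustrative):

```python
import math

def peak_intensity_W_cm2(pulse_energy_J, pulse_duration_s, spot_diameter_um):
    """Peak power density at the focal spot: pulse energy / (duration * spot area)."""
    radius_cm = spot_diameter_um * 1e-4 / 2
    area_cm2 = math.pi * radius_cm ** 2
    return pulse_energy_J / (pulse_duration_s * area_cm2)

# Illustrative: a 10 uJ, 100 fs pulse focused to a 10 um spot
print(f"{peak_intensity_W_cm2(10e-6, 100e-15, 10.0):.2e}")  # 1.27e+14 W/cm^2
```

The same pulse energy delivered as a continuous wave over one second would yield fourteen orders of magnitude less intensity, illustrating why pulsed operation enables clean ablation.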

Laser technology is widely used in flexible-material manufacturing and can be applied to the fabrication of biosensors, soft electronic devices, and e-skin.98–103 In a recent study, three-dimensionally integrated electronic devices made of elastomers were reported, where laser ablation and controlled soldering were used to create vertical interconnect accesses (VIAs). The patterns of the electrodes, heaters, and VIA connectors were all fabricated by laser ablation (Fig. 7C).104 Notably, in the fabrication of the VIAs, the researchers used a dye to darken the silicone and employed a 1064 nm laser, which is weakly absorbed by copper, to remove the PI and black silicone from the copper. In another study on implantable optoelectronic systems, direct laser ablation was used to engrave the top and bottom copper layers (17.5 μm thick) of the device (Fig. 7D).105 A further advance reported the direct laser writing of 3D structures at micron resolution using a two-photon lithography process, which was used to fabricate a 16-channel array with a 300 μm pitch for capturing electrophysiological signals in the brains of mice and birds.

Textile technology is a cost-effective method for the large-scale production of electrodes. Traditional textile processes use tools such as needles and sewing machines to transform materials such as cotton, wool, and synthetic fibers into clothing and wearable devices; the resulting textiles are affordable, easy to produce, and launderable. Fabric is now emerging as a promising platform for sensor fabrication, giving rise to luminescent, pressure-sensitive, and sound-sensitive textiles. These materials are excellent choices for creating sensors and human–machine interfaces. Furthermore, large-area arrays of organic transistors and wearable electronic textiles can also be realized through textile manufacturing.70,106,107

Using textiles to fabricate screens is a creative idea: a recent work reported a 6 m-long, 25 cm-wide display textile incorporating 5 × 10⁵ electroluminescent (EL) units (Fig. 7G).108 Each EL unit was formed at the contact between a luminescent warp thread and a transparent conductive weft thread, with cotton yarn serving as the structural framework of the fabric. This luminescent textile functioned as a display, allowing real-time visualization of the wearer's status when paired with a brainwave sensor, and enabled communication through an integrated keyboard made from pressure-sensitive fabric. Another significant endeavor was a textile based on piezoelectric fibers capable of converting mechanical vibrations into electrical signals (Fig. 7H).109 High-modulus Twaron yarns and cotton yarns oriented at right angles were designed to mimic the structure of the tympanic membrane, and a single strand of a piezoelectric elastomeric fiber transducer was incorporated into the fabric, yielding a synergistically coupled textile capable of capturing faint audio signals.
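The reported dimensions and unit count imply the pixel scale of such a woven display. Assuming the EL units form a roughly uniform grid over the fabric (an assumption for illustration; the actual weave geometry is set by the warp/weft spacing), the average unit pitch follows from simple arithmetic:

```python
import math

# Dimensions and unit count reported for the display textile (ref. 108).
length_m, width_m, n_units = 6.0, 0.25, 5e5

area_m2 = length_m * width_m                    # total fabric area: 1.5 m^2
pitch_mm = math.sqrt(area_m2 / n_units) * 1e3   # average pitch, uniform grid assumed
print(f"{area_m2} m^2 of fabric, ~{pitch_mm:.2f} mm average unit pitch")
```

A millimetre-scale pitch is coarse compared with conventional displays, but ample for the status-visualization and keyboard applications described above.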

Low-cost manufacturing is a promising technological pathway. Leveraging laser, printing, and textile technologies, sensor units that meet human health-monitoring requirements can be constructed at large scale, in arrayed form, and at low cost. Furthermore, breakthroughs are anticipated in information density, interactivity, and detection duration. In the future, as technology continues to advance and new medical and physiological challenges are addressed through engineering, the advantages of low-cost manufacturing will improve the accessibility of healthcare, extend monitoring periods, and personalize rehabilitation, benefiting the wider population.

Conclusion

The design of robotic intelligence with high sensitivity and resolution for perceiving the external environment demands knowledge from multiple fields, including bionics, electronics, materials science, and intelligent robot manufacturing. To obtain more comprehensive data, highly precise devices are required to capture the desired information, an objective that can be achieved by improving the materials and structural configuration of sensors. Material innovations can increase sensor sensitivity while preserving flexibility and accuracy, and can even enhance the ability of robots to adapt to harsh environments, for example through cryogenic-resistant materials.110 As sensors continue to advance, a single sensor becomes insufficient, creating new challenges in the integration of multiple sensors. The combination and assembly of diverse sensors may lead to interference or cross-talk, demanding appropriate calibration or even solutions at the level of the detection principle. The design of sensor arrays also needs to minimize the impact of damage to an individual sensor on the operation of the others.
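To first order, cross-talk between channels in a sensor array can be modeled as a linear mixing of the true signals, and calibration then amounts to measuring and inverting that mixing matrix. The sketch below illustrates this idea with an assumed 3-channel coupling matrix (the values are hypothetical, not taken from any cited device):

```python
import numpy as np

# First-order model: each channel reading is a linear mix of the true
# signals, r = A @ s, where off-diagonal terms of A represent cross-talk.
A = np.array([[1.00, 0.08, 0.02],
              [0.05, 1.00, 0.06],
              [0.01, 0.07, 1.00]])   # assumed coupling, measured once at calibration

true_signals = np.array([0.9, 0.1, 0.4])
readings = A @ true_signals          # what the array actually reports

# Calibration: invert the coupling matrix once, then apply it per frame.
A_inv = np.linalg.inv(A)
recovered = A_inv @ readings
print(np.round(recovered, 6))        # matches true_signals up to numerical error
```

Real arrays may need nonlinear or temperature-dependent corrections, which is why severe coupling sometimes has to be addressed at the detection-principle level rather than in post-processing.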

Intelligent robots equipped with sensors still hold substantial untapped market potential, which is currently hindered by high production costs. Various methods have been developed for sensor fabrication. Traditional micro- and nanolithography techniques are the ideal choice for the highest performance and precision; however, their demanding material and equipment requirements make lithography an expensive option. Thus, novel low-cost techniques such as 3D printing, inkjet printing, screen printing, and laser engraving have been employed for device fabrication to improve production efficiency and reduce costs in large-scale manufacturing.

Equally important is the processing and utilization of data, which can be achieved by optimizing computational models. Sensor design should anticipate downstream processing, enhancing the speed of signal analysis through appropriate handling of the received data. The human brain excels at rapidly analyzing and processing incoming signals, not only because of the exceptional performance of its receptors and the musculoskeletal system but also because of its unique way of processing data, which allows it to handle received information efficiently and accurately. Imitating the brain's approach to data analysis therefore forms a significant pathway for optimizing control algorithms. Currently, several machine learning techniques have been applied to sensor data processing, such as acquiring motion intentions from bioelectrical signals, recognizing object surface features through tactile perception, and analyzing multi-modal data. These algorithms significantly increase the speed of data analysis, enabling robots to respond more swiftly.
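As a minimal sketch of how motion intent might be decoded from bioelectrical features, the example below trains a nearest-centroid classifier on synthetic feature vectors. Everything here is assumed for illustration: the "grip"/"release" classes, the three-channel RMS-style features, and the data itself are hypothetical, and real systems typically use richer features and more capable models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors (e.g. RMS amplitude per EMG channel) for two
# motion intents; a real system would extract these from sensor streams.
grip = rng.normal([0.8, 0.2, 0.5], 0.05, size=(50, 3))
release = rng.normal([0.2, 0.7, 0.3], 0.05, size=(50, 3))

# Nearest-centroid classifier: one prototype feature vector per intent.
centroids = {"grip": grip.mean(axis=0), "release": release.mean(axis=0)}

def classify(x):
    """Assign a new feature vector to the intent with the nearest centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

print(classify(np.array([0.78, 0.22, 0.48])))  # -> "grip"
```

The appeal of such prototype-based decoders is their speed: a classification is a handful of distance computations, which suits the real-time response requirements discussed above.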

The ultimate goal for intelligent robots is system integration, whether for exploring unknown environments or acting as prosthetics or exoskeletons for individuals with disabilities. This necessitates the formation of a complete system where a device can capture signals reflecting various physical, chemical, and biological characteristics, decode and analyze them, and provide appropriate feedback. This feedback can range from displaying information reflected by signals to controlling the device mechanically, thermally, or even through electrical stimulation, forming a closed loop. Challenges lie in the optimization of hardware integration for sensors and actuators, signal communication and conversion, as well as decoding and processing data. Undoubtedly, some advancements have been realized, but there are significant challenges and opportunities for development in the future.

Author contributions

Y. Yu conceptualized the framework and scope of the paper and wrote and edited the review. Y. Zhou and Y. Tang led the write-up and gathered the content and figures. Y. Zhou led the revision. All authors thoroughly read the final manuscript draft and gave permission for its submission.

Conflicts of interest

The authors declare no competing interests.

Acknowledgements

This work was funded by the National Natural Science Foundation of China (22304117), the Shanghai Pujiang Program (22PJ1411000), and a ShanghaiTech University startup grant.

References

  1. O. M. Omisore, S. P. Han, J. Xiong, H. Li, Z. Li and L. Wang, IEEE Trans. Syst. Man Cybern.: Syst., 2022, 52, 631.
  2. E. J. Cotter, J. Wang and R. L. Illgen, J. Knee Surg., 2022, 35, 176.
  3. M. S. Kent, M. G. Hartwig, E. Vallières, A. E. Abbas, R. J. Cerfolio, M. R. Dylewski, T. Fabian, L. J. Herrera, K. G. Jett, R. S. Lazzaro, B. Meyers, B. A. Mitzman, R. M. Reddy, M. F. Reed, D. C. Rice, P. Ross, I. S. Sarkaria, L. Y. Schumacher, W. B. Tisol, D. A. Wigle and M. Zervos, Ann. Surg., 2023, 277, 528.
  4. M. J. W. Zwart, C. L. M. Nota, T. de Rooij, J. van Hilst, W. W. Te Riele, H. C. van Santvoort, J. Hagendoorn, I. H. M. Borei Rinkes, J. L. van Dam, A. E. J. Latenstein, K. Takagi, K. T. C. Tran, J. Schreinemakers, G. P. van der Schelling, J. H. Wijsman, S. Festen, F. Daams, M. D. Luyer, I. de Hingh, J. S. D. Mieog, B. A. Bonsing, D. J. Lips, M. A. Hilal, O. R. Busch, O. Saint-Marc, H. J. Zehl, 2nd, A. H. Zureikat, M. E. Hogg, I. Q. Molenaar, M. G. Besselink and B. G. Koerkamp, Ann. Surg., 2022, 276, e886.
  5. D. Calafiore, F. Negrini, N. Tottoli, F. Ferraro, O. Ozyemisci-Taskiran and A. de Sire, Eur. J. Phys. Rehabil. Med., 2022, 58, 1.
  6. L. E. Osborn, A. Dragomir, J. L. Betthauser, C. L. Hunt, H. H. Nguyen, R. R. Kaliki and N. V. Thakor, Sci. Robot., 2018, 3, eaat3818.
  7. H. Wang, M. Totaro and L. Beccai, Adv. Sci., 2018, 5, 1800541.
  8. Y. Chi, Y. Li, Y. Zhao, Y. Hong, Y. Tang and J. Yin, Adv. Mater., 2022, 34, 2110384.
  9. S. C. B. Mannsfeld, B. C. K. Tee, R. M. Stoltenberg, C. V. H. H. Chen, S. Barman, B. V. O. Muir, A. N. Sokolov, C. Reese and Z. Bao, Nat. Mater., 2010, 9, 859.
  10. M. S. Sarwar, Y. Dobashi, C. Preston, J. K. M. Wyss, S. Mirabbasi and J. D. W. Madden, Sci. Adv., 2017, 3, e1602200.
  11. S. Pyo, J. Lee, K. Bae, S. Sim and J. Kim, Adv. Mater., 2021, 33, 2005902.
  12. W. W. Lee, Y. J. Tan, H. Yao, S. Li, H. H. See, M. Hon, K. A. Ng, B. Xiong, J. S. Ho and B. C. K. Tee, Sci. Robot., 2019, 4, eaax2198.
  13. J. Kim, M. Lee, H. J. Shim, R. Ghaffari, H. R. Cho, D. Son, Y. H. Jung, M. Soh, C. Choi, S. Jung, K. Chu, D. Jeon, S.-T. Lee, J. H. Kim, S. H. Choi, T. Hyeon and D.-H. Kim, Nat. Commun., 2014, 5, 5747.
  14. W. Heng, S. Solomon and W. Gao, Adv. Mater., 2022, 34, 2107902.
  15. P. Won, K. K. Kim, H. Kim, J. J. Park, I. Ha, J. Shin, J. Jung, H. Cho, J. Kwon, H. Lee and S. H. Ko, Adv. Mater., 2021, 33, 2002397.
  16. R. Yang, W. Zhang, N. Tiwari, H. Yan, T. Li and H. Cheng, Adv. Sci., 2022, 9, 2202470.
  17. J. Burgner-Kahrs, D. C. Rucker and H. Choset, IEEE Trans. Robot., 2015, 31, 1261.
  18. B. Zhang, Y. Xie, J. Zhou, K. Wang and Z. Zhang, Comput. Electron. Agric., 2020, 177, 105694.
  19. M. V. A. Corpuz, A. Buonerba, G. Vigliotta, T. Zarra, F. Ballesteros, Jr., P. Campiglia, V. Belgiorno, G. Korshin and V. Naddeo, Sci. Total Environ., 2020, 745, 140910.
  20. M. S. Draz and H. Shafiee, Theranostics, 2018, 8, 1985.
  21. H. Yao, P. Li, W. Cheng, W. Yang, Z. Yang, H. P. A. Ali, H. Guo and B. C. K. Tee, ACS Mater. Lett., 2020, 2, 986.
  22. W.-Q. Liao, D. Zhao, Y.-Y. Tang, Y. Zhang, P.-F. Li, P.-P. Shi, X.-G. Chen, Y.-M. You and R.-G. Xiong, Science, 2019, 363, 1206.
  23. M. Yang, Y. Cheng, Y. Yue, Y. Chen, H. Gao, L. Li, B. Cai, W. Liu, Z. Wang, H. Guo, N. Liu and Y. Gao, Adv. Sci., 2022, 9, 2200507.
  24. Y. Cheng, Y. Ma, L. Li, M. Zhu, Y. Yue, W. Liu, L. Wang, S. Jia, C. Li, T. Qi, J. Wang and Y. Gao, ACS Nano, 2020, 14, 2145.
  25. Y. Yue, N. Liu, W. Liu, M. Li, Y. Ma, C. Luo, S. Wang, J. Rao, X. Hu, J. Su, Z. Zhang, Q. Huang and Y. Gao, Nano Energy, 2018, 50, 79.
  26. J. Yan, Y. Ma, G. Jia, S. Zhao, Y. Yue, F. Cheng, C. Zhang, M. Cao, Y. Xiong, P. Shen and Y. Gao, Chem. Eng. J., 2022, 431, 133458.
  27. S. Terryn, J. Langenbach, E. Roels, J. Brancart, C. Bakkali-Hassani, Q.-A. Poutrel, A. Georgopoulou, T. George Thuruthel, A. Safaei, P. Ferrentino, T. Sebastian, S. Norvez, F. Iida, A. W. Bosman, F. Tournilhac, F. Clemens, G. Van Assche and B. Vanderborght, Mater. Today, 2021, 47, 187.
  28. J. Park, Y. Lee, J. Hong, Y. Lee, M. Ha, Y. Jung, H. Lim, S. Y. Kim and H. Ko, ACS Nano, 2014, 8, 12020.
  29. S. Sharma, A. Chhetry, S. Zhang, H. Yoon, C. Park, H. Kim, M. Sharifuzzaman, X. Hui and J. Y. Park, ACS Nano, 2021, 15, 4380.
  30. H. Yao, W. Yang, W. Cheng, Y. J. Tan, H. H. See, S. Li, H. P. A. Ali, B. Z. H. Lim, Z. Liu and B. C. K. Tee, Proc. Natl. Acad. Sci. U. S. A., 2020, 117, 25352.
  31. C. M. Boutry, M. Negre, M. Jorda, O. Vardoulis, A. Chortos, O. Khatib and Z. Bao, Sci. Robot., 2018, 3, eaau6914.
  32. Y. Luo, J. Shao, S. Chen, X. Chen, H. Tian, X. Li, L. Wang, D. Wang and B. Lu, ACS Appl. Mater. Interfaces, 2019, 11, 17796.
  33. J. C. Yang, J.-O. Kim, J. Oh, S. Y. Kwon, J. Y. Sim, D. W. Kim, H. B. Choi and S. Park, ACS Appl. Mater. Interfaces, 2019, 11, 19472.
  34. J. Qin, L.-J. Yin, Y.-N. Hao, S.-L. Zhong, D.-L. Zhang, K. Bi, Y.-X. Zhang, Y. Zhao and Z.-M. Dang, Adv. Mater., 2021, 33, 2008267.
  35. R. B. Mishra, N. El-Atab, A. M. Hussain and M. M. Hussain, Adv. Mater. Technol., 2021, 6, 2001023.
  36. Z. Chen, Y. Wang, Y. Yang, X. Yang and X. Zhang, Chem. Eng. J., 2021, 403, 126388.
  37. M. Zhong, L. Zhang, X. Liu, Y. Zhou, M. Zhang, Y. Wang, L. Yang and D. Wei, Chem. Eng. J., 2021, 412, 128649.
  38. J. Yang, S. Luo, X. Zhou, J. Li, J. Fu, W. Yang and D. Wei, ACS Appl. Mater. Interfaces, 2019, 11, 14997.
  39. Z. Shen, X. Zhu, C. Majidi and G. Gu, Adv. Mater., 2021, 33, 2102069.
  40. G. Yao, L. Xu, X. Cheng, Y. Li, X. Huang, W. Guo, S. Liu, Z. L. Wang and H. Wu, Adv. Funct. Mater., 2019, 30, 1907312.
  41. K. Tao, Z. Chen, J. Yu, H. Zeng, J. Wu, Z. Wu, Q. Jia, P. Li, Y. Fu, H. Chang and W. Yuan, Adv. Sci., 2022, 9, 2104168.
  42. G. Yao, L. Xu, X. Cheng, Y. Li, X. Huang, W. Guo, S. Liu, Z. L. Wang and H. Wu, Adv. Funct. Mater., 2020, 30, 1907312.
  43. X. Wei, H. Li, W. Yue, S. Gao, Z. Chen, Y. Li and G. Shen, Matter, 2022, 5, 1481–1501.
  44. Y. Wu, Y. Liu, Y. Zhou, Q. Man, C. Hu, W. Asghar, F. Li, Z. Yu, J. Shang, G. Liu, M. Liao and R.-W. Li, Sci. Robot., 2018, 3, eaat0429.
  45. H. Zhao, K. O'Brien, S. Li and R. F. Shepherd, Sci. Robot., 2016, 1, eaai7529.
  46. Y. Pang, X. Xu, S. Chen, Y. Fang, X. Shi, Y. Deng, Z.-L. Wang and C. Cao, Nano Energy, 2022, 96, 107137.
  47. J. Zhou, Q. Shao, C. Tang, F. Qiao, T. Lu, X. Li, X. J. Liu and H. Zhao, Adv. Mater. Technol., 2022, 7, 2200595.
  48. M. Yang, F. Sun, X. Hu and F. Sun, ACS Appl. Mater. Interfaces, 2023, 15, 44294.
  49. J. K. Choe, J. Kim, H. Song, J. Bae and J. Kim, Nat. Commun., 2023, 14, 3942.
  50. G. Schwartz, B. C. K. Tee, J. Mei, A. L. Appleton, D. H. Kim, H. Wang and Z. Bao, Nat. Commun., 2013, 4, 1859.
  51. H. Shim, K. Sim, F. Ershad, P. Yang, A. Thukral, Z. Rao, H.-J. Kim, Y. Liu, X. Wang, G. Gu, L. Gao, X. Wang, Y. Chai and C. Yu, Sci. Adv., 2019, 5, eaax4961.
  52. F. Liu, S. Deswal, A. Christou, M. Shojaei Baghini, R. Chirila, D. Shakthivel, M. Chakraborty and R. Dahiya, Sci. Robot., 2022, 7, eabl7286.
  53. S. Zhao and R. Zhu, Adv. Mater. Technol., 2017, 2, 1700183.
  54. G. Li, S. Liu, L. Wang and R. Zhu, Sci. Robot., 2020, 5, eabc8134.
  55. Y. Yu, J. Li, S. A. Solomon, J. Min, J. Tu, W. Guo, C. Xu, Y. Song and W. Gao, Sci. Robot., 2022, 7, eabn0495.
  56. R. K. Mishra, A. Barfidokht, A. Karajic, J. R. Sempionatto, J. Wang and J. Wang, Sens. Actuators, B, 2018, 273, 966.
  57. J. R. Sempionatto, R. K. Mishra, A. Martín, G. Tang, T. Nakagawa, X. Lu, A. S. Campbell, K. M. Lyu and J. Wang, ACS Sens., 2017, 2, 1531.
  58. R. K. Mishra, L. J. Hubble, A. Martín, R. Kumar, A. Barfidokht, J. Kim, M. M. Musameh, I. L. Kyratzis and J. Wang, ACS Sens., 2017, 2, 553.
  59. M. Amit, R. K. Mishra, Q. Hoang, A. M. Galan, J. Wang and T. N. Ng, Mater. Horiz., 2019, 6, 604.
  60. J. Li, Y. Liu, L. Yuan, B. Zhang, E. S. Bishop, K. Wang, J. Tang, Y.-Q. Zheng, W. Xu, S. Niu, L. Beker, T. L. Li, G. Chen, M. Diyaolu, A.-L. Thomas, V. Mottini, J. B. H. Tok, J. C. Y. Dunn, B. Cui, S. P. Paşca, Y. Cui, A. Habtezion, X. Chen and Z. Bao, Nature, 2022, 606, 94.
  61. S. Zhang, J. Wang, K. Hayashi and F. Sassa, presented in part at the 2021 IEEE Sensors, 2021.
  62. K. B. Justus, T. Hellebrekers, D. D. Lewis, A. Wood, C. Ingham, C. Majidi, P. R. LeDuc and C. Tan, Sci. Robot., 2019, 4, eaax0765.
  63. A. L. Furst and M. B. Francis, Chem. Rev., 2018, 119, 700.
  64. M. Altindis, E. Puca and L. Shapo, Travel Med. Infect. Dis., 2022, 50, 102459.
  65. S. Sargazi, I. Fatima, M. Hassan Kiani, V. Mohammadzadeh, R. Arshad, M. Bilal, A. Rahdar, A. M. Díez-Pascual and R. Behzadmehr, Int. J. Biol. Macromol., 2022, 206, 115.
  66. S. S. Kwak, S. Yoo, R. Avila, H. U. Chung, H. Jeong, C. Liu, J. L. Vogl, J. Kim, H. J. Yoon, Y. Park, H. Ryu, G. Lee, J. Kim, J. Koo, Y. S. Oh, S. Kim, S. Xu, Z. Zhao, Z. Xie, Y. Huang and J. A. Rogers, Adv. Mater., 2021, 33, e2103974.
  67. S. Sharma, G. B. Pradhan, S. Jeong, S. Zhang, H. Song and J. Y. Park, ACS Nano, 2023, 17, 8355.
  68. B. Shih, D. Shah, J. Li, T. G. Thuruthel, Y. L. Park, F. Iida, Z. Bao, R. Kramer-Bottiglio and M. T. Tolley, Sci. Robot., 2020, 5, eaaz9239.
  69. L. Dejace, N. Laubeuf, I. Furfaro and S. P. Lacour, Adv. Intell. Syst., 2019, 1, 1900079.
  70. S. Sundaram, P. Kellnhofer, Y. Li, J.-Y. Zhu, A. Torralba and W. Matusik, Nature, 2019, 569, 698.
  71. S. Lim, D. Son, J. Kim, Y. B. Lee, J.-K. Song, S. Choi, D. J. Lee, J. H. Kim, M. Lee, T. Hyeon and D.-H. Kim, Adv. Funct. Mater., 2015, 25, 375.
  72. L. Tang, J. Shang and X. Jiang, Sci. Adv., 2021, 7, eabe3778.
  73. J. Gaveau, S. Grospretre, B. Berret, D. E. Angelaki and C. Papaxanthis, Sci. Adv., 2021, 7, eabf7800.
  74. H. Su, W. Qi, Z. Li, Z. Chen, G. Ferrigno and E. D. Momi, IEEE Trans. Artif. Intell., 2021, 2, 404.
  75. L. Tian, B. Zimmerman, A. Akhtar, K. J. Yu, M. Moore, J. Wu, R. J. Larsen, J. W. Lee, J. Li, Y. Liu, B. Metzger, S. Qu, X. Guo, K. E. Mathewson, J. A. Fan, J. Cornman, M. Fatina, Z. Xie, Y. Ma, J. Zhang, Y. Zhang, F. Dolcos, M. Fabiani, G. Gratton, T. Bretl, L. J. Hargrove, P. V. Braun, Y. Huang and J. A. Rogers, Nat. Biomed. Eng., 2019, 3, 194.
  76. D. Leonardis, M. Barsotti, C. Loconsole, M. Solazzi, M. Troncossi, C. Mazzotti, V. P. Castelli, C. Procopio, G. Lamola, C. Chisari, M. Bergamasco and A. Frisoli, IEEE Trans. Haptics, 2015, 8, 140.
  77. A. Furui, S. Eto, K. Nakagaki, K. Shimada, G. Nakamura, A. Masuda, T. Chin and T. Tsuji, Sci. Robot., 2019, 4, eaaw6339.
  78. B. J. Edelman, J. Meng, D. Suma, C. Zurn, E. Nagarajan, B. S. Baxter, C. C. Cline and B. He, Sci. Robot., 2019, 4, eaaw6844.
  79. J. R. Millan, F. Renkens, J. Mourino and W. Gerstner, IEEE. Trans. Biomed. Eng., 2004, 51, 1026.
  80. L. Bi, X. A. Fan and Y. Liu, IEEE Trans. Hum. Mach. Syst., 2013, 43, 161.
  81. S. R. Soekadar, M. Witkowski, C. Gómez, E. Opisso, J. Medina, M. Cortese, M. Cempini, M. C. Carrozza, L. G. Cohen, N. Birbaumer and N. Vitiello, Sci. Robot., 2016, 1, eaag3296.
  82. A. Kilicarslan, S. Prasad, R. G. Grossman and J. L. Contreras-Vidal, Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2013, 2013, 5606.
  83. D. Saravanakumar and R. M. Ramasubba, Biomed. Signal Process. Control, 2020, 59, 101898.
  84. L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral, J. Vogel, S. Haddadin, J. Liu, S. S. Cash, P. van der Smagt and J. P. Donoghue, Nature, 2012, 485, 372.
  85. A. B. Ajiboye, F. R. Willett, D. R. Young, W. D. Memberg, B. A. Murphy, J. P. Miller, B. L. Walter, J. A. Sweet, H. A. Hoyen, M. W. Keith, P. H. Peckham, J. D. Simeral, J. P. Donoghue, L. R. Hochberg and R. F. Kirsch, Lancet, 2017, 389, 1821.
  86. J. D. Simeral, S. P. Kim, M. J. Black, J. P. Donoghue and L. R. Hochberg, J. Neural Eng., 2011, 8, 025027.
  87. M. J. Vansteensel, E. G. M. Pels, M. G. Bleichner, M. P. Branco, T. Denison, Z. V. Freudenburg, P. Gosselaar, S. Leinders, T. H. Ottens, M. A. Van Den Boom, P. C. Van Rijen, E. J. Aarnoutse and N. F. Ramsey, N. Engl. J. Med., 2016, 375, 2060.
  88. J. Li, F. Ye, S. Vaziri, M. Muhammed, M. C. Lemme and M. Östling, Adv. Mater., 2013, 25, 3985.
  89. Y. Liu, B. Zhang, Q. Xu, Y. Hou, S. Seyedin, S. Qin, G. G. Wallace, S. Beirne, J. M. Razal and J. Chen, Adv. Funct. Mater., 2018, 28, 1706592.
  90. H. Jang, K. Sel, E. Kim, S. Kim, X. Yang, S. Kang, K.-H. Ha, R. Wang, Y. Rao, R. Jafari and N. Lu, Nat. Commun., 2022, 13, 6604.
  91. J. Byun, Y. Lee, J. Yoon, B. Lee, E. Oh, S. Chung, T. Lee, K.-J. Cho, J. Kim and Y. Hong, Sci. Robot., 2018, 3, eaas9020.
  92. W. Li, S. Yang and A. Shamim, npj Flexible Electron., 2019, 3, 13.
  93. H. Lee, T. K. Choi, Y. B. Lee, H. R. Cho, R. Ghaffari, L. Wang, H. J. Choi, T. D. Chung, N. Lu, T. Hyeon, S. H. Choi and D. H. Kim, Nat. Nanotechnol., 2016, 11, 566.
  94. Z. Zhang, W. Wang, Y. Jiang, Y.-X. Wang, Y. Wu, J.-C. Lai, S. Niu, C. Xu, C.-C. Shih, C. Wang, H. Yan, L. Galuska, N. Prine, H.-C. Wu, D. Zhong, G. Chen, N. Matsuhisa, Y. Zheng, Z. Yu, Y. Wang, R. Dauskardt, X. Gu, J. B. H. Tok and Z. Bao, Nature, 2022, 603, 624.
  95. S. Leimeng, X. Wang, K. Zhang, J. Zou and Q. Zhang, Nano Energy, 2016, 22, 11.
  96. Y. Bai, H. Wang, Y. Xue, Y. Pan, J.-T. Kim, X. Ni, T.-L. Liu, Y. Yang, M. Han, Y. Huang, J. A. Rogers and X. Ni, Nature, 2022, 609, 701.
  97. H. Joo, Y. Lee, J. Kim, J.-S. Yoo, S. Yoo, S. Kim, A. K. Arya, S. Kim, S. H. Choi, N. Lu, H. S. Lee, S. Kim, S.-T. Lee and D.-H. Kim, Sci. Adv., 2021, 7, eabd4639.
  98. K. Sim, Z. Rao, Z. Zou, F. Ershad, J. Lei, A. Thukral, J. Chen, Q.-A. Huang, J. Xiao and C. Yu, Sci. Adv., 2019, 5, eaav9653.
  99. S. Gandla, H. Chae, H. J. Kwon, Y. Won, H. Park, S. Lee, J. Song, S. Baek, Y. D. Hong, D. Kim and S. Kim, IEEE Trans. Ind. Electron., 2022, 69, 4245.
  100. M. A. Brown, K. M. Zappitelli, L. Singh, R. C. Yuan, M. Bemrose, V. Brogden, D. J. Miller, M. C. Smear, S. F. Cogan and T. J. Gardner, Nat. Commun., 2023, 14, 3610.
  101. Y. Yu, J. Nassar, C. Xu, J. Min, Y. Yang, A. Dai, R. Doshi, A. Huang, Y. Song, R. Gehlhar, A. D. Ames and W. Gao, Sci. Robot., 2020, 5, eaaz7946.
  102. Y. Lee, M. J. Low, D. Yang, H. K. Nam, T.-S. D. Le, S. E. Lee, H. Han, S. Kim, Q. H. Vu, H. Yoo, H. Yoon, J. Lee, S. Sandeep, K. Lee, S.-W. Kim and Y.-J. Kim, Light: Sci. Appl., 2023, 12, 146.
  103. J. Lin, Z. Peng, Y. Liu, F. Ruiz-Zepeda, R. Ye, E. L. G. Samuel, M. J. Yacaman, B. I. Yakobson and J. M. Tour, Nat. Commun., 2014, 5, 5714.
  104. Z. Huang, Y. Hao, Y. Li, H. Hu, C. Wang, A. Nomoto, T. Pan, Y. Gu, Y. Chen, T. Zhang, W. Li, Y. Lei, N. Kim, C. Wang, L. Zhang, J. W. Ward, A. Maralani, X. Li, M. F. Durstock, A. Pisano, Y. Lin and S. Xu, Nat. Electron., 2018, 1, 473.
  105. P. Gutruf, V. Krishnamurthi, A. Vázquez-Guardado, Z. Xie, A. Banks, C.-J. Su, Y. Xu, C. R. Haney, E. A. Waters, I. Kandela, S. R. Krishnan, T. Ray, J. P. Leshock, Y. Huang, D. Chanda and J. A. Rogers, Nat. Electron., 2018, 1, 652.
  106. S. Ham, M. Kang, S. Jang, J. Jang, S. Choi, T.-W. Kim and G. Wang, Sci. Adv., 2020, 6, eaba1178.
  107. X. Qing, Y. Wang, Y. Zhang, X. Ding, W. Zhong, D. Wang, W. Wang, Q. Liu, K. Liu, M. Li and Z. Lu, ACS Appl. Mater. Interfaces, 2019, 11, 13105.
  108. X. Shi, Y. Zuo, P. Zhai, J. Shen, Y. Yang, Z. Gao, M. Liao, J. Wu, J. Wang, X. Xu, Q. Tong, B. Zhang, B. Wang, X. Sun, L. Zhang, Q. Pei, D. Jin, P. Chen and H. Peng, Nature, 2021, 591, 240.
  109. W. Yan, G. Noel, G. Loke, E. Meiklejohn, T. Khudiyev, J. Marion, G. Rui, J. Lin, J. Cherston, A. Sahasrabudhe, J. Wilbert, I. Wicaksono, R. W. Hoyt, A. Missakian, L. Zhu, C. Ma, J. Joannopoulos and Y. Fink, Nature, 2022, 603, 616.
  110. B. Ying, R. Z. Chen, R. Zuo, J. Li and X. Liu, Adv. Funct. Mater., 2021, 31, 2104665.

Footnote

These authors contributed equally to this work.

This journal is © The Royal Society of Chemistry 2024