Issue 9, 2025

Machine learning-assisted flexible dual modal sensor for multi-sensing detection and target object recognition in the grasping process

Abstract

Multi-modal sensory information is important for the grasping process of robotic fingers. Simultaneous bimodal perception of non-contact proximity distance and contact pressure stimuli is widely desired in artificial intelligence electronics such as electronic skin and health monitoring, yet independently detecting and processing the two signals for target recognition without cross-coupling remains a challenge. A machine learning-assisted flexible dual modal sensor (FDMS) was therefore developed for robotic electronic skin, measuring proximity distance and contact pressure simultaneously to provide full perception during grasping. The FDMS has a multi-layer structure consisting of a polydimethylsiloxane film, conductive silver paste, silicone rubber, and a hydrogel film. Conductive silver coils enable proximity perception through changes in capacitance, while a single-electrode-mode triboelectric nanogenerator (TENG) sensor, based on the triboelectric effect and electrostatic induction, measures contact pressure. The AlexNet neural network was adopted for target material and hardness recognition from FDMS signals during robot grasping, achieving recognition rates of 93.49% for different materials and 92.22% for different hardness values; its performance surpassed that of the other algorithms tested, which would improve human–robot interaction ability. The robot electronic skin exhibited dual perception feedback in proximity and contact sensing with excellent flexibility and stability, showing great potential for human–robot interaction, soft robotics, and biomedical applications.
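The capacitive proximity channel described above relies on the sensor's capacitance varying with the distance to an approaching object. The paper's coil electrode geometry is more complex, but as a minimal sketch of the principle, an idealized parallel-plate model (an assumption for illustration, not the authors' device model) captures how a measured capacitance can be inverted to estimate proximity distance:

```python
# Idealized parallel-plate model of capacitive proximity sensing.
# Assumption: C = eps0 * eps_r * A / d; the real coil-electrode
# capacitance of the FDMS follows a more complex geometry.

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def capacitance(area_m2: float, distance_m: float, eps_r: float = 1.0) -> float:
    """Capacitance of an ideal parallel-plate capacitor (farads)."""
    return EPS0 * eps_r * area_m2 / distance_m


def estimate_distance(c_measured: float, area_m2: float, eps_r: float = 1.0) -> float:
    """Invert the model to recover the proximity distance (metres)."""
    return EPS0 * eps_r * area_m2 / c_measured


# As an object approaches, the effective gap shrinks and capacitance rises,
# which is the signal the proximity channel reads out:
c_far = capacitance(1e-4, 10e-3)   # object 10 mm away
c_near = capacitance(1e-4, 1e-3)   # object 1 mm away
assert c_near > c_far
```

In practice the device would be calibrated against known distances rather than relying on a closed-form geometry, but the monotonic capacitance–distance relationship is what makes the non-contact channel separable from the TENG contact-pressure channel.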



Article information

Article type
Paper
Submitted
07 Jan 2025
Accepted
09 Mar 2025
First published
19 Mar 2025

Lab Chip, 2025, 25, 2247–2255


W. Dong, K. Sheng, C. Chen and X. Qiu, Lab Chip, 2025, 25, 2247 DOI: 10.1039/D5LC00020C

