Machine learning-assisted flexible dual modal sensor for multi-sensing detection and target object recognition in the grasping process
Abstract
Multi-modal information is important for the grasping process of robotic fingers. Simultaneous bimodal perception of non-contact proximity distance and contact pressure stimuli is widely desired in artificial-intelligence electronics such as electronic skin and health monitoring. However, independently detecting and processing different signals for target recognition without cross-coupling remains a challenge. Here, a machine learning-assisted flexible dual modal sensor (FDMS) was developed as robotic electronic skin to measure proximity distance and contact pressure simultaneously, providing full perception during grasping. The FDMS has a multi-layer structure comprising polydimethylsiloxane (PDMS) film, conductive silver paste, silicone rubber, and hydrogel film. Conductive silver coils enabled proximity perception through changes in capacitance, while a single-electrode-mode triboelectric nanogenerator (TENG) sensor, based on the triboelectric effect and electrostatic induction, measured contact pressure. The AlexNet neural network was adopted to recognize target material and hardness from FDMS signals during robot grasping, achieving recognition rates of 93.49% for different materials and 92.22% for different hardness values. Compared to other algorithms, AlexNet performed best for target material recognition, which would improve human–robot interaction ability. The robotic electronic skin exhibited dual perception feedback in both proximity and contact sensing with excellent flexibility and stability, showing great potential for human–robot interaction, soft robotics, and biomedical applications.
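The capacitive proximity channel described above can be illustrated with a minimal sketch. This uses a simple parallel-plate approximation of the electrode–object gap; the plate area, permittivity, and distances below are illustrative assumptions, not the paper's actual sensor parameters.

```python
# Hedged sketch: parallel-plate model of how a capacitive channel
# like the FDMS's silver-coil electrode could map proximity to
# capacitance. Geometry values are illustrative assumptions only.

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def proximity_capacitance(distance_m, plate_area_m2=1e-4, eps_r=1.0):
    """Approximate capacitance between the electrode and an
    approaching object modeled as a grounded parallel plate."""
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return EPS0 * eps_r * plate_area_m2 / distance_m


# As the object approaches, capacitance rises, signaling proximity.
far = proximity_capacitance(0.05)    # object 5 cm away (assumed)
near = proximity_capacitance(0.005)  # object 5 mm away (assumed)
print(near > far)  # capacitance increases as the gap closes
```

In the real device the coil geometry is far from an ideal parallel plate, so the readout electronics would be calibrated against measured distance rather than this closed-form model; the sketch only shows the monotonic distance-to-capacitance mapping the abstract relies on.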