Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors
PubDate: Aug 2020
Teams: Nanyang Technological University; University of Technology Sydney
Writers: Ming Wang, Zheng Yan, Ting Wang, Pingqiang Cai, Siyu Gao, Yi Zeng, Changjin Wan, Hong Wang, Liang Pan, Jiancan Yu, Shaowu Pan, Ke He, Jie Lu and Xiaodong Chen
Abstract
Gesture recognition using machine-learning methods is valuable in the development of advanced cybernetics, robotics and healthcare systems, and typically relies on images or videos. To improve recognition accuracy, such visual data can be combined with data from other sensors, but this approach, which is termed data fusion, is limited by the quality of the sensor data and the incompatibility of the datasets. Here, we report a bioinspired data fusion architecture that can perform human gesture recognition by integrating visual data with somatosensory data from skin-like stretchable strain sensors made from single-walled carbon nanotubes. The learning architecture uses a convolutional neural network for visual processing and then implements a sparse neural network for sensor data fusion and recognition at the feature level. Our approach can achieve a recognition accuracy of 100% and maintain recognition accuracy in non-ideal conditions where images are noisy and under- or over-exposed. We also show that our architecture can be used for robot navigation via hand gestures, with an error of 1.7% under normal illumination and 3.3% in the dark.
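The abstract describes a two-stage architecture: a convolutional neural network processes the visual stream, and its features are fused with strain-sensor (somatosensory) features in a sparse neural network before classification. Below is a minimal sketch, not the authors' code, of this kind of feature-level fusion in PyTorch; the input sizes (64x64 grayscale images, five strain-sensor channels), layer widths, and gesture-class count are illustrative assumptions, and a plain dense fusion head stands in for the paper's sparse network.

```python
# Sketch of feature-level fusion of visual and somatosensory (strain-sensor) data.
# Assumptions: 64x64 grayscale gesture images, 5 strain-sensor channels, 10 classes.
import torch
import torch.nn as nn

class VisualSomatosensoryFusion(nn.Module):
    def __init__(self, num_strain_channels=5, num_classes=10):
        super().__init__()
        # Convolutional front end for the visual stream (stands in for the
        # CNN visual-processing stage described in the abstract).
        self.visual = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
        )
        # Small encoder for the strain-sensor (somatosensory) stream.
        self.somato = nn.Sequential(
            nn.Linear(num_strain_channels, 32), nn.ReLU(),
        )
        # Fusion head on the concatenated features; the paper's sparse fusion
        # network is replaced here by a plain dense classifier for brevity.
        self.fusion = nn.Sequential(
            nn.Linear(64 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, image, strain):
        v = self.visual(image)    # (batch, 64) visual features
        s = self.somato(strain)   # (batch, 32) somatosensory features
        return self.fusion(torch.cat([v, s], dim=1))  # feature-level fusion

model = VisualSomatosensoryFusion()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 5))
print(logits.shape)  # torch.Size([4, 10])
```

Fusing at the feature level, rather than averaging the two classifiers' outputs, lets the fusion layers learn cross-modal interactions, which is the behaviour the abstract credits for maintaining accuracy on noisy or badly exposed images. Sparsity in the fusion network could be encouraged during training, for example with an L1 penalty on the fusion weights, though the paper's exact construction is not reproduced here.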