Real-Time Gesture Recognition Using 3D Sensory Data and a Light Convolutional Neural Network

2019 
In this work, we propose an end-to-end system that provides both hardware and software support for real-time gesture recognition. We apply a convolutional neural network to 3D rotation data of finger joints, rather than to vision-based data, in order to extract the high-level intentions (features) users are trying to convey. A pair of customized motion-capture gloves with inertial measurement unit (IMU) sensors is designed to obtain gestural datasets for network training and real-time recognition. A network reduction strategy is developed to reduce the network's complexity in both the depth and width dimensions while maintaining high recognition accuracy in the resulting classification model. The classification model classifies new data samples by scanning a real-time stream of joint rotations while the gloves are in use. Our evaluation results reveal the relationships between the network reduction hyperparameters and recognition accuracy; based on this evaluation, we select an appropriate version of the light network and achieve 98% accuracy.
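The abstract describes a light CNN whose depth (number of convolutional layers) and width (number of filters) are reduction hyperparameters, applied to windows of joint-rotation data streamed from the gloves. The paper does not give the exact architecture here, so the following PyTorch sketch is only illustrative: the input shape (15 joints × 3 rotation angles, a 32-step window), the number of gesture classes, and the layer choices are all assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Hypothetical shapes (not from the paper): 15 finger joints x 3 rotation
# angles per joint = 45 input channels, a 32-timestep sliding window,
# and 10 gesture classes.
N_CHANNELS = 15 * 3
WINDOW = 32
N_CLASSES = 10

class LightGestureCNN(nn.Module):
    """A deliberately shallow and narrow 1D CNN over a joint-rotation window.

    `depth` and `width` stand in for the paper's network reduction
    hyperparameters: fewer conv layers and fewer filters per layer.
    """
    def __init__(self, width=16, depth=2):
        super().__init__()
        layers, in_ch = [], N_CHANNELS
        for _ in range(depth):  # depth reduction: fewer conv blocks
            layers += [
                nn.Conv1d(in_ch, width, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool1d(2),  # halves the temporal length
            ]
            in_ch = width  # width reduction: fewer filters per layer
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Linear(width * (WINDOW // 2**depth), N_CLASSES)

    def forward(self, x):  # x: (batch, channels, timesteps)
        return self.classifier(self.features(x).flatten(1))

# Classify one window taken from a (simulated) real-time rotation stream.
model = LightGestureCNN()
window = torch.randn(1, N_CHANNELS, WINDOW)  # stand-in for glove IMU data
logits = model(window)
pred = logits.argmax(dim=1)
```

In this sketch, shrinking `depth` or `width` directly shrinks the parameter count, which is the trade-off the evaluation explores against recognition accuracy.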