A Transfer Learning Model for Gesture Recognition Based on The Deep Feature Extracted by CNN

2021 
Surface electromyography (sEMG)-based hand gesture recognition is prevalent in human-computer interface (HCI) systems. However, recognition models generalize poorly in cross-subject and cross-day scenarios. Transfer learning, which applies a pre-trained model to a new task, has demonstrated its effectiveness in solving this kind of problem. In this regard, this paper first proposes a multi-scale kernel convolutional neural network (MKCNN) model to extract and fuse multi-scale features of multi-channel sEMG signals. Building on the MKCNN, a transfer learning model named TL-MKCNN combines the MKCNN with its Siamese network through a custom distribution normalization module (DNM) and a distribution alignment module (DAM) to realize domain adaptation. The DNM clusters the deep features extracted from different domains around their category center points embedded in the feature space, and the DAM further aligns the overall distribution of the deep features across domains. The MKCNN and TL-MKCNN models are tested on various benchmark databases to verify the effectiveness of the transfer learning framework. The experimental results show that, on the benchmark database NinaPro DB6, TL-MKCNN achieves average accuracies of 97.22% within-session, 74.48% cross-subject, and 90.30% cross-day, which are 4.31%, 11.58%, and 5.51% higher than those of the MKCNN model in the within-session, cross-subject, and cross-day settings, respectively. Compared with state-of-the-art works, TL-MKCNN obtains 13.38% and 37.88% accuracy improvements on cross-subject and cross-day, respectively.
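The multi-scale kernel idea described above, parallel convolutions with different kernel sizes whose outputs are fused into one feature vector, can be sketched as follows. This is a toy NumPy illustration, not the paper's implementation: the kernel sizes, random weights, and global max-pooling are assumptions chosen for brevity.

```python
import numpy as np

def conv1d_valid(x, kernel):
    # 'valid'-mode 1-D cross-correlation of a single sEMG channel
    return np.correlate(x, kernel, mode="valid")

def multi_scale_features(x, kernel_sizes=(3, 5, 7), seed=0):
    """Extract features from one channel at several kernel scales and
    fuse them by concatenation (hypothetical simplification of MKCNN)."""
    rng = np.random.default_rng(seed)
    feats = []
    for k in kernel_sizes:
        kern = rng.standard_normal(k)   # stand-in for learned conv weights
        y = conv1d_valid(x, kern)       # one branch per kernel scale
        feats.append(y.max())           # global max-pool within the branch
    return np.array(feats)              # fused multi-scale descriptor

# Toy single-channel "sEMG" window of 200 samples
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
descriptor = multi_scale_features(signal)
print(descriptor.shape)  # one fused value per kernel scale: (3,)
```

In the actual model each branch would hold learned filters over all sEMG channels, but the fusion pattern, concatenating per-scale responses, is the same.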