Multimodal deep learning network based hand ADLs tasks classification for prosthetics control

2017 
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness achieved in scientific research is still insufficient for many activities of daily living (ADLs). The difficulty stems from the limited sEMG signals available in clinical practice, which are susceptible to interference; hand movement data therefore need to be fused with sEMG to improve classification robustness. Human hand ADLs consist of complex sequences of finger-joint movements, and capturing their temporal dynamics is fundamental for successful hand prosthetic control. Current research suggests that recurrent neural networks (RNNs) are well suited to automating feature extraction in time-series domains, and that dynamic movement primitives (DMPs) can provide a representation of hand kinematic primitives. We design a multimodal deep framework for inter-subject ADL recognition which: (i) implements heterogeneous sensor fusion; (ii) does not require hand-crafted features; and (iii) incorporates a dynamics model of the hand ADL control task. We evaluate our framework on the Ninapro datasets. Results show that our framework outperforms competing single-modality deep networks and some previously reported results.
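The DMP representation mentioned in the abstract is not specified here, but the standard formulation (Ijspeert et al.) models each joint trajectory as a point attractor perturbed by a learned forcing term, driven by a canonical phase variable x. For goal g and start y_0:

    \tau \dot{z} = \alpha_z \left( \beta_z (g - y) - z \right) + f(x), \qquad \tau \dot{y} = z
    \tau \dot{x} = -\alpha_x x
    f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\, x\, (g - y_0), \qquad \psi_i(x) = \exp\!\left(-h_i (x - c_i)^2\right)

The weights w_i fitted per joint trajectory then serve as a compact kinematic-primitive feature.

As a minimal sketch of the kind of heterogeneous sensor fusion the abstract describes, the PyTorch model below runs one recurrent encoder per modality and concatenates the final hidden states before classification (late fusion). All names, layer sizes, and channel counts (12 sEMG electrodes, 22 glove sensors, as in some Ninapro setups) are illustrative assumptions, not the paper's reported architecture.

    import torch
    import torch.nn as nn

    class MultimodalGRUClassifier(nn.Module):
        # Hypothetical two-stream GRU: one encoder per sensor modality,
        # fused late by concatenating the final hidden states.
        def __init__(self, n_emg=12, n_kin=22, hidden=64, n_classes=23):
            super().__init__()
            self.emg_rnn = nn.GRU(n_emg, hidden, batch_first=True)  # sEMG stream
            self.kin_rnn = nn.GRU(n_kin, hidden, batch_first=True)  # kinematics stream
            self.head = nn.Linear(2 * hidden, n_classes)            # ADL class logits

        def forward(self, emg, kin):
            # emg: (batch, time, n_emg); kin: (batch, time, n_kin)
            _, h_emg = self.emg_rnn(emg)  # final hidden state, shape (1, batch, hidden)
            _, h_kin = self.kin_rnn(kin)
            fused = torch.cat([h_emg[-1], h_kin[-1]], dim=-1)  # (batch, 2*hidden)
            return self.head(fused)

    # Illustrative usage: a batch of 8 windows, 200 time steps each.
    model = MultimodalGRUClassifier()
    logits = model(torch.randn(8, 200, 12), torch.randn(8, 200, 22))
    print(logits.shape)  # torch.Size([8, 23])

One design note on this sketch: late fusion keeps each modality's encoder independent, so the sEMG stream can be used alone when kinematic sensors are unavailable; the DMP weights above could equally be appended as a static feature vector alongside the recurrent outputs.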