Lower Limb Motion Recognition by Integrating Multi-modal Features Based on Machine Learning Method

2020 
Wars, diseases, and natural disasters have left large numbers of people with physical disabilities, while artificial limbs can help disabled patients recover certain physical functions so that they can integrate into society as ordinary people. One of the key techniques in artificial limbs is the recognition of limb movement intention, which remains a challenging problem in related research. In this study, we propose a reliable method that uses multi-modal sources to recognize the intention of limb movement. First, we collected four signal sources, namely EMG, acceleration, knee angle, and foot pressure, under various movement conditions of the participant. We then extracted the relevant features from the four signals and used Relief-F to filter the multi-modal features, screening out irrelevant or redundant ones. Finally, we compared the performance of three classifiers, LDA, SVM, and LM-BP, to find the algorithm best suited to this problem. The results show that with the LDA classifier, the average accuracy of movement intention recognition reaches 92.46%, while the time consumption is greatly reduced compared to the other two models. These results point to a feasible approach to the continuous recognition of limb movement intention.
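The pipeline described in the abstract (windowed feature extraction from the four modalities, Relief-F feature screening, and a comparison of three classifiers) could be prototyped roughly as follows. This is a minimal sketch, not the authors' code: the synthetic data, the window length, the time-domain features (MAV, RMS, waveform length), the simplified Relief-F scorer, and the use of scikit-learn's MLPClassifier as a stand-in for an LM-BP network are all assumptions introduced here for illustration.

```python
"""Minimal sketch of the multi-modal recognition pipeline (assumptions noted above)."""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier   # stand-in for LM-BP
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline


def window_features(window):
    """Common time-domain features per channel: MAV, RMS, waveform length."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, rms, wl])


def relieff_scores(X, y, n_neighbors=5):
    """Simplified Relief-F: reward features that differ from near misses
    (other classes) and penalize features that differ from near hits."""
    Xs = StandardScaler().fit_transform(X)
    n, d = Xs.shape
    scores = np.zeros(d)
    for i in range(n):
        dist = np.linalg.norm(Xs - Xs[i], axis=1)
        dist[i] = np.inf
        same = y == y[i]
        hits = np.argsort(dist + np.where(same, 0.0, np.inf))[:n_neighbors]
        misses = np.argsort(dist + np.where(same, np.inf, 0.0))[:n_neighbors]
        scores += np.abs(Xs[misses] - Xs[i]).mean(axis=0)
        scores -= np.abs(Xs[hits] - Xs[i]).mean(axis=0)
    return scores / n


# Synthetic stand-in data: 300 windows, 200 samples per window,
# 12 channels covering EMG, acceleration, knee angle and foot pressure.
rng = np.random.default_rng(0)
raw = rng.standard_normal((300, 200, 12))
y = rng.integers(0, 5, size=300)                    # 5 hypothetical movement classes
X = np.array([window_features(w) for w in raw])     # 36 features per window

# Keep the top-ranked features according to the Relief-F style scores.
keep = np.argsort(relieff_scores(X, y))[::-1][:20]
X_sel = X[:, keep]

# Compare the three classifiers from the abstract (MLP used in place of LM-BP).
models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "BP-NN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
}
for name, model in models.items():
    acc = cross_val_score(model, X_sel, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

On real gait data the reported pattern would be expected to emerge from the final loop: LDA trains and predicts far faster than the SVM and the neural network, which is the trade-off the abstract highlights alongside its 92.46% average accuracy.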