Virtual Hand Training Platform Controlled Through Online Recognition of Motion Intention

2019 
Patients with limb amputations or congenital limb defects use prosthetic devices that demand considerable cognitive and physical effort to control, especially during the rehabilitation and training phases; this effort is one of the most frequent reasons patients abandon their prostheses. This paper presents a platform that trains patients to control prostheses using a virtual robotic hand. The virtual environment combines the Unity graphics engine with a custom-built bracelet that acquires surface electromyography (sEMG) signals from the patient's arm. The platform computes two features from the sEMG signals: the Absolute value of the Summation of Square roots (ASS) and the Mean value of the Square Root (MSR). These features are concatenated and passed through a multi-layer neural network trained to detect four movement intentions generated by the patient (rest, open hand, power grip, and precision grip). Finally, the classifier outputs are used to control the joint positions of a virtual robotic hand simulated in Unity. Experimental results show a classification accuracy of 86.6% for a patient with a congenital amputation of the left arm.
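The abstract names the two features but does not give their formulas. A minimal sketch of the feature-extraction step is shown below, assuming the common readings ASS = |Σᵢ √|xᵢ|| and MSR = (1/N) Σᵢ √|xᵢ| over a window of N samples; the function names and the per-channel concatenation layout are illustrative, not taken from the paper.

```python
import numpy as np

def ass(window):
    """Absolute value of the Summation of Square roots (ASS).

    Assumed form |sum_i sqrt(|x_i|)|; the abstract does not spell out
    the formula, so this is only an illustrative reading.
    """
    return np.abs(np.sum(np.sqrt(np.abs(window))))

def msr(window):
    """Mean value of the Square Root (MSR), assumed as mean_i sqrt(|x_i|)."""
    return np.mean(np.sqrt(np.abs(window)))

def feature_vector(emg_window):
    """Concatenate ASS and MSR across channels, as the abstract describes.

    emg_window: array of shape (n_channels, n_samples) holding one
    window of raw sEMG samples from the bracelet.
    """
    return np.concatenate([(ass(ch), msr(ch)) for ch in emg_window])
```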
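The classification-to-control step could then look like the sketch below. It uses scikit-learn's MLPClassifier as a stand-in for the authors' multi-layer network, and the joint targets, channel count, and training data are all hypothetical placeholders; the real Unity hand rig and network architecture are not described in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

CLASSES = ["rest", "open_hand", "power_grip", "precision_grip"]

# Hypothetical per-intention joint targets (radians) for five fingers;
# the actual Unity rig and its joint conventions are not given.
JOINT_TARGETS = {
    "rest":           np.full(5, 0.3),                       # slightly curled
    "open_hand":      np.zeros(5),                           # fully extended
    "power_grip":     np.full(5, 1.4),                       # all fingers flexed
    "precision_grip": np.array([1.2, 1.2, 0.2, 0.2, 0.2]),   # thumb-index pinch
}

# Placeholder training data: in the real system each row would be the
# concatenated ASS/MSR features from one labelled sEMG window
# (e.g. 8 channels x 2 features = 16 values).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = rng.choice(CLASSES, size=200)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

def joint_command(features):
    """Map one feature vector to joint targets for the simulated hand."""
    intention = clf.predict(features.reshape(1, -1))[0]
    return JOINT_TARGETS[intention]

# Example: classify one (synthetic) feature vector and emit joint targets.
print(joint_command(rng.normal(size=16)))
```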