Effect of motor learning on prosthetic finger control with an intuitive myoelectric decoder

2019 
Machine learning-based myoelectric control is regarded as an intuitive paradigm because the mapping it creates between muscle co-activation patterns and prosthesis movements aims to simulate the physiological pathways of the human arm. Nevertheless, there is evidence that closed-loop interaction with a classification-based interface results in user adaptation, which leads to performance improvements with experience. Recently, the focus has shifted towards continuous prosthesis control, yet little is known about whether and how motor learning affects myoelectric control performance in dexterous, intuitive tasks. We investigate motor learning aspects of independent finger position control by conducting real-time experiments with 10 able-bodied and two transradial amputee subjects. We demonstrate that, despite the use of an intuitive decoder, learning still occurs and leads to significant performance improvements. We argue that this is due to the lack of a fully natural control scheme, caused mainly by differences between the anatomy of human and artificial hands, inaccuracies in movement-intent decoding, and the absence of proprioceptive feedback. Finally, we extend previous work on classification-based and continuous wrist control by verifying that offline analyses cannot reliably predict real-time performance, thereby reiterating the importance of validating myoelectric control algorithms with real-time experiments.