Resource-Aware Object Classification and Segmentation for Semi-Autonomous Grasping with Prosthetic Hands

2019 
Myoelectric control of prosthetic hands relies on electromyographic (EMG) signals, usually captured by two surface electrodes attached to the body in varying setups. Controlling the hand requires long user training and depends heavily on the robustness of the EMG signals. In this paper, we present a visual perception system that extracts scene information for semi-autonomous hand control, minimizing the required command complexity and enabling more intuitive and effortless control. We present methods, optimized for minimal resource demand, that derive scene information from the visual data of a camera inside the hand. In particular, we show object classification and semantic segmentation of image data realized by convolutional neural networks (CNNs). We present a system architecture that takes user feedback into account and thereby improves results. In addition, we present an evolutionary algorithm that optimizes CNN architectures with respect to both accuracy and hardware resource demand. Our evaluation shows a classification accuracy of 96.5% and a segmentation accuracy of up to 89.5% on an in-hand ARM Cortex-M7 microcontroller running at only 400 MHz.
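
The evolutionary optimization of CNN architectures for accuracy and resource demand can be illustrated with a minimal sketch. The genome encoding, parameter budget, scalarized fitness, and all names below (Genome, estimate_params, evolve, the stand-in accuracy oracle) are illustrative assumptions, not the paper's actual method:

    # Minimal sketch of an evolutionary search trading off CNN accuracy
    # against microcontroller resource demand (assumed encoding, not the
    # authors' implementation).
    import random
    from dataclasses import dataclass

    @dataclass
    class Genome:
        channels: list  # output channels of successive 3x3 conv layers

    def estimate_params(g, in_ch=3):
        """Analytic parameter count (weights + biases) for a 3x3 conv stack,
        a proxy for flash/RAM demand on the microcontroller."""
        total, prev = 0, in_ch
        for ch in g.channels:
            total += 3 * 3 * prev * ch + ch
            prev = ch
        return total

    def fitness(g, accuracy_fn, param_budget=250_000):
        """Scalarized multi-objective score: reward accuracy, penalize
        exceeding the (assumed) parameter budget."""
        acc = accuracy_fn(g)
        penalty = max(0.0, estimate_params(g) / param_budget - 1.0)
        return acc - 0.5 * penalty

    def mutate(g):
        # Perturb one layer's channel count within sane bounds.
        ch = g.channels[:]
        i = random.randrange(len(ch))
        ch[i] = max(4, min(64, ch[i] + random.choice((-8, 8))))
        return Genome(ch)

    def evolve(accuracy_fn, pop_size=8, generations=20):
        pop = [Genome([random.choice((8, 16, 32)) for _ in range(4)])
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda g: fitness(g, accuracy_fn), reverse=True)
            survivors = pop[: pop_size // 2]  # truncation selection
            pop = survivors + [mutate(random.choice(survivors))
                               for _ in range(pop_size - len(survivors))]
        return max(pop, key=lambda g: fitness(g, accuracy_fn))

    if __name__ == "__main__":
        # Stand-in, deterministic accuracy oracle for demonstration only;
        # a real search would train/evaluate each candidate on image data.
        proxy = lambda g: (hash(tuple(g.channels)) % 1000) / 1000
        best = evolve(proxy)
        print(best.channels, estimate_params(best))

In a real search, the accuracy oracle would be replaced by short proxy training runs on the target dataset, and the resource estimate could additionally account for per-inference multiply-accumulate operations on the Cortex-M7.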