Feature-based Egocentric Grasp Pose Classification for Expanding Human-Object Interactions

2021 
This paper presents a framework for classifying human hand poses, focusing on the poses used to grasp objects intuitively. First, we propose a system based on a stereo infrared image sensor that produces hand joint coordinates in 3-dimensional space. We use egocentric vision because it yields uniform, natural data from a single sensor module. Second, we transform the joint positions into angle information for each joint of each finger. Third, we design an intelligent system based on a Multi-Layer Perceptron (MLP) that processes the angular data and outputs classifications according to the Cutkosky grasp taxonomy. Finally, we compare the results on several similar objects and evaluate the classification accuracy. In the validation phase, the accuracy over 16 grasp pose classes was 89.60%; in real-time testing, the accuracy was 81.93%. These results show that feature-based learning can reduce the complexity and training time of the MLP, and that a small amount of training data is sufficient for training and deployment.
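As a rough illustration of the angle-feature pipeline described above (not the authors' actual implementation), the sketch below computes per-joint flexion angles from a hypothetical 21-point 3-D hand keypoint layout, such as a stereo infrared hand tracker might produce, and feeds them to a small MLP classifier over 16 grasp classes. The keypoint ordering, network size, and synthetic data are all assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def joint_angle(p_prev, p_joint, p_next):
    """Angle (radians) at p_joint between the two adjacent bone segments."""
    u = p_prev - p_joint
    v = p_next - p_joint
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def hand_angles(keypoints):
    """keypoints: (21, 3) array, assumed layout: wrist + 4 joints per finger.
    Returns flexion angles at the 3 interior joints of each of the 5 fingers."""
    angles = []
    for f in range(5):                                   # thumb .. pinky
        chain = [0] + list(range(1 + 4 * f, 5 + 4 * f))  # wrist + finger joints
        for j in range(1, 4):                            # 3 interior joints
            angles.append(joint_angle(keypoints[chain[j - 1]],
                                      keypoints[chain[j]],
                                      keypoints[chain[j + 1]]))
    return np.array(angles)                              # 15 angular features

# Synthetic stand-in data: N frames of 21x3 keypoints with grasp labels 0..15.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 21, 3))
labels = rng.integers(0, 16, size=200)

X = np.stack([hand_angles(k) for k in frames])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(X, labels)
print("predicted grasp class:", clf.predict(X[:1])[0])
```

Reducing each frame to a short vector of joint angles, rather than classifying raw coordinates or images, is what keeps the MLP small and the training data requirement low, consistent with the abstract's claim about feature-based learning.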