Robust Hand Gesture Recognition Using Multimodal Deep Learning for Touchless Visualization of 3D Medical Images

2019 
Three-dimensional (3D) visualization of medical images is an important technology for conducting surgery efficiently. However, 3D anatomical models must be reviewed efficiently without compromising the sterile field. A touchless interface based on gesture recognition is one way to perform such a review. A real-time hand gesture application for supporting surgery requires robust recognition of a wide variety of gestures. This study proposes a robust hand gesture recognition method that uses multimodal deep learning to perform recognition from color and depth images. We evaluated the recognition accuracy for 25 different gestures and compared it with that of conventional recognition methods. The results show that the proposed system achieves more robust real-time recognition than conventional methods.
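To illustrate the general idea of multimodal recognition from paired color and depth images, the sketch below shows a minimal late-fusion two-stream classifier in PyTorch. It is an illustrative assumption, not the authors' architecture: the backbone sizes, fusion by concatenation, and class count of 25 (taken from the abstract) are the only details grounded in the text, and all names (`StreamCNN`, `MultimodalGestureNet`) are hypothetical.

```python
# Hypothetical sketch of a two-stream multimodal network (late fusion) for
# classifying 25 hand gestures from paired color and depth images.
# Not the paper's exact architecture; layer sizes are illustrative.
import torch
import torch.nn as nn


class StreamCNN(nn.Module):
    """Small CNN backbone applied independently to one modality."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # -> (B, 64)


class MultimodalGestureNet(nn.Module):
    """Late fusion: concatenate color and depth features, then classify."""

    def __init__(self, num_classes: int = 25):
        super().__init__()
        self.color_stream = StreamCNN(in_channels=3)  # RGB image
        self.depth_stream = StreamCNN(in_channels=1)  # depth map
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, color: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.color_stream(color), self.depth_stream(depth)], dim=1
        )
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultimodalGestureNet(num_classes=25)
    color = torch.randn(2, 3, 64, 64)  # dummy RGB batch
    depth = torch.randn(2, 1, 64, 64)  # dummy depth batch
    logits = model(color, depth)
    print(logits.shape)  # torch.Size([2, 25])
```

Late fusion is only one possible design choice; the two modalities could also be fused earlier by stacking the depth map as a fourth input channel, at the cost of forcing both streams to share low-level filters.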