L-Sign: Large-Vocabulary Sign Gestures Recognition System

2022 
Understanding sign gestures is an essential step toward helping individuals with hearing impairments. Existing systems can accurately identify only a small set of gestures, and their accuracy drops sharply as the number of gestures grows, for two reasons: sign language contains many similar gestures, and different people sign at different speeds. Based on commercial smart bracelets, this article proposes a large-vocabulary sign language recognition system, which we call L-Sign. First, we propose an entropy-based forward and backward matching algorithm to segment the signal of each gesture. Second, we design a gesture recognizer consisting of a candidate gesture generator and a semantic-based voter. The candidate gesture generator produces candidate gestures using a three-branch convolutional neural network. The semantic-based voter then selects the target gesture from the candidates by scoring: it computes the semantic distance between the last gesture in the current sentence and each candidate gesture, and a multilayer k-means algorithm is proposed to build a multilayer sign word structure that completes the candidate scores. Finally, we deployed L-Sign on the MYO bracelet. On 200 commonly used Chinese sign gestures, experimental results show an average accuracy rate greater than 90%.
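The voting idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes each sign word has a semantic embedding vector (the paper instead derives scores from a multilayer sign word structure built with multilayer k-means), and it ranks candidates by cosine similarity to the previous gesture in the sentence. The function name `semantic_vote` and the toy embeddings are hypothetical.

```python
import numpy as np

def semantic_vote(prev_embedding, candidates):
    """Pick the candidate gesture semantically closest to the previous
    gesture in the sentence (illustrative stand-in for the paper's
    semantic-based voter).

    prev_embedding: embedding vector of the last recognized gesture.
    candidates: dict mapping candidate gesture label -> embedding vector.
    Returns (best_label, scores_dict).
    """
    def cosine(a, b):
        # Cosine similarity: higher means semantically closer.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = {label: cosine(prev_embedding, emb)
              for label, emb in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy example: after the gesture "I", the candidate "eat" is assumed to
# lie closer in embedding space than "sleep" (vectors are made up).
prev = np.array([1.0, 0.0])
cands = {"eat": np.array([0.9, 0.1]), "sleep": np.array([0.0, 1.0])}
best, scores = semantic_vote(prev, cands)
```

In the actual system, these semantic scores would be combined with the candidate probabilities emitted by the three-branch CNN before the final gesture is chosen.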