Large-Scale Sign Language Interpretation

2019 
Sign language is the primary means of communication for deaf people, yet most hearing people do not know how to sign. Deaf people's reliance on interpreters is both inconvenient and costly. Many research groups have experimented with machine learning to build automatic translators, but these efforts have largely been constrained to restricted dictionaries or to small numbers of signers and limited signed content. We introduce the largest sign language dataset to date: a collection of 50,000 video snippets drawn from a pool of 10,000 unique utterances signed by 50 signers. We further propose several sequence-to-sequence deep learning approaches that automatically translate Chinese Sign Language into both English and Mandarin written text. These methods use body joint positions, facial expressions, and finger articulation. While models can overfit their training sets, generalization to unseen utterances remains challenging with real-world data. The introduced dataset and methods demonstrate how modern machine learning can help close the communication gap between deaf and hearing people.
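The abstract names three input modalities: body joint positions, facial expressions, and finger articulation. The following is an illustrative sketch (plain Python, not the authors' code) of how per-frame features from these modalities might be concatenated into a sequence that a sequence-to-sequence model could consume; every dimension below is an assumption chosen for illustration.

```python
# Illustrative sketch only: combining three input modalities into
# per-frame feature vectors. All dimensions are assumptions.

JOINTS_DIM = 2 * 18       # assumed: (x, y) for 18 body joints
FACE_DIM = 10             # assumed: 10 facial-expression features
FINGERS_DIM = 2 * 21 * 2  # assumed: (x, y) for 21 landmarks on each of 2 hands

def frame_features(joints, face, fingers):
    """Concatenate one frame's modality features into a single vector."""
    if (len(joints), len(face), len(fingers)) != (JOINTS_DIM, FACE_DIM, FINGERS_DIM):
        raise ValueError("unexpected feature dimensions")
    return joints + face + fingers

def video_to_sequence(frames):
    """Turn a list of (joints, face, fingers) frames into an input sequence."""
    return [frame_features(j, f, h) for j, f, h in frames]

# A dummy 4-frame snippet with zeroed features:
frames = [([0.0] * JOINTS_DIM, [0.0] * FACE_DIM, [0.0] * FINGERS_DIM)] * 4
seq = video_to_sequence(frames)
```

The resulting sequence of fixed-length vectors is the usual encoder input for sequence-to-sequence translation; the decoder would then emit written-language tokens.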