American Sign Language Detection Using Instance-Based Segmentation

2021 
Deaf and mute people usually communicate with each other in sign language. Although sign language is well known within this community, it is largely unknown to everyone else. This project attempts to bridge that communication gap for people who do not know sign language. It uses the American Sign Language Lexicon Video Dataset, and the approach can be extended to other languages. Because the dataset is in video format, a dominant-frame extraction algorithm was used to pull the most informative frames from each video and build a dataset suited to the task. A great deal of prior work already exists on finger-spelling recognition, but since finger-spelling is considerably harder to use in real time, we chose instead to recognize whole words. The work was carried out with a state-of-the-art deep learning model for computer vision that classifies at the instance level, pushing recognition down to pixel-level identification and thereby improving results. The aim is to use this recent method to teach the computer sign language in a way that is independent of a person's complexion, the lighting conditions, and hand orientation.
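The abstract does not spell out the dominant-frame extraction algorithm, so the following is only a minimal sketch of the idea using OpenCV: a frame is kept whenever its colour histogram differs sufficiently from the last frame kept. The function name, the histogram bin settings, and the threshold value are all illustrative assumptions, not the authors' implementation.

```python
import cv2

def extract_dominant_frames(video_path, diff_threshold=0.4):
    """Keep a frame when its colour histogram differs enough from the
    last kept frame -- a simple key-frame (dominant-frame) heuristic."""
    cap = cv2.VideoCapture(video_path)
    kept, prev_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Hue/saturation histogram is fairly robust to small hand motions.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        # Bhattacharyya distance: 0 = identical, 1 = maximally different.
        if prev_hist is None or cv2.compareHist(
            prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA
        ) > diff_threshold:
            kept.append(frame)
            prev_hist = hist
    cap.release()
    return kept
```

A histogram-based heuristic like this trades precision for speed; raising the threshold yields fewer, more distinct frames per sign video.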
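The abstract names the model only as a state-of-the-art deep learning instance-segmentation network. Assuming a Mask R-CNN-style detector (a common choice for pixel-level instance classification, not confirmed by the paper), inference on one extracted frame might look roughly like the sketch below; the pretrained torchvision weights and score threshold are placeholders, and a real system would fine-tune the predictor heads on the sign-word vocabulary.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Hypothetical stand-in for the unnamed instance-segmentation model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def predict_masks(frame_bgr, score_threshold=0.7):
    """Run instance segmentation on one extracted frame (numpy BGR array),
    returning per-instance pixel masks and class labels."""
    # BGR -> RGB, then to a CHW float tensor in [0, 1].
    image = to_tensor(frame_bgr[:, :, ::-1].copy())
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] > score_threshold
    return output["masks"][keep], output["labels"][keep]
```

Classifying at the instance level in this way gives each detected hand a pixel mask rather than a bounding box, which is what the abstract credits for the improved, complexion- and lighting-independent recognition.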