Sign Language Recognition using Convolutional Neural Networks in Machine Learning

2021 
Sign language is a means of communication used by individuals with speaking and hearing impairments. It is one of the essential ways for such individuals to stay connected with the rest of the world and to express their ideas, needs, and beliefs. There is a great need in the modern world for an efficient, cost-effective, real-time translation tool that can accurately interpret what a signing individual is trying to express. The proposed system is real-time translation software that converts hand gestures into a natural language such as English. The translated output shows the letter or number associated with the sign presented to the live camera feed. The software proposed in this project is built with Python, NumPy, OpenCV, LabelImg, and TensorFlow. Images or video obtained from the camera device are processed using a convolutional neural network (CNN). The CNN model is pre-trained on a large dataset from open sources, or on a custom dataset, of sign language gestures. Based on the recognition rate and prediction analysis from the CNN model, each image or video frame is classified as the corresponding letter or number from the American Sign Language set. This helps hearing individuals understand, with ease, the sign language used by individuals with impairments.
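The pipeline described above (a TensorFlow CNN classifying camera frames as ASL letters or digits) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual model: the layer sizes, the 64x64 input resolution, and the assumption of 36 classes (26 letters plus 10 digits) are placeholders chosen for the sketch.

```python
import numpy as np
import tensorflow as tf

# Assumption: 26 ASL letters + 10 digits; the paper does not specify the class count.
NUM_CLASSES = 36


def build_sign_cnn(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    """Build a small CNN that maps a gesture image to class probabilities."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def classify_frame(model, frame):
    """Preprocess one uint8 frame (e.g. from cv2.VideoCapture.read)
    and return the predicted class index."""
    scaled = frame.astype("float32") / 255.0          # normalize to [0, 1]
    resized = tf.image.resize(scaled, (64, 64))       # match the model input size
    probs = model.predict(resized[np.newaxis, ...], verbose=0)[0]
    return int(np.argmax(probs))
```

In a live setting, frames would come from `cv2.VideoCapture(0)` in a loop and the model weights would be loaded from a checkpoint trained on the gesture dataset rather than used untrained as here.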