
Toxic Comment Detection using LSTM

2020 
While online communication media act as platforms for people to connect, collaborate, and discuss, overcoming barriers to communication, some use them as a medium to direct hateful and abusive comments that can harm an individual's emotional and mental well-being. The explosion of online communication makes it virtually impossible to filter out hateful tweets manually, so a method is needed to filter out hate speech and make social media cleaner and safer to use. This paper aims to achieve that through text mining and deep learning models built with LSTM neural networks that can accurately identify and classify hate speech and filter it out for us. The model we have developed classifies comments as toxic or non-toxic with 94.49% precision, 92.79% recall, and a 94.94% accuracy score.
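
The abstract does not include source code, so the following is only a rough, hypothetical sketch of the kind of LSTM-based binary classifier it describes, assuming a TensorFlow/Keras stack with a tokenize-embed-LSTM-sigmoid pipeline. The vocabulary size, sequence length, layer widths, and toy data are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an LSTM toxic-comment classifier (not the authors' code).
# All hyperparameters and data below are illustrative placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

max_words = 20000  # vocabulary size (assumption)
max_len = 150      # maximum comment length in tokens (assumption)

# Toy stand-ins for a labelled comment dataset: 1 = toxic, 0 = non-toxic.
texts = np.array([["you are awful"], ["thanks for the helpful answer"]])
labels = np.array([1, 0])

# Map raw strings to fixed-length integer sequences.
vectorizer = layers.TextVectorization(max_tokens=max_words,
                                      output_sequence_length=max_len)
vectorizer.adapt(texts)

# Embedding -> LSTM -> sigmoid: a binary toxicity classifier.
model = models.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorizer,
    layers.Embedding(input_dim=max_words, output_dim=128, mask_zero=True),
    layers.LSTM(64, dropout=0.2),
    layers.Dense(32, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # probability the comment is toxic
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])

model.fit(texts, labels, epochs=2, batch_size=32, verbose=0)
print(model.predict(texts))  # per-comment toxicity probabilities
```

In practice the sigmoid output would be thresholded (e.g. at 0.5) to obtain the toxic / non-toxic decision against which precision, recall, and accuracy are computed.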