A Bootstrapped Model to Detect Abuse and Intent in White Supremacist Corpora

2020 
Intelligence analysts face a difficult problem: distinguishing extremist rhetoric from potential extremist violence. Many authors are content to express abuse against some target group, but only a few indicate a willingness to engage in violence. We address this problem by building a predictive model for intent, bootstrapping from a seed set of intent words and language templates that express intent. We design both an n-gram learner and an attention-based deep learner for intent and use them as co-learners to improve both the basis for prediction and the predictions themselves. They converge to stable predictions within a few rounds. We merge predictions of intent with predictions of abusive language to detect posts that indicate a desire for violent action. We validate the predictions by comparing them to crowd-sourced labelling. The methodology can be applied to other linguistic properties for which a plausible starting point can be defined.
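The bootstrapping and co-training procedure described in the abstract can be illustrated roughly as below. This is a minimal sketch under stated assumptions, not the authors' implementation: the IntentLearner interface, the seed_labels helper, the confidence threshold, and the round count are all illustrative placeholders.

```python
# Hedged sketch of bootstrapping intent labels from seed words/templates and
# refining them with two co-learners (an n-gram model and an attention-based
# model). All names and thresholds here are illustrative assumptions.
from typing import List, Protocol, Sequence, Set


class IntentLearner(Protocol):
    """Assumed common interface for the n-gram and attention-based learners."""
    def fit(self, texts: Sequence[str], labels: Sequence[int]) -> None: ...
    def predict_proba(self, text: str) -> float: ...  # P(post expresses intent)


def seed_labels(posts: Sequence[str], seed_words: Set[str],
                templates: Sequence[str]) -> List[int]:
    """Weakly label posts: 1 if any seed intent word or template matches."""
    labels = []
    for post in posts:
        text = post.lower()
        labels.append(int(any(w in text for w in seed_words)
                          or any(t in text for t in templates)))
    return labels


def co_train(posts: Sequence[str], labels: List[int],
             ngram: IntentLearner, attn: IntentLearner,
             rounds: int = 5, threshold: float = 0.9) -> List[int]:
    """Alternate the two learners as co-learners until the labels stabilise."""
    for _ in range(rounds):
        ngram.fit(posts, labels)
        attn.fit(posts, labels)
        updated = list(labels)
        for i, post in enumerate(posts):
            p1, p2 = ngram.predict_proba(post), attn.predict_proba(post)
            if max(p1, p2) >= threshold:        # either learner is confident: intent
                updated[i] = 1
            elif max(p1, p2) <= 1 - threshold:  # both confidently negative: no intent
                updated[i] = 0
        if updated == labels:                    # converged to stable predictions
            return updated
        labels = updated
    return labels
```

In this sketch, posts flagged for intent would then be intersected with the output of a separate abusive-language classifier to surface posts that combine abuse with a stated willingness to act, mirroring the merging step described in the abstract.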