Power, Hate Speech, Machine Learning, and Intersectional Approach

2021 
The advent of social media has increased digital content and, with it, hate speech. Advances in machine learning help detect online hate speech at scale; nevertheless, these systems are far from perfect. Human-annotated hate speech data, used to train automated hate speech detection systems, is susceptible to racial/ethnic, gender, and other biases. To address societal and historical biases in automated hate speech detection, scholars and practitioners need to focus on power dynamics: who decides what constitutes hate speech. Examining inter- and intra-group dynamics can deepen understanding of this causal mechanism. This intersectional approach clarifies the limitations of automated hate speech detection systems and bridges the social science and machine learning literatures on bias and fairness.