Balancing out Bias: Achieving Fairness Through Training Reweighting

2021 
Bias in natural language processing arises primarily from models learning author characteristics such as gender and race when modelling tasks such as sentiment analysis and syntactic parsing. This problem manifests as disparities in error rates across author demographics, typically disadvantaging minority groups. Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables, and evaluation of bias in previous work has been inconsistent in terms of both dataset balance and evaluation methods. This paper introduces a simple but highly effective method for countering bias via instance reweighting, based on the joint frequency of task labels and author demographics. We extend the method with a gated model that takes the author demographic as an additional input, and show that while the gated model is highly vulnerable to input data bias, it yields debiased predictions through demographic input perturbation, and outperforms all other bias mitigation techniques when combined with instance reweighting.
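
To make the reweighting scheme concrete, the following is a minimal sketch in Python. The function name, the exact normalisation, and the data layout are assumptions for illustration rather than the paper's implementation; the idea is simply to weight each training instance inversely to the frequency of its (task label, author demographic) pair, so that every combination contributes equally to the training loss.

    from collections import Counter

    def balanced_instance_weights(labels, demographics):
        # Count each observed (label, demographic) pair in the training data.
        joint_counts = Counter(zip(labels, demographics))
        n = len(labels)
        k = len(joint_counts)  # number of observed (label, demographic) cells
        # Weight each instance by n / (k * count of its cell), so every cell
        # contributes equally to the loss and the weights sum to n.
        return [n / (k * joint_counts[(y, d)])
                for y, d in zip(labels, demographics)]

    # Example: sentiment labels crossed with a binary demographic attribute.
    labels = ["pos", "pos", "pos", "neg", "pos", "neg", "neg", "neg"]
    demos  = ["A",   "A",   "A",   "A",   "B",   "B",   "B",   "B"]
    weights = balanced_instance_weights(labels, demos)
    # Pass the weights into training, e.g. as per-example sample weights
    # (sample_weight in scikit-learn or Keras).

The demographic input perturbation used to debias the gated model can be sketched in the same spirit, again under an assumed interface (model.predict(x, d) is hypothetical): averaging the model's output over all possible demographic inputs makes the final prediction invariant to the author's actual attribute.

    def perturbed_predict(model, x, demographic_values):
        # Query the gated model once per candidate demographic value and
        # average the class scores, so the prediction no longer depends on
        # the (possibly biased) true attribute of the author.
        preds = [model.predict(x, d) for d in demographic_values]
        return [sum(scores) / len(preds) for scores in zip(*preds)]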