A causal look at statistical definitions of discrimination

2020 
Predictive parity and error rate balance are two widely adopted criteria for assessing the fairness of classifiers. The realization that these equally reasonable criteria can lead to contradictory results has nonetheless generated considerable debate, and has motivated mathematical results establishing the impossibility of satisfying predictive parity and error rate balance simultaneously. Here, we investigate these fairness criteria from a causal perspective. By taking into consideration the data generation process giving rise to the observed data, as well as the data generation process giving rise to the predictions, and assuming faithfulness, we prove that when the base rates differ across the protected groups and there is no perfect separation, a standard classifier cannot achieve exact predictive parity. (By a standard classifier we mean a classifier trained in the usual way, without adopting pre-processing, in-processing, or post-processing fairness techniques.) This result holds in general, irrespective of the data generation process giving rise to the observed data. Furthermore, we show that the amount of disparate mistreatment for the positive predictive value metric is proportional to the difference between the base rates. For error rate balance, as well as the closely related equalized odds and equality of opportunity criteria, we show that there are nonetheless data generation processes that can satisfy these criteria even when the base rates differ by protected group, and we characterize the conditions under which these criteria hold. We illustrate our results using synthetic data and a re-analysis of the COMPAS data.
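The tension between the two criteria can be illustrated with a small, self-contained sketch (not the paper's own analysis): by Bayes' rule, a classifier's positive predictive value is determined by its true positive rate, false positive rate, and the group's base rate, so a classifier with identical error rates in two groups with different base rates necessarily has different PPVs.

```python
def ppv(tpr, fpr, base_rate):
    """Positive predictive value via Bayes' rule:
    PPV = TPR * p / (TPR * p + FPR * (1 - p)), where p is the base rate."""
    num = tpr * base_rate
    return num / (num + fpr * (1.0 - base_rate))

# Hypothetical classifier satisfying error rate balance across groups:
# both groups get TPR = 0.8 and FPR = 0.2, but their base rates differ.
ppv_group_a = ppv(0.8, 0.2, 0.5)  # base rate 0.5 -> PPV = 0.8
ppv_group_b = ppv(0.8, 0.2, 0.3)  # base rate 0.3 -> PPV ~ 0.632

# Equal error rates plus unequal base rates force unequal PPVs,
# i.e., error rate balance and predictive parity cannot both hold.
print(ppv_group_a, ppv_group_b)
```

The numbers (TPR = 0.8, FPR = 0.2, base rates 0.5 and 0.3) are illustrative assumptions, not values from the paper; the qualitative conclusion is the standard impossibility relationship the abstract refers to.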