Tackling Algorithmic Bias in Neural-Network Classifiers using Wasserstein-2 Regularization.

2020 
The increasingly common use of neural-network classifiers in industrial and social applications of image analysis has enabled impressive progress in recent years. Such methods are, however, sensitive to algorithmic bias, i.e. to an under- or over-representation of positive predictions, or to higher prediction errors, in specific subgroups of images. In this paper we introduce a new method to temper algorithmic bias in neural-network-based classifiers. Our method is architecture agnostic and scales well to massive training sets of images: it simply augments the loss function with a Wasserstein-2-based regularization term whose gradient can be computed at a reasonable algorithmic cost, which makes it possible to minimize the regularized loss with standard stochastic gradient-descent strategies. The good behavior of our method is assessed on the Adult census, MNIST, and CelebA datasets.
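
To make the general recipe concrete, the sketch below shows one plausible way to overload a classification loss with a Wasserstein-2 regularization term in PyTorch. It is a minimal illustration of the idea stated in the abstract, not the authors' implementation: the names `regularized_loss`, `group`, and `lam`, the quantile-based 1-D approximation of W2, and the binary-classification setting are all assumptions made here for the example.

```python
# Minimal, illustrative sketch (assumption, not the authors' code): a binary
# classification loss augmented with a squared Wasserstein-2 penalty between
# the empirical distributions of predicted scores in two subgroups. For 1-D
# distributions, W2 reduces to comparing quantile functions, which stays
# differentiable and cheap, so the regularized loss works with standard SGD.
import torch
import torch.nn.functional as F


def wasserstein2_sq_1d(a: torch.Tensor, b: torch.Tensor,
                       n_quantiles: int = 100) -> torch.Tensor:
    """Squared Wasserstein-2 distance between two 1-D empirical distributions,
    approximated by matching their quantile functions on a fixed grid."""
    q = torch.linspace(0.0, 1.0, n_quantiles, device=a.device)
    return ((torch.quantile(a, q) - torch.quantile(b, q)) ** 2).mean()


def regularized_loss(logits: torch.Tensor, targets: torch.Tensor,
                     group: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Cross-entropy plus lam * W2^2 between the predicted-score distributions
    of the two subgroups encoded by the binary tensor `group` (illustrative)."""
    ce = F.binary_cross_entropy_with_logits(logits, targets.float())
    scores = torch.sigmoid(logits)
    s0, s1 = scores[group == 0], scores[group == 1]
    if s0.numel() == 0 or s1.numel() == 0:  # mini-batch missing a subgroup
        return ce
    return ce + lam * wasserstein2_sq_1d(s0, s1)
```

Under these assumptions, the penalty pulls the subgroup score distributions toward each other during training, which is one way to reduce an under- or over-representation of positive predictions in a specific subgroup while keeping the usual mini-batch training loop unchanged.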