The Impact of Differential Privacy on Model Fairness in Federated Learning

2020 
Federated learning is a machine learning framework in which many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the clients, so federated learning can introduce unfairness under some fairness metrics. Differential privacy is a privacy model used to protect federated learning by bounding the leakage about the presence of any specific point in the training data. Previous work showed that the accuracy reduction induced by private deep models disproportionately impacts underrepresented groups, which motivates us to analyze the impact of differential privacy on model fairness in federated learning. In this work, we conduct extensive experiments to evaluate this impact. The experiments show that, with a proper choice of parameters, differential privacy can improve fairness with a negligible reduction in accuracy.
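To make the setup concrete, the sketch below shows the standard way differential privacy is applied to federated averaging: each client update is clipped to a fixed L2 norm (bounding any single client's influence) and Gaussian noise calibrated to that sensitivity is added to the aggregate. This is a minimal illustration of the general mechanism only, not the authors' exact experimental configuration; the function name and the parameters clip_norm and noise_multiplier are illustrative.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate client model updates with client-level differential privacy.

    Each update is clipped to an L2 norm of at most clip_norm, so no single
    client can shift the average by more than that bound; Gaussian noise
    scaled to this per-client sensitivity is then added to the sum
    (the Gaussian mechanism). Illustrative sketch, not the paper's code.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is proportional to the per-client sensitivity;
    # a larger noise_multiplier gives stronger privacy but lower accuracy.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Example: three clients contributing toy 4-dimensional model updates.
updates = [np.array([0.5, -1.2, 0.3, 0.8]),
           np.array([2.0, 0.1, -0.4, 1.5]),
           np.array([-0.3, 0.7, 0.9, -0.2])]
print(dp_federated_average(updates, clip_norm=1.0, noise_multiplier=0.5))
```

The clip_norm and noise_multiplier knobs correspond to the "choice of parameters" the abstract refers to: they jointly control how much each client's contribution is attenuated and how much noise is injected, which is what drives the trade-off between privacy, accuracy, and per-group fairness.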