Federated Learning with Gaussian Differential Privacy

2020 
In recent years, federated learning has rapidly become a research hotspot in the field of secure machine learning. However, unprotected traditional federated learning can easily leak information and therefore fails to meet users' anonymization needs. Differential privacy is widely used in privacy protection due to its excellent privacy-quantification properties, but as a lossy protection mechanism it reduces the accuracy of the machine learning model. This paper proposes Noisy-FL, a federated learning algorithm based on Gaussian differential privacy that tracks the change in privacy loss during model training more accurately. Noisy-FL achieves user-level privacy protection while allowing more communication rounds than the previous algorithm. Experimental results show that the new algorithm can increase the number of communication rounds by a factor of 3, and that when the total number of clients is small, the model accuracy of Noisy-FL is 10% higher than that of the previous algorithm.
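The abstract does not spell out the mechanism, but the standard building block behind Gaussian-DP federated learning is to clip each client's update (bounding user-level sensitivity) and add Gaussian noise to the aggregate. The sketch below illustrates one such aggregation round; all function names and parameters are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client's model update so its L2 norm is at most clip_norm.

    This bounds each user's contribution, giving user-level sensitivity.
    """
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        return update * (clip_norm / norm)
    return update

def noisy_aggregate(client_updates, clip_norm, noise_multiplier, rng):
    """Average clipped updates and add Gaussian noise scaled to the clip norm.

    With noise standard deviation sigma = noise_multiplier * clip_norm on the
    sum, each round satisfies mu-GDP with mu = 1 / noise_multiplier, and
    per-round losses compose as sqrt(sum of mu^2) under Gaussian DP
    (illustrative accounting, not the paper's exact analysis).
    """
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Noise on the average is the sum-level noise divided by n.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise

# Example: aggregate 10 simulated client updates of dimension 4.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(10)]
new_direction = noisy_aggregate(updates, clip_norm=1.0,
                                noise_multiplier=1.0, rng=rng)
```

In this sketch, a larger `noise_multiplier` gives a smaller per-round privacy loss mu but noisier updates, which is exactly the accuracy/privacy trade-off the abstract describes.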