Membership Inference Defense in Distributed Federated Learning Based on Gradient Differential Privacy and Trust Domain Division Mechanisms

2022 
Distributed federated learning models are vulnerable to membership inference attacks (MIA) because they memorize information about their training data. Through a comprehensive privacy analysis of distributed federated learning models, we design an attack model that combines generative adversarial networks (GANs) with membership inference. Using this attack model, malicious participants (attackers) can reconstruct the training sets of other, benign participants without degrading the global model. To counter this threat, we apply differential privacy to the training process, clipping each gradient and adding noise to it, which effectively reduces the accuracy of membership inference attacks. In addition, we manage participants hierarchically through a trust domain division mechanism to mitigate the model performance degradation caused by differential privacy. Experimental results show that, in distributed federated learning, the proposed scheme effectively defends against membership inference attacks in white-box scenarios while maintaining the usability of the global model, realizing an effective trade-off between privacy and usability.
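The gradient-perturbation defense described in the abstract follows the general shape of DP-SGD: clip each example's gradient to a fixed L2 bound, average, and add Gaussian noise calibrated to that bound before the update leaves the participant. The sketch below illustrates this mechanism only; the function name `dp_sanitize` and the parameters `clip_norm` and `noise_multiplier` are illustrative assumptions, not the paper's actual implementation or API.

```python
import numpy as np

def dp_sanitize(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each example's gradient to L2 norm `clip_norm`, average,
    and add Gaussian noise scaled to the clipping bound (DP-SGD style).

    A minimal sketch of the defense described above, not the authors' code.
    """
    rng = rng or np.random.default_rng()
    n = per_example_grads.shape[0]
    # Clip: g_i <- g_i / max(1, ||g_i||_2 / C)
    norms = np.linalg.norm(per_example_grads.reshape(n, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale.reshape(n, *([1] * (per_example_grads.ndim - 1)))
    # Average, then perturb with noise of std sigma * C / n
    mean_grad = clipped.mean(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean_grad.shape)
    return mean_grad + noise

# Example: 32 per-example gradients for a layer with 10x5 weights.
grads = np.random.default_rng(0).normal(size=(32, 10, 5))
print(dp_sanitize(grads).shape)  # (10, 5)
```

Because the clipping bound caps any single example's influence on the shared update, the added noise can mask the per-record signal that membership inference exploits; the cost is the utility loss that the paper's trust domain division is designed to mitigate.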