VerifyNet: Secure and Verifiable Federated Learning

2019 
As an emerging training paradigm for neural networks, federated learning has received widespread attention due to its ability to update model parameters without collecting users' raw data. However, since adversaries can infer participants' private information from the shared gradients, federated learning remains exposed to various security and privacy threats. In this paper, we consider two major issues in the training process over Deep Neural Networks (DNNs): (1) how to protect users' privacy (i.e., local gradients) during training, and (2) how to verify the integrity (or correctness) of the aggregated results returned by the server. Several approaches focusing on secure or privacy-preserving federated learning have been proposed and applied in diverse scenarios. However, it remains an open problem to enable clients to verify whether the cloud server operates correctly while guaranteeing users' privacy during training. In this paper, we propose VerifyNet, the first privacy-preserving and verifiable federated learning framework. Specifically, we first propose a double-masking protocol to guarantee the confidentiality of users' local gradients during federated learning. Then, the cloud server is required to provide each user with a proof of the correctness of its aggregated result. We claim that an adversary cannot deceive users by forging the proof unless it can solve the NP-hard problem adopted in our model. In addition, VerifyNet also supports users dropping out during the training process. Extensive experiments conducted on real-world data demonstrate the practical performance of our proposed scheme.
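To make the double-masking idea concrete, here is a minimal sketch in the spirit of the masking step described above. It is an illustration under simplifying assumptions, not the paper's protocol: dropout handling, secret sharing of seeds, key exchange, and the verification proof are all omitted, and every name (`prg`, `pair_seed`, `self_seed`) is illustrative. Each user blinds its gradient with a self-mask plus signed pairwise masks; the pairwise masks cancel in the server's sum, and the self-masks are stripped afterwards.

```python
# Minimal sketch of double-masking secure aggregation (simplified,
# no dropouts, no secret sharing; all names are illustrative).
import numpy as np

DIM = 4       # gradient dimension (toy value)
MOD = 2**32   # all masking arithmetic is done modulo 2^32

def prg(seed, dim=DIM):
    """Expand a shared seed into a pseudorandom mask vector."""
    return np.random.default_rng(seed).integers(0, MOD, size=dim, dtype=np.uint64)

users = [1, 2, 3]
# Toy local gradients x_u (real gradients would be quantized floats).
gradients = {u: np.arange(DIM, dtype=np.uint64) * u for u in users}

# Pairwise seeds s_{u,v} (established via key agreement in the real protocol).
pair_seed = {(u, v): hash((u, v)) % MOD for u in users for v in users if u < v}
# Self-mask seeds b_u (secret-shared among users in the real protocol).
self_seed = {u: 1000 + u for u in users}

def mask(u):
    """y_u = x_u + PRG(b_u) + sum_{v>u} PRG(s_{u,v}) - sum_{v<u} PRG(s_{v,u})."""
    y = gradients[u] + prg(self_seed[u])
    for v in users:
        if v > u:
            y = y + prg(pair_seed[(u, v)])
        elif v < u:
            y = y - prg(pair_seed[(v, u)])  # uint64 wrap-around = modular subtraction
    return y % MOD

# The server only ever sees masked inputs; pairwise masks cancel in the sum.
aggregate = sum(mask(u) for u in users) % MOD
# Surviving users then help the server strip the remaining self-masks.
aggregate = (aggregate - sum(prg(self_seed[u]) for u in users)) % MOD

assert np.array_equal(aggregate, sum(gradients.values()) % MOD)
print(aggregate)  # equals the plain sum of the local gradients
```

In the full scheme, the seeds are secret-shared so that masks of dropped users can be reconstructed, and the server additionally returns a proof that the aggregate was computed correctly; this sketch only shows why the server's sum reveals nothing beyond the aggregate itself.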