Communication Reducing Quantization for Federated Learning with Local Differential Privacy Mechanism

2021 
As an emerging framework for distributed learning, federated learning (FL) has been a research focus because it enables clients to train deep learning models collaboratively without exposing their original data. Nevertheless, adversaries can still infer private information from the communicated model parameters. In addition, due to limited channel bandwidth, model communication between clients and the server has become a serious bottleneck. In this paper, we consider an FL framework that employs local differential privacy, where each client adds artificial Gaussian noise to its local model update before aggregation. To reduce the communication overhead of the differentially private model, we propose universal vector quantization for FL with a local differential privacy mechanism, which compresses the model parameters via universal vector quantization. Furthermore, we analyze the privacy performance of the proposed approach and track the privacy loss by accounting for the log moments. Experiments show that even when the number of quantization bits is relatively small, our method achieves model compression without reducing the accuracy of the global model.
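The client-side pipeline described in the abstract (clip the local update, add Gaussian noise for local differential privacy, then quantize before transmission) can be sketched as follows. This is a minimal illustration, not the paper's method: the function name, the clipping-then-noise ordering, and the use of subtractive dithered scalar quantization as a simplified stand-in for universal vector quantization are all assumptions for the sake of the example.

```python
import numpy as np

def privatize_and_quantize(update, clip_norm=1.0, sigma=1.0, bits=4, rng=None):
    """Hypothetical client-side step: clip, add Gaussian noise, quantize.

    Subtractive dithered uniform quantization is used here as a simplified
    stand-in for the universal vector quantization in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Clip the update to bound its L2 sensitivity (standard for Gaussian DP).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))

    # Local differential privacy: add Gaussian noise scaled to the sensitivity.
    noisy = clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

    # Dithered uniform quantization to `bits` bits per coordinate.
    levels = 2 ** bits
    lo, hi = noisy.min(), noisy.max()
    step = (hi - lo) / (levels - 1)
    dither = rng.uniform(-0.5, 0.5, size=update.shape)
    idx = np.clip(np.round((noisy - lo) / step + dither), 0, levels - 1)

    # The server would receive `idx` (bits per entry) plus the shared dither
    # seed and range; here we return the dequantized values directly.
    return (idx - dither) * step + lo
```

With a shared random seed, the client only needs to send the integer indices and the scalar range, so the per-coordinate cost drops from 32 bits to `bits` bits; the dither makes the quantization error independent of the signal, which is the property universal quantization schemes exploit.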