Top-k sparsification with secure aggregation for privacy-preserving federated learning

2023 
Federated learning addresses the problems of data silos and privacy protection in artificial intelligence. However, privacy attacks can infer or reconstruct sensitive information from the gradients that clients submit, leaking users' private data. The secure aggregation (SecAgg) protocol protects users' privacy while completing the federated learning task, but it incurs significant communication overhead and wall-clock training time on large-scale model training tasks, which makes it difficult to apply in bandwidth-limited federated applications. Recently, Rand-k sparsification with secure aggregation (Rand-k SparseSecAgg) was proposed to optimize the SecAgg protocol, but its reduction of communication overhead and training time is limited. In this paper, we replace Rand-k sparsification with Top-k sparsification and design a Top-k sparsification with secure aggregation (Top-k SparseSecAgg) protocol for privacy-preserving federated learning that further reduces communication overhead and wall-clock training time. In addition, we optimize the proposed protocol by assigning clients to different groups at the logical layer, which lowers the upper bound on the compression ratio and the practical communication overhead of Top-k SparseSecAgg. Experiments demonstrate that Top-k SparseSecAgg reduces communication overhead relative to both SecAgg and Rand-k SparseSecAgg, and likewise reduces wall-clock training time relative to both. Our protocol is therefore better suited to protecting privacy and completing the learning task in bandwidth-limited federated applications.
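
To make the two ingredients concrete, below is a minimal, self-contained NumPy sketch of Top-k sparsification combined with the additive pairwise-masking idea underlying SecAgg. It is illustrative only: the function names are ours, the pairwise seeds are fixed constants rather than outputs of a key-agreement protocol, and the real protocols (SecAgg and the paper's Top-k SparseSecAgg) additionally handle key exchange, dropout recovery, and agreement on which sparse coordinates to mask, all omitted here.

```python
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Zero every entry of the gradient except the k of largest magnitude."""
    sparse = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse[idx] = grad[idx]
    return sparse

def pairwise_mask(vec: np.ndarray, my_id: int, ids: list, seeds: dict) -> np.ndarray:
    """Add cancelling pairwise masks (the core SecAgg idea): for each pair
    (i, j), the lower-numbered client adds PRG(s_ij) and the higher-numbered
    one subtracts it, so all masks vanish in the server's sum."""
    masked = vec.copy()
    for j in ids:
        if j == my_id:
            continue
        pad = np.random.default_rng(seeds[frozenset((my_id, j))]).standard_normal(vec.shape)
        masked = masked + pad if my_id < j else masked - pad
    return masked

# Toy round: three clients, 8-dimensional gradients, Top-2 sparsification.
ids = [0, 1, 2]
seeds = {frozenset((i, j)): 1000 * i + j for i in ids for j in ids if i < j}
rng = np.random.default_rng(42)
grads = {i: rng.standard_normal(8) for i in ids}
sparse = {i: top_k_sparsify(g, k=2) for i, g in grads.items()}
uploads = {i: pairwise_mask(sparse[i], i, ids, seeds) for i in ids}

# The server sees only masked vectors, yet their sum equals the plain sum
# of the sparsified gradients (up to floating-point error).
assert np.allclose(sum(uploads.values()), sum(sparse.values()))
```

Note that masking the full dense vector, as this toy does, would forfeit the bandwidth savings; the point of SparseSecAgg-style protocols is to transmit and mask only the selected coordinates.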
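
As for why grouping helps, one plausible reading (our inference, not stated in the abstract) is that the coordinates a client must mask have to cover the union of the Top-k index sets within its aggregation group, and that union grows with group size; partitioning clients into smaller logical groups therefore keeps each upload closer to k coordinates. The hypothetical snippet below only measures that union effect, using random index sets as a rough stand-in for data-dependent Top-k selections.

```python
import numpy as np

def union_size(index_sets) -> int:
    """Number of distinct coordinates across a collection of index sets."""
    out = set()
    for s in index_sets:
        out |= set(s)
    return len(out)

rng = np.random.default_rng(0)
d, k, n = 10_000, 100, 64  # model dimension, Top-k size, number of clients
sets = [rng.choice(d, size=k, replace=False) for _ in range(n)]

print("all clients in one group:", union_size(sets))  # approaches n * k
for g in (4, 8, 16):
    per_group = [union_size(sets[i:i + g]) for i in range(0, n, g)]
    print(f"group size {g}: largest per-group union {max(per_group)}")
```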