CBFL: A Communication-Efficient Federated Learning Framework From Data Redundancy Perspective

2021 
Federated learning (FL) is an emerging machine learning framework that enables multiple mobile users to collaboratively train a global model without uploading their local sensitive data. Due to limited network bandwidth, communication efficiency has become a significant bottleneck for the deployment of FL. Existing works attempt to improve this situation by reducing the total bits transferred per client update via data compression. However, these works address the problem only from the perspective of the update parameters themselves, reducing the transmission of redundant parameters without exploring the intrinsic causes of that redundancy. In this article, we propose a coreset-based FL (CBFL) framework. Instead of training a regular network model on the full dataset, CBFL trains a much smaller, well-matched evolutionary network model on a coreset. CBFL thereby indirectly reduces the total transmission bits for each client while achieving accuracy similar to training on the full dataset. CBFL includes novel distributed coreset construction and adaptive model evolution algorithms. During training, the network model is adaptively adjusted by dynamically removing the least important connections from the current model. Experimental results with various datasets and models show that CBFL is able to find an optimized evolutionary model that retains about 10% of the total number of connections in the original regular model, with only about 2% degradation in model accuracy.
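The abstract does not specify the importance criterion used when removing connections, so the following is a minimal sketch of one plausible reading: magnitude-based pruning, where the smallest-magnitude weights are treated as the least important. The `keep_ratio` parameter and threshold logic are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: magnitude-based connection pruning, assuming
# "least important" means smallest absolute weight. The paper's actual
# criterion and evolution schedule may differ.
import numpy as np

def prune_least_important(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Zero out all but the `keep_ratio` fraction of largest-magnitude weights."""
    k = max(1, int(keep_ratio * weights.size))
    # Threshold at the k-th largest absolute value; weights below it are removed.
    threshold = np.partition(np.abs(weights).ravel(), -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask

# Example: keep ~10% of connections, matching the ratio reported in the abstract.
rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64))
w_pruned = prune_least_important(w, keep_ratio=0.10)
print(f"nonzero fraction: {np.count_nonzero(w_pruned) / w.size:.2f}")
```

Since only surviving (nonzero) connections need to be transmitted, such sparsification directly reduces the per-round upload size, which is consistent with the communication-efficiency goal described above.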