Federated Learning with Adaptive Communication Compression Under Dynamic Bandwidth and Unreliable Networks

2020 
Abstract Emerging concerns such as privacy protection and communication constraints make it impractical to collect all data into central data centers, pushing the paradigms of big data and artificial intelligence toward the network edge. Because it can continuously learn from newly generated data on Internet of Things and mobile devices while protecting user privacy, federated learning has been recognized as a new parallel distributed technology for big data and artificial intelligence. However, traditional federated learning places strict demands on network throughput and is susceptible to unreliable networks and dynamic bandwidth. To address these communication bottlenecks, this study proposes Cecilia, a cloud-edge-clients federated learning architecture, and designs a new algorithm, ACFL. ACFL employs an information-sharing method different from that of traditional federated learning and can adaptively compress the shared information according to network conditions. The convergence of ACFL is analyzed from a theoretical perspective, and its performance is evaluated on typical machine learning tasks with real datasets, including image classification, sentiment analysis, and next-character prediction. Both theoretical and experimental results show that Cecilia and ACFL adapt better to dynamic bandwidth and unreliable networks when performing federated learning.
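The abstract does not specify ACFL's compression scheme, so the following is only a minimal sketch of one common way to make communication compression bandwidth-adaptive: top-k gradient sparsification where the kept fraction scales with the currently measured bandwidth. All function names, parameters, and thresholds here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def adaptive_topk_compress(grad, bandwidth, max_bandwidth, min_ratio=0.01):
    """Sketch: keep a fraction of gradient entries proportional to the
    available bandwidth (an assumed adaptation rule, not ACFL itself)."""
    ratio = max(min_ratio, min(1.0, bandwidth / max_bandwidth))
    k = max(1, int(ratio * grad.size))
    flat = grad.ravel()
    # Indices of the k largest-magnitude entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], grad.shape

def decompress(idx, values, shape):
    """Rebuild a dense gradient with zeros at the dropped positions."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)
```

Under full bandwidth the ratio reaches 1.0 and the round trip is lossless; as measured bandwidth drops, fewer entries are transmitted per round, trading gradient fidelity for communication volume.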