CEFL: Online Admission Control, Data Scheduling and Accuracy Tuning for Cost-Efficient Federated Learning Across Edge Nodes

2020 
With the proliferation of the Internet of Things (IoT), zillions of bytes of data are generated at the network edge, creating an urgent need to push the frontiers of artificial intelligence (AI) to the network edge so as to fully unleash the potential of IoT big data. To materialize this vision, known as edge intelligence, federated learning is emerging as a promising solution that enables edge nodes to collaboratively learn a shared model in a privacy-preserving and communication-efficient manner, by keeping the data at the edge nodes. While pilot efforts on federated learning have mostly focused on reducing the communication overhead, the computation efficiency of those resource-constrained edge nodes has been largely overlooked. To bridge this gap, in this article we investigate how to coordinate the edge and the cloud to optimize the system-wide cost efficiency of federated learning. Leveraging Lyapunov optimization theory, we design and analyze a cost-efficient optimization framework, CEFL, that makes online yet near-optimal control decisions on admission control, load balancing, data scheduling, and accuracy tuning for dynamically arriving training data samples, reducing both computation and communication cost. In particular, our control framework CEFL can be flexibly extended to incorporate various design choices and practical requirements of federated learning, such as exploiting cheaper cloud resources for model training with better cost efficiency while still facilitating on-demand privacy preservation. Via both rigorous theoretical analysis and extensive trace-driven evaluations, we verify the cost efficiency of our proposed CEFL framework.
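To illustrate the kind of online decision-making the abstract describes, the sketch below shows a generic Lyapunov drift-plus-penalty admission rule with a virtual queue. This is a minimal toy example of the general technique, not the paper's actual CEFL algorithm; the function names, the control parameter `V`, and the reward model are all illustrative assumptions.

```python
def admit_batch(Q, a, reward, V):
    """Drift-plus-penalty admission rule (generic sketch, not CEFL itself):
    admit an arriving batch of size `a` iff the penalty-weighted reward
    V * reward outweighs the queue-drift term Q * a. Larger V favors
    utility over backlog; Q is the current virtual-queue backlog."""
    return V * reward >= Q * a

def update_queue(Q, admitted, a, mu):
    """Virtual-queue dynamics: Q(t+1) = max(Q(t) + arrivals - service, 0),
    where `mu` is the per-slot service (processing) capacity."""
    arrivals = a if admitted else 0
    return max(Q + arrivals - mu, 0)

# Toy trace: two time slots, service rate mu = 2, control parameter V = 10.
Q = 0
admitted = admit_batch(Q, 3, 1.0, 10)   # high-reward batch is admitted
Q = update_queue(Q, admitted, 3, 2)     # backlog grows to 1
admitted = admit_batch(Q, 5, 0.3, 10)   # low-reward batch is rejected
Q = update_queue(Q, admitted, 5, 2)     # backlog drains back to 0
```

The key property of such rules, established via Lyapunov analysis, is a tunable O(1/V) gap to the optimal time-averaged cost at the price of O(V) queue backlog, which is the kind of near-optimality guarantee the abstract attributes to CEFL.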