Communication-Efficient Federated Edge Learning via Optimal Probabilistic Device Scheduling
2022
Federated edge learning (FEEL) is a popular distributed learning framework that enables privacy-preserving collaborative model training through periodic communication of learning updates between edge devices and a server. Due to constrained bandwidth, only a subset of devices can be selected to upload their updates in each training iteration. This has spawned an active research area in FEEL that studies optimal device scheduling policies for communication time minimization. However, because the exact communication time is difficult to quantify, prior work in this area tackles the problem only partially and indirectly, minimizing either the number of iteration rounds or the per-round latency, whereas the total communication time is determined by both metrics. To close this research gap, we make the first attempt in this paper to formulate and solve the communication time minimization problem directly. We first derive a tight bound that approximates the remaining communication time through a cross-disciplinary effort combining learning theory, for convergence rate analysis, and communication theory, for per-round latency analysis. Building on this novel analytical result, an optimized probabilistic device scheduling policy is derived in closed form by solving the approximate communication time minimization problem. We find that the optimized policy gradually shifts its priority from suppressing the number of remaining communication rounds to reducing the per-round latency as training evolves. Extensive experiments on a real-world dataset and a use case of collaborative 3D object detection in autonomous driving verify the superiority of the proposed policy over three benchmark policies based on the indirect solution approaches.
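The abstract's qualitative finding, that the scheduling policy shifts its priority from suppressing remaining rounds (update importance) toward reducing per-round latency as training progresses, can be illustrated with a minimal sketch. The scoring rule below is an assumed surrogate for exposition only, not the paper's closed-form policy: it blends a convergence-related importance term (gradient norm) with a latency term using a mixing weight that grows over training.

```python
import numpy as np

def scheduling_probabilities(grad_norms, latencies, round_idx, total_rounds):
    """Illustrative probabilistic device scheduling (assumed surrogate,
    not the paper's derived closed-form policy).

    Early in training (round_idx small) the probabilities favor devices
    with informative updates (large gradient norms), which loosely tracks
    reducing the number of remaining rounds; late in training they favor
    low-latency devices, reducing per-round latency.
    """
    alpha = round_idx / total_rounds          # 0 at start, 1 at the end
    importance = np.asarray(grad_norms, dtype=float)
    speed = 1.0 / np.asarray(latencies, dtype=float)  # faster = higher score
    score = ((1.0 - alpha) * importance / importance.sum()
             + alpha * speed / speed.sum())
    return score / score.sum()

def schedule(probs, k, seed=None):
    """Sample k distinct devices according to the scheduling probabilities."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(probs), size=k, replace=False, p=probs)
```

For example, a device that uploads quickly but currently carries a small gradient norm receives a low selection probability early on and a high one near the end of training, matching the priority shift described above.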