Multi-Stage Hybrid Federated Learning Over Large-Scale D2D-Enabled Fog Networks
2022
Federated learning has generated significant interest, with nearly all works focused on a "star" topology where nodes/devices are each connected to a central server. We migrate away from this architecture and extend it through the network dimension to the case where there are multiple layers of nodes between the end devices and the server. Specifically, we develop multi-stage hybrid federated learning (MH-FL), a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure. MH-FL considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications, and presumes a semi-decentralized architecture for federated learning. It orchestrates the devices at different network layers in a collaborative/cooperative manner (i.e., using D2D interactions) to form local consensus on the model parameters, and combines this with multi-stage parameter relaying between layers of the tree-shaped hierarchy. We derive the upper bound of convergence for MH-FL with respect to parameters of the network topology (e.g., the spectral radius) and the learning algorithm (e.g., the number of D2D rounds in different clusters). We obtain a set of policies for the D2D rounds at different clusters that guarantee either a finite optimality gap or convergence to the global optimum. We then develop a distributed control algorithm for MH-FL to tune the D2D rounds in each cluster over time to meet specific convergence criteria. Our experiments on real-world datasets verify our analytical results and demonstrate the advantages of MH-FL in terms of resource utilization metrics.
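The local consensus step described above can be sketched as classic distributed averaging within one cluster: each device repeatedly mixes its parameters with those of its D2D neighbors, and the residual disagreement shrinks at a rate governed by the spectral properties of the mixing matrix. The snippet below is a minimal illustration under assumed simplifications (a 5-device ring cluster, scalar parameters, and a simple doubly-stochastic mixing matrix), not the paper's exact construction:

```python
import numpy as np

# Hypothetical sketch of one cluster's D2D consensus rounds.
# Assumptions (not from the paper): a ring of n = 5 devices, each
# averaging equally with itself and its two neighbors.
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3  # doubly stochastic: rows and columns sum to 1

# Local model parameters (one scalar per device for illustration).
x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])
mean = x.mean()  # the consensus target; preserved by each mixing round

for _ in range(50):  # D2D rounds: more rounds give tighter consensus
    x = W @ x

# Disagreement decays geometrically, governed by the second-largest
# eigenvalue magnitude of W, which is why the convergence bound depends
# on the topology's spectral properties.
print(np.max(np.abs(x - mean)))  # small residual disagreement
```

In the hierarchy described in the abstract, each cluster would run such rounds locally before a designated node relays the (near-)consensus value up to the next layer; tuning the number of rounds per cluster trades D2D communication cost against the residual consensus error that propagates into the global bound.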