MHAT: An Efficient Model-Heterogenous Aggregation Training Scheme for Federated Learning

2021 
Abstract: Federated learning allows multiple participants to jointly train a global model while preserving the confidentiality and integrity of their private datasets. However, current server-side aggregation algorithms for federated learning focus only on model parameters, resulting in heavy communication costs and slow convergence. Most importantly, they cannot handle the scenario in which different clients hold local models with different network architectures. In this paper, we view these challenges from an alternative perspective: we draw attention to what should be aggregated and how convergence efficiency can be improved. Specifically, we propose MHAT, a novel model-heterogeneous aggregation training scheme for federated learning that exploits Knowledge Distillation (KD) to extract the update information from the heterogeneous models of all clients and trains an auxiliary model on the server to aggregate this information. MHAT frees clients from committing to a unified model architecture and significantly reduces the required computing resources while maintaining acceptable convergence accuracy. Various experiments verify the effectiveness and applicability of our proposed scheme.
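The abstract describes server-side aggregation via knowledge distillation into an auxiliary model. The sketch below illustrates one plausible reading of that idea: heterogeneous client models produce predictions on a shared public dataset, the server fuses them into soft targets, and an auxiliary model is trained on those targets. The function name, the averaging fusion rule, the temperature-scaled KL loss, and all hyperparameters are illustrative assumptions, not the exact procedure from the paper.

```python
# Hedged sketch of KD-based aggregation over heterogeneous client models.
# Assumes clients and server share access to an unlabeled public dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F


def aggregate_by_distillation(client_models, aux_model, public_loader,
                              temperature=2.0, epochs=1, lr=1e-3, device="cpu"):
    """Train a server-side auxiliary model on soft targets derived from
    heterogeneous client models evaluated on a shared public dataset."""
    aux_model.to(device).train()
    optimizer = torch.optim.Adam(aux_model.parameters(), lr=lr)
    for m in client_models:
        m.to(device).eval()

    for _ in range(epochs):
        for x, _ in public_loader:  # labels of the public set are not needed
            x = x.to(device)
            with torch.no_grad():
                # Fuse the clients' temperature-softened predictions by simple
                # averaging (an assumed fusion rule, not necessarily the paper's).
                soft_targets = torch.stack(
                    [F.softmax(m(x) / temperature, dim=1) for m in client_models]
                ).mean(dim=0)
            # Distill the fused knowledge into the auxiliary model.
            log_probs = F.log_softmax(aux_model(x) / temperature, dim=1)
            loss = F.kl_div(log_probs, soft_targets,
                            reduction="batchmean") * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return aux_model
```

In such a setup the trained auxiliary model (or its soft predictions on the public data) could then be broadcast back to clients as the aggregated knowledge for the next round; how MHAT actually closes this loop is specified in the paper, not in this sketch.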