Accelerating DNN Training in Wireless Federated Edge Learning Systems

2021 
The training task for classical machine learning models, such as deep neural networks, is generally carried out at a remote cloud center for centralized learning, which is typically time-consuming and resource-hungry. It also raises serious privacy issues and incurs long communication latency, since a large amount of data must be transmitted to the centralized node. To overcome these shortcomings, we consider a newly emerged framework, namely federated edge learning, which aggregates local learning updates at the network edge in lieu of users' raw data. Aiming at accelerating the training process, we first define a novel performance evaluation criterion, called learning efficiency. We then formulate a training acceleration optimization problem in the CPU scenario, where each user device is equipped with a CPU. Closed-form expressions for joint batchsize selection and communication resource allocation are developed, and several insightful results are highlighted. Further, we extend our learning framework to the GPU scenario. The optimal solution in this scenario is shown to have a similar structure to that of the CPU scenario, suggesting that our proposed algorithm is applicable to more general systems. Finally, extensive experiments validate the theoretical analysis and demonstrate that the proposed algorithm can reduce the training time and improve the learning accuracy simultaneously.
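The core idea summarized above, aggregating local model updates at the network edge instead of uploading raw data, can be illustrated with a minimal federated-averaging-style sketch. This is not the paper's algorithm: the linear model, device count, batch sizes, and learning rate below are illustrative assumptions, and the batchsize-weighted aggregation merely hints at how batchsize selection interacts with the aggregation step.

```python
import numpy as np

# Minimal federated-edge-learning sketch (illustrative assumptions, not the paper's method).
# Each device runs one local SGD step on its own data and uploads only its updated
# weights; the edge server aggregates the updates, never the raw data.

rng = np.random.default_rng(0)
dim, num_devices = 5, 4
true_w = rng.normal(size=dim)

# Hypothetical per-device batch sizes and local datasets (assumptions for the sketch).
batch_sizes = [16, 32, 8, 24]
local_data = []
for b in batch_sizes:
    X = rng.normal(size=(b, dim))
    y = X @ true_w + 0.1 * rng.normal(size=b)
    local_data.append((X, y))

global_w = np.zeros(dim)
lr = 0.1

for _ in range(50):
    local_updates = []
    for (X, y), b in zip(local_data, batch_sizes):
        w = global_w.copy()
        grad = X.T @ (X @ w - y) / b        # local gradient on the device's batch
        local_updates.append(w - lr * grad)  # one local SGD step on the device
    # Edge server: batchsize-weighted aggregation of the local updates (FedAvg-style).
    weights = np.array(batch_sizes) / sum(batch_sizes)
    global_w = sum(wgt * upd for wgt, upd in zip(weights, local_updates))

print("estimation error:", np.linalg.norm(global_w - true_w))
```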