Accelerating Distributed Deep Learning By Adaptive Gradient Quantization

2020 
To accelerate distributed deep learning, gradient quantization is widely used to reduce communication cost. However, existing quantization schemes suffer from either model accuracy degradation or a low compression ratio, arising from a redundant setting of the quantization level or from high overhead in determining that level. In this work, we propose a novel adaptive quantization scheme (AdaQS) that balances model accuracy against the quantization level. AdaQS determines the quantization level automatically according to the gradient's mean-to-standard-deviation ratio (MSDR). To reduce the quantization overhead, we employ a computationally friendly moment estimation to calculate the MSDR. Finally, we provide a theoretical convergence analysis of AdaQS for non-convex objectives. Experiments demonstrate that AdaQS performs well on the very deep GoogleNet model, improving accuracy by 2.55% relative to vanilla SGD, and achieves a 1.8x end-to-end speedup on AlexNet in a distributed cluster with 4*4 GPUs.
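
As a rough illustration of the idea described above, the Python sketch below quantizes a gradient with a number of levels chosen from its mean-to-standard-deviation ratio, with the moments tracked by exponential moving averages. The EMA update, the MSDR-to-level mapping (adaptive_levels), and the QSGD-style stochastic rounding are illustrative assumptions; the paper's exact formulas are not given in the abstract.

import numpy as np

def estimate_msdr(grad, state, beta=0.9):
    # Track first and second moments with exponential moving averages
    # (an assumption here; the abstract only says a "computationally
    # friendly" moment estimation is used) and return the gradient's
    # mean-to-standard-deviation ratio (MSDR).
    state["m"] = beta * state["m"] + (1 - beta) * grad
    state["v"] = beta * state["v"] + (1 - beta) * grad ** 2
    std = np.sqrt(np.maximum(state["v"] - state["m"] ** 2, 1e-12))
    return float(np.mean(np.abs(state["m"]) / std))

def adaptive_levels(msdr, s_min=2, s_max=16):
    # Hypothetical monotone mapping from the MSDR to the number of
    # quantization levels s; the paper's exact rule may differ.
    return int(np.clip(np.ceil(msdr * s_max), s_min, s_max))

def stochastic_quantize(grad, s):
    # Unbiased stochastic quantization with s levels (QSGD-style),
    # used here as a stand-in for the scheme's quantizer.
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / norm * s
    lower = np.floor(scaled)
    q = lower + (np.random.rand(*grad.shape) < scaled - lower)
    return np.sign(grad) * q * norm / s

# Example: quantize one synthetic gradient before communication.
grad = 0.01 * np.random.randn(1000)
state = {"m": np.zeros_like(grad), "v": np.zeros_like(grad)}
msdr = estimate_msdr(grad, state)
s = adaptive_levels(msdr)
compressed = stochastic_quantize(grad, s)
print(msdr, s, np.linalg.norm(compressed - grad))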