Logarithmic Compression for Memory Footprint Reduction in Neural Network Training

2017 
Deep neural networks occupy a large amount of memory during the training phase. Since the computing environment of future IoT devices is constrained, a more hardware-aware approach with a smaller energy and memory footprint must be considered. In this paper, we propose a novel training method for neural networks that decreases memory usage by optimizing the representation format of temporary data in the training phase. Most gradient values during training are concentrated around zero. Our approach employs logarithmic quantization, which expresses a numerical value logarithmically, to reduce the bit width. We evaluate the proposed method in terms of memory footprint and prediction accuracy. The results show that the proposed method effectively reduces the memory footprint by about 60% with only a slight degradation of prediction accuracy.
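As a rough illustration of the idea (not the paper's exact scheme), the sketch below shows power-of-two logarithmic quantization of gradient-like values in NumPy; the bit-width and exponent-range parameters (`exp_bits`, `exp_min`) are assumptions chosen for the example.

```python
import numpy as np

def log_quantize(x, exp_bits=4, exp_min=-14):
    """Quantize each value to sign * 2^e, where e is an integer exponent
    representable in `exp_bits` bits starting from `exp_min`.
    Parameters here are illustrative, not those used in the paper."""
    sign = np.sign(x)
    mag = np.abs(x)
    # Round the magnitude to the nearest power of two (guard against log2(0)).
    exp = np.round(np.log2(np.where(mag > 0, mag, 1.0)))
    # Clip the exponent to the range representable with `exp_bits` bits.
    exp_max = exp_min + 2**exp_bits - 1
    exp = np.clip(exp, exp_min, exp_max)
    q = sign * np.exp2(exp)
    # Exact zeros stay zero.
    return np.where(mag == 0, 0.0, q)

# Gradients clustered near zero keep relative precision under this format.
grads = np.array([3e-4, -7.5e-3, 0.12, 0.0, -2e-5])
print(log_quantize(grads))
```

Because only the sign and a small integer exponent need to be stored, the temporary data can be packed into far fewer bits than a 32-bit float, which is the source of the memory-footprint reduction described above.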