Exploring the Potential of Low-Bit Training of Convolutional Neural Networks
2022
Convolutional neural networks (CNNs) have been widely used in many tasks, but training CNNs is time-consuming and energy-hungry. Using a low-bit integer format has proved promising for speeding up CNN inference and improving its energy efficiency, yet CNN training can hardly benefit from such a technique because of the following challenges: 1) the integer data format cannot cover the dynamic range required during training, which causes an accuracy drop; 2) the floating-point data format keeps a sizeable dynamic range by spending many more bits on the exponent, so its accumulation consumes more power than integer accumulation; and 3) some specially designed data formats (e.g., with group-wise scaling) can potentially address the former two problems, but common hardware platforms cannot support them efficiently. To tackle these challenges and let the training phase of CNNs benefit from low-bit formats, we propose a low-bit training framework for CNNs that pursues a better tradeoff between accuracy and energy efficiency: 1) we adopt element-wise scaling to increase the dynamic range of the data representation, which significantly reduces the quantization error; 2) we design group-wise scaling with a hardware-friendly factor format to reduce the number of element-wise exponent bits without degrading accuracy; and 3) we design a customized hardware unit that implements low-bit tensor convolution arithmetic with our multilevel scaling data format. Experiments show that our framework achieves a better tradeoff between accuracy and bit-width than previous low-bit training studies. For training various models on CIFAR-10, a 1-bit mantissa and a 2-bit exponent are adequate to keep the accuracy loss within 1%; on larger datasets such as ImageNet, a 4-bit mantissa and a 2-bit exponent suffice. Whole-network energy consumption simulation shows that training a variety of models with our framework achieves $4.9\times$–$10.2\times$ higher energy efficiency than full-precision arithmetic.
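As a rough illustration of the multilevel scaling idea the abstract describes (a shared group-wise scale combined with a small per-element exponent and mantissa), the sketch below quantizes a tensor along those lines and measures the reconstruction error. It is not the paper's implementation: the function name `quantize_multilevel`, the group size, and the choice of a power-of-two group scale are assumptions made for this example.

```python
import numpy as np

def quantize_multilevel(x, mant_bits=4, exp_bits=2, group_size=64):
    """Hypothetical sketch: quantize-dequantize a tensor using a group-wise
    power-of-two scale plus a low-bit per-element exponent and mantissa."""
    flat = x.reshape(-1)
    pad = (-flat.size) % group_size
    flat = np.pad(flat, (0, pad))
    groups = flat.reshape(-1, group_size)

    # Group-wise scaling: one power-of-two factor per group keeps the largest
    # element representable; in hardware this is a shift rather than a multiply.
    max_abs = np.abs(groups).max(axis=1, keepdims=True)
    max_abs[max_abs == 0] = 1.0
    group_exp = np.floor(np.log2(max_abs))
    scaled = groups / 2.0 ** group_exp  # max magnitude per group now in [1, 2)

    # Element-wise exponent: exp_bits bits cover exponents {0, -1, ..., -(2^e - 1)}.
    elem_exp = np.clip(np.floor(np.log2(np.abs(scaled) + 1e-30)),
                       -(2 ** exp_bits - 1), 0)

    # Mantissa: round to mant_bits fractional bits within each element's binade.
    step = 2.0 ** (elem_exp - mant_bits)
    q = np.round(scaled / step) * step

    # Dequantize back to full precision for comparison.
    deq = (q * 2.0 ** group_exp).reshape(-1)[: x.size].reshape(x.shape)
    return deq

x = np.random.randn(3, 16, 8, 8).astype(np.float32)
x_q = quantize_multilevel(x, mant_bits=4, exp_bits=2)
print("mean abs quantization error:", np.abs(x - x_q).mean())
```

Restricting the group scale to a power of two is one way to keep the factor format hardware friendly, since rescaling then reduces to a bit shift; the bit-widths in the call mirror the 4-bit mantissa / 2-bit exponent setting reported for ImageNet, but the numerical behavior here is only indicative.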