Deep Neural Network with Limited Numerical Precision

2017 
In convolutional neural networks, multiplication is the arithmetic operation that dominates both silicon area and power consumption. This paper trains convolutional neural networks with three different data formats (floating point, fixed point, and dynamic fixed point) on two datasets (MNIST and CIFAR-10). For each dataset and data format, the paper assesses the impact of multiplication precision on the final training error rate. The results show that networks trained with low-precision fixed-point arithmetic reach error rates close to those of networks trained with floating point, indicating that low precision can fully meet the requirements of network training.
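To make the fixed-point and dynamic fixed-point formats concrete, here is a minimal sketch in NumPy. The function names, bit widths, and the heuristic for choosing the integer length are assumptions for illustration, not the paper's implementation: a fixed-point value with IL integer bits and FL fractional bits is obtained by scaling by 2^FL, rounding to the nearest integer, and clipping to the representable range; dynamic fixed point additionally picks IL per tensor so the largest magnitude fits.

```python
import numpy as np

def quantize_fixed_point(x, il=2, fl=14):
    """Quantize to signed fixed point with `il` integer and `fl` fractional
    bits (word length il + fl). Illustrative sketch, not the paper's code."""
    scale = 2.0 ** fl
    q = np.round(x * scale) / scale          # round-to-nearest, step 2**-fl
    lo = -(2.0 ** (il - 1))                  # most negative representable value
    hi = 2.0 ** (il - 1) - 2.0 ** (-fl)      # most positive representable value
    return np.clip(q, lo, hi)

def dynamic_fixed_point(x, word_length=16):
    """Dynamic fixed point: choose the integer length per tensor so the
    largest magnitude fits, then quantize. The il heuristic is assumed."""
    max_abs = np.max(np.abs(x)) + 1e-12
    il = max(1, int(np.ceil(np.log2(max_abs))) + 1)
    return quantize_fixed_point(x, il=il, fl=word_length - il)
```

With `il=2, fl=14` the representable range is [-2, 2 - 2^-14] with step 2^-14, so small weights and activations are captured accurately while out-of-range values saturate; the dynamic variant trades range for resolution per tensor, which is what lets low word lengths track floating-point training closely.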