Study of Fault Tolerance Methods for Hardware Implementations of Convolutional Neural Networks

2019 
The paper focuses on fault-protection methods for neural networks implemented in hardware operating in fixed-point mode. We explore possible sources of errors as well as ways to mitigate them. For this purpose, networks of identical architecture based on the VGG model have been studied. The VGG SIMPLE neural network chosen for the experiments is a simplified version (with a smaller number of layers) of the well-known VGG16 and VGG19 networks. To reduce the effect of failures on network accuracy, we propose training neural networks with additional dropout layers; this approach removes excess dependencies between neighboring perceptrons. We also investigate complicating the network architecture to reduce the probability of misclassification caused by failures in neurons. The experimental results show that adding dropout layers reduces the effect of failures on the classification ability of error-prone neural networks, while classification accuracy remains the same as that of the reference networks.
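The abstract does not include code; the following is a minimal sketch, assuming PyTorch and 32x32 RGB inputs with 10 classes, of what a simplified VGG-style classifier with additional dropout layers after each convolutional block might look like. All layer sizes, names, and dropout probabilities are illustrative assumptions, not taken from the paper.

```python
# Sketch (not the authors' code): a simplified VGG-style network with extra
# Dropout layers inserted after each convolutional block. Hyperparameters
# and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class VGGSimpleDropout(nn.Module):
    def __init__(self, num_classes: int = 10, p_conv: float = 0.25, p_fc: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Dropout(p_conv),   # additional dropout to reduce co-adaptation of neighboring units
            # Block 2
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Dropout(p_conv),   # additional dropout layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(inplace=True),
            nn.Dropout(p_fc),     # dropout before the final classifier
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```

To make the fault model concrete, the sketch below shows one plausible way to emulate a hardware failure in fixed-point arithmetic: a weight is quantized to a signed 8-bit fixed-point code and a single random bit is flipped. The paper's exact fault model is not reproduced here; the word length, fractional bits, and single-bit-flip assumption are all hypothetical.

```python
# Sketch of fault injection into a fixed-point weight (assumed fault model,
# not the paper's): quantize, flip one random bit, dequantize.
import random

def inject_bit_flip(weight: float, frac_bits: int = 6, word_bits: int = 8) -> float:
    """Quantize `weight` to signed fixed point, flip one random bit, dequantize."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    q = max(lo, min(hi, round(weight * scale)))   # signed fixed-point code
    q &= (1 << word_bits) - 1                     # two's-complement bit pattern
    q ^= 1 << random.randrange(word_bits)         # single-event upset
    if q >= 1 << (word_bits - 1):                 # convert back to signed value
        q -= 1 << word_bits
    return q / scale
```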