Recursive Binary Neural Network Training Model for Efficient Usage of On-Chip Memory

2019 
We present a novel deep learning model for a neural network that reduces both computation and data storage overhead. To do so, the proposed model combines binary-weight neural network (BNN) training, a storage reuse technique, and an incremental training scheme. The storage requirements can be tuned to meet the desired classification accuracy, allowing more parameters to be stored in on-chip memory and thereby reducing off-chip memory accesses. Our experiments show a 4-6x reduction in weight storage footprint when training binary deep neural network models. On the FPGA platform, this reduction in off-chip accesses enables our model to train a neural network with 14x lower latency than the conventional BNN training method.
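To make the binary-weight training component concrete, the following is a minimal sketch of how BNN training is commonly implemented: full-precision "latent" weights are kept for gradient updates, while the forward pass uses only their signs, so each weight needs just one bit of storage at inference time. This is an illustrative example of generic binary-weight training with a straight-through estimator, not the paper's specific recursive or incremental scheme; the layer sizes, dataset stand-in, and hyperparameters are assumptions.

```python
# Sketch of binary-weight (BNN) training with a straight-through estimator.
# Assumed setup: a small MLP on 784-dim inputs (e.g., flattened MNIST) with
# random stand-in data; not the paper's recursive training model.
import torch
import torch.nn as nn

class BinaryLinear(nn.Linear):
    def forward(self, x):
        # Forward with sign(W); backward treats binarization as identity
        # (straight-through estimator), so the latent weights still learn.
        w_bin = self.weight.sign()
        w_ste = self.weight + (w_bin - self.weight).detach()
        return nn.functional.linear(x, w_ste, self.bias)

model = nn.Sequential(BinaryLinear(784, 256), nn.ReLU(), BinaryLinear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784)           # stand-in batch; a real run would load data
y = torch.randint(0, 10, (64,))

for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    # Clip latent weights to [-1, 1] so the STE gradient stays meaningful.
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-1.0, 1.0)
```

After training, only the 1-bit signs of the weights need to be retained, which is the source of the storage savings that the paper's reuse and incremental schemes build on.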