Accelerating deep learning by binarized hardware

2017 
Hardware-oriented approaches to accelerating deep neural network processing are very important for various embedded intelligent applications. This paper summarizes our recent achievements in efficient neural network processing. We focus on binarization as an approach to energy- and area-efficient neural network processors. We first present an energy-efficient binarized processor for deep neural networks that employs an in-memory processing architecture. The fabricated processor LSI achieves high performance and energy efficiency compared to prior work. We then present an architecture exploration technique for binarized neural network processors on an FPGA. The exploration results indicate that binarized hardware achieves very high performance by exploiting multiple forms of parallelism simultaneously.
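The efficiency of binarization comes from constraining weights and activations to {-1, +1}, so a multiply-accumulate reduces to a bitwise XNOR followed by a popcount. The following Python sketch illustrates this standard trick on bit-packed vectors; it is a minimal illustration of the principle, not the paper's processor implementation (the function name and bit-packing convention are our own):

```python
def binarized_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bits
    (bit 1 encodes +1, bit 0 encodes -1).

    XNOR marks positions where the two vectors agree, so
    dot = matches - mismatches = 2 * popcount(xnor) - n.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask   # 1 where the encoded values agree
    matches = bin(xnor).count("1")     # popcount
    return 2 * matches - n

# Example: a = [+1, -1, +1, -1] -> 0b1010, w = [+1, +1, -1, -1] -> 0b1100
# dot = (+1)(+1) + (-1)(+1) + (+1)(-1) + (-1)(-1) = 0
print(binarized_dot(0b1010, 0b1100, 4))  # -> 0
```

In hardware, the XNOR and popcount map to simple gates and adder trees, which is what enables the in-memory and FPGA implementations described above to reach high throughput per unit of energy and area.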