Efficient Hardware Realizations of Feedforward Artificial Neural Networks

2021 
This article presents design techniques for the efficient hardware implementation of feedforward artificial neural networks (ANNs) under parallel and time-multiplexed architectures. To reduce design complexity, after the ANN weights are determined in the training phase, we introduce a technique to find the minimum quantization value used to convert the floating-point weights to integers. For each design architecture, we also propose an algorithm that tunes the integer weights to reduce hardware complexity without a loss in hardware accuracy. Furthermore, the multiplications of constant weights by input variables are implemented under the shift-adds architecture using the fewest addition/subtraction operations, found by prominent previously proposed algorithms. Finally, we introduce a computer-aided design (CAD) tool, called SIMURG, that automatically generates a hardware description of an ANN based on the ANN structure and the solutions produced by the proposed design techniques and algorithms. Experimental results indicate that the tuning techniques can significantly reduce ANN hardware complexity under a given design architecture, and that the multiplierless design of an ANN can lead to significant reductions in area and energy consumption at a slight cost in latency.
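To illustrate the quantization step, the minimal sketch below searches for the smallest quantization value q such that scaling each floating-point weight by 2**q and rounding to the nearest integer preserves the network's accuracy. The `evaluate` callback, which runs the ANN with the quantized weights and returns its accuracy, is an assumed hook for illustration, not part of the paper's tool, and the paper's exact search may differ.

```python
import numpy as np

def find_min_quantization(weights, evaluate, float_acc, tol=0.0, max_q=16):
    """Find the smallest quantization value q such that rounding each
    floating-point weight scaled by 2**q to an integer keeps the ANN's
    accuracy within `tol` of its floating-point accuracy.

    weights  -- list of float weight arrays, one per layer
    evaluate -- assumed callback: runs the ANN with integer weights and
                scale 2**q, returning classification accuracy
    """
    for q in range(1, max_q + 1):
        int_weights = [np.round(w * (1 << q)).astype(np.int64) for w in weights]
        if evaluate(int_weights, q) >= float_acc - tol:
            return q, int_weights  # smallest q that preserves accuracy
    raise ValueError("no quantization value up to max_q preserves accuracy")
```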
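The weight-tuning idea can be sketched as a greedy search: replace each integer weight with a neighboring value that is cheaper to realize in shift-adds hardware, and keep the change only if accuracy is unharmed. Here the hardware cost is approximated by the number of nonzero bits in the weight; the paper's tuning algorithms are architecture-specific and more elaborate.

```python
import numpy as np

def adder_cost(w):
    # Nonzero bits in |w| approximate the number of additions needed to
    # realize multiplication by w under a shift-adds architecture.
    return bin(abs(int(w))).count("1")

def tune_weights(int_weights, evaluate, q, baseline_acc):
    """Greedily replace each integer weight with a neighboring value that is
    cheaper in shift-adds hardware, keeping the change only if the ANN's
    accuracy (checked via the assumed `evaluate` callback) does not drop."""
    for layer in int_weights:
        for i in range(layer.size):
            best = int(layer.flat[i])
            for cand in (best - 1, best + 1):
                if adder_cost(cand) < adder_cost(best):
                    layer.flat[i] = cand
                    if evaluate(int_weights, q) >= baseline_acc:
                        best = cand      # accept the cheaper weight
            layer.flat[i] = best         # restore the best accepted value
    return int_weights
```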
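Finally, multiplierless constant multiplication replaces each multiplier with shifts and additions/subtractions. The sketch below uses canonical signed digit (CSD) recoding, a simple classical technique chosen here for illustration; the paper relies on stronger previously proposed single- and multiple-constant multiplication algorithms that can find even fewer operations.

```python
def csd_digits(c):
    """Canonical signed digit (CSD) recoding of a positive integer c.
    Returns digits in {-1, 0, +1}, least significant first, with no two
    adjacent nonzero digits -- so multiplication by c needs at most one
    add/subtract per nonzero digit."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c & 3)  # +1 if c % 4 == 1, -1 if c % 4 == 3
            digits.append(d)
            c -= d
        else:
            digits.append(0)
        c >>= 1
    return digits

def constant_multiply(x, c):
    """Multiply x by the constant c using only shifts and adds/subtracts,
    mimicking a multiplierless shift-adds realization."""
    result = 0
    for shift, d in enumerate(csd_digits(abs(c))):
        if d == 1:
            result += x << shift
        elif d == -1:
            result -= x << shift
    return -result if c < 0 else result

# Example: 23*x = (x << 5) - (x << 3) - x, i.e., two operations instead of
# the three additions needed by the plain binary form 10111.
assert constant_multiply(5, 23) == 115
```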