A Twofold Lookup Table Architecture for Efficient Approximation of Activation Functions
2020
In this article, we propose a novel approach to reducing hardware resource consumption when neural networks (NNs) are deployed on field-programmable gate array (FPGA) boards. Rather than approximating the activation functions of an NN with a classical single lookup table (LUT), the proposed solution is based on a twofold LUT (t-LUT) architecture comprising an error-LUT (e-LUT) and a data-LUT (d-LUT), which achieves high precision and speed together with low hardware resource consumption. The efficiency of the proposed approach was evaluated against multiple earlier approaches. For a hyperbolic tangent (tanh) activation function, our solution improved the compressibility of earlier single-LUT designs by up to 94.44% and that of range-addressable LUT (RALUT) designs by up to 6.35%. Moreover, combining RALUT with our architecture improved the compressibility of the RALUT-based result by up to an additional 10.21%. The designed architecture had an initial latency of 39.721 ns when tested with a 50-MHz clock, retrieving data from the d-LUT and e-LUT simultaneously.
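To make the decomposition concrete, the following Python sketch models one plausible reading of the t-LUT scheme: a small d-LUT holds coarse, full-width tanh samples, while a larger but narrow e-LUT holds the small residuals between the exact function and the coarse sample, and the two parallel reads are summed. All bit-widths, the input range, and the address-splitting rule below are assumptions made for illustration; the paper's actual partitioning may differ.

```python
import numpy as np

# Illustrative software model of a twofold LUT (t-LUT) for tanh.
# All parameters here are assumptions for the sketch, not values
# taken from the paper.
ADDR_BITS = 8      # full input address width, covering inputs in [0, 2)
COARSE_BITS = 4    # high-order address bits that select the d-LUT entry
E_FRAC_BITS = 6    # narrow fixed-point width of each e-LUT entry


def build_tluts():
    """Build the data-LUT (coarse tanh samples) and error-LUT (residuals)."""
    x = np.arange(2 ** ADDR_BITS) / 2 ** (ADDR_BITS - 1)  # inputs in [0, 2)
    exact = np.tanh(x)

    # d-LUT: few wide entries, one tanh sample per coarse segment.
    x_coarse = np.arange(2 ** COARSE_BITS) / 2 ** (COARSE_BITS - 1)
    d_lut = np.tanh(x_coarse)

    # e-LUT: residual between the exact value and the coarse sample.
    # The residuals are small, so each entry fits in far fewer bits
    # than a full tanh sample; here they are quantized to E_FRAC_BITS
    # fractional bits.
    coarse_idx = np.arange(2 ** ADDR_BITS) >> (ADDR_BITS - COARSE_BITS)
    residual = exact - d_lut[coarse_idx]
    e_lut = np.round(residual * 2 ** E_FRAC_BITS) / 2 ** E_FRAC_BITS
    return d_lut, e_lut


def tanh_tlut(addr, d_lut, e_lut):
    """Approximate tanh(addr / 2**(ADDR_BITS - 1)) by reading both LUTs
    (in hardware, in parallel) and summing the two results."""
    coarse_idx = addr >> (ADDR_BITS - COARSE_BITS)
    return d_lut[coarse_idx] + e_lut[addr]


if __name__ == "__main__":
    d_lut, e_lut = build_tluts()
    addrs = np.arange(2 ** ADDR_BITS)
    approx = tanh_tlut(addrs, d_lut, e_lut)
    exact = np.tanh(addrs / 2 ** (ADDR_BITS - 1))
    # Bounded by half an e-LUT LSB, i.e. 2**-(E_FRAC_BITS + 1)
    print("max |error| =", np.abs(approx - exact).max())
```

Under these assumed widths, a single LUT would store 2^8 full-width words, whereas the t-LUT stores 2^4 full-width words plus 2^8 narrow residual words; the narrow residual entries are where a compressibility gain of the kind reported above would come from.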