A Feasible FPGA Weightless Neural Accelerator

2019 
AI applications have recently driven the computer architecture industry toward novel, more efficient dedicated hardware accelerators and tools. Weightless Neural Networks (WNNs) are a class of Artificial Neural Networks (ANNs) often applied to pattern recognition problems. A WNN uses a set of Random Access Memories (RAMs) as its main mechanism for training on and classifying a given input pattern. Thanks to this memory-based architecture, it can be easily mapped onto hardware and greatly accelerated by a dedicated Register Transfer-Level (RTL) architecture that performs multiple memory accesses in parallel. On the other hand, a straightforward WNN hardware implementation requires excessive memory resources in both ASIC and FPGA variants. This work designs and evaluates a weightless neural accelerator written in High-Level Synthesis (HLS). Our WNN accelerator implements hash tables instead of regular RAMs to substantially reduce its memory requirements, so that it fits in a fairly small Xilinx FPGA. Performance, circuit-area, and power-consumption results show that our accelerator can efficiently learn and classify the MNIST dataset about 8 times faster than the system's embedded ARM processor.
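The RAM-based mechanism the abstract describes, and the hash-table substitution that reduces memory, can be illustrated with a minimal software sketch. This is a hypothetical WiSARD-style model, not the paper's RTL/HLS design: each discriminator splits the binary input into n-bit tuples, each tuple addresses one RAM, and the RAMs are replaced here by hash tables (Python sets) that store only the addresses actually written during training — the same idea that lets the accelerator avoid allocating full 2^n-entry memories.

```python
# Hypothetical WiSARD-style weightless neural network sketch (illustration only,
# not the paper's accelerator). RAMs are replaced by hash tables (sets) so that
# memory grows with the number of trained addresses, not with 2**tuple_bits.

class Discriminator:
    def __init__(self, input_bits, tuple_bits):
        assert input_bits % tuple_bits == 0
        self.tuple_bits = tuple_bits
        self.num_rams = input_bits // tuple_bits
        # One hash table per "RAM": only trained addresses are stored.
        self.rams = [set() for _ in range(self.num_rams)]

    def _addresses(self, pattern):
        # Split the binary input into n-bit tuples; each tuple is a RAM address.
        n = self.tuple_bits
        for i in range(self.num_rams):
            chunk = pattern[i * n:(i + 1) * n]
            yield int("".join(map(str, chunk)), 2)

    def train(self, pattern):
        # Training writes a 1 at each addressed location (here: insert the key).
        for ram, addr in zip(self.rams, self._addresses(pattern)):
            ram.add(addr)

    def score(self, pattern):
        # Classification counts how many RAMs recognize their address.
        return sum(addr in ram
                   for ram, addr in zip(self.rams, self._addresses(pattern)))


class WiSARD:
    def __init__(self, classes, input_bits, tuple_bits):
        self.discriminators = {c: Discriminator(input_bits, tuple_bits)
                               for c in classes}

    def train(self, pattern, label):
        self.discriminators[label].train(pattern)

    def classify(self, pattern):
        # The class whose discriminator fires on the most RAMs wins.
        return max(self.discriminators,
                   key=lambda c: self.discriminators[c].score(pattern))
```

Because each discriminator's RAM lookups are independent, a hardware version can issue them all in parallel, which is the source of the speedup the abstract reports.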