A Neural Network Engine for Resource-Constrained Embedded Systems

2020 
This paper introduces a dedicated neural network engine developed for resource-constrained embedded devices such as hearing aids. It implements a novel dynamic two-step scaling technique for quantizing activations in order to minimize word size and thereby memory traffic. This technique requires neither computing a scaling factor during training nor expensive hardware for on-the-fly quantization. Memory traffic is further reduced by a 12-element vectorized multiply-accumulate datapath that supports data reuse. Using a keyword-spotting neural network as a benchmark, the performance of the neural network engine is compared with an implementation on a typical audio digital signal processor used by Demant in some of its hearing instruments. The neural network engine combines small area with low power. It outperforms the digital signal processor, yielding significant reductions in power (5×), memory accesses (5.5×), and memory requirements (3×), among other metrics. Furthermore, the two-step scaling ensures that the engine always executes in a deterministic number of clock cycles for a given neural network.
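The abstract does not spell out the internals of the two-step scaling or the vectorized datapath, but the general idea of a wide-accumulator MAC followed by a dynamic shift-and-saturate quantization can be sketched. The following Python snippet is a rough illustration under assumed semantics, not the paper's actual algorithm: `mac12`, `two_step_scale`, the 8-bit output width, and the shift/rounding policy are all hypothetical choices made here for clarity.

```python
def mac12(weights, activations, acc=0):
    """Accumulate a 12-element dot product into a wide integer accumulator.

    Mirrors (loosely, as an assumption) a 12-element vectorized MAC datapath:
    one call consumes 12 weight/activation pairs per "cycle".
    """
    assert len(weights) == len(activations) == 12
    for w, a in zip(weights, activations):
        acc += w * a
    return acc


def two_step_scale(acc, out_bits=8):
    """Hypothetical two-step quantization of a wide accumulator.

    Step 1: pick a coarse dynamic power-of-two shift so the magnitude
            fits the signed out_bits range, and shift with rounding.
    Step 2: saturate the result to the signed out_bits range.
    Returns (quantized_value, shift) so the shift can be tracked per layer.
    """
    mag = acc if acc >= 0 else -acc - 1
    shift = max(0, mag.bit_length() - (out_bits - 1))
    # Round half up while shifting right.
    scaled = (acc + (1 << (shift - 1))) >> shift if shift > 0 else acc
    lo, hi = -(1 << (out_bits - 1)), (1 << (out_bits - 1)) - 1
    return max(lo, min(hi, scaled)), shift
```

Because the shift is derived from the accumulator's bit length rather than from data-dependent iteration, a quantization of this shape costs the same small, fixed amount of work per output, which is consistent with the abstract's claim of a deterministic cycle count.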