Performance Enhancement of Edge-AI-Inference Using Commodity MRAM: IoT Case Study

2019 
In this paper we demonstrate how the performance of edge-AI inference hardware can be enhanced by effectively using emerging commodity MRAM chips inside specific accelerator pipelines. We consider the special case of IoT-centric ‘normally-off/low-frequency’ AI inference workloads to benchmark the proposed approach. The proposed Non-Volatile AI Inference Accelerator (NVIA) is realized using an FPGA and off-the-shelf MRAM chips, and is benchmarked on the Human Activity Recognition (HAR) dataset. Significant power gains of ∼9x (with Toggle-MRAM, 180 nm) and ∼750x (with STT-MRAM, 22 nm) were achieved compared to volatile SRAM.
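The power advantage of a non-volatile accelerator in a ‘normally-off’ regime comes from the duty cycle: between infrequent inference bursts, volatile SRAM must stay powered to retain weights, while MRAM can be fully power-gated. The sketch below is a minimal duty-cycle energy model illustrating this effect; every numeric parameter (burst length, period, active and leakage powers) is a hypothetical assumption for illustration, not a value reported in the paper.

```python
# Hypothetical duty-cycle power model for 'normally-off' inference.
# All numeric parameters are illustrative assumptions, not measured values.

def average_power(p_active_w, p_idle_w, t_active_s, period_s):
    """Average power of a periodic workload: one active burst per period,
    idle (retention or power-off) for the remainder."""
    t_idle_s = period_s - t_active_s
    return (p_active_w * t_active_s + p_idle_w * t_idle_s) / period_s

# One inference burst every 10 s, each lasting 10 ms (assumed).
T_PERIOD = 10.0       # s between inference events
T_ACTIVE = 0.010      # s of compute per inference

P_ACTIVE = 50e-3      # W during inference (assumed equal for both memories)
P_SRAM_IDLE = 5e-3    # W of SRAM retention leakage while idle (assumed)
P_MRAM_IDLE = 0.0     # W: non-volatile MRAM can be fully power-gated

p_sram = average_power(P_ACTIVE, P_SRAM_IDLE, T_ACTIVE, T_PERIOD)
p_mram = average_power(P_ACTIVE, P_MRAM_IDLE, T_ACTIVE, T_PERIOD)

print(f"SRAM-based average power: {p_sram * 1e3:.3f} mW")
print(f"MRAM-based average power: {p_mram * 1e3:.3f} mW")
print(f"Power gain: {p_sram / p_mram:.1f}x")
```

Under these assumed numbers the idle leakage term dominates the SRAM average, so the gain grows with the idle fraction; the actual ∼9x and ∼750x figures depend on the specific Toggle-MRAM and STT-MRAM parts and workload profile used in the paper.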