MLFlash-CIM: Embedded Multi-Level NOR-Flash Cell based Computing in Memory Architecture for Edge AI Devices

2021 
Computing-in-Memory (CIM) is a promising approach to overcoming the well-known "Von Neumann bottleneck" by performing computation inside memory, particularly in edge artificial intelligence (AI) devices. In this paper, we propose a 40 nm 1 Mb Multi-Level NOR-Flash cell based CIM (MLFlash-CIM) architecture with hardware and software co-design. The proposed MLFlash-CIM is modeled with consideration of cell variation, the number of activated cells, the integral non-linearity (INL) and differential non-linearity (DNL) of the input driver, and the quantization error of the readout circuits. We also propose a multi-bit neural network mapping method with 1/n top values and an adaptive quantization scheme to improve inference accuracy. When applied to a modified 16-layer VGG-16 network, the proposed MLFlash-CIM achieves 92.73% inference accuracy on the CIFAR-10 dataset. The architecture also achieves a peak throughput of 3.277 TOPS and an energy efficiency of 35.6 TOPS/W for 4-bit multiply-and-accumulate (MAC) operations.
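
To make the modeling ingredients listed above concrete, the following is a minimal behavioral sketch of a single CIM column MAC that folds in cell-to-cell variation and readout quantization. The function name, the Gaussian variation model, and the uniform-ADC assumption are illustrative choices rather than the paper's actual model, and the input-driver INL/DNL terms are omitted for brevity.

```python
import numpy as np

def cim_mac_with_nonidealities(weights, inputs, levels=16, sigma_cell=0.02,
                               adc_bits=4, rng=None):
    """Behavioral sketch of one multi-level-cell CIM column MAC.

    weights: ideal multi-level cell codes in [0, levels - 1]
    inputs:  activation codes in [0, levels - 1] applied by the input driver
    sigma_cell: relative cell-to-cell variation (assumed Gaussian, illustrative)
    adc_bits: resolution of the readout quantizer
    """
    rng = np.random.default_rng() if rng is None else rng
    # Cell variation: each stored level deviates slightly from its ideal value.
    g = weights * (1.0 + rng.normal(0.0, sigma_cell, size=weights.shape))
    # Analog accumulation along the bit line; len(inputs) = number of activated cells.
    analog_sum = np.dot(inputs, g)
    # Readout quantization: map the analog sum onto a uniform ADC range.
    full_scale = (levels - 1) ** 2 * len(inputs)
    step = full_scale / (2 ** adc_bits - 1)
    return np.round(analog_sum / step) * step

# Example: 64 activated cells with 4-bit weights and activations.
rng = np.random.default_rng(0)
w = rng.integers(0, 16, size=64).astype(float)
x = rng.integers(0, 16, size=64).astype(float)
print(cim_mac_with_nonidealities(w, x, rng=rng), float(np.dot(x, w)))
```

Comparing the quantized output with the ideal dot product gives a rough sense of how cell variation and ADC resolution jointly bound the achievable MAC precision.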