A CGRA-Based Neural Network Inference Engine for Deep Reinforcement Learning

2018 
The rapid development of artificial intelligence algorithms has created demand for dedicated neural network accelerators, whose high computing performance and low power consumption enable the deployment of deep learning algorithms on edge computing nodes. State-of-the-art deep learning engines mostly support supervised learning models such as CNNs and RNNs, whereas very few AI engines support on-chip reinforcement learning, the foremost algorithm kernel of the decision-making subsystem of an autonomous system. In this work, a Coarse-Grained Reconfigurable Array (CGRA)-like AI computing engine has been designed for the deployment of both supervised and reinforcement learning. Logic synthesis at a design frequency of 200 MHz in a 65 nm CMOS technology yields a silicon area of 0.32 mm² and a power consumption of 15.45 mW. The proposed on-chip AI engine facilitates the implementation of end-to-end perception and decision-making networks, which can find wide employment in autonomous driving, robotics, and UAVs.
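The abstract does not disclose how the engine maps networks onto the array, but the general CGRA idea it invokes can be sketched: a grid of coarse-grained multiply-accumulate processing elements (PEs) onto which fully connected layers are configured, reused here for a toy deep-Q inference followed by a greedy action selection. This is a minimal sketch under those assumptions; every class name, dimension, and the layer mapping below is a hypothetical illustration, not the authors' design.

```python
# Hypothetical sketch of a CGRA-style PE grid running one step of deep-Q
# inference. Not the paper's microarchitecture; all names are illustrative.
import numpy as np

class PE:
    """One coarse-grained PE: holds a weight and performs one
    multiply-accumulate on a partial sum streaming through its row."""
    def __init__(self, weight: float):
        self.weight = weight  # weight held in the PE's local register

    def mac(self, x: float, partial_in: float) -> float:
        return partial_in + self.weight * x

class CGRAGrid:
    """A grid of PEs configured so that each row computes one output
    neuron: the dot product of a weight row with the input vector."""
    def __init__(self, W: np.ndarray):
        self.rows = [[PE(w) for w in row] for row in W]

    def matvec(self, x: np.ndarray) -> np.ndarray:
        y = np.empty(len(self.rows), dtype=np.float32)
        for r, row in enumerate(self.rows):
            partial = 0.0
            for c, pe in enumerate(row):  # partial sum flows along the row
                partial = pe.mac(float(x[c]), partial)
            y[r] = partial
        return y

# Toy deep-Q inference: state -> hidden layer (ReLU) -> Q-values -> action.
rng = np.random.default_rng(0)
state = rng.standard_normal(8).astype(np.float32)     # 8 state features
hidden_grid = CGRAGrid(rng.standard_normal((16, 8)))  # 16 hidden neurons
output_grid = CGRAGrid(rng.standard_normal((4, 16)))  # 4 candidate actions
hidden = np.maximum(hidden_grid.matvec(state), 0.0)   # ReLU activation
q_values = output_grid.matvec(hidden)
action = int(np.argmax(q_values))                     # greedy decision step
print("Q-values:", q_values, "-> action", action)
```

The sketch also suggests why one fabric can serve both workloads: the same PE grid is simply reconfigured with different weight layers, whether the network is a supervised perception model or the Q-network of a reinforcement-learning agent.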