An Efficient OpenCL-Based FPGA Accelerator for MobileNet

2021 
Convolutional neural networks (CNNs) are widely used in image processing and image recognition. To obtain higher accuracy, the computational complexity and the scale of models and their parameters keep increasing. FPGAs have become a good choice for CNN acceleration because of their low power consumption and high flexibility. MobileNet replaces standard convolution with depthwise convolution and pointwise convolution, which greatly reduces the computational complexity and parameter count of the model with little loss of precision, so that it can be deployed on devices with limited computing resources. In this paper, we propose an efficient OpenCL-based FPGA CNN accelerator for MobileNet inference. We design the convolution layers in a modular fashion, use pipelining to build a parallel acceleration scheme for depthwise separable convolution, and make full use of the DSP resources of the FPGA. The design achieves a good balance of hardware resources, processing speed, and power consumption. Experiments show that the accelerator reaches an inference latency of 32.56 ms at a power consumption of 20 W, a 4x speedup over a CPU and 3x better energy efficiency than a GPU.
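The complexity reduction the abstract attributes to MobileNet can be illustrated with the standard multiply-accumulate (MAC) cost model for convolution. The sketch below is not from the paper; the function names and the example layer shape are illustrative assumptions chosen only to show why depthwise separable convolution is cheaper.

```python
# MAC counts for one convolution layer, following the usual cost
# model for depthwise separable convolution. All names (kernel
# size k, input channels m, output channels n, square feature-map
# side f) are illustrative, not taken from the paper.

def standard_conv_macs(k, m, n, f):
    """MACs for a standard k x k convolution over an f x f map."""
    return k * k * m * n * f * f

def depthwise_separable_macs(k, m, n, f):
    """MACs for a depthwise k x k conv plus a pointwise 1 x 1 conv."""
    depthwise = k * k * m * f * f   # one k x k filter per input channel
    pointwise = m * n * f * f       # 1 x 1 conv mixes channels
    return depthwise + pointwise

# Hypothetical layer: 3x3 kernels, 64 -> 128 channels, 56x56 map.
std = standard_conv_macs(3, 64, 128, 56)
sep = depthwise_separable_macs(3, 64, 128, 56)
print(std, sep, round(std / sep, 2))  # reduction factor of roughly 8x
```

The reduction factor is 1 / (1/n + 1/k²), so larger output-channel counts and kernel sizes make the separable form proportionally cheaper, which is what lets the accelerator keep its DSP pipelines busy within a small resource budget.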