Power Efficient Object Detector with an Event-Driven Camera on an FPGA

2018 
We propose an object detection system that combines a sliding window method with an event-driven camera, which outputs a subtracted frame (typically with binary values) only when changes are detected in the captured images. Because it skips unchanged portions, our system operates faster and with lower power consumption than one using a straightforward sliding window approach. Since the event-driven camera outputs binary-precision frames, an all-binarized convolutional neural network (ABCNN) can be applied. Although full binarization decreases classification accuracy, it allows all convolutional layers to share the same binarized convolution circuit, thereby reducing the area requirement. We implemented the proposed method on a ZCU102 FPGA evaluation board and evaluated it using the PETS 2009 dataset. The results show that, although the proposed method reduced recognition accuracy by 6 points, the computation time for an entire frame was 157 times faster than that of a BCNN without an event-driven camera. Compared with an object detector on a mobile GPU (NVIDIA Jetson TX2), the FPGA system achieved 4.3 times higher frames per second (FPS) and approximately 54.2 times higher power efficiency.
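To make the two key ideas of the abstract concrete, the following is a minimal Python sketch (not the paper's FPGA implementation) of (a) the event-driven skip, where sliding windows with too few change events are never classified, and (b) the XNOR-popcount arithmetic that an all-binarized convolution reduces to, which is why a single binarized convolution circuit can be shared across layers. The window size, stride, and activity threshold below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical parameters (not taken from the paper).
WIN = 32                 # sliding window size
STRIDE = 16              # sliding window stride
ACTIVITY_THRESHOLD = 8   # minimum number of event pixels to run the classifier


def binarize(x):
    """Map values to {+1, -1}; binary event pixels map 1 -> +1, 0 -> -1."""
    return np.where(x > 0, 1, -1).astype(np.int8)


def xnor_popcount_dot(window, weights):
    """Binarized dot product via XNOR + popcount.

    For inputs in {+1, -1}, the elementwise product equals XNOR of the sign
    bits, so the sum is 2 * popcount(matches) - N. This is the only arithmetic
    a fully binarized convolution layer needs, which is what lets all layers
    share one binarized convolution circuit.
    """
    matches = np.count_nonzero(window == weights)  # popcount of XNOR result
    return 2 * matches - window.size


def detect(event_frame, weights):
    """Slide a window over a binary event frame, skipping inactive windows."""
    h, w = event_frame.shape
    detections = []
    for y in range(0, h - WIN + 1, STRIDE):
        for x in range(0, w - WIN + 1, STRIDE):
            patch = event_frame[y:y + WIN, x:x + WIN]
            # Event-driven skip: unchanged regions produce (almost) no events,
            # so the classifier is never invoked there.
            if np.count_nonzero(patch) < ACTIVITY_THRESHOLD:
                continue
            score = xnor_popcount_dot(binarize(patch), weights)
            if score > 0:
                detections.append((x, y, score))
    return detections


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = np.zeros((240, 320), dtype=np.uint8)             # mostly static scene
    frame[100:140, 200:240] = rng.integers(0, 2, (40, 40))   # one changed region
    toy_weights = binarize(rng.integers(0, 2, (WIN, WIN)))   # toy binarized weights
    print(detect(frame, toy_weights))
```

In this toy run, only the windows overlapping the changed 40x40 region are ever scored; the rest of the frame is skipped outright, which is the source of the speed and power savings claimed over a straightforward sliding window.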