Towards Scalable Video Analytics at the Edge

2019 
Breakthroughs in deep learning, GPUs, and edge computing have paved the way for always-on, live video analytics. However, to achieve real-time performance, a GPU must be dedicated to only a few video feeds. GPUs are expensive resources, and a large-scale deployment must support hundreds of video cameras; the exorbitant cost prohibits widespread adoption. To ease this burden, we propose Tetris, a system comprising several optimization techniques from the computer vision and deep learning literature, blended in a synergistic manner. Tetris is designed to maximize the parallel processing of video feeds on a single GPU with only a marginal drop in inference accuracy. Tetris performs CPU-based tiling of active regions to combine activities across video feeds, resulting in a condensed input volume. It then runs the deep learning model on this condensed volume instead of the individual feeds, which significantly improves GPU utilization. Our evaluation on the Duke MTMC dataset reveals that Tetris can process 4x as many video feeds in parallel as any of the existing methods used in isolation.
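The core idea described above — cropping the active region from each feed on the CPU and packing the crops into a single condensed frame for one GPU inference — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the simple background-difference activity detector, and the fixed tile grid are assumptions for the example.

```python
import numpy as np

def extract_active_region(frame, background, thresh=30):
    """Crop the bounding box of pixels that differ from the background.

    A stand-in for the activity detection Tetris would run on the CPU;
    real systems would use motion detection or background subtraction.
    """
    diff = np.abs(frame.astype(int) - background.astype(int)).max(axis=-1)
    ys, xs = np.where(diff > thresh)
    if len(ys) == 0:
        return None  # no activity in this feed; skip it entirely
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def tile_regions(regions, tile=64, grid=2):
    """Pack per-feed active regions into one condensed frame.

    Each region is placed into a fixed-size cell of a grid x grid canvas,
    so a single model invocation covers activity from several feeds.
    """
    canvas = np.zeros((grid * tile, grid * tile, 3), dtype=np.uint8)
    for i, region in enumerate(regions[: grid * grid]):
        h = min(region.shape[0], tile)
        w = min(region.shape[1], tile)
        row, col = divmod(i, grid)
        canvas[row * tile: row * tile + h,
               col * tile: col * tile + w] = region[:h, :w]
    return canvas
```

The condensed canvas would then be fed to the detector once per batch of feeds, instead of running inference on each full-resolution feed separately.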