Deep Architectures and Ensembles for Semantic Video Classification

2019 
This paper addresses the problem of accurate semantic labeling of short videos. To this end, a variety of deep architectures was considered, including traditional recurrent neural networks (LSTM, GRU), temporally agnostic networks (FV, VLAD, BoW), and fully connected neural networks with mid-stage audio-visual (AV) fusion. Additionally, we propose a residual-architecture-based deep neural network (DNN) for video classification that achieves state-of-the-art classification performance at significantly reduced complexity. Furthermore, we propose four new approaches to diversity-driven multi-net ensembling: one based on a fast correlation measure and three incorporating a DNN-based combiner. We show that significant performance gains can be achieved by ensembling diverse nets, and we investigate the factors contributing to high diversity. Using the extensive YouTube8M dataset, we provide an in-depth evaluation and analysis of the behavior of these architectures. We show that the performance of the ensemble is state-of-the-art, achieving the highest accuracy on the YouTube8M Kaggle test data. The ensemble of classifiers was also evaluated on the HMDB51 and UCF101 datasets, and the resulting method achieves accuracy comparable to state-of-the-art methods using similar input features.
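The correlation-based ensembling idea can be illustrated with a minimal sketch: score the pairwise correlation of the member nets' predictions and combine the least-correlated (most diverse) pair by simple score averaging. This is an assumption-laden illustration of the general technique, not the paper's actual selection procedure or combiner; all function names here are hypothetical.

```python
import numpy as np

def prediction_correlation(p, q):
    """Pearson correlation between two models' flattened score vectors."""
    p, q = np.asarray(p).ravel(), np.asarray(q).ravel()
    return float(np.corrcoef(p, q)[0, 1])

def pick_diverse_pair(preds):
    """Return indices of the least-correlated (most diverse) pair of models."""
    best, best_pair = 2.0, (0, 1)  # correlation is bounded above by 1
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            c = prediction_correlation(preds[i], preds[j])
            if c < best:
                best, best_pair = c, (i, j)
    return best_pair

def ensemble(preds, idx):
    """Naive score-averaging combiner over the selected models."""
    return np.mean([preds[i] for i in idx], axis=0)
```

In this sketch diversity is measured directly on prediction scores; a DNN-based combiner, as proposed in the paper, would replace the averaging step with a learned fusion network over the member outputs.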