A Spatio-temporal Hybrid Network for Action Recognition

2019 
Convolutional Neural Networks (CNNs) are powerful at learning spatial information from static images, but they struggle with action recognition in videos because they neglect long-term motion information. Traditional 3D convolution has high computational complexity, and the Global Average Pooling (GAP) used at the end of the network can also cause unwanted content loss or distortion. To address these problems, we propose a novel action recognition algorithm that effectively fuses 2D and Pseudo-3D CNNs to learn spatio-temporal features of video. First, we use a Pseudo-3D CNN with the proposed multi-level pooling module to learn spatio-temporal features. Second, the features output by the multi-level pooling module are passed through our proposed processing module to make full use of the rich features. Third, a 2D CNN fed with motion vectors is designed to extract motion patterns, which can be regarded as a supplement to the Pseudo-3D CNN, making up for the information lost by RGB images. Fourth, a dependency-based fusion method is proposed to fuse the multi-stream features. Finally, the effectiveness of our proposed action recognition algorithm is demonstrated on the public UCF101 and HMDB51 datasets.
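The abstract leaves the architecture details to the paper itself. As an illustrative sketch only (not the authors' implementation), the code below shows two of the ideas mentioned: the Pseudo-3D factorization of a k×k×k 3D convolution into a 1×k×k spatial convolution followed by a k×1×1 temporal one, which reduces cost, and a multi-level pooling that keeps coarse spatial layout a single GAP would discard. All function names and grid sizes here are assumptions chosen for illustration.

```python
import numpy as np

# Pseudo-3D factorization (illustrative): compare parameter counts of a
# full k*k*k 3D convolution against a 1*k*k spatial convolution followed
# by a k*1*1 temporal convolution (biases ignored).
def conv3d_params(c_in, c_out, k):
    return c_in * c_out * k ** 3

def p3d_params(c_in, c_out, k):
    spatial = c_in * c_out * k * k   # 1 x k x k kernel
    temporal = c_out * c_out * k     # k x 1 x 1 kernel
    return spatial + temporal

print(conv3d_params(64, 64, 3))  # 110592
print(p3d_params(64, 64, 3))     # 49152

# Multi-level pooling (illustrative): instead of one global average over
# a C x H x W feature map, average-pool over several grid sizes and
# concatenate, so coarse spatial structure survives.
def multilevel_pool(fmap, levels=(1, 2, 4)):
    c, h, w = fmap.shape
    parts = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                cell = fmap[:, i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                parts.append(cell.mean(axis=(1, 2)))
    return np.concatenate(parts)  # C * (1 + 4 + 16) values

feat = np.random.rand(64, 14, 14)
print(multilevel_pool(feat).shape)  # (1344,)
```

With `levels=(1, 2, 4)` the level-1 cell reproduces GAP exactly, so this pooling is a strict superset of the global average it replaces.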