Deep Learning Framework for Single and Dyadic Human Activity Recognition
2019
Human activity recognition in videos has recently attracted much attention in the computer vision community because of its broad real-life applications. In this context, we introduce a robust, low-complexity two-stream deep learning model that utilizes only raw RGB color sequences and their dynamic motion images (DMIs) to recognize complex human activities. The RGB stream is trained end-to-end with a pre-trained Inception-v3 network followed by a CNN-LSTM, while for the dynamic-image stream we fine-tune the last few layers of a pre-trained model. The features extracted from the two streams are max-fused to increase classification accuracy. The proposed approach has been evaluated on the single-person MIVIA Action dataset as well as the dyadic SBU Interaction dataset, where our model achieves significant performance improvements over existing similar approaches.
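The max-fusion step described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the feature dimensions, the number of classes, and the random placeholder classifier weights are assumptions, standing in for features produced by the Inception-v3/CNN-LSTM RGB stream and the fine-tuned DMI stream.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-video feature vectors from the two streams
# (dimensions are illustrative; in the paper these would come from the
# Inception-v3/CNN-LSTM RGB stream and the fine-tuned DMI network).
rgb_features = rng.standard_normal((4, 128))   # batch of 4 videos
dmi_features = rng.standard_normal((4, 128))

# Max fusion: element-wise maximum across the two feature streams.
fused = np.maximum(rgb_features, dmi_features)

# A linear softmax classifier on the fused features (random weights here
# are placeholders standing in for a trained classification layer).
W = rng.standard_normal((128, 8))              # 8 hypothetical classes
logits = fused @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pred = probs.argmax(axis=1)
print(fused.shape, pred.shape)
```

Element-wise max fusion keeps, for each feature dimension, the stronger response of the two streams, which is one common alternative to concatenation or averaging when combining complementary appearance and motion cues.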