Graph Attention Convolutional Network with Motion Tempo Enhancement for Skeleton-Based Action Recognition

2021 
Graph convolutional networks (GCNs) are well suited to handling non-Euclidean data, and previous works using spatio-temporal graph convolution for skeleton-based action recognition achieve good performance. However, several limitations remain. First, uniform modeling of joint motion assumes that the motion tempo of every joint stays constant, ignoring the dynamic changes in each joint's position offset over the course of an action. To address this, we propose a robust action feature extractor, the graph attention convolutional network with motion tempo enhancement (MTEA-GCN), which captures the differing motion tempos of joints with two streams. Second, the dependencies among bone-connected and spatially separated joints cannot be adequately captured by a graph topology based only on physical human-body connections. We therefore propose a multi-neighborhood graph attention convolution module that models the dependencies between each joint and its different neighborhoods while focusing on discriminative joints. We evaluate our approach on two large-scale skeleton datasets, Kinetics-Skeleton and NTU RGB+D, where MTEA-GCN achieves good performance with comparable computational complexity and fewer parameters.
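The abstract does not spell out the internals of the multi-neighborhood graph attention convolution, so the following is a minimal, hypothetical PyTorch sketch of what such a module could look like: k-hop adjacency masks stand in for the "different neighborhood joints" (including spatially separated ones), and a simple joint-wise gating stands in for the attention over discriminative joints. All class names, shapes, and design choices here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-neighborhood graph attention convolution.
# Assumes skeleton features shaped (N, C, T, V): batch, channels, frames, joints.
import torch
import torch.nn as nn


def k_hop_adjacency(adj: torch.Tensor, max_hops: int) -> torch.Tensor:
    """Build binary adjacency masks for 1..max_hops hop neighborhoods.

    adj: (V, V) physical-connection adjacency of the skeleton graph.
    Returns a (max_hops, V, V) tensor; mask k keeps joint pairs whose shortest
    path length is exactly k, so spatially separated joints appear at k > 1.
    """
    V = adj.size(0)
    prev = torch.eye(V)
    power = torch.eye(V)
    hops = []
    for _ in range(max_hops):
        power = (power @ (adj + torch.eye(V))).clamp(max=1)  # reachable within k hops
        hops.append((power - prev).clamp(min=0))             # exactly k hops away
        prev = power
    return torch.stack(hops)  # (max_hops, V, V)


class MultiNeighborhoodGraphAttention(nn.Module):
    """Aggregate joint features over several hop neighborhoods, then weight
    joints with a channel-pooled gate (a stand-in for the paper's attention)."""

    def __init__(self, in_channels: int, out_channels: int, hops: torch.Tensor):
        super().__init__()
        self.register_buffer("hops", hops)                    # (K, V, V)
        K = hops.size(0)
        self.conv = nn.Conv2d(in_channels, out_channels * K, kernel_size=1)
        self.att = nn.Conv2d(out_channels, 1, kernel_size=1)  # per-joint score
        self.out_channels = out_channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V)
        N, _, T, V = x.shape
        K = self.hops.size(0)
        feat = self.conv(x).view(N, K, self.out_channels, T, V)
        # Sum contributions from each hop neighborhood.
        out = torch.einsum("nkctv,kvw->nctw", feat, self.hops)
        # Joint-wise attention emphasizes discriminative joints.
        score = torch.sigmoid(self.att(out))                  # (N, 1, T, V)
        return out * score


if __name__ == "__main__":
    # Toy 5-joint chain skeleton with 2-hop neighborhoods.
    adj = torch.zeros(5, 5)
    for i in range(4):
        adj[i, i + 1] = adj[i + 1, i] = 1
    layer = MultiNeighborhoodGraphAttention(3, 16, k_hop_adjacency(adj, 2))
    print(layer(torch.randn(2, 3, 20, 5)).shape)  # torch.Size([2, 16, 20, 5])
```

In this reading, each hop neighborhood gets its own learned transform before aggregation, which is one plausible way to treat bone-connected and spatially separated joints differently; the actual MTEA-GCN module and its two-stream motion tempo enhancement may differ.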