Comprehensive feature fusion mechanism for video-based person re-identification via significance-aware attention

2020 
Abstract Video-based person re-identification (Re-ID) is an important capability for artificial intelligence and human–computer interaction. Spatial and temporal features play indispensable roles in comprehensively representing person sequences. In this paper, we propose a comprehensive feature fusion mechanism (CFFM) for video-based Re-ID. We use multiple significance-aware attention modules to learn an attention-based spatial–temporal feature fusion that better represents person sequences. Specifically, CFFM consists of spatial attention, periodic attention, significance attention, and residual learning. The spatial attention and periodic attention make the system focus on the more useful spatial features extracted by the CNN and temporal features extracted by the recurrent network, respectively. The significance attention measures how much each of the two features contributes to the sequence representation. Residual learning then operates between the spatial and temporal features, weighted by the significance scores, to produce the final significance-aware feature fusion. We apply our approach to several representative state-of-the-art networks, proposing improved networks for the video-based Re-ID task. We conduct extensive experiments on the widely used video-based Re-ID datasets PRID-2011, i-LIDS-VID, and MARS. Results show that the improved networks perform favorably against existing approaches, demonstrating the effectiveness of the proposed CFFM for comprehensive feature fusion. Furthermore, we compare the performance of the different modules in CFFM, investigating the varied significance of the different networks, features, and sequential feature aggregation modes.
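The significance-weighted fusion described above can be sketched as follows. This is a minimal, hypothetical illustration only: the abstract does not give the exact formulation, so the learned projections (`w_s`, `w_t`), the softmax-normalized significance scores, and the residual combination are all assumptions about how such a mechanism is commonly realized, not the authors' actual equations.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def significance_fusion(f_spatial, f_temporal, w_s, w_t):
    """Hypothetical significance-aware fusion sketch.

    f_spatial  : spatial feature from the CNN branch (1-D array)
    f_temporal : temporal feature from the recurrent branch (1-D array)
    w_s, w_t   : assumed learned scoring projections for each branch
    """
    # Score each branch, then normalize so the two scores sum to 1.
    scores = softmax(np.array([f_spatial @ w_s, f_temporal @ w_t]))
    # Weighted combination of the two features by their significance.
    fused = scores[0] * f_spatial + scores[1] * f_temporal
    # Assumed residual connection from the higher-scoring branch.
    residual = f_spatial if scores[0] >= scores[1] else f_temporal
    return fused + residual
```

In this sketch the significance scores act as a soft gate between the spatial and temporal streams, and the residual path preserves the raw feature of whichever stream the gate favors.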