Fine-grained-based multi-feature fusion for occluded person re-identification

2022 
Many previous occluded person re-identification (re-ID) methods try to use additional clues (pose estimation or semantic parsing models) to focus on non-occluded regions. However, these methods rely heavily on the performance of the additional clues and often capture pedestrian features by designing complex modules. In this work, we propose a simple Fine-Grained Multi-Feature Fusion Network (FGMFN) to extract discriminative features, a dual-branch structure consisting of a global feature branch and a partial feature branch. Firstly, we utilize a chunking strategy to extract multi-granularity features so that the pedestrian information they contain is more comprehensive. Secondly, a spatial transformer network is introduced to localize the pedestrian's upper body, and a relation-aware attention module is then introduced to explore the fine-grained information. Finally, we fuse the features obtained from the two branches to obtain a more robust pedestrian representation. Extensive experiments verify the effectiveness of our method under the occlusion scenario.
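The dual-branch design described above can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the paper's implementation: the backbone, layer sizes, and fusion by concatenation are all placeholders, and the spatial transformer is initialized to the identity transform as is standard practice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchSketch(nn.Module):
    """Hypothetical sketch of the dual-branch idea: a global branch and a
    partial branch (localized via a spatial transformer) fused by
    concatenation. All module choices here are illustrative assumptions."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Stand-in backbone; the paper would use a deep CNN such as ResNet-50.
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        # Global branch: pool over the whole feature map.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        # Partial branch: a localization net predicting a 2x3 affine transform,
        # standing in for the spatial transformer that crops the upper body.
        self.loc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_dim, 6)
        )
        # Initialize the transform to the identity, as is standard for STNs.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float)
        )

    def forward(self, x):
        f = self.backbone(x)                      # (B, C, H, W) feature map
        g = self.global_pool(f).flatten(1)        # global feature (B, C)
        theta = self.loc(f).view(-1, 2, 3)        # affine parameters
        grid = F.affine_grid(theta, f.size(), align_corners=False)
        region = F.grid_sample(f, grid, align_corners=False)
        p = region.mean(dim=(2, 3))               # partial feature (B, C)
        return torch.cat([g, p], dim=1)           # fused representation (B, 2C)
```

For example, a batch of two 32x16 RGB crops would yield a fused 128-dimensional descriptor per image with `feat_dim=64`.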