Global-Local Graph Convolutional Network for cross-modality person re-identification

2021 
Abstract Visible-thermal person re-identification (VT-ReID) is an important task for retrieving pedestrians across the visible and thermal modalities. It compensates for the limitations of single-modality person re-identification in night-time surveillance applications. Most existing methods extract the features of different images/parts independently, ignoring the potential relationships between them. In this paper, we propose a novel Global-Local Graph Convolutional Network (GLGCN) that learns discriminative feature representations by modeling these relations through a graph convolutional network. The local graph module builds the potential relations among different body parts within each modality to extract discriminative part-level features. The global graph module constructs the contextual relations of the same identity across the two modalities to reduce the modality discrepancy. By training the two modules jointly, the robustness of the model is further improved. Experimental results on the SYSU-MM01 and RegDB datasets demonstrate that our model outperforms state-of-the-art methods.
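Both graph modules described above rest on the same core operation: a graph convolution that propagates features between related nodes (body parts in the local module, same-identity features across modalities in the global module). The abstract does not specify the exact layer design or how the adjacency is constructed, so the following is only a minimal NumPy sketch of one standard symmetrically normalized graph convolution layer, with the chain adjacency over body parts being an illustrative assumption:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One standard graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    H: (N, F) node features, e.g. N part-level features of dimension F.
    A: (N, N) adjacency matrix encoding part/identity relations.
    W: (F, F_out) learnable projection.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)          # ReLU activation

# Illustrative example (not from the paper): 6 body-part nodes with
# 4-dim features, adjacent parts connected in a chain.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
A = np.eye(6, k=1) + np.eye(6, k=-1)        # neighboring parts linked
W = rng.standard_normal((4, 4))
out = gcn_layer(H, A, W)
print(out.shape)  # (6, 4)
```

In a trainable model the projection `W` would be a learned parameter and the layer would typically be implemented in a framework such as PyTorch; the sketch only shows the forward aggregation that lets each part feature absorb context from its related parts.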