Adaptive Feature Fusion via Graph Neural Network for Person Re-identification

2019 
Person re-identification (ReID) aims to identify a probe person appearing under multiple camera views. Existing methods focus on building a robust model to capture discriminative information. However, they all generate a representation by mining useful clues from a single given image, ignoring intercommunication with other images. To address this issue, we propose a novel network named the Feature-Fusing Graph Neural Network (FFGNN), which fully exploits the relationships among the nearest neighbors of a given image and allows message propagation to update node features during representation learning. Given an anchor image, the FFGNN first retrieves its top-K nearest images based on the features generated by a trained Feature-Extracting Network (FEN). We then construct a graph G over the resulting K+1 images, in which each node represents the feature of one image. The edges of G are obtained by combining the visual similarity and the Jaccard similarity between nodes. Within the constructed graph G, the FFGNN conducts message propagation and adaptive feature fusion between nodes by iteratively applying graph convolutional operations to the input features. Finally, the FFGNN outputs a robust and discriminative representation that incorporates information from similar images. Extensive experiments on three public person ReID datasets, Market-1501, DukeMTMC-ReID, and CUHK03, demonstrate that the proposed model achieves significant improvements over state-of-the-art methods.
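The pipeline described above can be sketched in NumPy. This is a minimal, untrained illustration under several assumptions not specified in the abstract: cosine similarity is used as the "visual similarity", the Jaccard similarity is computed over each node's k-nearest-neighbor sets, the two are blended with a hypothetical weight `alpha`, and the graph convolution is plain row-normalized mean aggregation rather than the paper's learned GCN layers.

```python
import numpy as np

def top_k_neighbors(anchor_idx, feats, k):
    """Indices of the anchor plus its top-K nearest images by cosine similarity."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = f @ f[anchor_idx]
    order = np.argsort(-sims)
    order = order[order != anchor_idx][:k]   # drop the anchor itself, keep K
    return np.concatenate(([anchor_idx], order))

def jaccard_sim(a, b):
    """Jaccard similarity between two neighbor-index sets."""
    return len(a & b) / len(a | b)

def build_graph(feats, k=3, alpha=0.5):
    """Adjacency blending visual (cosine) and Jaccard similarity.

    `alpha` is an assumed mixing weight; the abstract only says the two
    similarities are combined, not how.
    """
    n = feats.shape[0]
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    vis = f @ f.T                                        # visual similarity
    # k-NN set of each node (including itself) for the Jaccard term
    nbrs = [set(np.argsort(-vis[i])[:k + 1].tolist()) for i in range(n)]
    jac = np.array([[jaccard_sim(nbrs[i], nbrs[j]) for j in range(n)]
                    for i in range(n)])
    return alpha * vis + (1 - alpha) * jac

def gcn_fuse(feats, adj, n_iters=2):
    """Iterative message propagation / feature fusion over the graph.

    A row-normalized propagation with self-loops stands in for the
    paper's graph convolutional operation (no learned weights here).
    """
    A = adj + np.eye(adj.shape[0])          # add self-loops
    D_inv = 1.0 / A.sum(axis=1, keepdims=True)
    h = feats
    for _ in range(n_iters):
        h = D_inv * (A @ h)                 # each node averages its neighborhood
    return h
```

In use, one would gather the K+1 features returned by `top_k_neighbors`, call `build_graph` on them, run `gcn_fuse`, and take the fused anchor row as the final representation; in the actual FFGNN the propagation weights are learned end-to-end.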