Hierarchical Integration of Rich Features for Video-based Person Re-identification

2018 
Person re-identification (ReID) aims to associate the identities of pedestrians captured by cameras with non-overlapping fields of view. Video-based ReID plays an important role in intelligent video-surveillance systems and has attracted growing attention in recent years. In this paper, we propose an end-to-end video-based ReID framework built on a convolutional neural network (CNN) for efficient spatio-temporal modeling and enhanced similarity measurement. Specifically, we build our sequence descriptor by basic mathematical operations on semantic mid-level image features, which avoids time-consuming computations and the loss of spatial correlations. We further extract image features hierarchically from multiple intermediate CNN stages to build multi-level sequence descriptors. For the descriptor at each stage, we design an effective auxiliary pairwise loss that is jointly optimized with a triplet loss. To integrate the hierarchical representations, we propose an intuitive yet effective summation-based similarity-integration scheme that matches identities more accurately. Furthermore, we extend the framework with a multi-model ensemble strategy that combines three popular CNN models to represent walking sequences more comprehensively and improve performance. Extensive experiments on three video-based ReID datasets show that the proposed framework outperforms state-of-the-art methods.
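The two core ideas above, building a sequence descriptor by simple mathematical operations on frame-level features and summing per-stage similarities to integrate the hierarchy, can be sketched as follows. This is a minimal illustration under assumptions not spelled out in the abstract: temporal mean pooling is used as the "basic mathematical calculation", cosine similarity as the per-stage matching score, and all function names are hypothetical.

```python
import numpy as np

def sequence_descriptor(frame_features):
    """Pool T frame-level feature vectors (shape (T, D)) into one
    sequence descriptor. Mean pooling is an assumed instance of the
    paper's 'basic mathematical calculations' on mid-level features."""
    return np.mean(frame_features, axis=0)

def hierarchical_similarity(seq_a_stages, seq_b_stages):
    """Summation-based similarity integration: compute a cosine
    similarity per CNN stage and sum the scores across stages."""
    total = 0.0
    for feats_a, feats_b in zip(seq_a_stages, seq_b_stages):
        da = sequence_descriptor(feats_a)
        db = sequence_descriptor(feats_b)
        cos = da @ db / (np.linalg.norm(da) * np.linalg.norm(db) + 1e-12)
        total += float(cos)
    return total
```

In this reading, ranking candidate gallery sequences by `hierarchical_similarity` against a query realizes the multi-level matching the abstract describes; the actual feature extraction and the joint triplet/pairwise training are not shown.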