Viewpoint-robust Person Re-identification via Deep Residual Equivariant Mapping and Fine-grained Features

2019 
Existing person re-identification methods usually compute the similarity between person images directly, regardless of their viewpoints. However, matching persons across different viewpoints is difficult, since it is intrinsically hard to learn a representation that is geometrically invariant to large viewpoint variations. In this paper, we explicitly take viewpoint information into account and propose a novel Deep Residual Equivariant Mapping and Fine-grained Features (DREMFF) approach for viewpoint-robust person re-identification. Specifically, DREMFF hypothesizes that an inherent mapping exists between different viewpoints of a person; consequently, the discrepancy between the global representations of a person under different viewpoints can be bridged through an equivariant mapping that adaptively adds residuals to the original representation according to the corresponding angle deviation. Moreover, using an attention mechanism, DREMFF extracts fine-grained features for each image from multiple salient regions and at multiple scales. These captured cues support decision-making at lower granularities. The mapped global features and the learned fine-grained features work collaboratively to enable viewpoint-robust person re-identification. Experiments on three challenging benchmarks consistently demonstrate the effectiveness of the proposed approach.
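To make the residual equivariant mapping idea concrete, below is a minimal, illustrative PyTorch sketch of a module that adds a viewpoint-conditioned residual to a global feature. The module name, feature dimensions, and the small MLP used to predict the residual are assumptions for illustration only; the paper's actual architecture and training details may differ.

```python
import torch
import torch.nn as nn


class ResidualEquivariantMapping(nn.Module):
    """Sketch: add a viewpoint-conditioned residual to a global feature.

    The residual is predicted from the feature together with the angle
    deviation between viewpoints, so features of the same person seen
    from different viewpoints are mapped toward a common representation.
    (Hypothetical module; not the authors' released implementation.)
    """

    def __init__(self, feat_dim=2048, hidden_dim=512):
        super().__init__()
        # Small MLP predicting the residual from [feature, angle deviation].
        self.residual_net = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, global_feat, angle_dev):
        # global_feat: (B, feat_dim) backbone embedding of a person image
        # angle_dev:   (B, 1) normalized viewpoint angle deviation
        residual = self.residual_net(torch.cat([global_feat, angle_dev], dim=1))
        # Residual connection: original feature plus viewpoint-aware correction.
        return global_feat + residual


if __name__ == "__main__":
    rem = ResidualEquivariantMapping()
    feats = torch.randn(4, 2048)   # e.g. global features from a CNN backbone
    dev = torch.rand(4, 1)         # angle deviation, normalized to [0, 1]
    mapped = rem(feats, dev)
    print(mapped.shape)            # torch.Size([4, 2048])
```

In this sketch, the mapped global features would then be combined with fine-grained, attention-derived region features for the final matching score, mirroring the collaborative design described in the abstract.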