Deep feature learning with attributes for cross-modality person re-identification

2020 
Cross-modality person re-identification (Re-ID) between the RGB and infrared domains is an active and challenging problem that aims to retrieve pedestrian images across modalities and camera views. Because there is a large gap between the two modalities, the core difficulty is how to bridge this cross-modality gap between images. However, most approaches address the issue mainly by increasing the interclass discrepancy between features, and few studies focus on decreasing the intraclass cross-modality discrepancy, which is crucial for cross-modality Re-ID. Moreover, we find that, despite this large gap, the attribute representations of a pedestrian generally remain unchanged across modalities. We therefore take a different view of the cross-modality person Re-ID problem and use additional attribute labels as auxiliary information to increase intraclass cross-modality similarity. First, we manually annotate attribute labels for a large-scale cross-modality Re-ID dataset. Second, we propose an end-to-end network that learns modality-invariant and identity-specific local features under the joint supervision of an attribute classification loss and an identity classification loss. Experimental results on a large-scale cross-modality Re-ID benchmark show that our model achieves competitive performance compared with state-of-the-art methods. To demonstrate the versatility of the model, we also report its results on the Market-1501 dataset.
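The abstract describes the joint supervision only at a high level. The sketch below is a minimal, assumption-laden PyTorch illustration of how an identity classification loss and an attribute classification loss might be combined over a shared feature embedding: the class names, feature dimension, identity and attribute counts, the binary-cross-entropy form of the attribute term, and the weighting factor `lam` are all hypothetical choices, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class JointAttributeIdentityHead(nn.Module):
    """Two classification heads over a shared pedestrian feature.

    feat_dim, num_ids, and num_attrs are illustrative values; the
    paper does not specify exact dimensions or attribute counts.
    """
    def __init__(self, feat_dim=2048, num_ids=395, num_attrs=12):
        super().__init__()
        self.id_classifier = nn.Linear(feat_dim, num_ids)
        # One binary logit per annotated attribute (e.g. gender, bag).
        self.attr_classifier = nn.Linear(feat_dim, num_attrs)

    def forward(self, feat):
        return self.id_classifier(feat), self.attr_classifier(feat)

def joint_loss(id_logits, attr_logits, id_labels, attr_labels, lam=1.0):
    """Identity cross-entropy plus attribute binary cross-entropy.

    Because attributes are shared by RGB and infrared images of the
    same person, the attribute term encourages cross-modality features
    of one identity to agree. lam is an assumed balancing weight.
    """
    id_loss = nn.functional.cross_entropy(id_logits, id_labels)
    attr_loss = nn.functional.binary_cross_entropy_with_logits(
        attr_logits, attr_labels)
    return id_loss + lam * attr_loss

# Toy usage with random features standing in for backbone outputs.
head = JointAttributeIdentityHead()
feats = torch.randn(8, 2048)                        # batch of embeddings
id_labels = torch.randint(0, 395, (8,))             # person identities
attr_labels = torch.randint(0, 2, (8, 12)).float()  # binary attributes
id_logits, attr_logits = head(feats)
loss = joint_loss(id_logits, attr_logits, id_labels, attr_labels)
loss.backward()  # gradients flow through both heads jointly
```

Under this reading, the identity term drives interclass separation while the attribute term, being modality-invariant by construction, supplies the intraclass cross-modality pull that the abstract identifies as the missing ingredient in prior work.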