Video Person Re-Identification Using Attribute-Enhanced Features

2022 
In this work, we propose to boost video-based person re-identification (Re-ID) with attribute-enhanced feature representation. To this end, we not only use the ID-relevant attributes more effectively, but also, for the first time in the literature, harness the ID-irrelevant attributes to aid model training. The former mainly include gender, age, and clothing characteristics, which carry rich and supplementary information about the pedestrian; the latter include viewpoint, action, etc., which have seldom been used for identification before. In particular, we use the attributes to enhance the salient regions of the image with a novel Attribute Salient Region Enhance (ASRE) module that attends more accurately to the body of the pedestrian, so as to better separate the target from the background. Furthermore, we find that many ID-irrelevant but subject-relevant factors, such as the view angle and movement of the target pedestrian, have a great impact on a pedestrian's two-dimensional appearance. We therefore propose to exploit both the ID-relevant and the ID-irrelevant attributes through a novel triplet loss, the Viewpoint and Action-Invariant (VAI) triplet loss. Based on the above, we design an Attribute Salience Assisted Network (ASA-Net) that performs attribute recognition along with identity recognition and uses the attributes for feature enhancement and hard sample mining. Extensive experiments on the MARS and DukeMTMC-VideoReID datasets show that our method outperforms state-of-the-art approaches, and visualizations of the learned results further confirm the effectiveness of the proposed method.
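To make the attribute-assisted hard sample mining idea concrete, below is a minimal, hypothetical PyTorch sketch of a viewpoint- and action-aware triplet loss. It assumes, as the abstract suggests but does not specify, that ID-irrelevant attributes (viewpoint, action) bias the mining: positives whose viewpoint/action differ from the anchor are treated as hard positives, and negatives sharing the anchor's viewpoint/action are treated as hard negatives. The function name, mining rule, and margin value are illustrative assumptions, not the paper's exact VAI formulation.

```python
# Hypothetical sketch of a viewpoint/action-aware triplet loss; the exact VAI
# loss in the paper may differ. Assumes a batch contains multiple samples per
# identity, each annotated with discrete viewpoint and action labels.
import torch
import torch.nn.functional as F


def vai_triplet_loss(features, ids, viewpoints, actions, margin=0.3):
    """Batch-hard triplet loss biased by ID-irrelevant attributes.

    features:   (B, D) frame/track embeddings
    ids:        (B,)   person identity labels
    viewpoints: (B,)   discrete viewpoint labels (ID-irrelevant attribute)
    actions:    (B,)   discrete action labels    (ID-irrelevant attribute)
    """
    features = F.normalize(features, dim=1)
    dist = torch.cdist(features, features)          # (B, B) pairwise distances

    same_id = ids.unsqueeze(0) == ids.unsqueeze(1)
    same_view = viewpoints.unsqueeze(0) == viewpoints.unsqueeze(1)
    same_act = actions.unsqueeze(0) == actions.unsqueeze(1)
    eye = torch.eye(len(ids), dtype=torch.bool, device=ids.device)

    # Hard positives: same ID but different viewpoint/action, where the 2D
    # appearance changes most; fall back to any positive if none exist.
    pos_mask = same_id & ~eye & ~(same_view & same_act)
    pos_mask = torch.where(pos_mask.any(1, keepdim=True), pos_mask, same_id & ~eye)

    # Hard negatives: different ID but same viewpoint/action, where the 2D
    # appearance is deceptively similar; fall back to any negative.
    neg_mask = ~same_id & same_view & same_act
    neg_mask = torch.where(neg_mask.any(1, keepdim=True), neg_mask, ~same_id)

    hardest_pos = (dist * pos_mask).max(dim=1).values
    hardest_neg = dist.masked_fill(~neg_mask, float("inf")).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```

Under this reading, the ID-irrelevant attributes never enter the embedding itself; they only steer which triplets the loss emphasizes, encouraging features that stay stable across viewpoint and action changes.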