Redundancy Removing Aggregation Network with Distance Calibration for Video Face Recognition

2020 
Attention-based techniques have been successfully used for rating image quality and have been widely employed for set-based face recognition. Nevertheless, for video face recognition, where the base Convolutional Neural Network (CNN) trained on large-scale data already provides discriminative features, fusing features with only predicted quality scores to generate a representation is likely to cause a duplicate-sample dominance problem and correspondingly degrade performance. To resolve this problem, we propose a Redundancy Removing Aggregation Network (RRAN) for video face recognition. Compared with other quality-aware aggregation schemes, RRAN can take advantage of similarity information to tackle the noise introduced by redundant video frames. By leveraging metric learning, RRAN introduces a distance calibration scheme to align the distance distributions of negative pairs across different video representations, which improves accuracy under a uniform threshold. A series of experiments are conducted on multiple realistic datasets to evaluate the performance of RRAN, including YouTube Faces, IJB-A, and IJB-C. Comprehensive experiments demonstrate that our method diminishes the influence of poor-quality frames even when they constitute a large proportion of the video, and further improves overall recognition performance by accounting for individual differences among frames. Specifically, RRAN achieves 96.84% accuracy on YouTube Faces, outperforming all existing aggregation schemes.
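To make the duplicate-sample dominance problem concrete: if a video contains many near-identical frames, a purely quality-weighted average lets those duplicates collectively dominate the fused representation. The sketch below illustrates the general idea of combining quality scores with pairwise similarity to down-weight redundant frames. It is an illustrative toy (the function name, weighting rule, and example data are assumptions), not the actual RRAN architecture described in the paper.

```python
import numpy as np

def aggregate_redundancy_aware(features, quality, eps=1e-8):
    """Fuse per-frame CNN features into a single video representation.

    Plain quality-weighted pooling lets many near-duplicate frames
    dominate the result. Here each frame's quality weight is divided
    by its total cosine similarity to all frames, so a cluster of
    redundant frames shares one vote instead of casting many.
    Illustrative sketch only -- not the paper's RRAN module.
    """
    # L2-normalize features so dot products are cosine similarities.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    sim = f @ f.T                       # pairwise cosine similarity
    redundancy = sim.sum(axis=1)        # high for duplicated frames
    w = quality / (redundancy + eps)    # down-weight redundant frames
    w = w / (w.sum() + eps)             # normalize weights to sum to 1
    rep = w @ features                  # weighted average representation
    return rep / (np.linalg.norm(rep) + eps)

# Toy example: three near-duplicate frames plus one distinct frame.
frames = np.array([[1.0, 0.0], [0.99, 0.01], [0.98, 0.02], [0.0, 1.0]])
quality = np.array([0.9, 0.9, 0.9, 0.8])
video_rep = aggregate_redundancy_aware(frames, quality)
```

With plain quality weighting the three duplicates would contribute roughly three quarters of the total weight; dividing by redundancy lets the single distinct (slightly lower-quality) frame retain a substantial share of the fused representation.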