A Novel Generative Model to Synthesize Face Images for Pose-invariant Face Recognition

2020 
Face recognition is an active research area in computer vision and has been widely used in applications such as security, video surveillance, and personal identification. Although recent studies in this field have achieved great success, they usually require an enormous amount of training data and still struggle on in-the-wild datasets due to large variations in pose, illumination, and expression. Among these unconstrained conditions, pose variation is thought to be the factor that harms face recognition accuracy the most. To deal with pose variation, one can complete the partial UV map extracted from an in-the-wild face, attach the completed UV map to a fitted 3D mesh, and finally render 2D faces at arbitrary poses, which can then be used for training or testing face recognition models. In this paper, we propose a novel generative model called ResCUNet-GAN to improve UV map completion. In particular, we improve the original UV-GAN by stacking two U-Nets and enhancing them with multi-level residual connections and feature fusion. Experiments on the popular Multi-PIE dataset show that our model outperforms the original UV-GAN.
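The abstract describes the architecture only at a high level: two stacked U-Nets whose features are fused through residual connections across the networks. As an illustrative sketch of that wiring only (not the authors' implementation; the `block` function stands in for learned convolutional layers, and the fusion-by-addition scheme is an assumption), the data flow might look like:

```python
import numpy as np

def block(x):
    # Placeholder for a learned conv block; here just an
    # elementwise nonlinearity so the sketch stays runnable.
    return np.tanh(x)

def unet(x, skips_in=None):
    """One U-Net pass: returns the output and the encoder features
    so a following network can reuse them."""
    e1 = block(x)          # encoder level 1
    e2 = block(e1)         # encoder level 2
    bottleneck = block(e2)
    # Decoder with skip connections from this network's encoder,
    # fused (by addition, an assumption) with features passed in
    # from the previous U-Net when available.
    s2 = e2 if skips_in is None else e2 + skips_in[1]
    d2 = block(bottleneck + s2)
    s1 = e1 if skips_in is None else e1 + skips_in[0]
    d1 = block(d2 + s1)
    return d1, (e1, e2)

def rescunet(x):
    # First U-Net produces a coarse completion of the UV map.
    y1, feats = unet(x)
    # Second U-Net refines it; multi-level residual connections
    # feed the first network's encoder features into the second.
    y2, _ = unet(y1, skips_in=feats)
    # Outer residual connection from input to final output.
    return y2 + x

uv = np.random.rand(4, 4)  # toy "incomplete UV map"
out = rescunet(uv)
print(out.shape)  # same spatial size as the input
```

The point of the sketch is the connectivity pattern: each stage preserves spatial size, so features from the first U-Net can be added to the matching levels of the second, and the final output is a residual correction of the input.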