Geometry Structure Preserving based GAN for Multi-Pose Face Frontalization and Recognition

2020 
Face frontalization is the process of converting a face image under an arbitrary pose into an image with a frontal pose. Benefiting from the significant improvement of generative adversarial networks (GANs), generative models can use face frontalization to overcome the performance degradation caused by head-pose variation in face recognition. Existing GAN-based models can generate a synthesized face image with the same identity as the input, but they struggle to capture the geometry structure or facial patterns, e.g., the face contour, through pixel-wise constraints alone. In this paper, we propose a Geometry Structure Preserving based GAN, i.e., GSP-GAN, for multi-pose face frontalization and recognition. The generator of our model takes the form of a typical auto-encoder, where the encoder extracts an identity feature and the decoder synthesizes the corresponding frontal face image. In this process, a perception loss constrains the generator to synthesize a face image with the same identity as the input image. Meanwhile, we adopt real frontal face images as extra input data during training, where an L1 norm loss is used to construct a pixel-wise mapping from an arbitrary-pose image to the frontal image. More importantly, the discriminator of our model uses a self-attention block to preserve the geometry structure of a face; it consists of a series of parallel sub-discriminators that carry global and local attention information. Compared with state-of-the-art models on the Multi-PIE, LFW and CFP datasets, the proposed GSP-GAN generates high-quality frontal images under arbitrary poses and achieves satisfactory recognition performance.
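The abstract describes two key ingredients: a self-attention block inside the sub-discriminators and a generator objective that combines an adversarial term, an L1 pixel-wise term against a paired real frontal image, and a perception (identity) loss. The sketch below illustrates these ideas in PyTorch under stated assumptions; the SAGAN-style attention formulation, module names, channel reduction factor, and loss weights are illustrative choices, not the paper's exact settings.

```python
# Illustrative sketch only: a SAGAN-style self-attention block (assumed form of
# the "self-attention block" in the sub-discriminators) and the combined
# generator loss described in the abstract. Weights and architecture details
# are assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Self-attention over spatial positions, letting a discriminator relate
    distant facial regions (e.g. both sides of the face contour)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # (b, hw, c/8)
        k = self.key(x).view(b, -1, h * w)                      # (b, c/8, hw)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)           # (b, hw, hw)
        v = self.value(x).view(b, -1, h * w)                    # (b, c, hw)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x

def generator_loss(profile, frontal_gt, encoder, decoder, discriminator,
                   perception_net, w_adv=1.0, w_pix=10.0, w_per=1.0):
    """Combines the three generator terms mentioned in the abstract:
    adversarial loss, L1 pixel-wise loss against the paired real frontal
    image, and a perception (identity) loss on deep features."""
    fake_frontal = decoder(encoder(profile))          # arbitrary pose -> frontal
    logits = discriminator(fake_frontal)              # try to fool the discriminator
    l_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    l_pix = F.l1_loss(fake_frontal, frontal_gt)       # pixel-wise mapping
    l_per = F.l1_loss(perception_net(fake_frontal),   # identity preservation
                      perception_net(frontal_gt))
    return w_adv * l_adv + w_pix * l_pix + w_per * l_per, fake_frontal
```

In this reading, each parallel sub-discriminator would apply such an attention block to a global or local crop of the image, so geometry cues are enforced adversarially rather than by pixel losses alone; the exact way the sub-discriminator outputs are combined is not specified in the abstract.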