Face Attribute Manipulation Based on Self-Perception GAN

2020 
Manipulating human facial images between two domains is an important and interesting problem in computer vision. Most existing methods address this issue by applying two generators, or one generator with extra conditional inputs, to generate face images with manipulated attributes. In this paper, we propose a novel self-perception method based on Generative Adversarial Networks (GANs) for automatic face attribute reversal: given a face image with an arbitrary facial attribute, the model generates a new face image with the reversed attribute. The proposed method takes face images as inputs and employs only a single generator without conditioning on other inputs. Benefiting from the multi-loss strategy and a modified U-Net structure, our model is stable in training and capable of preserving finer details of the original face images. Extensive experimental results demonstrate the effectiveness of our method in generating high-quality, realistic attribute-reversed face images.
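The abstract does not give architectural details, so the following is only a minimal sketch of what a single U-Net-style generator with skip connections and a combined adversarial plus reconstruction loss might look like in PyTorch. The layer counts, channel widths, loss terms, and the `lambda_rec` weight are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Encoder-decoder with skip connections (U-Net style).
    Depth and channel widths are illustrative guesses, not the paper's."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.InstanceNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1),
                                  nn.InstanceNorm2d(base * 4), nn.LeakyReLU(0.2))
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1),
                                  nn.InstanceNorm2d(base * 2), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 4, base, 4, 2, 1),
                                  nn.InstanceNorm2d(base), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, in_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        # Skip connections carry fine detail from encoder to decoder.
        d2 = self.dec2(torch.cat([d3, e2], dim=1))
        return self.dec1(torch.cat([d2, e1], dim=1))

# Hypothetical multi-loss for the generator: an adversarial term plus an
# L1 reconstruction term; the actual loss terms in the paper may differ.
def generator_loss(d_fake_logits, fake, target, lambda_rec=10.0):
    adv = nn.functional.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    rec = nn.functional.l1_loss(fake, target)
    return adv + lambda_rec * rec

if __name__ == "__main__":
    g = UNetGenerator()
    x = torch.randn(1, 3, 128, 128)
    print(g(x).shape)  # torch.Size([1, 3, 128, 128])
```

The sanity check at the end only verifies that the generator maps an input image to an output of the same resolution, which is the behavior a single attribute-reversal generator would need.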