An Encoder-Decoder Based Approach to Generating Faces from Facial Attributes Using CNNs

2021 
This paper addresses the challenge of generating faces from facial attributes. Although existing research addresses the problem of generating faces, it does so by taking a facial image as a base and modifying the required attributes. To solve this problem, we train CNN classifiers, one per attribute, each predicting the label of its attribute, where a label is the enumerated value the attribute can take. These models are then combined into a single model that generates a dataset mapping six facial attributes to each image. This prepared dataset is used to train a final CNN model that learns to generate a 200 × 200 × 3 matrix from a 6 × 1 input matrix. The output matrix represents the image resolution with three channels, namely Red, Green, and Blue; plotting this 3D array yields the desired image. The 6 × 1 matrix holds the six labels. To improve the output, the final CNN model is replaced by an autoencoder, and the 6 × 1 input array is expanded to a 55 × 1 array. The autoencoder is first trained to reconstruct images from input images. The decoder from this trained model is then reused for transfer learning: it is retrained to learn the features specified by the 55 × 1 input matrix. Finally, this decoder generates the desired images of size 150 × 150 × 3 from the 55 × 1 input matrix.
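The attribute-to-image decoder described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the layer sizes, weight shapes, and function names are assumptions chosen only to show how a 55 × 1 attribute vector can be projected to a small feature map and upsampled in stages to a 150 × 150 × 3 image.

```python
import numpy as np

rng = np.random.default_rng(0)

ATTR_DIM = 55  # length of the attribute input vector (from the paper)
# Hypothetical intermediate sizes: 25x25 feature map with 8 channels,
# upsampled x3 then x2 to reach the paper's 150x150 output resolution.
FMAP_H, FMAP_W, FMAP_C = 25, 25, 8

def upsample(x, factor):
    """Nearest-neighbour upsampling along height and width."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def decode(attrs, w_proj, w_rgb):
    """Map a (55,) attribute vector to a (150, 150, 3) image in [0, 1]."""
    # Dense projection from attributes to a flattened feature map.
    h = np.tanh(attrs @ w_proj)
    fmap = h.reshape(FMAP_H, FMAP_W, FMAP_C)
    fmap = upsample(fmap, 3)              # 25x25 -> 75x75
    fmap = upsample(fmap, 2)              # 75x75 -> 150x150
    # 1x1 projection of the 8 channels down to RGB, squashed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(fmap @ w_rgb)))

# Illustrative random weights; a real model would learn these, here via
# autoencoder pretraining followed by transfer learning as in the paper.
w_proj = rng.normal(0.0, 0.1, (ATTR_DIM, FMAP_H * FMAP_W * FMAP_C))
w_rgb = rng.normal(0.0, 0.1, (FMAP_C, 3))

attrs = rng.integers(0, 2, ATTR_DIM).astype(float)  # one enumerated label per attribute
img = decode(attrs, w_proj, w_rgb)
print(img.shape)  # (150, 150, 3)
```

In a trained model the upsampling stages would be learned transposed convolutions rather than nearest-neighbour repeats, but the shape flow, attribute vector to small feature map to full-resolution RGB image, is the same.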