Image Inpainting: A Contextual Consistent and Deep Generative Adversarial Training Approach

2017 
Context encoders with a loss function based on generative adversarial networks (GANs) have been shown to be superior in image inpainting. However, when the adversarial loss is used alone, the texture of the recovered regions is occasionally inconsistent with that of the original image. To solve this problem, this paper introduces a new constraint called the contextual consistent loss and proposes a novel algorithm that combines contextual information with adversarial nets to generate texture-seamless inpainting. In the proposed algorithm, contextual consistency is enhanced by enforcing the texture of a recovered part to be similar to that of some part of the existing image when generating the missing regions. Experimental results on the Paris Street View dataset show that the combination of a context encoder and contextual information recovers more texture-consistent and higher-quality regions, demonstrating the advantage of the proposed algorithm.
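The abstract does not give the exact formulation of the contextual consistent loss, but the idea of penalizing recovered patches that have no close match in the known context can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the nearest-neighbor patch matching, and the weighting term `lam` are assumptions.

```python
import numpy as np

def extract_patches(img, size=3):
    """Collect all size x size patches from a 2D grayscale image."""
    h, w = img.shape
    patches = []
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            patches.append(img[i:i + size, j:j + size].ravel())
    return np.stack(patches)

def contextual_consistency_loss(recovered, context, size=3):
    """For each patch of the recovered region, penalize the squared
    distance to its nearest-neighbor patch in the known context."""
    rp = extract_patches(recovered, size)
    cp = extract_patches(context, size)
    # Pairwise squared distances, shape (num_recovered, num_context).
    d = ((rp[:, None, :] - cp[None, :, :]) ** 2).sum(axis=-1)
    # Each recovered patch only needs ONE good match in the context.
    return d.min(axis=1).mean()

def total_loss(recovered, context, adv_loss, lam=0.1):
    """Hypothetical combined objective: adversarial loss plus a
    weighted contextual consistency term."""
    return adv_loss + lam * contextual_consistency_loss(recovered, context)
```

As a sanity check, a recovered region copied verbatim from the context incurs zero contextual penalty, since every one of its patches appears in the context; textures absent from the context are penalized.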