Attentive Generative Adversarial Network To Bridge Multi-Domain Gap For Image Synthesis

2020 
Despite significant progress on text-to-image synthesis, automatically generating realistic images remains a challenging task, since the location and specific shape of objects are not given in the text descriptions. To address these problems, we propose a novel attentive generative adversarial network with contextual loss (AGAN-CL) algorithm. More specifically, the generative network consists of two sub-networks: a contextual network for generating image contours, and a cycle transformation autoencoder for converting contours to realistic images. Our core idea is the injection of image contours into the generative network; this is the most critical part of our design, since it guides the whole generative network to focus on object regions. In addition, we apply a contextual loss and a cycle-consistent loss to bridge the multi-domain gap. Comprehensive results on several challenging datasets demonstrate the advantage of the proposed method over leading approaches, regarding both visual fidelity and alignment with the input descriptions.
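The abstract does not give the loss equations, but a cycle-consistent loss of the kind it mentions typically penalizes the reconstruction error when an input is mapped to the other domain and back (here, image to contour and contour to image). Below is a minimal NumPy sketch under that assumption; the mappings `G` and `F` are hypothetical stand-ins, not the paper's actual networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss: map x through G, back through F, compare to the original.

    G and F are placeholders for the two domain mappings described in the
    abstract (e.g. contour-to-image and image-to-contour); any pair of
    callables with compatible shapes works here.
    """
    return np.mean(np.abs(F(G(x)) - x))

# Toy mappings: G scales by 2, F scales back by 0.5, so F inverts G exactly.
G = lambda x: 2.0 * x
F = lambda x: 0.5 * x

x = np.ones((4, 4))
print(cycle_consistency_loss(x, G, F))  # perfect inversion, so the loss is 0.0
```

In the full model, this term would be combined with the adversarial and contextual losses so that the contour and image domains stay mutually consistent during training.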