Sketch2Photo: Synthesizing photo-realistic images from sketches via global contexts
2023
Sketch-to-image synthesis aims to generate realistic images that exactly match input sketches or edge maps. Most existing sketch-to-image synthesis methods use generative adversarial networks (GANs) trained on large numbers of paired sketches and real images. Because convolution is a local operation, the low-level layers of the generators in these GANs lack global perception, and the feature maps they produce easily overlook global cues. Since a global receptive field is crucial for capturing the non-local structures and features of sketches, the absence of global context degrades the quality of the generated images. Some recent models turn to self-attention to build global dependencies, but self-attention is impractical for large feature maps because its computational complexity is quadratic in the feature-map size. To address these problems, we propose Sketch2Photo, a new image synthesis approach that captures both global contexts and local features to generate photo-realistic images from weak or partial sketches or edge maps. We employ fast Fourier convolution (FFC) residual blocks to create global receptive fields in the bottom layers of the network, and we incorporate Swin Transformer block (STB) units to efficiently obtain long-range global contexts on large feature maps. We also present an improved spatial attention pooling (ISAP) module that relaxes the strict alignment requirement between incomplete sketches and generated images. Quantitative and qualitative experiments on multiple public datasets demonstrate the superiority of the proposed approach over many other sketch-to-image synthesis methods. The project code is available at .
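To make the FFC idea concrete, below is a minimal PyTorch sketch of a fast Fourier convolution residual block in the spirit of Chi et al.'s FFC. The abstract does not give the channel splits, normalization, or gating of Sketch2Photo's actual blocks, so all layer choices here are illustrative assumptions; the sketch only shows how a spectral branch gives even bottom layers an image-wide receptive field.

```python
# Minimal sketch of an FFC-style residual block (layer choices assumed).
import torch
import torch.nn as nn


class SpectralTransform(nn.Module):
    """Convolve features in the Fourier domain: every output position
    depends on every input position, i.e. a global receptive field."""

    def __init__(self, channels: int):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis,
        # so the 1x1 conv sees 2 * channels inputs and outputs.
        self.conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.BatchNorm2d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")          # (b, c, h, w//2+1), complex
        freq = torch.cat([freq.real, freq.imag], dim=1)  # (b, 2c, h, w//2+1), real
        freq = self.conv(freq)
        real, imag = freq.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


class FFCResBlock(nn.Module):
    """Residual block mixing a local 3x3 conv branch with a global
    spectral branch, so local detail and global context are fused."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.global_branch = SpectralTransform(channels)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.norm(self.local(x) + self.global_branch(x)))
        return x + out  # residual connection


feats = torch.randn(1, 64, 128, 128)
print(FFCResBlock(64)(feats).shape)  # torch.Size([1, 64, 128, 128])
```

Because the 1x1 convolution acts on the full spectrum, each output pixel aggregates information from the whole feature map, which is what the purely local 3x3 branch cannot do on its own.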
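The efficiency argument for the STB units comes from Swin-style window attention: restricting self-attention to fixed M x M windows makes the cost linear in the number of windows (and hence in the feature-map size) rather than quadratic in the total number of positions. Below is a minimal sketch of plain (non-shifted) window attention; the window size, head count, and omission of shifted windows and relative position bias are simplifying assumptions.

```python
# Minimal sketch of window-based multi-head self-attention (Swin-style).
import torch
import torch.nn as nn


class WindowAttention(nn.Module):
    def __init__(self, dim: int, window: int = 8, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        m = self.window
        # Partition the (h, w) map into non-overlapping m x m windows:
        # (b, c, h, w) -> (b * num_windows, m * m, c)
        x = x.view(b, c, h // m, m, w // m, m)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, m * m, c)
        x, _ = self.attn(x, x, x)  # attention over only m*m tokens per window
        # Reverse the partition back to (b, c, h, w).
        x = x.reshape(b, h // m, w // m, m, m, c)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)


feats = torch.randn(1, 96, 64, 64)       # 64 windows of 8x8 tokens each
print(WindowAttention(96)(feats).shape)  # torch.Size([1, 96, 64, 64])
```

Global self-attention over a 64 x 64 map would compare 4096 tokens against each other; here each window compares only 64 tokens, so doubling the feature-map area merely doubles the number of windows.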
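The abstract does not describe ISAP's internals, so the following is only a generic spatial-attention-pooling sketch, not the paper's module: a learned attention map reweights spatial positions before pooling, so comparisons made on the pooled, position-free descriptor tolerate misalignment between an incomplete sketch and the generated image. Every layer choice here is a hypothetical placeholder.

```python
# Generic spatial attention pooling sketch (NOT the paper's ISAP module).
import torch
import torch.nn as nn


class SpatialAttentionPool(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1-channel score map over spatial positions (assumed design).
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        attn = self.score(x).view(b, 1, h * w).softmax(dim=-1)  # (b, 1, hw)
        feats = x.view(b, c, h * w)                             # (b, c, hw)
        # Attention-weighted average over all positions: the pooled
        # descriptor has no spatial index, relaxing strict alignment.
        return (feats * attn).sum(dim=-1)                       # (b, c)


pooled = SpatialAttentionPool(64)(torch.randn(2, 64, 32, 32))
print(pooled.shape)  # torch.Size([2, 64])
```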