UD-GAN: a dense connected generative adversarial network for pixel-level modal translation of multimodal images

2021 
Modal translation between multimodal images is an effective complementary scheme when images of a particular modality are difficult to obtain. Because pixel-level modal translation methods produce images of the highest quality, they have become a research hotspot in recent years. The generative adversarial network (GAN) is a standard framework for image generation, but its complex structure and the difficulty of the image generation task make GAN training unstable. In this paper, building on the U-Net architecture, dense blocks are used to enrich feature information during the downsampling (encoding) and upsampling (decoding) operations, reducing information loss and yielding higher-quality images. In addition, dense long skip connections are introduced to link the encoding and decoding operations at the same stage, so that the network can effectively combine low-level and high-level features and improve overall performance. Experimental results show that the proposed method is effective for modal translation of multimodal images and achieves image quality better than several state-of-the-art methods.
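
The abstract does not give implementation details, so the following PyTorch sketch is only one plausible reading of the described design: convolutional dense blocks whose layers concatenate all preceding feature maps, used inside a U-Net-style generator whose encoder features at each stage are concatenated into the decoder at the same stage (the "dense long connection"). All layer names, channel counts, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a DenseNet-style block inside a 2-stage U-Net generator.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Each conv layer receives the concatenation of all previous feature maps."""

    def __init__(self, in_ch, growth=32, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class UDGANGenerator(nn.Module):
    """U-Net-style generator: dense blocks at every stage, plus long skip
    connections that concatenate encoder features into the same-stage decoder."""

    def __init__(self, in_ch=1, out_ch=1, base=32):
        super().__init__()
        self.enc1 = DenseBlock(in_ch, growth=base)
        self.down1 = nn.Conv2d(self.enc1.out_channels, base, 4, stride=2, padding=1)
        self.enc2 = DenseBlock(base, growth=base)
        self.down2 = nn.Conv2d(self.enc2.out_channels, base * 2, 4, stride=2, padding=1)

        self.bottleneck = DenseBlock(base * 2, growth=base)

        self.up2 = nn.ConvTranspose2d(self.bottleneck.out_channels, base * 2, 4, stride=2, padding=1)
        self.dec2 = DenseBlock(base * 2 + self.enc2.out_channels, growth=base)
        self.up1 = nn.ConvTranspose2d(self.dec2.out_channels, base, 4, stride=2, padding=1)
        self.dec1 = DenseBlock(base + self.enc1.out_channels, growth=base)

        self.head = nn.Sequential(
            nn.Conv2d(self.dec1.out_channels, out_ch, 3, padding=1),
            nn.Tanh(),  # translated image scaled to [-1, 1]
        )

    def forward(self, x):
        e1 = self.enc1(x)                # stage-1 encoder features
        e2 = self.enc2(self.down1(e1))   # stage-2 encoder features
        b = self.bottleneck(self.down2(e2))

        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # long connection, stage 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # long connection, stage 1
        return self.head(d1)


if __name__ == "__main__":
    # Translate a 1-channel source-modality image into a 1-channel target modality.
    g = UDGANGenerator(in_ch=1, out_ch=1)
    fake = g(torch.randn(2, 1, 64, 64))
    print(fake.shape)  # torch.Size([2, 1, 64, 64])
```

In an adversarial setup, this generator would be trained against a discriminator that distinguishes real target-modality images from translated ones; the abstract does not specify the discriminator or loss terms, so they are omitted here.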