Multiscale Generative Adversarial Network Based on Wavelet Feature Learning for SAR-to-Optical Image Translation

2022 
The synthetic aperture radar (SAR) system is an active remote sensing system that can be carried on a variety of flight platforms and can observe the Earth under all-weather, day-and-night conditions, giving it a wide range of applications. However, the interpretation of SAR images is quite challenging, especially for nonexperts. In order to enhance the visual effect of SAR images, this article proposes a multiscale generative adversarial network based on wavelet feature learning (WFLM-GAN) to implement the translation from SAR images to optical images; the translated images not only retain the key content of the SAR images but also have the style of optical images. The main advantages of this method over previous SAR-to-optical image translation (S2OIT) methods are as follows. First, the generator does not learn the mapping from SAR images to optical images directly; instead, it learns the mapping from SAR images to wavelet features and then reconstructs gray-scale images to optimize the content, which enriches the mapping relationships and helps the network learn more effective features. Second, a multiscale coloring network based on detail learning and style learning is designed to further translate the gray-scale images into optical images, so that the generated images have an excellent visual effect with details closer to those of real images. Extensive experiments on SAR image datasets from different regions and seasons demonstrate the superior performance of WFLM-GAN over the baseline algorithms in terms of structural similarity (SSIM), peak signal-to-noise ratio (PSNR), Frechet inception distance (FID), and kernel inception distance (KID). Comprehensive ablation studies are also carried out to validate the contribution of each proposed component. Our code will be available at https://github.com/G2022G/WFLM-GAN .
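The wavelet feature learning described in the abstract rests on a discrete wavelet transform (DWT), which splits an image into an approximation sub-band and three detail sub-bands that a generator can predict and then invert back into a gray-scale image. As a minimal illustration of that decompose/reconstruct step (not the paper's actual implementation, which is not specified here), a single-level Haar DWT and its exact inverse can be sketched in NumPy:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar wavelet transform of an image with even
    dimensions. Returns the approximation (LL) and detail (LH, HL, HH)
    sub-bands, each half the size of the input in both dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse content
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: rebuild the image from the four sub-bands.
    Because the Haar transform is orthogonal, reconstruction is exact."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = LL + LH                       # undo the column step
    a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d                       # undo the row step
    img[1::2, :] = a - d
    return img
```

In a pipeline like the one the abstract describes, a network would regress the four sub-bands from a SAR input, and a reconstruction such as `haar_idwt2` would turn them into the intermediate gray-scale image that the coloring network then translates to an optical image.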