Multisensor Remote Sensing Imagery Super-Resolution with Conditional GAN

2021 
Despite the promising performance that deep convolutional neural networks have exhibited on benchmark datasets for single image super-resolution (SISR), existing methods suffer from two underlying limitations. First, current supervised SISR methods for remote sensing satellite imagery are not trained on paired real sensor data; instead, they operate on simulated high-resolution (HR) and low-resolution (LR) image pairs (typically HR images paired with their bicubic-degraded LR counterparts), which often yields poor performance on real-world LR images. Second, SISR is an ill-posed problem, and the super-resolved image produced by a discriminatively trained network with a norm loss is an average of the infinitely many possible HR images, and thus always has low perceptual quality. Although this issue can be mitigated by a generative adversarial network (GAN), it remains difficult to search the entire solution space and find the best solution. In this paper, we focus on real-world application and introduce a new multisensor dataset for real-world remote sensing satellite imagery super-resolution. In addition, we propose a novel conditional GAN scheme for the SISR task that further reduces the solution space, so the super-resolved images have not only high fidelity but also high perceptual quality. Extensive experiments demonstrate that networks trained on the introduced dataset outperform those trained on simulated data, and that the proposed conditional GAN scheme achieves better perceptual quality while maintaining fidelity comparable to state-of-the-art methods.
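To make the conditional-GAN idea concrete, below is a minimal PyTorch sketch of an adversarial SISR training step in which the discriminator is conditioned on the LR input. The abstract does not specify the paper's architectures, conditioning mechanism, or loss weights, so the toy Generator/Discriminator modules, the lambda_adv weight, and the LR-concatenation conditioning shown here are illustrative assumptions, not the authors' method.

```python
# Hedged sketch: conditional GAN training step for 4x SISR (PyTorch).
# All module definitions and hyperparameters are placeholders, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy upsampling CNN standing in for the (unspecified) SR generator."""
    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        return self.body(lr)

class Discriminator(nn.Module):
    """Conditional discriminator: scores an HR/SR image given its LR input."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(6, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 3, padding=1),
        )

    def forward(self, hr_or_sr, lr):
        # Condition on the LR image by upsampling it and concatenating channels.
        lr_up = F.interpolate(lr, scale_factor=self.scale, mode="bicubic",
                              align_corners=False)
        return self.body(torch.cat([hr_or_sr, lr_up], dim=1))

def train_step(G, D, opt_g, opt_d, lr_img, hr_img, lambda_adv=1e-3):
    """One adversarial + pixel-loss update on a real multisensor LR/HR pair."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator update: real HR vs. generated SR, both conditioned on LR.
    sr = G(lr_img).detach()
    d_real = D(hr_img, lr_img)
    d_fake = D(sr, lr_img)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: L1 fidelity term plus conditional adversarial term.
    sr = G(lr_img)
    d_fake = D(sr, lr_img)
    loss_g = F.l1_loss(sr, hr_img) + lambda_adv * bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In this sketch the conditioning (concatenating the upsampled LR image to the discriminator input) is what restricts the adversarial game to HR candidates consistent with the observed LR image, which is the general sense in which a conditional GAN narrows the SISR solution space; the paper's specific scheme may differ.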