Stacked lossless deconvolutional network for remote sensing image super-resolution

2019 
Super-resolving satellite imagery from its low-resolution counterpart has a significant impact on the remote sensing industry, and many potential applications can directly benefit from this technique. Convolutional neural networks (CNNs) have recently achieved great success in image super-resolution (SR). However, most deep CNN architectures do not properly handle the inherent trade-off between localization accuracy and the use of global context. In this paper, we propose a stacked lossless deconvolutional network (SLDN) for remote sensing SR. We fully exploit global context information while guaranteeing the recovery of fine details. Specifically, we design a lossless pooling by reformulating the pixel shuffle operator and incorporate it into a shallow deconvolutional network. The resulting lossless deconvolution blocks (LDBs) are stacked one by one to enlarge the receptive fields without any information loss. We further design an attentive skip connection to improve gradient flow throughout the LDB. The SLDN can reconstruct high-quality satellite images without noticeable artifacts. We also provide an extensive ablation study showing that all the components proposed in this paper are useful for remote sensing SR. Experimental comparisons demonstrate the superiority of the proposed method over state-of-the-art methods both qualitatively and quantitatively.
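The abstract's "lossless pooling by reformulating the pixel shuffle operator" refers to the fact that the inverse pixel shuffle (space-to-depth) downsamples spatially while moving every pixel into the channel dimension, so no information is discarded, unlike max or average pooling. The paper's exact formulation is not given here; the following NumPy sketch, with hypothetical function names, only illustrates that this rearrangement is exactly invertible:

```python
import numpy as np

def pixel_unshuffle(x, r):
    """Lossless pooling via inverse pixel shuffle (space-to-depth):
    rearranges (C, H, W) into (C*r*r, H/r, W/r), keeping every value."""
    c, h, w = x.shape
    assert h % r == 0 and w % r == 0
    x = x.reshape(c, h // r, r, w // r, r)          # split each r x r patch
    return x.transpose(0, 2, 4, 1, 3).reshape(c * r * r, h // r, w // r)

def pixel_shuffle(x, r):
    """Standard pixel shuffle (depth-to-space), the exact inverse above."""
    c, h, w = x.shape
    x = x.reshape(c // (r * r), r, r, h, w)         # unpack channel groups
    return x.transpose(0, 3, 1, 4, 2).reshape(c // (r * r), h * r, w * r)

x = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
y = pixel_unshuffle(x, 2)                           # shape (8, 2, 2)
assert np.array_equal(pixel_shuffle(y, 2), x)       # round trip is exact
```

Because the round trip is exact, stacking such blocks can enlarge the receptive field without the information loss incurred by conventional pooling, which is the property the SLDN exploits.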