SRDRL: A Blind Super-Resolution Framework With Degradation Reconstruction Loss

2021 
Recent years have witnessed the remarkable success of deep learning-based single image super-resolution (SISR) methods. However, most existing SISR methods assume that low-resolution (LR) images are purely bicubic downsampled from high-resolution (HR) images. When the actual degradation is not bicubic, their outstanding performance is hard to maintain. Since the real-world image degradation process can be modeled as a combination of downsampling, blurring, and noise, several SR methods have been proposed to super-resolve LR images with multiple blur kernels and noise levels. However, these SR methods require prior knowledge of the degradation process, which is difficult to obtain in practical applications. To address these issues, we propose a degradation reconstruction loss (DRL), which captures the degradation-wise differences between SR images and HR images via a degradation simulator. Empowered by the degradation simulator, the proposed loss, and an efficient SR network, we form a blind SR framework (SRDRL) that can handle multiple degradations without prior knowledge. Extensive experimental results demonstrate that the proposed SRDRL outperforms state-of-the-art blind SR methods and denoising+SR methods on multi-degraded datasets. The degradation reconstruction loss can serve as a plug-and-play loss for existing SR methods to handle multiple degradations. The source code can be found at https://github.com/FVL2020/SRDRL.
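The abstract's degradation model (blur, then downsampling, then additive noise) and the idea behind the degradation reconstruction loss can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the kernel size, Gaussian blur, ×2 nearest-neighbor downsampling, L1 distance, and the choice to compare the degraded SR output against the degraded HR image are all assumptions made for the sketch.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Isotropic Gaussian blur kernel (an assumed stand-in for the paper's blur kernels)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(img, kernel, scale=2, noise_sigma=0.0, rng=None):
    """Simulate degradation: blur -> downsample -> additive Gaussian noise."""
    ksz = kernel.shape[0]
    pad = ksz // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    blurred = np.empty_like(img, dtype=float)
    for i in range(h):          # direct 2D convolution (slow but dependency-free)
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + ksz, j:j + ksz] * kernel)
    down = blurred[::scale, ::scale]          # simple decimation as the downsampler
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        down = down + rng.normal(0.0, noise_sigma, down.shape)
    return down

def degradation_reconstruction_loss(sr, hr, kernel, scale=2):
    """Hypothetical DRL: L1 distance between the degraded SR and degraded HR images."""
    return np.abs(degrade(sr, kernel, scale) - degrade(hr, kernel, scale)).mean()
```

Under this reading, the loss penalizes an SR result whenever re-degrading it fails to reproduce what degrading the ground-truth HR image would yield, which requires no prior knowledge of the true kernel or noise level at test time.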