Unsupervised Real-World Image Super Resolution via Domain-Distance Aware Training

2021 
Unsupervised super-resolution (SR) has recently gained considerable attention due to its practical potential in real-world scenarios. The philosophy of off-the-shelf approaches lies in the augmentation of unpaired data, i.e., first generating synthetic low-resolution (LR) images ${\mathcal{Y}^g}$ that correspond to real-world high-resolution (HR) images ${\mathcal{X}^r}$ and lie in the real-world LR domain ${\mathcal{Y}^r}$, and then utilizing the pseudo pairs $\left\{ {{\mathcal{Y}^g},{\mathcal{X}^r}} \right\}$ for training in a supervised manner. Unfortunately, since image translation is itself an extremely challenging task, the SR performance of these approaches is severely limited by the domain gap between the generated synthetic LR images and real LR images. In this paper, we propose a novel domain-distance aware super-resolution (DASR) approach for unsupervised real-world image SR. The domain gap between training data (e.g., ${\mathcal{Y}^g}$) and testing data (e.g., ${\mathcal{Y}^r}$) is addressed with our domain-gap aware training and domain-distance weighted supervision strategies. Domain-gap aware training additionally exploits real data in the target domain, while domain-distance weighted supervision enables a more rational use of labeled source-domain data. The proposed method is validated on synthetic and real datasets, and the experimental results show that DASR consistently outperforms state-of-the-art unsupervised SR approaches in generating SR outputs with more realistic and natural textures. Code is available at https://github.com/ShuhangGu/DASR.
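The domain-distance weighted supervision described above can be illustrated with a minimal sketch. Assuming a PyTorch setup with a patch-level domain discriminator that estimates how close each synthetic LR image ${\mathcal{Y}^g}$ is to the real LR domain ${\mathcal{Y}^r}$, its output can act as a spatial weight on the supervised reconstruction loss over the pseudo pairs $\left\{ {{\mathcal{Y}^g},{\mathcal{X}^r}} \right\}$. The network names and the exact weighting scheme below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of domain-distance weighted supervision:
# a domain discriminator scores how "real" each synthetic LR region looks,
# and that score weights the per-location reconstruction loss on pseudo pairs.
# sr_net and domain_disc are placeholder networks, not the authors' models.
import torch
import torch.nn.functional as F

def domain_distance_weighted_loss(sr_net, domain_disc, lr_syn, hr_real):
    """Supervised loss on pseudo pairs, re-weighted by estimated domain distance.

    lr_syn:  synthetic LR images Y^g generated from real HR images X^r
    hr_real: the corresponding real HR images X^r
    """
    sr = sr_net(lr_syn)  # super-resolved output, same size as hr_real

    with torch.no_grad():
        # Patch-wise probability that each LR region belongs to the real LR domain;
        # higher score -> smaller domain gap -> larger training weight.
        w = torch.sigmoid(domain_disc(lr_syn))            # (B, 1, h, w)
        w = F.interpolate(w, size=sr.shape[-2:],
                          mode='bilinear', align_corners=False)

    # Pixel-wise L1 loss, averaged over channels, weighted per spatial location.
    l1 = torch.abs(sr - hr_real).mean(dim=1, keepdim=True)
    return (w * l1).sum() / (w.sum() + 1e-8)
```

In this sketch, regions of ${\mathcal{Y}^g}$ that the discriminator judges far from the real LR domain contribute less to the supervised loss, which is the intuition behind weighting labeled source-domain data by domain distance.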