Deep Inception-Residual Laplacian Pyramid Networks for Accurate Single-Image Super-Resolution
2019
By efficiently exploiting contextual information over large image regions, deep convolutional neural networks have shown impressive performance for single-image super-resolution (SR). In this paper, we propose a new deep convolutional network that cascades multiple well-designed inception-residual blocks within the deep Laplacian pyramid framework to progressively restore the missing high-frequency details of low-resolution images. By optimizing the network structure, the trainable depth of the proposed network is significantly increased, which in turn improves super-resolution accuracy. However, the saturation and degradation of training accuracy remain a critical problem. To address this, we propose an effective two-stage training strategy: we first use images downsampled from the ground-truth high-resolution (HR) images to pretrain the inception-residual blocks on each pyramid level with an extremely high learning rate enabled by gradient clipping, and then use the original ground-truth HR images to fine-tune all the pretrained inception-residual blocks to obtain the final SR models. Furthermore, we present a new loss function operating in both image space and local-rank space to optimize the network by exploiting the contextual information among different output components. Extensive experiments on benchmark data sets show that the proposed method outperforms existing state-of-the-art SR methods in terms of both objective evaluation and visual quality.
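The sketch below is not the authors' code; it is a minimal PyTorch illustration of the two architectural ideas the abstract names: an inception-style residual block (parallel convolution branches fused and added back to the input) and a Laplacian-pyramid network that restores high-frequency detail progressively, one 2x level at a time. The layer widths, kernel sizes, and number of blocks per level are assumptions chosen for brevity, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InceptionResidualBlock(nn.Module):
    """Parallel conv branches with different receptive fields, fused by a 1x1
    conv and added back to the input (residual connection)."""

    def __init__(self, channels=64):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 2, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b3 = F.relu(self.branch3(x))
        b5 = F.relu(self.branch5(x))
        return x + self.fuse(torch.cat([b3, b5], dim=1))


class LaplacianPyramidSR(nn.Module):
    """Each pyramid level predicts a residual (high-frequency) image that is
    added to a bilinearly upsampled copy of the previous level's output."""

    def __init__(self, levels=2, channels=64, blocks_per_level=5):
        super().__init__()
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.levels = nn.ModuleList(
            nn.ModuleDict({
                "features": nn.Sequential(
                    *[InceptionResidualBlock(channels) for _ in range(blocks_per_level)]),
                "up_feat": nn.ConvTranspose2d(channels, channels, kernel_size=4,
                                              stride=2, padding=1),
                "to_residual": nn.Conv2d(channels, 3, kernel_size=3, padding=1),
            })
            for _ in range(levels))

    def forward(self, lr):
        feat = F.relu(self.head(lr))
        img = lr
        outputs = []
        for level in self.levels:
            feat = level["up_feat"](level["features"](feat))   # 2x feature upsampling
            img = F.interpolate(img, scale_factor=2, mode="bilinear",
                                align_corners=False)            # coarse 2x image
            img = img + level["to_residual"](feat)              # add predicted detail
            outputs.append(img)                                 # supervise every level
        return outputs
```

As a usage note, the gradient clipping that the abstract credits with enabling an extremely high learning rate in the pretraining stage would, in this kind of sketch, amount to calling `torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)` between `loss.backward()` and `optimizer.step()`; the clipping threshold here is again an assumption rather than a value from the paper.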