Image super-resolution via enhanced multi-scale residual network

2021 
Abstract Recently, very deep convolutional neural networks (CNNs) have achieved impressive results in image super-resolution (SR). In particular, residual learning techniques are widely used. However, the previously proposed residual block can only extract single-level semantic feature maps within a single receptive field. Therefore, residual blocks must be stacked to extract higher-level semantic feature maps, which significantly deepens the network; yet a very deep network is hard to train and limits the representation needed to reconstruct hierarchical information. Building on the residual block, we propose an enhanced multi-scale residual network (EMRN) that exploits hierarchical image features via densely connected enhanced multi-scale residual blocks (EMRBs). Specifically, the newly proposed residual block (EMRB) constructs multi-level semantic feature maps through a two-branch inception, whose branches consist of 2 and 4 convolutional layers respectively, so that different ranges of receptive fields coexist within a single EMRB. Meanwhile, local feature fusion (LFF) is used in every EMRB to adaptively fuse the local feature maps extracted by the two-branch inception. Furthermore, global feature fusion (GFF) in EMRN then gathers abundant useful features from preceding and subsequent EMRBs in a holistic manner. Experiments on benchmark datasets show that EMRN performs favorably against state-of-the-art methods in reconstructing superior SR images.
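The block structure described above (a two-branch inception of 2 and 4 convolutional layers, fused by LFF and wrapped in a residual connection) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' released code: the channel width, 3x3 kernel sizes, ReLU activations, and the 1x1 convolution used for LFF are assumptions, since the abstract does not specify them.

```python
# Hedged sketch of an EMRB (enhanced multi-scale residual block), assuming
# PyTorch. The 2-conv and 4-conv branches follow the abstract; channel
# width, kernel sizes, and activations are illustrative assumptions.
import torch
import torch.nn as nn


def conv_relu(channels: int) -> list[nn.Module]:
    """One 3x3 same-padding convolution followed by ReLU (assumed)."""
    return [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]


class EMRB(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Branch 1: 2 convolutional layers -> smaller receptive field.
        self.branch1 = nn.Sequential(*conv_relu(channels), *conv_relu(channels))
        # Branch 2: 4 convolutional layers -> larger receptive field.
        self.branch2 = nn.Sequential(
            *conv_relu(channels), *conv_relu(channels),
            *conv_relu(channels), *conv_relu(channels),
        )
        # Local feature fusion (LFF): a 1x1 conv adaptively fuses the
        # concatenated branch outputs back to the input width.
        self.lff = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = self.lff(torch.cat([self.branch1(x), self.branch2(x)], dim=1))
        return x + fused  # residual connection keeps the block easy to train


# The block preserves spatial and channel dimensions, so EMRBs can be
# densely connected and their outputs gathered by GFF at the network level.
x = torch.randn(1, 64, 32, 32)
y = EMRB(64)(x)
print(tuple(y.shape))  # (1, 64, 32, 32)
```

Because each EMRB maps `(N, C, H, W)` back to the same shape, stacking them with dense connections and a final global feature fusion stage (e.g. concatenation of all block outputs followed by another 1x1 convolution) is straightforward.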