Towards Information Diversity Through Separable Cascade Modules for Image Super Resolution

2021 
Convolutional neural networks (CNNs) have been widely applied to learn the mapping between low-resolution (LR) and high-resolution (HR) images. With the aid of deep networks built from cascaded collaborative modules, several works have effectively improved image quality. In these works, the local cascading module is usually treated as an atomic unit to be extended, which is a convenient way to deepen network structures for feature extraction and information fusion. However, when each local atomic module can output only one type of information, this limits the representation of the later reconstruction layers, since they lack sufficiently diverse features; residual blocks, for instance, carry only residual information. In this paper, we propose a separable mechanism for module design that increases the representation capability of deep neural networks. To obtain diverse features, we produce two-stream features with four separable modules based on residual learning and attention mechanisms. Through a contiguous memory (CM) mechanism, the network ultimately combines low-level features with high-level features. Similar to the residual-in-residual (RIR) structure, we propose an attention-in-attention (AIA) framework to deepen our networks. Experimental results demonstrate the effectiveness of our method on several image datasets.
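The core idea — a module that emits two feature streams (residual and attention) instead of one, with low- and high-level features fused through a contiguous-memory-style concatenation — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the convolution layers are replaced by simple stand-in operations, and `separable_module` and `channel_attention` are hypothetical names introduced only for illustration.

```python
import numpy as np

def channel_attention(x):
    """Squeeze-and-excitation-style channel attention (simplified stand-in).
    x: feature map of shape (C, H, W)."""
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    pooled = x.mean(axis=(1, 2))
    # Excite: sigmoid gating (a real network would use small FC layers here)
    gate = 1.0 / (1.0 + np.exp(-pooled))
    # Rescale each channel by its gate
    return x * gate[:, None, None]

def separable_module(x):
    """Hypothetical separable module: outputs TWO information streams
    (residual and attention) rather than a single fused tensor, so later
    layers receive more diverse features."""
    residual_stream = x + 0.1 * x          # stand-in for conv branch + skip
    attention_stream = channel_attention(x)
    return residual_stream, attention_stream

# Contiguous-memory-style fusion: concatenate the low-level input with
# both high-level streams along the channel axis.
x = np.random.randn(8, 16, 16)
res, att = separable_module(x)
fused = np.concatenate([x, res, att], axis=0)  # shape (24, 16, 16)
```

In this sketch the channel dimension of `fused` is three times that of the input, mirroring how contiguous memory preserves earlier features alongside the two new streams instead of overwriting them.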