Ghost Feature Network for Super-Resolution

2020 
Recent work has shown that single image super-resolution approaches achieve great results with the development of deep convolutional neural networks. These networks map low-resolution (LR) images to high-resolution (HR) images through a complex non-linear mapping learned across many convolutional layers. However, using more filters in the convolutional layers increases the computational cost and enlarges the model weights. To address these issues, we present a lightweight ghost features network (GFN) for super-resolution that cascades residual-in-residual ghost blocks (RRGBs). GFN can thus reconstruct images with a lightweight model while reducing the number of convolutional filters and the computational cost. Specifically, each RRGB consists of ghost modules, which extract feature maps using cheap linear transformations rather than full convolutional transformations. Experiments on benchmarks demonstrate that the proposed GFN outperforms state-of-the-art lightweight models at a lower computational cost.
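To illustrate why replacing full convolutions with cheap linear transformations saves parameters, the following is a minimal sketch of the parameter count of a ghost module versus a standard convolution, following the ghost-module design the abstract references. The ratio `s` (intrinsic-to-total feature maps) and the depthwise kernel size `d` are assumed defaults, not values stated in this paper.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def ghost_module_params(c_in, c_out, k, s=2, d=3):
    """Weight count of a ghost module producing c_out feature maps.

    A primary convolution generates c_out // s "intrinsic" maps; each
    intrinsic map then yields s - 1 extra "ghost" maps via a cheap
    d x d depthwise (per-channel linear) transformation.
    s and d here are illustrative defaults, not values from the paper.
    """
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k          # ordinary convolution
    cheap = intrinsic * (s - 1) * d * d         # depthwise linear ops
    return primary + cheap

# Example: a 64-to-64 channel 3x3 layer.
standard = conv_params(64, 64, 3)        # 36864 weights
ghost = ghost_module_params(64, 64, 3)   # 18720 weights, roughly s times fewer
```

With `s = 2`, roughly half of the output maps come from the nearly free depthwise transforms, so the parameter (and FLOP) count approaches `1/s` of a standard convolution's, which is the source of the lightweight-model claim.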