PFFN: Progressive Feature Fusion Network for Lightweight Image Super-Resolution

2021 
Recently, convolutional neural networks (CNNs) have become the core ingredient of modern models, triggering the surge of deep learning in super-resolution (SR). Despite the great success of these CNN-based methods, which tend to grow ever deeper and heavier, it is impractical to deploy them directly on low-budget devices due to their excessive computational overhead. To alleviate this problem, a novel lightweight SR network named the progressive feature fusion network (PFFN) is developed to strike a better balance between performance and running efficiency. Specifically, to fully exploit the feature maps, a novel progressive attention block (PAB) is proposed as the main building block of PFFN. The proposed PAB adopts several parallel but connected paths with pixel attention, which significantly enlarges the receptive field of each layer, distills useful information, and ultimately learns more discriminative feature representations. Within the PAB, a powerful dual attention module (DAM) is further incorporated to provide channel and spatial attention mechanisms in a fairly lightweight manner. Besides, we construct a concise and effective upsampling module, named MPAU, with the help of multi-scale pixel attention. All of the above modules ensure that the network benefits from attention mechanisms while remaining lightweight. Furthermore, a novel training strategy following the cosine annealing learning scheme is proposed to maximize the representation ability of the model. Comprehensive experiments show that PFFN achieves the best performance among existing lightweight state-of-the-art SR methods with fewer parameters, and even performs comparably to computationally expensive networks.
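The abstract does not give the exact formulation of the pixel attention used inside the PAB and MPAU, but the general mechanism (as introduced in prior lightweight-SR work) is a 1x1 convolution followed by a sigmoid that produces one gating value per spatial location, which then rescales the feature map. A minimal numpy sketch of that idea, with an illustrative single-output-channel weight vector `w` (an assumption, not the paper's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(feat, w):
    """Pixel attention sketch: a 1x1 convolution to one channel (here a
    per-pixel weighted sum over channels with weights `w` of shape (C,))
    followed by a sigmoid yields one attention value per spatial location,
    which rescales the (C, H, W) feature map elementwise."""
    attn = sigmoid(np.tensordot(w, feat, axes=([0], [0])))  # (H, W) map
    return feat * attn[None, :, :]  # broadcast the gate over all channels

# Toy example: 4-channel, 8x8 feature map
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w = rng.standard_normal(4)
out = pixel_attention(feat, w)
print(out.shape)  # (4, 8, 8)
```

Because the sigmoid gate lies in (0, 1), every output activation is attenuated rather than amplified, which is one reason such attention adds negligible parameter cost (C weights per attention layer) while still modulating features per pixel.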