Enhanced Dense Space Attention Network for Super-Resolution Construction From Single Input Image

2021 
In some applications, such as surveillance and biometrics, image enlargement is required to inspect small details in the image. One approach to image enlargement is convolutional neural network (CNN)-based super-resolution construction from a single image. The first CNN-based image super-resolution algorithm was the super-resolution CNN (SRCNN), developed in 2014. Since then, many researchers have proposed variants of CNN-based algorithms for image super-resolution to improve accuracy or reduce the model's running time. However, some current algorithms still suffer from the vanishing-gradient problem and rely on a large number of layers. Thus, the motivation of this work is to mitigate the vanishing-gradient problem, thereby improving accuracy while also reducing the model's running time. In this paper, an enhanced dense space attention network (EDSAN) model is proposed to overcome these problems. The EDSAN model adopts dense connections and a residual network to utilize all features, correlating low-level and high-level features as much as possible. In addition, the convolutional block attention module (CBAM) layer and the multiscale block (MSB) help reduce the number of layers required to achieve comparable results. The model is evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics. EDSAN achieved its most significant improvement, about 1.42%, over the CRN model on the Set5 dataset at a scale factor of 3. Compared to the ERN model, EDSAN performed best, with a 1.22% improvement on the Set5 dataset at a scale factor of 4. Overall, EDSAN performed very well on all datasets at scale factors of 2 and 3. In conclusion, EDSAN successfully addresses the problems above and can be used in applications such as biometric identification and real-time video.
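
To make the architectural ingredients mentioned above concrete, the following is a minimal PyTorch-style sketch, assuming a generic densely connected convolutional block combined with CBAM-style channel and spatial attention and a residual skip connection. The layer sizes, growth rate, and module names are illustrative assumptions, not the authors' exact EDSAN configuration (which also includes a multiscale block and upsampling stages not shown here).

# Hypothetical sketch of a dense + residual block with CBAM-style attention.
# All dimensions and names are illustrative assumptions, not the EDSAN paper's
# exact design.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM channel attention: pool spatial dims, then reweight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w


class SpatialAttention(nn.Module):
    """CBAM spatial attention: reweight spatial positions with a 7x7 conv."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class DenseAttentionBlock(nn.Module):
    """Dense 3x3 convolutions (each layer sees all earlier features),
    followed by channel and spatial attention and a residual connection."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(num_layers)
        )
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connection
        out = self.fuse(torch.cat(feats, dim=1))
        out = self.sa(self.ca(out))
        return out + x                                     # residual connection


if __name__ == "__main__":
    block = DenseAttentionBlock()
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])

In this sketch, the dense concatenation keeps low-level features available to later layers, while the attention modules reweight channels and spatial positions before the residual addition; this is the general mechanism the abstract attributes to combining dense connections, a residual network, and CBAM.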