SGUNet: Style-Guided UNet for Adversely Conditioned Fundus Image Super-Resolution

2021 
Abstract Image super-resolution from low-resolution fundus images has valuable applications in clinical practice. Popular methods yield unsatisfactory results when fundus images are contaminated by bleeding or plaques caused by eye diseases. To this end, we propose a style-guided UNet (SGUNet), which incorporates a series of style-guided U-shape blocks (SUBs) for fundus image super-resolution. Each SUB consists of a trunk branch and a mask branch. The trunk branch is a U-shape structure that enlarges the receptive field by down-sampling via large-stride convolution and fuses the complementary information captured under the different receptive fields. The mask branch then dynamically estimates the relative importance of individual potential styles and reweighs the feature maps according to that significance. To fully leverage hierarchical features, a dense feature fusion scheme is introduced by concatenating the outputs of preceding SUBs. We extensively validate the proposed network on a low-resolution retina dataset adversely affected by diseases. The experimental results demonstrate that our SGUNet achieves superior performance, with excellent robustness and high accuracy, compared with six popular methods.
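The abstract's architectural ingredients can be sketched in PyTorch. This is a minimal, hypothetical interpretation rather than the authors' implementation: the trunk branch is approximated by a stride-2 convolution followed by upsampling and fusion of the two receptive-field paths, the mask branch by a squeeze-and-excitation-style channel gate standing in for the style-importance estimator, and dense feature fusion by concatenating all SUB outputs. All module names, channel counts, and block depths are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SUB(nn.Module):
    """Hypothetical sketch of one style-guided U-shape block (SUB)."""
    def __init__(self, ch):
        super().__init__()
        # Trunk branch: enlarge the receptive field via large-stride
        # down-sampling, then fuse the two receptive-field paths.
        self.down = nn.Conv2d(ch, ch, kernel_size=3, stride=2, padding=1)
        self.body = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)
        # Mask branch (assumed form): estimate per-channel "style"
        # importance and reweigh the trunk features with a sigmoid gate.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        low = self.body(self.down(x))  # wide-receptive-field path
        up = F.interpolate(low, size=x.shape[-2:],
                           mode="bilinear", align_corners=False)
        trunk = self.fuse(torch.cat([x, up], dim=1))  # fuse both paths
        return trunk * self.gate(trunk) + x           # style-reweighed + residual

class SGUNet(nn.Module):
    """Toy SGUNet: stacked SUBs with dense feature fusion."""
    def __init__(self, ch=32, n_blocks=3, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.subs = nn.ModuleList(SUB(ch) for _ in range(n_blocks))
        # Dense feature fusion: concatenate outputs of all preceding SUBs.
        self.dff = nn.Conv2d(n_blocks * ch, ch, 1)
        self.tail = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # sub-pixel upsampling to HR
        )

    def forward(self, x):
        f = self.head(x)
        outs = []
        for sub in self.subs:
            f = sub(f)
            outs.append(f)
        return self.tail(self.dff(torch.cat(outs, dim=1)))

lr = torch.randn(1, 3, 32, 32)   # a dummy low-resolution fundus patch
sr = SGUNet()(lr)                # 2x super-resolved output
print(sr.shape)                  # torch.Size([1, 3, 64, 64])
```

The residual connection inside each SUB and the sub-pixel tail are common super-resolution choices adopted here for the sketch; the paper itself may differ in these details.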