RefineU-Net: Improved U-Net with Progressive Global Feedbacks and Residual Attention Guided Local Refinement for Medical Image Segmentation

2020 
Abstract: Motivated by recent advances in medical image segmentation using a fully convolutional network (FCN) called U-Net and its modified variants, we propose a novel improved FCN architecture called RefineU-Net. The proposed RefineU-Net consists of three modules: an encoding module (EM), a global refinement module (GRM) and a local refinement module (LRM). EM uses a VGG-16 backbone pretrained on ImageNet. GRM generates intermediate layers in the skip connections of U-Net: it progressively upsamples the top side output of EM and fuses the resulting upsampled features with the side outputs of EM at each resolution level. The fused features combine the global context information in shallow layers with the semantic information in deep layers for global refinement. Subsequently, to facilitate local refinement, LRM uses a residual attention gate (RAG) to generate discriminative attentive features that are concatenated with the decoded features in the expansive path of U-Net. The three modules are trained jointly in an end-to-end manner, so that global and local refinement are performed complementarily. Extensive experiments on four public datasets for polyp and skin lesion segmentation show the superiority of the proposed RefineU-Net over multiple state-of-the-art methods.
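The abstract does not spell out the exact RAG formulation; the sketch below illustrates one plausible reading of "residual attention gate" in numpy, assuming an additive attention map computed from the skip and gating features followed by a residual connection. The function names and the additive-attention form are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_gate(skip_feat, gating_feat):
    """Illustrative (hypothetical) residual attention gate."""
    # Additive attention: combine skip and gating features, then
    # squash to (0, 1) to obtain a per-pixel attention map.
    attn = sigmoid(skip_feat + gating_feat)
    # Reweight the skip features by the attention map ...
    attended = attn * skip_feat
    # ... and keep a residual path so the original skip information
    # is preserved alongside the attentive features.
    return skip_feat + attended

# Toy 2x2 single-channel feature maps.
skip = np.array([[1.0, -2.0], [0.5, 3.0]])
gate = np.array([[0.0, 1.0], [-1.0, 0.5]])
out = residual_attention_gate(skip, gate)
```

Because the gate outputs values strictly between 0 and 1, the result amplifies each skip activation by a factor between 1x and 2x while preserving its sign, which is one way a residual connection can stabilize attention gating.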