AFT-Net: Active Fusion-Transduction for Multi-stream Medical Image Segmentation

2020 
As an important building block of automated medical applications, image segmentation has made great progress thanks to the data-driven nature of deep architectures. Recently, numerous methods have been proposed to boost segmentation performance based on U-shaped networks. However, they often build feature encoders with only a single data path, which limits the representational ability of the networks. Although some methods employ multiple learning paths to address this problem, deep supervision techniques are required to monitor the training status of each individual path, which adds extra burden to practical use of the algorithm. Moreover, under these frameworks, the semantic gap between different paths may interfere with the model's learning, and the potential transduction ability of skip connections still needs further investigation. To address these issues, we introduce a novel medical image segmentation framework, namely AFT-Net, in which an attention-based data fusion module is proposed to cooperate effectively with a multi-stream encoder. By progressively accumulating features from different paths, our method establishes meaningful connections between structural and semantic features while keeping an integral and flexible layout without deeply customized supervision. Extensive experiments on two medical image data sets demonstrate that our method acquires image features with both diversity and quality, thereby outperforming current state-of-the-art segmentation methods.
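As a rough illustration of the idea described above, the sketch below shows how features from multiple encoder streams at the same resolution could be fused with a learned channel-attention gate before being passed to the decoder through a skip connection. This is a minimal, hypothetical sketch, not the authors' implementation: the module name `AttentionFusion`, the 1x1-convolution projection, and the sigmoid channel gate are all assumptions made for clarity.

```python
# Hypothetical sketch of attention-based multi-stream fusion (not the paper's
# actual module); all design choices below are assumptions for illustration.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse same-resolution feature maps from multiple encoder streams
    with a channel-attention gate, then feed the result to the decoder."""
    def __init__(self, channels: int, num_streams: int):
        super().__init__()
        # Project the concatenated streams back to a common channel width.
        self.project = nn.Conv2d(num_streams * channels, channels, kernel_size=1)
        # Channel-wise attention: global context -> per-channel weights in (0, 1).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, streams):
        # streams: list of tensors, each of shape (B, C, H, W) from a different path.
        fused = self.project(torch.cat(streams, dim=1))  # merge the streams
        return fused * self.gate(fused)                  # re-weight channels

# Usage: combine two encoder paths at one skip-connection level.
x1 = torch.randn(2, 64, 56, 56)
x2 = torch.randn(2, 64, 56, 56)
fusion = AttentionFusion(channels=64, num_streams=2)
out = fusion([x1, x2])  # (2, 64, 56, 56), forwarded to the decoder
```

In this kind of layout, the fusion block replaces per-path deep supervision: all streams are trained through a single decoder loss, which keeps the overall framework integral and avoids extra path-specific supervision heads.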