High-Order Attention Networks for Medical Image Segmentation

2020 
Segmentation is a fundamental task in medical image analysis. Current state-of-the-art convolutional neural networks for medical image segmentation capture local context using fixed-shape receptive fields and feature detectors with position-invariant weights, which limits robustness to input variance, such as medical objects of varying sizes, shapes, and domains. To capture global context, we propose High-order Attention (HA), a novel attention module with adaptive receptive fields and dynamic weights. HA allows each pixel to have its own global attention map that models its relationship to all other pixels. In particular, HA constructs the attention map through graph transduction and thus captures highly relevant context information at high order. Consequently, the feature at each position is selectively aggregated as a weighted sum of the features at all positions. We further embed the proposed HA module into an efficient encoder-decoder structure for medical image segmentation, namely the High-order Attention Network (HANet). Extensive experiments are conducted on four benchmark sets covering three tasks: REFUGE and Drishti-GS1 for optic disc/cup segmentation, DRIVE for blood vessel segmentation, and LUNA for lung segmentation. The results justify the effectiveness of the new attention module for medical image segmentation.
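The abstract's core mechanism, a per-pixel global attention map built by propagating a pairwise affinity matrix over multiple steps (graph transduction), and output features formed as a weighted sum over all positions, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the function `high_order_attention`, the use of averaged powers of the affinity matrix as the "high-order" map, and the plain dot-product affinity are all illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def high_order_attention(feats, order=2):
    """Toy high-order attention over flattened pixel features.

    feats: (N, C) array, one row per pixel of an H*W feature map.
    order: number of propagation steps over the affinity graph
           (an illustrative stand-in for graph transduction).
    """
    # First-order affinity: each pixel's attention over all pixels.
    A = softmax(feats @ feats.T)            # (N, N), rows sum to 1
    # Accumulate powers A, A^2, ..., A^order: multi-step propagation
    # lets a pixel attend to pixels reachable through intermediates.
    M = np.zeros_like(A)
    P = np.eye(A.shape[0])
    for _ in range(order):
        P = P @ A
        M = M + P
    M = M / order                           # average; rows still sum to 1
    # Each output feature is a weighted sum of features at all positions.
    return M @ feats
```

With `order=1` this reduces to standard dot-product self-attention; larger `order` mixes in relationships mediated by intermediate pixels, which is the intuition behind "high-order" context here.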