MMFNet: A Multi-modality MRI Fusion Network for Segmentation of Nasopharyngeal Carcinoma

2020 
Abstract Segmentation of nasopharyngeal carcinoma (NPC) from magnetic resonance images (MRI) is a crucial prerequisite for NPC radiotherapy. However, manual segmentation of NPC is time-consuming and labor-intensive, and single-modality MRI generally cannot provide enough information for accurate delineation. Therefore, a novel multi-modality MRI fusion network (MMFNet) is proposed to achieve accurate segmentation of NPC by utilizing T1-, T2-, and contrast-enhanced T1-weighted MRI. The backbone of MMFNet is a multi-encoder network consisting of several encoders and one decoder, where the encoders capture modality-specific features and the decoder produces fused features for NPC segmentation. A fusion block consisting of a 3D Convolutional Block Attention Module (3D-CBAM) and a residual fusion block (RFBlock) is presented. The 3D-CBAM recalibrates the low-level features from the modality-specific encoders to highlight informative features and regions of interest (ROIs), while the RFBlock fuses the re-weighted features to keep a balance between the fused features and the high-level features from the decoder. Moreover, a training strategy named self-transfer is proposed, which uses pre-trained modality-specific encoders to initialize the multi-encoder network so that the individual information in each MRI modality can be fully mined. The proposed multi-modality MRI method can effectively segment NPC, and its advantages are validated by extensive experiments.
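
To make the attention-based recalibration concrete, the following is a minimal PyTorch sketch of a 3D extension of the Convolutional Block Attention Module, assuming the standard channel-then-spatial attention design of the original 2D CBAM. The class names, reduction ratio, and kernel size are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a 3D-CBAM: channel attention followed by spatial attention,
# applied to volumetric (B, C, D, H, W) feature maps. Hyperparameters assumed.
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool3d(1)
        self.max_pool = nn.AdaptiveMaxPool3d(1)
        # Shared MLP applied to both pooled channel descriptors
        self.mlp = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):
        attn = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * attn


class SpatialAttention3D(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Aggregate along the channel axis, then infer a 3D spatial attention map
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class CBAM3D(nn.Module):
    """Channel attention followed by spatial attention, as in 2D CBAM."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_attn = ChannelAttention3D(channels, reduction)
        self.spatial_attn = SpatialAttention3D()

    def forward(self, x):
        return self.spatial_attn(self.channel_attn(x))


# Usage on an encoder feature map (batch, channels, depth, height, width)
feats = torch.randn(1, 64, 16, 64, 64)
recalibrated = CBAM3D(64)(feats)
print(recalibrated.shape)  # torch.Size([1, 64, 16, 64, 64])
```

In MMFNet, a module of this kind would re-weight the low-level features of each modality-specific encoder before fusion; the RFBlock and the self-transfer initialization are not shown here.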