CoCa-GAN: Common-Feature-Learning-Based Context-Aware Generative Adversarial Network for Glioma Grading

2019 
Multi-modal structural MRI is widely used for presurgical glioma grading in treatment planning. Although the modalities provide complementary information, a complete set of high-resolution multi-modality data is costly and often impossible to acquire in clinical settings, whereas T1 MRI is commonly available. To exploit more comprehensive multi-modality information for glioma grading rather than relying on T1 MRI alone, we introduce a three-dimensional common-feature-learning-based context-aware generative adversarial network (CoCa-GAN) that synthesizes multi-modal MRI data from T1 MRI and uses the comprehensive features in a common feature space to achieve clinically feasible glioma grading with limited imaging modalities. The common feature space is first learned by jointly training on four MRI modalities with adversarial learning and context-aware learning, so that inter-modality relationships and lesion-specific features are explicitly encoded. The modality-invariant information represented in this common space is then leveraged to synthesize the missing modalities and to jointly predict the glioma grade (high- vs. low-grade). Furthermore, Gradient-weighted Class Activation Mapping (Grad-CAM) is used to make the factors that contribute to the grading interpretable for potential clinical use. Results demonstrate that common feature learning achieves more accurate glioma grading than single-modality input and performs comparably to using the complete set of modalities. Our method thus offers a highly feasible solution for clinical practice, where multi-modality data are often unavailable.
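To make the described pipeline concrete, the following is a minimal PyTorch sketch of its three components: an encoder mapping T1 into a common (modality-invariant) feature space, per-modality decoders that synthesize the missing modalities, and a classifier head that predicts the grade from the shared features. The layer sizes, module names, and the omission of the adversarial discriminator and context-aware losses are simplifying assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of the CoCa-GAN idea; sizes and names are illustrative
# assumptions, not the authors' exact design.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a T1 volume into the shared (modality-invariant) feature space."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Synthesizes one target modality (e.g. T1c, T2, FLAIR) from common features."""
    def __init__(self, feat_ch=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(feat_ch, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

class GradeClassifier(nn.Module):
    """Predicts high- vs. low-grade glioma from the common feature space."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(feat_ch, 2)
    def forward(self, z):
        return self.fc(self.pool(z).flatten(1))

# One encoder, one decoder per missing modality, one classifier head.
encoder = Encoder()
decoders = nn.ModuleDict({m: Decoder() for m in ["t1c", "t2", "flair"]})
classifier = GradeClassifier()

t1 = torch.randn(2, 1, 64, 64, 64)                   # batch of T1 volumes
z = encoder(t1)                                       # common feature space
synth = {m: dec(z) for m, dec in decoders.items()}    # synthesized modalities
logits = classifier(z)                                # glioma-grade prediction
```

At inference time only the encoder and classifier are strictly needed for grading, which is what makes the approach feasible when only T1 is acquired; Grad-CAM can then be applied to the classifier's activations to visualize which regions drive the prediction.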