Domain-specific data augmentation for segmenting MR images of fatty infiltrated human thighs with neural networks

2019 
BACKGROUND: Fat-fraction has been established as a relevant marker for the assessment and diagnosis of neuromuscular diseases. Segmentation of muscle tissue in MR images is a crucial first step in computing this metric.
PURPOSE: To tackle the high degree of variability, combined with the high annotation effort, involved in training supervised segmentation models such as fully convolutional neural networks.
STUDY TYPE: Prospective.
SUBJECTS: In all, 41 subjects: 20 patients showing fatty infiltration and 21 healthy subjects.
FIELD STRENGTH/SEQUENCE: T1-weighted MR pulse sequences acquired on a 1.5T scanner.
ASSESSMENT: To increase performance with limited training data, we propose a domain-specific technique for simulating fatty infiltrations (i.e., texture augmentation) in nonaffected subjects' MR images, in combination with shape augmentation. For simulating the fatty infiltrations, we use an architecture comprising several competing networks (generative adversarial networks) that facilitates a realistic artificial conversion between healthy and infiltrated MR images. Finally, we assess the segmentation accuracy (Dice similarity coefficient).
STATISTICAL TESTS: A Wilcoxon signed-rank test was performed to assess whether differences in segmentation accuracy are significant.
RESULTS: With data augmentation, the mean Dice similarity coefficient significantly increases from 0.84 to 0.88 (P < 0.01) if training is performed with mixed data, and from 0.59 to 0.87 (P < 0.001) if training is conducted with healthy subjects only.
DATA CONCLUSION: Domain-specific data augmentation is highly suitable for facilitating neural network-based segmentation of thighs with feasible manual effort for creating training data. The results even suggest an approach that completely bypasses manual annotations.
LEVEL OF EVIDENCE: 4. Technical Efficacy: Stage 3.
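The Dice similarity coefficient reported above measures the overlap between a predicted segmentation mask and a ground-truth mask: twice the size of their intersection divided by the sum of their sizes. A minimal sketch in pure Python (function and variable names are illustrative, not taken from the paper's code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient for two binary masks of equal length.

    pred, truth: flat sequences of 0/1 values (e.g., flattened label maps).
    Returns 2*|pred ∩ truth| / (|pred| + |truth|), in [0, 1].
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total


# Toy example: 4 voxels, two of which agree on the foreground label.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```

A value of 1.0 indicates perfect overlap; the paper's reported means (e.g., 0.88 with augmentation) are averages of this score over the evaluated segmentations.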