A Novel CNN Segmentation Framework Based on Using New Shape and Appearance Features

2018 
To improve the accuracy of segmenting medical images from different modalities, we propose to integrate three types of comprehensive quantitative image descriptors with a deep 3D convolutional neural network (CNN). The descriptors include: (i) the Gibbs energy of a prelearned 7th-order Markov-Gibbs random field (MGRF) model of visual appearance, (ii) a learned adaptive shape prior model, and (iii) a first-order conditional random field model of the visual appearance of regions at each current stage of segmentation. The neural network fuses the computed descriptors, together with the raw image data, to obtain the final voxel-wise probabilities of the goal regions. Quantitative assessment of our framework in terms of Dice similarity coefficients, 95th-percentile bidirectional Hausdorff distances, and percentage volume differences confirms the high accuracy of our model on 95 CT lung images ($98.37\pm0.68\%$, $2.79\pm1.32$ mm, $3.94\pm2.11\%$) and 95 diffusion-weighted kidney MRI scans ($96.65\pm2.15\%$, $4.32\pm3.09$ mm, $5.61\pm3.37\%$), respectively.
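The three evaluation metrics above are standard in medical image segmentation. A minimal sketch of how they are conventionally computed (this is an illustration with assumed definitions, not the authors' evaluation code; the percentage volume difference is taken here relative to the ground-truth volume, one common convention):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks, in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 200.0 * inter / (pred.sum() + gt.sum())

def percent_volume_difference(pred, gt):
    """Absolute volume difference as a percentage of ground-truth volume
    (assumed convention; voxel counts stand in for physical volumes)."""
    vp, vg = pred.astype(bool).sum(), gt.astype(bool).sum()
    return 100.0 * abs(int(vp) - int(vg)) / vg

def hausdorff_95(pts_a, pts_b):
    """95th-percentile bidirectional Hausdorff distance between two
    point sets (e.g., boundary voxel coordinates of the two masks)."""
    d = cdist(pts_a, pts_b)    # all pairwise distances
    d_ab = d.min(axis=1)       # directed distances A -> B
    d_ba = d.min(axis=0)       # directed distances B -> A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

For physical-unit results such as the millimeter Hausdorff distances reported above, the boundary coordinates would first be scaled by the scan's voxel spacing.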