RSegNet: A Joint Learning Framework for Deformable Registration and Segmentation

2021 
Medical image segmentation and registration are two fundamental tasks for analyzing anatomical structures in clinical research. However, deep-learning solutions that exploit the connections between segmentation and registration remain underexplored. This article designs a joint learning framework named RSegNet that realizes concurrent deformable registration and segmentation by minimizing an integrated loss function with three parts: a diffeomorphic registration loss, a segmentation similarity loss, and a dual-consistency supervision loss. The probabilistic diffeomorphic registration branch benefits from the auxiliary segmentations produced by the segmentation branch, achieving anatomical consistency and better deformation regularity through dual-consistency supervision. Simultaneously, segmentation performance is improved by data augmentation based on registrations with well-behaved diffeomorphic guarantees. Experiments on 3-D magnetic resonance images of the human brain demonstrate the effectiveness of our approach. We trained and validated RSegNet on 1000 images and tested its performance on four public datasets, showing that our method yields concurrent improvements in both segmentation and registration compared with separately trained networks. Specifically, our method increases the accuracy of segmentation and registration by 7.0% and 1.4%, respectively, in terms of Dice scores.
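The integrated objective described above combines three terms. A minimal sketch of how such a composite loss might be assembled is shown below; the weighting hyperparameters (`lam_seg`, `lam_dual`) and function names are hypothetical, and a soft Dice loss stands in for the segmentation similarity term (the paper's exact formulation may differ):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|), with eps for numerical stability.
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def rsegnet_total_loss(l_reg, l_seg, l_dual, lam_seg=1.0, lam_dual=1.0):
    # Weighted sum of the three terms named in the abstract:
    # registration loss + segmentation similarity loss + dual-consistency loss.
    # lam_seg and lam_dual are hypothetical trade-off weights.
    return l_reg + lam_seg * l_seg + lam_dual * l_dual
```

For example, a perfectly overlapping prediction gives a Dice loss near zero, and the total loss is simply the weighted sum of the three scalar terms.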