Double U-Nets for Image Segmentation by Integrating the Region and Boundary Information

2021 
The existing CNN-based segmentation methods use object regions alone as training labels, and the potentially useful boundaries annotated by radiologists are not used directly during training. We therefore proposed a framework of double U-Nets that integrates object regions and boundaries for more accurate segmentation. The proposed network consisted of a down-sampling path followed by two symmetric up-sampling paths. The down-sampling path learned the low-level features shared by regions and boundaries, and the two up-sampling paths learned the high-level features of regions and boundaries, respectively. The outputs of the down-sampling path were concatenated with the corresponding outputs of the two up-sampling paths through skip connections. The outputs of the double U-Nets were the predicted probability maps of object regions and boundaries, which were integrated to compute the Dice loss with the corresponding labels. The proposed double U-Nets were evaluated on two datasets: 247 radiographs for the segmentation of lungs, hearts, and clavicles, and 284 radiographs for the segmentation of pelvises. Compared with the baseline U-Net, our double U-Nets improved the mean Dice coefficients and reduced the 90% Hausdorff distances for the "difficult" objects (lower lungs, clavicles, and pelvises), indicating that integrating the regions and boundaries of "difficult" objects improves segmentation compared with using object regions alone. For the "easy" objects (entire lungs and hearts) and the "very difficult" objects (pelvises in lateral and implanted images), however, the integration did not improve segmentation performance.
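To make the described architecture concrete, below is a minimal sketch of a shared down-sampling path feeding two symmetric up-sampling paths (one for regions, one for boundaries), trained with a combined Dice loss. The framework (PyTorch), channel counts, network depth, and the equal weighting of the two Dice terms are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a "double U-Net": one shared encoder, two decoders
# predicting region and boundary probability maps, joint Dice loss.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the usual U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class Decoder(nn.Module):
    """One up-sampling path; skip features come from the shared encoder."""

    def __init__(self, chs=(256, 128, 64)):
        super().__init__()
        self.ups = nn.ModuleList(
            [nn.ConvTranspose2d(c, c // 2, kernel_size=2, stride=2) for c in chs]
        )
        self.blocks = nn.ModuleList([conv_block(c, c // 2) for c in chs])
        self.head = nn.Conv2d(chs[-1] // 2, 1, kernel_size=1)  # 1-channel probability map

    def forward(self, x, skips):
        for up, block, skip in zip(self.ups, self.blocks, reversed(skips)):
            x = up(x)
            x = torch.cat([x, skip], dim=1)  # skip connection from the shared encoder
            x = block(x)
        return torch.sigmoid(self.head(x))


class DoubleUNet(nn.Module):
    """Shared down-sampling path, two decoders: regions and boundaries."""

    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.bottleneck = conv_block(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.region_dec = Decoder()
        self.boundary_dec = Decoder()

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        s3 = self.enc3(self.pool(s2))
        b = self.bottleneck(self.pool(s3))
        skips = [s1, s2, s3]
        return self.region_dec(b, skips), self.boundary_dec(b, skips)


def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss averaged over a batch of probability maps."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()


def combined_loss(region_pred, boundary_pred, region_label, boundary_label):
    """Integrate region and boundary predictions with their labels;
    equal weighting of the two Dice terms is an assumption."""
    return dice_loss(region_pred, region_label) + dice_loss(boundary_pred, boundary_label)
```

A training step under these assumptions would call `model(x)` to obtain the two probability maps and back-propagate `combined_loss` against the region and boundary labels, so the boundary annotations contribute directly to the optimization rather than being discarded.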