Automated deep segmentation of healthy organs in PSMA PET/CT images

2021 
Objectives: As PET imaging of prostate-specific membrane antigen (PSMA) becomes more widely adopted following FDA approval, the role of segmenting healthy organs with high PSMA expression is expected to grow. For example, significant correlations were found between pre-therapy PSMA-PET standardized uptake values (SUVs) in healthy organs and the absorbed dose during therapy (Violet et al., 2019). In addition, segmentations of physiological uptake can be used to better estimate abnormal uptake, which has been shown to correlate with outcome in patients receiving [177Lu]Lu-PSMA-617 radioligand therapy (Seifert et al., 2020). Manual segmentation of organs is very labor-intensive and often not feasible in large research trials. The objective of this work was to evaluate the ability of convolutional neural networks to perform fully automated and robust segmentation and classification of organs with high tracer uptake in PSMA PET images.

Methods: In 100 clinically negative 18F-DCFPyL (PSMA) PET/CT images acquired in a clinical trial (NCT02899312), PSMA-accumulating organs were segmented into 14 classes by experienced nuclear medicine physicians: lacrimal glands (x2), parotid glands (x2), submandibular glands (x2), tubarial gland, sublingual gland, spleen, liver, kidneys (x2), bowel, and bladder. The segmentation was performed in MIM (MIM Software, USA) using a semi-automatic approach that involved manual region selection followed by fixed thresholding (based on SUVmax), clustering, and manual correction where needed. The images were randomly divided into training (N=85), validation (N=5), and test (N=10) sets. A separate convolutional U-Net, implemented in TensorFlow, was trained to segment each organ. The inputs to the U-Nets were 192 x 192-pixel axial slices (3.64 x 3.64 mm/pixel) with two channels corresponding to the PET and CT images, with 128 slices per batch. The target output was a binary mask of the organ of interest. To partially mitigate the class imbalance, we used the recently proposed soft Dice loss function (Li et al., 2020), which was minimized using the Adam algorithm. Two metrics of segmentation quality were computed on the test set: 1) the Dice similarity coefficient, which ranges between 0 and 1 and measures the overlap between the true and predicted segmentations; and 2) the percent difference in total tracer uptake (TTU) between the predicted and reference segmentations. TTU was computed as the integral of standardized uptake values (SUVs) over the segmentation volume. Illustrative code sketches of these steps are given after the Results.

Results: In repeated trials, relatively good segmentations were obtained for 12 of the 14 organ classes (Fig. 1). The mean (N=10, 3 training trials) Dice coefficients were 0.83 for lacrimal glands, 0.90 for parotid glands, 0.83 for submandibular glands, 0.72 for spleen, 0.94 for liver, 0.89 for kidneys, 0.67 for bowel, and 0.86 for bladder (Fig. 2). The standard deviations were on the order of 1-2% of the mean Dice values. The relatively low Dice score for bowel was likely due to the high anatomical variability of this organ. The loss function could not be sufficiently minimized for the tubarial gland and sublingual gland, likely due to their relatively low 18F-DCFPyL uptake. The mean absolute errors of the TTU values were 8.20% for lacrimal glands, 5.30% for parotid glands, 12.5% for submandibular glands, 23.1% for spleen, 3.62% for liver, 11.8% for kidneys, 26.8% for bowel, and 4.52% for bladder (Fig. 2).
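To illustrate the fixed-thresholding step of the semi-automatic reference segmentation, the following is a minimal NumPy sketch of thresholding within a manually selected region. The function name and the 40% SUVmax fraction are illustrative assumptions; the actual threshold values used in the MIM workflow are not specified in this abstract.

```python
import numpy as np

def threshold_segment(suv_volume, roi_mask, fraction=0.4):
    """Fixed-threshold segmentation within a manually selected region:
    keep voxels at or above a fraction of the region's SUVmax.
    The 0.4 fraction is a hypothetical value for illustration only."""
    suv_max = suv_volume[roi_mask].max()
    return roi_mask & (suv_volume >= fraction * suv_max)
```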
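Below is a minimal TensorFlow/Keras sketch of one per-organ 2D U-Net trained with the soft Dice loss and Adam. Only the 192 x 192 input size, the two PET/CT channels, the binary mask output, the soft Dice loss, and the Adam optimizer follow the description above; the network depth, filter counts, and learning rate are illustrative assumptions not given in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def soft_dice_loss(y_true, y_pred, eps=1e-6):
    """Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|), on predicted probabilities."""
    axes = (1, 2, 3)  # sum over spatial dimensions and the mask channel
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    denom = tf.reduce_sum(y_true, axis=axes) + tf.reduce_sum(y_pred, axis=axes)
    return 1.0 - tf.reduce_mean((2.0 * intersection + eps) / (denom + eps))

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(192, 192, 2)):
    """2D U-Net: two input channels (PET, CT), one sigmoid mask output."""
    inputs = layers.Input(shape=input_shape)
    # Encoder (filter counts are assumptions)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)
    p3 = layers.MaxPooling2D()(c3)
    # Bottleneck
    b = conv_block(p3, 256)
    # Decoder with skip connections
    u3 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.Concatenate()([u3, c3]), 128)
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.Concatenate()([u1, c1]), 32)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss=soft_dice_loss)
```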
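The two test-set metrics can be computed from binary masks and the SUV volume as in the following NumPy sketch; the function names and the voxel-volume argument are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(ref_mask, pred_mask):
    """Dice similarity coefficient between two binary masks
    (0 = no overlap, 1 = identical segmentations)."""
    ref, pred = ref_mask.astype(bool), pred_mask.astype(bool)
    denom = ref.sum() + pred.sum()
    return 2.0 * np.logical_and(ref, pred).sum() / denom if denom else 1.0

def ttu_percent_difference(suv_volume, ref_mask, pred_mask, voxel_volume_ml):
    """Percent difference in total tracer uptake (TTU), where TTU is the
    sum of SUVs over the segmented voxels times the voxel volume."""
    ttu_ref = suv_volume[ref_mask.astype(bool)].sum() * voxel_volume_ml
    ttu_pred = suv_volume[pred_mask.astype(bool)].sum() * voxel_volume_ml
    return 100.0 * (ttu_pred - ttu_ref) / ttu_ref
```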
Conclusions: Our results demonstrate the feasibility of using convolutional neural networks to perform automated segmentation of organs in PSMA PET/CT images, for dosimetry calculations and other diagnostic tasks. The bowel and spleen segmentations could likely be improved by adding more subjects to the training set or by using data augmentation techniques (a simple sketch follows below). Future work will focus on evaluating fully 3D U-Nets that segment multiple organs simultaneously, as well as on testing different loss functions that better account for class imbalance.
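As a sketch of the data-augmentation direction mentioned above, simple geometric transforms can be applied identically to the two-channel PET/CT slices and their masks. The flip probability and shift range below are illustrative assumptions, not values from this work.

```python
import numpy as np

def augment_slice(image, mask, rng):
    """Hypothetical paired augmentation for a (192, 192, 2) PET/CT slice
    and its (192, 192) binary mask: a random left-right flip plus a small
    in-plane shift, applied identically to image and mask."""
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]
    dy, dx = rng.integers(-8, 9, size=2)  # up to ~8 pixels (~3 cm at 3.64 mm/pixel)
    image = np.roll(image, (dy, dx), axis=(0, 1))
    mask = np.roll(mask, (dy, dx), axis=(0, 1))
    return image, mask

# Example usage with a seeded generator:
# aug_img, aug_mask = augment_slice(pet_ct_slice, organ_mask, np.random.default_rng(0))
```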