Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D deep supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During the segmentation stage, patches extracted from the newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined through a contour refinement procedure. Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with the manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively. We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
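As a concrete illustration of the stage-wise hybrid loss described above, the sketch below combines a BCE term with a batch-based soft Dice term and sums the result over deeply supervised V-Net outputs. This is a minimal PyTorch sketch, not the authors' code; the stage list, uniform weighting, and tensor names are illustrative assumptions.

```python
# Minimal sketch of a BCE + batch-based Dice hybrid loss applied to
# each deeply supervised stage output; names and weighting are assumed.
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Batch-based soft Dice loss on sigmoid probabilities."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    union = probs.sum() + target.sum()
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def hybrid_loss(logits, target):
    """BCE + Dice on a single supervised output."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce + dice_loss(logits, target)

def deeply_supervised_loss(stage_logits, target, weights=None):
    """Sum the hybrid loss over every supervised stage.

    stage_logits: list of tensors, each upsampled to the target's shape.
    weights: optional per-stage weights (assumed uniform here).
    """
    if weights is None:
        weights = [1.0] * len(stage_logits)
    return sum(w * hybrid_loss(l, target)
               for w, l in zip(weights, stage_logits))
```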
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images containing a needle, disregarding the massive pool of US images without needles. In this paper, we propose a multi-needle detection workflow that treats the images without needles as auxiliary data. Specifically, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, using an enhanced sparse dictionary learning method we developed to integrate the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning (ORDL). Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to determine needle centers. From the obtained centers, regions of interest (ROIs) are constructed by searching for cylinders. Finally, we detect needles by applying the random sample consensus (RANSAC) algorithm per ROI and then locate the tips by finding the sharp intensity drop along the detected axis of every needle. Extensive experiments were conducted on a prostate dataset of 70/21 patients without/with needles. Visualization and quantitative results show the effectiveness of the proposed workflow. Specifically, our approach correctly detects 95% of needles with a tip location error of 1.01 mm on the prostate dataset. This technique could provide accurate needle detection for US-guided high-dose-rate prostate brachytherapy and facilitate the clinical workflow.
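The per-ROI needle-axis step lends itself to a compact illustration. The sketch below fits a 3D line to candidate needle voxels with RANSAC, as the workflow applies within each ROI; the iteration count and inlier tolerance are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of RANSAC line fitting to candidate needle voxels.
import numpy as np

def ransac_line_3d(points, n_iters=500, inlier_tol=1.0, rng=None):
    """Fit a 3D line (point p0, unit direction d) to points via RANSAC.

    points: (N, 3) array of candidate needle voxels (in mm).
    inlier_tol: max point-to-line distance (mm) to count as an inlier.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - points[i]
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (points[i], d)
    return best_model, best_inliers
```

The tip would then be localized by sampling image intensity along the returned direction and finding the sharp drop, as the abstract describes.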
Abstract Purpose Stents are often used as internal surrogates to monitor intrafraction tumor motion during pancreatic cancer radiotherapy. Based on the stent contours generated from planning CT images, the current intrafraction motion review (IMR) system on Varian TrueBeam only provides a tool to verify stent motion visually but lacks quantitative information. The purpose of this study is to develop an automatic stent recognition method for quantitative intrafraction tumor motion monitoring in pancreatic cancer treatment. Methods A total of 535 IMR images from 14 pancreatic cancer patients were retrospectively selected for this study, with the manual contour of the stent on each image serving as the ground truth. We developed a deep learning-based approach that integrates two mechanisms focusing on the features of the segmentation target. Objective attention modeling was integrated into the U-net framework to deal with the optimization difficulties of training a deep network with 2D IMR images and limited training data. A perceptual loss was combined with the binary cross-entropy loss and a Dice loss for supervision. The deep neural network was trained to capture more contextual information to predict binary stent masks. A random-split test was performed, with images of ten patients (71%, 380 images) randomly selected for training, while the remaining four patients (29%, 155 images) were used for testing. Sevenfold cross-validation of the proposed PAUnet on the 14 patients was performed for further evaluation. Results Our stent segmentation results were compared with the manually segmented contours. For the random-split test, the trained model achieved a mean (± standard deviation) stent Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference of 0.96 (±0.01), 1.01 (±0.55) mm, 0.66 (±0.46) mm, and 3.07% (±2.37%), respectively. The sevenfold cross-validation of the proposed PAUnet yielded a mean (± standard deviation) of 0.96 (±0.02), 0.72 (±0.49) mm, 0.85 (±0.96) mm, and 3.47% (±3.27%) for the DSC, HD95, CMD, and volume difference, respectively. Conclusion We developed a novel deep learning-based approach to automatically segment the stent from IMR images, demonstrated its clinical feasibility, and validated its accuracy compared to manual segmentation. The proposed technique could be a useful tool for quantitative intrafraction motion monitoring in pancreatic cancer radiotherapy.
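To make the hybrid supervision concrete, the sketch below combines a perceptual term with the BCE and Dice losses named above. The use of a fixed VGG16 feature extractor (and which layer is tapped) is an assumption for illustration; the abstract does not specify the feature network used for the perceptual loss.

```python
# Minimal sketch of BCE + Dice + perceptual supervision on 2D masks.
# The VGG16 backbone and layer cutoff are assumed, not the paper's choice.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

class PerceptualTerm(torch.nn.Module):
    """Feature-space MSE between predicted and true masks (assumed design)."""
    def __init__(self, layer=9):  # through relu2_2 of VGG16; an assumed cut
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layer].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, pred, true):
        # Masks are single-channel; tile to 3 channels for the VGG input.
        return F.mse_loss(self.features(pred.repeat(1, 3, 1, 1)),
                          self.features(true.repeat(1, 3, 1, 1)))

def supervision_loss(logits, target, perceptual):
    """BCE + Dice + perceptual, mirroring the hybrid supervision above."""
    probs = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    dice = 1 - (2 * (probs * target).sum() + 1e-6) / \
               (probs.sum() + target.sum() + 1e-6)
    return bce + dice + perceptual(probs, target)
```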
Automated 3D breast ultrasound (ABUS) has substantial potential in breast imaging. ABUS is attractive because of its outstanding reproducibility and reliability, especially for screening women with dense breasts. However, due to the high number of slices in 3D ABUS, reading requires lengthy screening time for radiologists, who may miss small and subtle lesions. In this work, we propose a 3D Mask R-CNN method to automatically detect the location of the tumor and simultaneously segment the tumor contour. The performance of the proposed algorithm was evaluated using data from 25 patients with ABUS images and ground truth contours. To further assess the performance of the proposed method, we quantified the intersection over union (IoU), Dice similarity coefficient (DSC), and center of mass distance (CMD) between the ground truth and the segmentation. The resultant IoU, DSC, and CMD were 96% ± 2%, 84% ± 3%, and 1.95 ± 0.89 mm, respectively, demonstrating the high accuracy of tumor detection and 3D volume segmentation of the proposed Mask R-CNN method. We have developed a novel deep learning-based method and demonstrated its potential as a useful tool for computer-aided diagnosis and treatment.
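For reference, the three quoted metrics can be computed from a pair of binary volumes as in the sketch below; the voxel `spacing` argument and its isotropic default are illustrative assumptions.

```python
# Minimal sketch of IoU, DSC, and center-of-mass distance between
# a predicted and a ground truth binary volume.
import numpy as np
from scipy import ndimage

def evaluate(pred, truth, spacing=(1.0, 1.0, 1.0)):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union
    dsc = 2 * inter / (pred.sum() + truth.sum())
    # Center-of-mass distance in millimetres.
    cm_pred = np.array(ndimage.center_of_mass(pred)) * np.array(spacing)
    cm_true = np.array(ndimage.center_of_mass(truth)) * np.array(spacing)
    cmd = np.linalg.norm(cm_pred - cm_true)
    return iou, dsc, cmd
```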
The thyroid gland is a butterfly-shaped organ that belongs to the endocrine system. Abnormalities in the shape and volume of the thyroid can reveal the occurrence of various diseases. Ultrasound (US) imaging is currently the most popular tool for diagnosing thyroid diseases. However, most physicians still base their decisions on computed tomography (CT) because its excellent resolution shows more details of the thyroid and its surroundings. Thyroid CT imaging before surgery is important because it can assist in determining the anatomical distribution of a lesion and its involvement with adjacent organs or tissues. However, precise segmentation of the thyroid relies heavily on the experience of the physician and is very time-consuming. In this work, we propose a 3D deep attention U-Net method to automatically segment the thyroid from CT images. For quantitative evaluation of the segmentation performance of the proposed method, we calculated the Dice similarity coefficient (DSC), sensitivity, specificity, and mean surface distance (MSD) between the ground truth and the automatic segmentation. We demonstrated the high accuracy and robustness of the proposed deep-learning-based segmentation method visually and quantitatively. The resultant DSC, precision, and recall were 85% ± 6%, 86% ± 5%, and 90% ± 5%, respectively.
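A minimal sketch of the attention mechanism such a network typically uses is shown below; it follows the additive attention gate of Oktay et al.'s attention U-Net, with illustrative channel sizes, and is not necessarily the authors' exact architecture.

```python
# Minimal sketch of a 3D additive attention gate: skip-connection
# features are re-weighted by a map computed from the gating signal.
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # `gate` is assumed upsampled to the skip connection's spatial size.
        attn = torch.relu(self.theta(skip) + self.phi(gate))
        attn = torch.sigmoid(self.psi(attn))  # voxel-wise attention map
        return skip * attn                    # re-weighted skip features
```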
Lesion-specific myocardial ischemia is a common heart disorder and a significant cause of cardiovascular morbidity and mortality. It progressively alters left ventricular myocardial thickness. Clinical decision-making is based on fractional flow reserve (FFR), which is invasive, may prolong procedure time, and adds radiation exposure. Although coronary computed tomography angiography (CCTA) has high accuracy and negative predictive value (NPV) in the evaluation of coronary artery disease (CAD), it has low specificity in the diagnosis of lesion-specific myocardial ischemia. We propose a learning method for the assessment of lesion-specific myocardial ischemia using noninvasive CCTA and radiomic analysis. Sixty patients with suspected or known CAD were enrolled. The left ventricular myocardium (LVM) on CCTA was manually segmented. One hundred radiomic features of the LVM were extracted. The most informative and non-redundant features were selected to train a support vector machine (SVM) to differentiate patients with lesion-specific myocardial ischemia from those without (normal). Analysis of the predictions showed that the reported method consistently predicted lesion-specific myocardial ischemia with an accuracy of 0.8550 ± 0.0333 and an area under the receiver operating characteristic curve (AUC) of 0.8952 ± 0.0370. This study shows that LVM radiomic features derived from CCTA data can be used to classify lesion-specific myocardial ischemia. The radiomic features of the LVM from CCTA could be a useful tool for determining lesion-specific myocardial ischemia.
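The feature selection and classification step can be sketched with scikit-learn as below. The univariate ANOVA F-test selector and the value k=10 are illustrative stand-ins (the abstract does not name the selection method), and the synthetic data merely mirrors the 60-patient, 100-feature setting.

```python
# Minimal sketch: select informative radiomic features, train an SVM,
# and report cross-validated AUC. Selector and k are assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 60-patient x 100-feature radiomic matrix.
X, y = make_classification(n_samples=60, n_features=100, n_informative=8,
                           random_state=0)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),  # keep the 10 most informative features
    SVC(kernel="rbf"),
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {auc.mean():.4f} +/- {auc.std():.4f}")
```

Placing scaling and selection inside the pipeline keeps both fitted only on training folds, avoiding information leakage during cross-validation.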
Purpose Segmentation of the left ventricular myocardium (LVM) in coronary computed tomography angiography (CCTA) is important for the diagnosis of cardiovascular diseases. Due to poor image contrast and large variations in intensity and shape, LVM segmentation in CCTA is a challenging task. The purpose of this work is to develop a region-based deep learning method to automatically detect and segment the LVM solely based on CCTA images. Methods We developed a 3D deeply supervised U-Net, which incorporates attention gates (AGs) to focus on the myocardial boundary structures, to segment LVM contours from CCTA. The deep attention U-Net (DAU-Net) was trained on the patients' CCTA images, with a manual contour-derived binary mask used as the learning-based target. The network was supervised by a hybrid loss function, which combined logistic loss and Dice loss to simultaneously measure the similarities and discrepancies between the predictions and the training data. To evaluate the accuracy of the segmentation, we retrospectively investigated 100 patients with suspected or confirmed coronary artery disease (CAD). The LVM volume was segmented by the proposed method and compared with physician-approved clinical contours. Quantitative metrics used were the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), center of mass distance (CMD), and volume difference (VOD). Results The proposed method created contours in very good agreement with the ground truth contours. Our proposed segmentation approach was benchmarked primarily using fivefold cross-validation. Model predictions correlated and agreed well with the manual contours. The mean DSC of the contours delineated by our method was 91.6% among all patients. The resultant HD was 6.840 ± 4.410 mm. The proposed method also resulted in a small CMD (1.058 ± 1.245 mm) and VOD (1.640 ± 1.777 cc). Among all patients, the MSD and RMSD between the ground truth and the LVM volume produced by the proposed method were 0.433 ± 0.209 mm and 0.724 ± 0.375 mm, respectively. Conclusions We developed a novel deep learning-based approach for the automated segmentation of the LVM on CCTA images. We demonstrated the high accuracy of the proposed learning-based segmentation method through comparison with ground truth contours of 100 clinical patient cases using six quantitative metrics. These results show the potential of using automated LVM segmentation for computer-aided diagnosis of CAD in the clinical setting.
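For completeness, the surface-distance metrics reported above (MSD, RMSD, HD) can be computed from distance transforms as in the sketch below; voxel spacing is simplified to isotropic units for illustration.

```python
# Minimal sketch of symmetric surface-distance metrics between two
# binary volumes, using morphological surfaces and distance transforms.
import numpy as np
from scipy import ndimage

def surface_distances(a, b):
    """Distances from the surface voxels of `a` to the surface of `b`."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Distance of every voxel to b's surface, sampled on a's surface.
    dist_to_b = ndimage.distance_transform_edt(~surf_b)
    return dist_to_b[surf_a]

def msd_rmsd_hd(pred, truth):
    d_pt = surface_distances(pred, truth)
    d_tp = surface_distances(truth, pred)
    d = np.concatenate([d_pt, d_tp])       # symmetric distance set
    msd = d.mean()                         # mean surface distance
    rmsd = np.sqrt((d ** 2).mean())        # residual mean square distance
    hd = max(d_pt.max(), d_tp.max())       # Hausdorff distance
    return msd, rmsd, hd
```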
Purpose Automated breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis since it provides complementary information to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step of breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation using three-dimensional (3D) ABUS. Methods For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks: a backbone, a region proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of whom 40 were used for fivefold cross-validation and 30 for a hold-out test. The agreement between the automatic breast tumor segmentations and the manual contours was quantified by (i) six metrics, including Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); and (ii) Pearson correlation analysis and Bland-Altman analysis. Results The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for the cross-validation and hold-out tests, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm. The volumetric difference (mean and ± 1.96 standard deviation) was 0.47 cc ([−0.77, 1.71]) for the cross-validation and 0.23 cc ([−0.23, 0.69]) for the hold-out test. Conclusion We developed a novel Mask scoring R-CNN approach for the automated segmentation of breast tumors in ABUS images and demonstrated its accuracy for breast tumor segmentation. Our learning-based method can potentially assist the clinical CAD of breast cancer using 3D ABUS imaging.
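The Bland-Altman portion of the volumetric evaluation can be sketched as below; `auto_cc` and `manual_cc` are assumed arrays of per-patient tumor volumes in cc.

```python
# Minimal sketch of a Bland-Altman agreement analysis between automatic
# and manual segmentation volumes, with bias and 1.96-SD limits.
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(auto_cc, manual_cc):
    mean = (auto_cc + manual_cc) / 2.0
    diff = auto_cc - manual_cc
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # limits of agreement half-width
    plt.scatter(mean, diff)
    for y in (bias, bias - loa, bias + loa):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean volume (cc)")
    plt.ylabel("Difference, auto - manual (cc)")
    return bias, (bias - loa, bias + loa)
```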
Abstract The tumor ecosystem, with its heterogeneous cellular compositions and the tumor microenvironment, has increasingly become the focus of cancer research in recent years. The extracellular matrix (ECM), the major component of the tumor microenvironment, and its interactions with the tumor cells and stromal cells have also received tremendously increased attention. Like the other components of the tumor microenvironment, the ECM in solid tumors differs significantly from that in normal organs and tissues. We review recent studies of the complex roles the tumor ECM plays in cancer progression, from tumor initiation and growth to angiogenesis and invasion. We highlight that the biomolecular, biophysical, and mechanochemical interactions between the ECM and cells not only regulate the steps of cancer progression but also affect the efficacy of systemic cancer treatment. We further discuss strategies to target and modify the tumor ECM to improve cancer therapy.