Echocardiography plays a central role in the diagnosis of infective endocarditis (IE). In recent years, additional imaging techniques have begun to challenge the conventional approach. We present a case in which transthoracic and transoesophageal echocardiography (TTE/TOE) for suspected IE failed to detect an extensive periannular abscess, later identified by
Abstract PURPOSE Translation of AI algorithms into clinical practice is significantly limited by the lack of large hospital-based datasets with expert annotations. Current methods for generating annotated imaging data are hampered by inefficient imaging data transfer, complicated annotation software, and the time required for experts to generate ground truth information. We incorporated AI tools for auto-segmentation of gliomas into the PACS used at our institution for reading clinical studies and developed a workflow for annotating images and producing volumetric segmentations within the neuroradiology clinical workflow. MATERIALS AND METHODS 1990 patients from the Yale Radiation Oncology Registry (2012-2019) were identified. Segmentations were performed using a UNETR algorithm trained on BraTS 2021 and an internal dataset of manually segmented tumors. Segmentations were validated by a board-certified neuroradiologist, and PyRadiomics natively embedded in the PACS was used for feature extraction. RESULTS Over 7 months (05/2021 - 08/2021 and 03/2022 - 05/2022), segmentations and annotations were performed in 835 patients (322 female, 467 male, 46 unknown; mean age 53 years). The dataset includes 275 grade 4, 54 grade 3, 100 grade 2, and 31 grade 1 gliomas, with 375 of unknown grade. Molecular subtypes include IDH (113 mutated, 498 wildtype, 2 equivocal, 222 unknown), 1p/19q (87 deleted or co-deleted, 122 intact, 626 unknown), MGMT promoter (182 methylated, 95 partially methylated, 275 unmethylated, 283 unknown), EGFR (76 amplified, 177 not amplified, 582 unknown), ATRX (40 mutated, 157 retained, 638 unknown), Ki-67 (616 known, 219 unknown), and p53 (549 known, 286 unknown). Classification of gliomas into grade 3/4 versus grade 1/2 yielded an AUC of 0.85. CONCLUSION We developed a method for volumetric segmentation, feature extraction, and classification that is easily incorporated into the neuroradiology workflow. These tools allowed us to annotate over 100 gliomas per month, establishing a proof of concept for rapid development of an annotated imaging database for AI applications.
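The abstract above mentions PyRadiomics natively embedded in the PACS for feature extraction. As a hedged illustration of that step only, the sketch below uses the open-source pyradiomics package to extract volumetric radiomic features from an MRI and its tumor segmentation mask; the file names and label value are hypothetical placeholders, and this is not the institution's PACS-embedded implementation.

```python
# Minimal sketch: extracting radiomic features from an MRI volume and its
# tumor segmentation with the open-source pyradiomics package.
# File names below are hypothetical placeholders.
from radiomics import featureextractor

# Default settings enable first-order, shape, and texture feature classes.
extractor = featureextractor.RadiomicsFeatureExtractor()

# Image and mask are NIfTI volumes; the mask labels the segmented tumor (label=1).
features = extractor.execute("t1_post.nii.gz", "tumor_mask.nii.gz", label=1)

for name, value in features.items():
    if not name.startswith("diagnostics_"):  # skip metadata entries
        print(f"{name}: {value}")
```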
Abstract Deep-learning methods for auto-segmenting brain images process either one slice of the image (2D), five consecutive slices (2.5D), or the entire image volume (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) for segmenting brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches gave the highest, intermediate, and lowest Dice scores, respectively, across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory than 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy; however, they require more computational memory than 2.5D or 2D models.
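As a minimal illustration of the three input formats compared above (2D, 2.5D, and 3D), the sketch below shows how a single MRI volume can be reshaped into each format with NumPy; the array sizes are hypothetical and this is not the authors' training pipeline.

```python
# Minimal sketch of how the same MRI volume can be presented to a model as
# 2D slices, 2.5D stacks of five consecutive slices, or the full 3D volume.
import numpy as np

volume = np.random.rand(160, 192, 192)  # hypothetical (slices, height, width) MRI

# 2D: each axial slice is an independent training sample.
samples_2d = [volume[i] for i in range(volume.shape[0])]

# 2.5D: five consecutive slices form one multi-channel sample centered on slice i.
samples_25d = [volume[i - 2:i + 3] for i in range(2, volume.shape[0] - 2)]

# 3D: the whole volume (or a 3D patch of it) is a single sample.
sample_3d = volume[np.newaxis]  # add a channel axis -> shape (1, 160, 192, 192)

print(len(samples_2d), samples_25d[0].shape, sample_3d.shape)
```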
Abstract Background and Purpose Current auto-segmentation models of brain structures, UNets and nnUNets, have limitations, including the inability to segment images that are not well-represented during training and a lack of computational efficiency. 3D capsule networks (CapsNets) have the potential to address these limitations. Methods We used 3430 brain MRIs, acquired in a multi-institutional study, to train and validate our models. We compared our CapsNet with standard alternatives, UNets and nnUNets, based on segmentation efficacy (Dice scores), segmentation performance when the image is not well-represented in the training data, performance when the training data are limited, and computational efficiency, including required memory and computational speed. Results The CapsNet segmented the third ventricle, thalamus, and hippocampus with Dice scores of 95%, 94%, and 92%, respectively, which were within 1% of the Dice scores of UNets and nnUNets. The CapsNet significantly outperformed UNets in segmenting images that are not well-represented in the training data, with Dice scores 30% higher. The computational memory required for the CapsNet is less than a tenth of the memory required for UNets or nnUNets. The CapsNet is also more than 25% faster to train than UNets and nnUNets. Conclusion We developed and validated a CapsNet that is effective in segmenting brain images, can segment images that are not well-represented in the training data, and is computationally efficient compared with alternatives.
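The comparisons above rely on the Dice score. The following minimal sketch shows how a Dice score can be computed for binary segmentation masks; the masks are synthetic and the function is an illustrative stand-in, not the evaluation code used in the study.

```python
# Minimal sketch of the Dice score used to compare segmentations, assuming
# binary (0/1) arrays for prediction and ground truth of the same shape.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2 * |intersection| / (|pred| + |truth|), ranging from 0 to 1."""
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Hypothetical example: two overlapping cubic masks.
pred = np.zeros((64, 64, 64), dtype=np.uint8)
truth = np.zeros((64, 64, 64), dtype=np.uint8)
pred[20:40, 20:40, 20:40] = 1
truth[25:45, 25:45, 25:45] = 1
print(f"Dice: {dice_score(pred, truth):.2f}")
```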
Abstract When an auto-segmentation model needs to be applied to a new segmentation task, multiple decisions must be made about the pre-processing steps and training hyperparameters. These decisions are cumbersome and require a high level of expertise. To remedy this problem, we developed self-configuring capsule networks (scCapsNets) that scan the training data as well as the available computational resources and then self-configure most of their design options with minimal user input. We showed that our self-configuring capsule network can segment brain tumor components, namely the edema and enhancing core of brain tumors, with high accuracy. Our model outperforms UNet-based models in the absence of data augmentation, is faster to train, and is computationally more efficient compared to UNet-based models.
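As a hedged sketch of the kind of self-configuration described above, the snippet below chooses a 3D patch size and batch size from the training image shapes and the available GPU memory; the heuristic, the memory constant, and the example shapes are assumptions for illustration and do not reproduce the scCapsNet implementation.

```python
# Illustrative sketch (not the authors' scCapsNet code) of a self-configuration
# step: choosing a 3D patch size and batch size from the training images and
# the available GPU memory. The heuristic and memory constant are assumptions.
import numpy as np

def configure(image_shapes, gpu_memory_gb, bytes_per_voxel_estimate=4_000):
    """Pick a patch size no larger than the median image and shrink it to fit memory."""
    median_shape = np.median(np.array(image_shapes), axis=0).astype(int)
    patch = np.minimum(median_shape, 128)          # cap patch edge length
    while np.prod(patch) * bytes_per_voxel_estimate > gpu_memory_gb * 1e9:
        patch = np.maximum(patch // 2, 32)         # shrink until it fits
        if np.all(patch == 32):
            break
    batch_size = max(1, int(gpu_memory_gb * 1e9 // (np.prod(patch) * bytes_per_voxel_estimate)))
    return tuple(patch.tolist()), batch_size

shapes = [(155, 240, 240), (160, 224, 224), (150, 256, 256)]  # hypothetical MRIs
print(configure(shapes, gpu_memory_gb=12))
```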
Segmenting medical images is critical to facilitating both patient diagnoses and quantitative research. A major limiting factor is the lack of labeled data, as obtaining expert annotations for each new set of imaging data or task can be expensive, labor intensive, and inconsistent among annotators. To address this, we present CUTS (Contrastive and Unsupervised Training for multi-granular medical image Segmentation), a fully unsupervised deep learning framework for medical image segmentation that better utilizes the vast majority of imaging data that are not labeled or annotated. CUTS works by leveraging a novel two-stage approach. First, it produces an image-specific embedding map via an intra-image contrastive loss and a local patch reconstruction objective. Second, these embeddings are partitioned at dynamic levels of granularity that correspond to the data topology. Ultimately, CUTS yields a series of coarse-to-fine-grained segmentations that highlight image features at various scales. We apply CUTS to retinal fundus images and two types of brain MRI images in order to delineate structures and patterns at different scales, providing distinct information relevant for clinicians. When evaluated against predefined anatomical masks at a given granularity, CUTS demonstrates improvements ranging from 10% to 200% in Dice coefficient and Hausdorff distance compared with existing unsupervised methods. Further, CUTS performs on par with the latest Segment Anything Model, which was pre-trained in a supervised fashion on 11 million images and 1.1 billion masks. In summary, with CUTS we demonstrate that medical image segmentation can be effectively solved without relying on large, labeled datasets or vast computational resources.
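The first stage of CUTS uses an intra-image contrastive loss. The PyTorch sketch below shows a generic InfoNCE-style contrastive loss on patch embeddings to convey the idea; the embedding shapes and the stand-in "augmented view" are hypothetical, and this is not the CUTS implementation.

```python
# Minimal PyTorch sketch of an intra-image contrastive (InfoNCE-style) loss on
# patch embeddings, the general idea behind the first stage described above.
import torch
import torch.nn.functional as F

def info_nce(anchor_emb, positive_emb, temperature=0.1):
    """anchor_emb, positive_emb: (num_patches, dim) embeddings of two views of
    the same image patches; matching rows are treated as positive pairs."""
    a = F.normalize(anchor_emb, dim=1)
    p = F.normalize(positive_emb, dim=1)
    logits = a @ p.t() / temperature              # scaled cosine similarities
    targets = torch.arange(a.size(0))             # i-th anchor matches i-th positive
    return F.cross_entropy(logits, targets)

anchor = torch.randn(256, 128)                        # 256 patches, 128-dim embeddings
positive = anchor + 0.05 * torch.randn_like(anchor)   # stand-in for an augmented view
print(info_nce(anchor, positive).item())
```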
The molecular profile of gliomas is a prognostic indicator for survival, driving clinical decision-making for treatment. Pathology-based molecular diagnosis is challenging because of the invasiveness of the procedure, exclusion from neoadjuvant therapy options, and the heterogeneous nature of the tumor.
PURPOSE:
We performed a systematic review of algorithms that predict molecular subtypes of gliomas from MR imaging.
DATA SOURCES:
Data sources were Ovid Embase, Ovid MEDLINE, the Cochrane Central Register of Controlled Trials, and Web of Science.
STUDY SELECTION:
Per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 12,318 abstracts were screened and 1323 underwent full-text review, with 85 articles meeting the inclusion criteria.
DATA ANALYSIS:
We compared prediction results from different machine learning approaches for predicting molecular subtypes of gliomas. Bias analysis was conducted for each study, following the Prediction model Risk Of Bias Assessment Tool (PROBAST) guidelines.
DATA SYNTHESIS:
Isocitrate dehydrogenase mutation status was reported with an area under the curve and accuracy of 0.88 and 85% in internal validation data sets and 0.86 and 87% in limited external validation data sets, respectively. For the prediction of O6-methylguanine-DNA methyltransferase promoter methylation, the area under the curve and accuracy were 0.79 and 77% in internal validation data sets and 0.89 and 83% in limited external validation data sets, respectively. PROBAST scoring demonstrated a high risk of bias in all articles.
LIMITATIONS:
The small number of studies with external validation and the presence of studies with incomplete data resulted in unequal data analysis. Comparing only the best prediction pipeline of each study may also introduce bias.
CONCLUSIONS:
While high area under the curve and accuracy values for the prediction of molecular subtypes of gliomas are reported in internal and external validation data sets, the limited use of external validation and the increased risk of bias in all articles may present obstacles to clinical translation of these techniques.
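The review above pools area under the curve and accuracy for binary molecular predictions such as IDH mutation status. The sketch below shows how these two metrics are typically computed with scikit-learn; the labels and prediction scores are made up for illustration.

```python
# Minimal sketch of the two metrics pooled in this review (AUC and accuracy)
# for a hypothetical binary IDH-mutation classifier; data are made up.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                     # 1 = IDH-mutant, 0 = wildtype
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.35])   # model probabilities

auc = roc_auc_score(y_true, y_score)
acc = accuracy_score(y_true, (y_score >= 0.5).astype(int))       # threshold at 0.5
print(f"AUC = {auc:.2f}, accuracy = {acc:.0%}")
```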
Objective This study aims to investigate the association between specific imaging parameters, namely the Evans index (EI) and ventricular volume (VV), and the variation in gait speed observed in patients with idiopathic normal pressure hydrocephalus (iNPH) before and after cerebrospinal fluid (CSF) removal via lumbar drain (LD). Furthermore, it seeks to identify which imaging parameters are the most reliable predictors of significant improvement in gait speed post procedure. Methods In this retrospective analysis, we measured the gait speed of 35 patients diagnosed with iNPH before and after they underwent CSF removal. Before LD, brain images were segmented to calculate the EI and VV. We explored the relationship between these imaging parameters and the improvement in gait speed following CSF removal. Patients were divided into two categories based on the degree of improvement in gait speed, and we compared the imaging parameters between these groups. Receiver operating characteristic (ROC) curve analysis was employed to determine the optimal imaging parameter thresholds predictive of gait speed enhancement. Finally, we assessed the predictive accuracy of these thresholds for identifying patients likely to experience improved gait speed post-LD. Results Following CSF removal, participants showed a significant improvement in gait speed on a paired-samples t-test (p-value = 0.0017). A moderate positive correlation was observed between the imaging parameters (EI and VV) and the improvement in gait speed post-LD. Significant differences were detected between the two patient groups regarding EI, VV, and a composite score (test statistics = 3.1, 2.8, and 2.9, respectively; p-value < 0.01). ROC curve analysis identified optimal thresholds of 0.39 for the EI and 110.78 cm³ for the VV. Classification based on these thresholds yielded significant associations between patients displaying favorable imaging parameters and those demonstrating improved gait speed post-LD, with chi-square (χ²) values of 8.5 and 7.1, respectively, and p-values < 0.01. Furthermore, these imaging parameter thresholds had a 74% accuracy rate in predicting which patients would improve post-LD. Conclusion The study demonstrates that VV and the EI can significantly predict gait speed improvement after LD in patients with iNPH.
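For context on the two imaging parameters used above, the sketch below computes the Evans index (maximal frontal-horn width divided by the maximal internal skull diameter on the same axial slice) and a voxel-count-based ventricular volume, then applies the study's reported thresholds (0.39 and 110.78 cm³); the measurements themselves are hypothetical.

```python
# Minimal sketch of the two imaging parameters discussed above; the input
# measurements are hypothetical, and only the thresholds come from the study.
def evans_index(frontal_horn_width_mm: float, internal_skull_diameter_mm: float) -> float:
    """Evans index = maximal frontal-horn width / maximal internal skull diameter."""
    return frontal_horn_width_mm / internal_skull_diameter_mm

def ventricular_volume_cm3(num_ventricle_voxels: int, voxel_volume_mm3: float) -> float:
    """Ventricular volume from a segmentation: voxel count times voxel volume."""
    return num_ventricle_voxels * voxel_volume_mm3 / 1000.0  # mm^3 -> cm^3

ei = evans_index(42.0, 102.0)                    # hypothetical measurements in mm
vv = ventricular_volume_cm3(120_000, 1.0)        # hypothetical 1 mm^3 voxels
# Apply the study's reported thresholds (EI 0.39, VV 110.78 cm^3).
print(f"EI = {ei:.2f} ({'above' if ei > 0.39 else 'below'} threshold), "
      f"VV = {vv:.1f} cm^3 ({'above' if vv > 110.78 else 'below'} threshold)")
```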
Objectives: Despite growing enthusiasm surrounding the utility of clinical informatics to improve cancer outcomes, data availability remains a persistent bottleneck to progress. Difficulty combining data containing protected health information often limits our ability to aggregate larger, more representative datasets for analysis. With the rise of machine learning techniques that require increasing amounts of clinical data, these barriers have been magnified. Here, we review recent efforts within clinical informatics to address issues related to safely sharing cancer data. Methods: We carried out a narrative review of clinical informatics studies related to sharing protected health data within cancer studies published from 2018 to 2022, with a focus on domains such as decentralized analytics, homomorphic encryption, and common data models. Results: Clinical informatics studies that investigated cancer data sharing were identified, with a particular focus on decentralized analytics, homomorphic encryption, and common data models. Decentralized analytics has been prototyped across genomic, imaging, and clinical data, with the most advances in diagnostic image analysis. Homomorphic encryption was most often employed on genomic data and less often on imaging and clinical data. Common data models primarily involve clinical data from the electronic health record. Although all three methods have a robust research base, few studies demonstrate wide-scale implementation. Conclusions: Decentralized analytics, homomorphic encryption, and common data models represent promising solutions to improve cancer data sharing. Promising results thus far have been limited to smaller settings. Future studies should focus on evaluating the scalability and efficacy of these methods across clinical settings of varying resources and expertise.
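As a toy illustration of the decentralized-analytics approach discussed above, the sketch below implements a federated-averaging loop in which each site updates model weights locally and only the weights, never patient-level data, are aggregated centrally; the "training" step and the sites are hypothetical stand-ins.

```python
# Toy illustration of decentralized analytics via federated averaging: each
# hospital trains locally and only model weights (never patient-level data)
# are shared and averaged. The local update is a stand-in, not a real model.
import numpy as np

def local_update(weights: np.ndarray, site_data: np.ndarray) -> np.ndarray:
    # Stand-in for a local training step on one site's private data.
    return weights + 0.01 * site_data.mean(axis=0)

global_weights = np.zeros(5)
site_datasets = [np.random.randn(100, 5) for _ in range(3)]  # 3 hypothetical sites

for round_ in range(10):
    local_weights = [local_update(global_weights, data) for data in site_datasets]
    global_weights = np.mean(local_weights, axis=0)  # server aggregates weights only

print(global_weights)
```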
This study aimed to assess whether quantitative diffusion magnetic resonance imaging analysis would improve prognostication of individual patients with severe traumatic brain injury. We analyzed images of 30 healthy controls to extract normal fractional anisotropy ranges along 18 white-matter tracts. Then, we analyzed images of 33 patients, compared their fractional anisotropy values with the normal ranges extracted from controls, and computed the severity of injury to white-matter tracts. We also asked 2 neuroradiologists to rate the severity of injury to different brain regions on fluid-attenuated inversion recovery and susceptibility-weighted imaging. Finally, we built 3 models: (1) fed with neuroradiologists' ratings, (2) fed with white-matter injury measures, and (3) fed with both input types. The 3 models respectively predicted survival at 1 year with accuracies of 70%, 73%, and 88%. The accuracy with both input types was significantly better (P < 0.05). Quantifying the severity of injury to white-matter tracts complements qualitative imaging findings and improves outcome prediction in severe traumatic brain injury.
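As a minimal sketch of the white-matter comparison described above, the snippet below flags tracts in which a patient's fractional anisotropy falls below the control range (here mean minus two standard deviations) and summarizes the injured-tract fraction; all values are synthetic and the cutoff is an assumption for illustration.

```python
# Minimal sketch: compare a patient's per-tract fractional anisotropy (FA) with
# control ranges and summarize injury burden. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
control_fa = rng.normal(0.55, 0.04, size=(30, 18))   # 30 controls x 18 tracts
patient_fa = rng.normal(0.48, 0.06, size=18)          # one patient, 18 tracts

# Normal range per tract, here taken as mean - 2 SD of controls (an assumption).
lower_bound = control_fa.mean(axis=0) - 2 * control_fa.std(axis=0)
injured_tracts = patient_fa < lower_bound
injury_fraction = injured_tracts.mean()

print(f"Injured tracts: {injured_tracts.sum()} of 18 ({injury_fraction:.0%})")
```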