Endovascular coiling (EC) is a vital procedure for treating intracranial aneurysms (IA), but a common complication is aneurysm recurrence due to coil compaction, in which the implanted coil fails to isolate the IA from the cerebrovascular circulation. Such an event may lead to devastating hemorrhages, so frequent follow-up imaging with Digital Subtraction Angiography (DSA) is critical. However, DSA is invasive, expensive, and not widely available. Recently, it has been shown that skull X-rays can serve as a proxy. In this work, we present a new pipeline for the semi-automatic evaluation of coil compaction from X-ray images. Our pipeline combines coil segmentation with GrabCut and an autoencoder that learns image embeddings with a location-sensitive loss function. This approach generates efficient representations without training on image labels. We show that the image embeddings capture information relevant to coil compaction and that a simple distance measure between embeddings outperforms other baseline methods, including a Siamese network.
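The embedding-distance idea above can be sketched in a few lines; note that the weighted loss form and the `compaction_score` helper below are hypothetical simplifications for illustration, not the paper's exact formulation:

```python
import numpy as np

def location_weighted_mse(x, x_hat, weight_map):
    """Reconstruction loss where each pixel's squared error is scaled by a
    per-location weight (one plausible form of a location-sensitive loss;
    the paper's exact loss may differ)."""
    return float(np.mean(weight_map * (x - x_hat) ** 2))

def compaction_score(emb_baseline, emb_followup):
    """Euclidean distance between embeddings of a baseline and a follow-up
    X-ray; a larger distance suggests coil compaction (hypothetical helper)."""
    return float(np.linalg.norm(emb_baseline - emb_followup))
```

In practice the threshold separating stable coils from compacted ones would be chosen on a validation set.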
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder and one of the leading sources of morbidity and mortality in the aging population. Its cardinal symptoms include memory and executive function impairment that profoundly alter a patient’s ability to perform activities of daily living. People with mild cognitive impairment (MCI) exhibit many of the early clinical symptoms of patients with AD and have a high chance of converting to AD in their lifetime. Diagnostic criteria rely on clinical assessment and brain magnetic resonance imaging (MRI), and many groups are working to automate this process to improve the clinical workflow. Current computational approaches focus on predicting whether a subject with MCI will convert to AD in the future. To our knowledge, limited attention has been given to developing automated computer-assisted diagnosis (CAD) systems that can provide an AD conversion diagnosis in MCI patient cohorts followed longitudinally. This matters because such CAD systems could be used by primary care providers to monitor patients with MCI. The method outlined in this paper addresses this gap with a computationally efficient pre-processing and prediction pipeline designed to recognize patterns associated with AD conversion. We propose a new approach that leverages longitudinal data readily acquired in a clinical setting (e.g., T1-weighted magnetic resonance images, cognitive tests, and demographic information) to identify the AD conversion point in MCI subjects with AUC = 84.7. In contrast, cognitive tests and demographics alone achieved AUC = 80.6, a statistically significant difference (n = 669, p < 0.05). We designed a convolutional neural network that is computationally efficient and requires only linear registration between imaging time points.
The model architecture combines Attention and Inception architectures while utilizing both cross-sectional and longitudinal imaging and clinical information. Additionally, the top brain regions and clinical features that drove the model’s decision were investigated. These included the thalamus, caudate, planum temporale, and the Rey Auditory Verbal Learning Test. We believe our method could be easily translated into the healthcare setting as an objective AD diagnostic tool for patients with MCI.
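As a rough illustration of how attention can fuse cross-sectional imaging features with clinical information, the sketch below weights per-branch feature vectors with learned scores; `attention_fuse` and its inputs are assumptions for illustration, and the paper's Attention/Inception architecture is substantially more involved:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_fuse(branch_features, branch_scores):
    """Attention-weighted fusion: each branch (e.g., an imaging time point
    or a clinical-feature encoder) contributes its feature vector in
    proportion to its softmaxed score (hypothetical simplification)."""
    w = softmax(branch_scores)
    return (w[:, None] * branch_features).sum(axis=0)
```

With equal scores, the fusion reduces to a plain average of the branch features; unequal scores let the model emphasize the more informative modality or time point.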
Alzheimer's disease (AD) is the most common neurodegenerative disorder worldwide and one of the leading sources of morbidity and mortality in the aging population. A long preclinical period is followed by mild cognitive impairment (MCI), and both clinical presentation and the rate of decline are variable. Progression monitoring remains a challenge in AD, and it is imperative to create better tools to quantify this progression. Brain magnetic resonance imaging (MRI) is commonly used for patient assessment. However, current analysis approaches require strong a priori assumptions about the regions of interest used, as well as complex preprocessing pipelines that include computationally expensive non-linear registrations and iterative surface deformations. These preprocessing steps are composed of many stacked processing layers, so any error or bias in an upstream layer propagates throughout the pipeline. Failures or biases in non-linear subject registration and the subjective choice of atlases for specific regions are common in medical neuroimaging analysis and may hinder the translation of many approaches to clinical practice. Here we propose a data-driven method based on an extension of a deep learning architecture, DeepSymNet, that identifies longitudinal changes without relying on prior brain regions of interest, an atlas, or non-linear registration steps. Our approach is trained end-to-end and learns how a patient's brain structure changes between two time points directly from the raw voxels. We compare our approach with FreeSurfer longitudinal pipelines and voxel-based methods using the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model identifies AD progression with results comparable to existing FreeSurfer longitudinal pipelines, without the need for predefined regions of interest, non-rigid registration algorithms, or iterative surface deformation, at a fraction of the processing time.
When compared to other voxel-based methods that share some of the same benefits, our model showed a statistically significant performance improvement. Additionally, we show that our model can differentiate between healthy subjects and patients with MCI. The model's decisions were investigated using the epsilon layer-wise relevance propagation (ε-LRP) algorithm, which revealed that predictions were driven by the pallidum, putamen, and superior temporal gyrus. Our novel longitudinal deep learning approach has the potential to diagnose patients earlier and enable new computational tools to monitor neurodegeneration in clinical practice.
In this paper, we studied the association between changes in structural brain volumes and the potential development of Alzheimer's disease (AD). Using a simple abstraction technique, we converted each subject's regional cortical and subcortical volume differences across two time points into a graph. We then obtained substructures of interest with a graph decomposition algorithm in order to extract pivotal nodes via multi-view feature selection. Extensive experiments using robust classification frameworks were conducted to evaluate the performance of the brain substructures obtained under different thresholds. The results indicate that compact substructures acquired by examining the differences between patient groups are sufficient to discriminate between AD and healthy controls, with an area under the receiver operating characteristic curve of 0.72.
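The graph abstraction step could, under one plausible reading, look like the sketch below; the relative-change thresholding rule and the fully connected edge set are assumptions for illustration, not the paper's exact construction:

```python
def build_difference_graph(vol_t1, vol_t2, threshold=0.05):
    """Build a graph from regional volume differences between two time
    points: nodes are regions whose relative volume change exceeds the
    threshold; edges connect every pair of such regions (hypothetical
    abstraction; the paper's exact construction may differ).

    vol_t1, vol_t2: dicts mapping region name -> volume at each time point.
    Returns (nodes, edges).
    """
    nodes = [r for r in vol_t1
             if abs(vol_t2[r] - vol_t1[r]) / vol_t1[r] > threshold]
    edges = {(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]}
    return nodes, edges
```

Substructures of such a graph (e.g., cliques or dense subgraphs found by a decomposition algorithm) would then serve as candidate features for classification.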
Alzheimer’s disease (AD) varies considerably, both cognitively (symptoms, test findings, and rate of progression) and neuroradiologically (atrophy on magnetic resonance imaging, MRI). We hypothesized that an unbiased analysis of AD progression, in terms of clinical and MRI features, would reveal a number of AD phenotypes. Our objective was to develop and apply a computational method for multi-modal analysis of changes in cognitive scores and MRI volumes to test for the existence of multiple AD phenotypes. In this retrospective cohort study of 857 subjects from the AD (n = 213), MCI (n = 322), and control (CN, n = 322) groups, we used structural MRI data and neuropsychological assessments to develop a novel computational phenotyping method that groups brain regions from MRI and subsets of neuropsychological assessments in an unbiased fashion. The phenotyping method is based on coupled nonnegative matrix factorization (C-NMF). The method found four phenotypes with different combinations and progressions of neuropsychologic and neuroradiologic features. Identifying distinct AD phenotypes could help explain why only a subset of AD patients typically responds to any single treatment and, in turn, help target treatments to the responsive phenotypes.
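Coupled NMF with a shared subject factor can be sketched with standard multiplicative updates: two data views (e.g., regional MRI volumes and cognitive scores) are factored jointly through one subject-by-phenotype matrix W. The function below is a minimal sketch under those assumptions, not the paper's exact C-NMF formulation:

```python
import numpy as np

def coupled_nmf(X1, X2, k, n_iter=200, eps=1e-9, seed=0):
    """Jointly factor X1 ≈ W @ H1 and X2 ≈ W @ H2 with a shared
    nonnegative subject factor W (subjects x k phenotypes), using
    multiplicative updates. Equivalent to standard NMF on the
    column-concatenated matrix [X1 X2]; a simplification of C-NMF."""
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    W = rng.random((n, k)) + eps
    H1 = rng.random((k, X1.shape[1])) + eps
    H2 = rng.random((k, X2.shape[1])) + eps
    for _ in range(n_iter):
        # Update each view's loading matrix with W fixed.
        H1 *= (W.T @ X1) / (W.T @ W @ H1 + eps)
        H2 *= (W.T @ X2) / (W.T @ W @ H2 + eps)
        # Update the shared subject factor using both views.
        W *= (X1 @ H1.T + X2 @ H2.T) / (W @ (H1 @ H1.T + H2 @ H2.T) + eps)
    return W, H1, H2
```

Each row of W then gives a subject's loading on the k latent phenotypes, and clustering or argmax over these loadings yields the phenotype assignment.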