Abstract White matter hyperintensities of presumed vascular origin (WMH) are associated with cognitive impairment and are a key imaging marker in evaluating brain health. However, WMH volume alone does not fully account for the extent of cognitive deficits, and the mechanisms linking WMH to these deficits remain unclear. Lesion network mapping (LNM) enables us to infer whether brain networks are connected to lesions and could be a promising technique for enhancing our understanding of the role of WMH in cognitive disorders. Our study employed LNM to test the following hypotheses: (i) LNM-informed markers surpass WMH volumes in predicting cognitive performance; and (ii) WMH contributing to cognitive impairment map to specific brain networks. We analysed cross-sectional data of 3485 patients from 10 memory clinic cohorts within the Meta VCI Map Consortium, using harmonized test results in four cognitive domains and WMH segmentations. WMH segmentations were registered to a standard space and mapped onto existing normative structural and functional brain connectome data. We employed LNM to quantify WMH connectivity to 480 atlas-based grey and white matter regions of interest (ROI), resulting in ROI-level structural and functional LNM scores. We compared the capacity of total and regional WMH volumes and LNM scores to predict cognitive function using ridge regression models in a nested cross-validation. LNM scores predicted performance in three cognitive domains (attention/executive function, information processing speed, and verbal memory) significantly better than WMH volumes. LNM scores did not improve prediction for language functions. ROI-level analysis revealed that higher LNM scores, representing greater connectivity to WMH, in grey and white matter regions of the dorsal and ventral attention networks were associated with lower cognitive performance.
Measures of WMH-related brain network connectivity significantly improve the prediction of current cognitive performance in memory clinic patients compared with WMH volume, a traditional imaging marker of cerebrovascular disease. This highlights the crucial role of network integrity, particularly in attention-related brain regions, and improves our understanding of vascular contributions to cognitive impairment. Moving forward, refining WMH information with connectivity data could contribute to patient-tailored therapeutic interventions and facilitate the identification of subgroups at risk of cognitive disorders.
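The prediction framework described above, ridge regression evaluated in a nested cross-validation, can be sketched with scikit-learn. This is a minimal illustrative sketch, not the consortium's actual pipeline: the synthetic feature matrix stands in for ROI-level LNM scores (or regional WMH volumes), the target stands in for a harmonized cognitive domain score, and the sample size, penalty grid, and fold counts are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Synthetic stand-in data: 480 ROI-level predictors, as in the study's atlas.
rng = np.random.default_rng(0)
n_patients, n_rois = 200, 480
X = rng.normal(size=(n_patients, n_rois))
# Target depends on a few ROIs plus noise (purely illustrative).
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_patients)

# Inner loop: tune the ridge penalty alpha on the training folds only.
inner = GridSearchCV(
    Ridge(),
    {"alpha": [0.1, 1.0, 10.0, 100.0]},
    cv=KFold(n_splits=5, shuffle=True, random_state=1),
)
# Outer loop: estimate out-of-sample predictive performance (R^2).
outer_scores = cross_val_score(
    inner, X, y,
    cv=KFold(n_splits=5, shuffle=True, random_state=2),
    scoring="r2",
)
print(outer_scores.mean())
```

The key property of the nested scheme is that the penalty is never tuned on data used to score the model, so the outer R^2 is an unbiased estimate of how well LNM scores or WMH volumes generalize to unseen patients.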
Objective: Mobile, valid, and engaging cognitive assessments are essential for detecting and tracking change in research participants and patients at risk for Alzheimer’s Disease and Related Dementias (ADRDs). The mobile cognitive app performance platform (mCAPP) includes memory and executive functioning tasks to remotely detect cognitive changes associated with aging and preclinical Alzheimer’s disease. This study assesses participants’ comfort and subjective experiences with mCAPP, as the potential utility and advantage of mobile app-based assessments for remote monitoring among older adults will depend on the usability and adoptability of such technology. Participants and Methods: The mCAPP includes three gamified tasks: (1) a memory task involving learning and matching hidden card pairs (“Concentration”), (2) a Stroop-like task (“Brick Drop”), and (3) a digit-symbol coding-like task (“Space Imposters”). Participants included 37 older adults (60% female; age=72±4.4; years of education=17±2.5; 67% White) with normal cognition enrolled in the Penn ADRC cohort. Participants completed one baseline session of mCAPP in person, followed by two weeks of at-home use with eight scheduled sessions. Information on prior experience with mobile technology and games was collected, and usability of mCAPP was measured at baseline and after two weeks of use with the IBM Computer Usability Satisfaction Questionnaire and the mHealth App Usability Questionnaire (MAUQ), respectively. Feedback on perceived difficulty, enjoyment, and likelihood to play mCAPP games again on their own was collected. Results: Participants completed on average 11±4.9 sessions over two weeks, with each session lasting 11.5±2.5 minutes. 59% of participants reported using their mobile device to play games (“mobile game players”). Performance on mCAPP tasks was slower at baseline for non-players, with trend-level differences on higher-load blocks of Space Imposters (p=.057 and .059).
No differences in game performance were seen between groups after playing eight sessions at home. There were no differences in usability of mCAPP between groups, with average usability ratings of 8.2±1.5 (IBM, 0-9 scale) at T1 and 6.2±0.8 (MAUQ, 1-7 scale) after completion of two weeks of at-home use (TLast). Reported enjoyment was moderate to high for both groups at baseline and increased over time. Likelihood to play Concentration and Brick Drop again trended lower among non-players at T1 (p=.061 and .054), but not at TLast. Further, change in likelihood to play mCAPP from T1 to TLast was positive among non-players, with the change for Concentration significantly higher for non-players than for players (p=.037). Conclusions: Participants were willing and able to complete at-home cognitive testing, and most completed more than the assigned sessions. While participants who do not play games on their own mobile devices were slower on some tasks at baseline, these differences dissipated with further play at home. Usability and enjoyment of mCAPP games were high regardless of mobile game-playing status, and non-players demonstrated increased willingness to play mCAPP games again at the end of participation compared to baseline. This pilot study shows preliminary feasibility and adoptability of mobile app-based assessment regardless of prior experience with mobile games.
Brain age (BA), distinct from chronological age (CA), can be estimated from MRIs to evaluate neuroanatomic aging in cognitively normal (CN) individuals. BA, however, is a cross-sectional measure that summarizes cumulative neuroanatomic aging since birth. Thus, it poorly conveys recent or contemporaneous aging trends, which can be better quantified by the (temporal) pace P of brain aging. Many approaches to map P, however, rely on quantifying DNA methylation in whole-blood cells, which the blood–brain barrier separates from neural brain cells. We introduce a three-dimensional convolutional neural network (3D-CNN) to estimate P noninvasively from longitudinal MRI. Our longitudinal model (LM) is trained on MRIs from 2,055 CN adults, validated in 1,304 CN adults, and further applied to an independent cohort of 104 CN adults and 140 patients with Alzheimer’s disease (AD). In its test set, the LM computes P with a mean absolute error (MAE) of 0.16 y (7% mean error). This significantly outperforms the most accurate cross-sectional model, whose MAE of 1.85 y corresponds to 83% error. By synergizing the LM with an interpretable CNN saliency approach, we map anatomic variations in regional brain aging rates that differ according to sex, decade of life, and neurocognitive status. LM estimates of P are significantly associated with changes in cognitive functioning across domains. This underscores the LM’s ability to estimate P in a way that captures the relationship between neuroanatomic and neurocognitive aging. This research complements existing strategies for AD risk assessment that estimate individuals’ rates of adverse cognitive change with age.
Longitudinal imaging data are routinely acquired for health studies and patient monitoring. A central goal in longitudinal studies is tracking relevant change over time. Traditional methods remove nuisance variation with custom pipelines to focus on significant changes. In this work, we present a machine learning–based method that automatically ignores irrelevant changes and extracts the time-varying signal of interest. Our method, called Learning-based Inference of Longitudinal imAge Changes (LILAC), performs a pairwise comparison of longitudinal images to predict their temporal difference. LILAC employs a convolutional Siamese architecture to extract feature pairs, followed by subtraction and a bias-free fully connected layer, to learn meaningful temporal image differences. We first showcase LILAC’s ability to capture key longitudinal changes by simply training it to predict the temporal ordering of images. In our experiments, temporal ordering accuracy exceeded 0.98, and predicted time differences were strongly correlated with actual changes in relevant variables (Pearson correlation coefficient r = 0.911 with embryo phase change, and r = 0.875 with time interval in wound healing). Next, we trained LILAC to explicitly predict specific targets, such as the change in clinical scores in patients with mild cognitive impairment. LILAC models achieved over a 40% reduction in root mean square error compared to baseline methods. Our empirical results demonstrate that LILAC effectively localizes and quantifies relevant individual-level changes in longitudinal imaging data, offering valuable insights for studying temporal mechanisms or guiding clinical decisions.
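The architecture described above, a shared encoder applied to both images, feature subtraction, and a bias-free final layer, has a useful structural consequence: the prediction is exactly antisymmetric under swapping the two images. A minimal NumPy sketch illustrates this; the random linear-plus-ReLU map below merely stands in for the trained convolutional Siamese encoder, and all shapes, weights, and names are illustrative assumptions rather than LILAC's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(64, 1024))   # shared encoder weights, applied to both inputs
w_head = rng.normal(size=64)          # final fully connected layer: note, no bias term

def encode(x):
    """Shared feature extractor applied identically to each image (stand-in for the CNN)."""
    return np.maximum(W_enc @ x, 0.0)  # linear map + ReLU

def predict_change(x1, x2):
    """Predict the temporal difference from image x1 to image x2 via feature subtraction."""
    return w_head @ (encode(x2) - encode(x1))

# Two stand-in "images" as flat vectors.
x_a, x_b = rng.normal(size=1024), rng.normal(size=1024)
# Swapping the inputs flips the sign of the prediction exactly,
# and an image compared with itself yields zero change.
print(predict_change(x_a, x_b), predict_change(x_b, x_a), predict_change(x_a, x_a))
```

The absence of a bias term in the head is what guarantees this antisymmetry: the model cannot learn a direction-independent offset, so any predicted change must come from a genuine difference between the two images.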
Abstract Background Mobile, valid and engaging cognitive assessments are essential for detecting and tracking change in research participants and patients at risk for Alzheimer’s Disease and Related Dementias (ADRDs). This pilot study aims to determine the feasibility and generalizability of at‐home, app‐based cognitive assessments included in the mobile cognitive app performance platform (mCAPP), to detect cognitive changes associated with aging and preclinical AD. Method The mCAPP includes three gamified tasks (Figure 1): (1) a “concentration” memory task that includes learning and matching hidden card pairs with increasing memory load, pattern separation features (lure vs. non‐lure), and spatial memory, (2) a Stroop‐like task (“brick drop”) with speeded word and color identification and response inhibition components, and (3) a digit‐symbol coding‐like task (“space imposters”) with increasing pairs and incidental learning components. Participants completed the NACC UDS3 and additional paper-and-pencil tests. Participants used the mCAPP at home for two weeks. Participants included sixty older adults (73% female; age = 71.9±4.6, education = 16.6±2.4; 50% White, 48% Black/African American, 2% Multiracial) without cognitive impairment enrolled in the Penn ADRC cohort. Result Participants played 12±5.1 sessions over two weeks for 11.5±2.8 min/session, with 68% playing more than the assigned sessions. Almost all participants (98%) used a smartphone and 62% played games on their phone. Usability rating was 6.3±0.8 (1‐7 scale) and most participants reported task difficulty was just right (70%‐95%). 68% reported preferring mobile device‐based cognitive assessment to standard in‐person cognitive batteries. All tasks showed lower performance with increasing cognitive load (p’s<.05). Age and education correlated with both mCAPP and traditional cognitive measures.
Concentration performance correlated with UDS3 memory measures and the PACC overall (p’s<.05); however, when examined by self‐identified racial group, relationships remained significant in White participants but not in Black/African American participants (Figure 2). Brick Drop performance correlated with the Stroop task (p<.05) and Space Imposters performance correlated with the Digit Symbol Substitution Test (p<.001) within all groups. Conclusion This pilot study shows app usability for at‐home use in a diverse cohort of older adults. Performance across measures indicates initial reliability and validity of mCAPP, with attention needed to differences in performance across participants with diverse sociodemographic backgrounds.
Objective: Mobile, valid, and engaging cognitive assessments are essential for detecting and tracking change in research participants and patients at risk for Alzheimer’s Disease and Related Dementias (ADRDs). This pilot study aims to determine the feasibility and performance of app-based memory and executive functioning tasks included in the mobile cognitive app performance platform (mCAPP), to remotely detect cognitive changes associated with aging and preclinical Alzheimer’s Disease (AD). Participants and Methods: The mCAPP includes three gamified tasks: (1) a memory task that includes learning and matching hidden card pairs and incorporates increasing memory load, pattern separation features (lure vs. non-lure), and spatial memory, (2) a Stroop-like task (“brick drop”) with speeded word and color identification and response inhibition components, and (3) a digit-symbol coding-like task (“space imposters”) with increasing pairs and incidental learning components. The cohort completed the NACC UDS3 neuropsychological battery, selected NIH Toolbox tasks, and additional cognitive testing sensitive to preclinical AD, within six months of the mCAPP testing. Participants included thirty-seven older adults (60% female; age=72±4.4, years of education=17±2.5; 67% Caucasian, 30% Black/AA, 3% Multiracial) with normal cognition enrolled in the Penn Alzheimer’s Disease Research Center (ADRC) cohort. Participants completed one in-person session and two weeks of at-home testing, with eight scheduled sessions, four in the morning and four in the afternoon. Participants also completed questionnaires and an interview about technology use, wore activity trackers to collect daily step and sleep data, and answered questions about mood, anxiety, and fatigue throughout the two weeks of at-home data collection. Results: The participants completed an average of 11 at-home sessions, with the majority choosing to play extra sessions.
Participants reported high usability ratings for all tasks, and the majority rated the task difficulty as acceptable. On all mCAPP tasks, participant performance declined in accuracy and speed with increasing memory load and task complexity. mCAPP tasks correlated significantly with paper-and-pencil measures and several NIH Toolbox tasks (p<0.05). Examination of performance trends over multiple sessions indicates stabilization of performance within 4-6 sessions on memory mCAPP measures and within 5-7 sessions on executive functioning mCAPP measures. Preliminary analyses indicate relationships between mCAPP measures and imaging biomarkers. Conclusions: Participants were willing and able to complete at-home cognitive testing, and most chose to complete more than the assigned sessions. Remote data collection is feasible and well-tolerated. We show preliminary construct validity with the UDS3 and NIH Toolbox, and test-retest reliability following a period of task learning, performance improvement, and stabilization. This work will help to advance remote detection and monitoring of early cognitive changes associated with preclinical AD. Future directions include further evaluation of the relationships between mCAPP performance, behavioral states, and neuroimaging biomarkers, as well as the utility of practice-effect detection in identifying longitudinal change and risk for ADRD-related cognitive decline.
Abstract Alzheimer’s disease typically progresses in stages, which have been defined by the presence of disease-specific biomarkers: amyloid (A), tau (T) and neurodegeneration (N). This progression of biomarkers has been condensed into the ATN framework, in which each of the biomarkers can be either positive (+) or negative (−). Over the past decades, genome-wide association studies have implicated ∼90 different loci involved in the development of late-onset Alzheimer’s disease. Here, we investigate whether genetic risk for Alzheimer’s disease contributes equally to progression across different disease stages or whether it exhibits a stage-dependent effect. Amyloid (A) and tau (T) status was defined using a combination of available PET and CSF biomarkers in the Alzheimer’s Disease Neuroimaging Initiative cohort. In 312 participants with biomarker-confirmed A−T− status, we used Cox proportional hazards models to estimate the contribution of APOE and polygenic risk scores (beyond APOE) to conversion to A+T− status (65 conversions). Furthermore, we repeated the analysis in 290 participants with A+T− status and investigated the genetic contribution to conversion to A+T+ (45 conversions). Both survival analyses were adjusted for age, sex and years of education. For progression from A−T− to A+T−, APOE-e4 burden showed a significant effect [hazard ratio (HR) = 2.88; 95% confidence interval (CI): 1.70–4.89; P < 0.001], whereas polygenic risk did not (HR = 1.09; 95% CI: 0.84–1.42; P = 0.53). Conversely, for the transition from A+T− to A+T+, the contribution of APOE-e4 burden was reduced (HR = 1.62; 95% CI: 1.05–2.51; P = 0.031), whereas the polygenic risk showed an increased contribution (HR = 1.73; 95% CI: 1.27–2.36; P < 0.001). The marginal APOE effect was driven by e4 homozygotes (HR = 2.58; 95% CI: 1.05–6.35; P = 0.039) as opposed to e4 heterozygotes (HR = 1.74; 95% CI: 0.87–3.49; P = 0.12).
The genetic risk for late-onset Alzheimer’s disease unfolds in a disease stage-dependent fashion. A better understanding of the interplay between disease stage and genetic risk can lead to a more mechanistic understanding of the transitions between ATN stages and of the molecular processes leading to Alzheimer’s disease, in addition to opening therapeutic windows for targeted interventions.
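The hazard ratios reported above follow the standard Cox-model convention: the model estimates a log-hazard coefficient β with standard error se, and reports HR = exp(β) with a Wald-type 95% interval exp(β ± 1.96·se). As an illustrative plausibility check (not part of the original analysis), the APOE-e4 estimate for A−T− to A+T− conversion can be reconstructed from its reported interval:

```python
import math

# Reported values for APOE-e4 burden (A-T- to A+T- conversion).
hr, lo, hi = 2.88, 1.70, 4.89

beta = math.log(hr)                              # log-hazard point estimate
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # recover se from the CI width

# Rebuilding the interval from (beta, se) should closely reproduce
# the reported bounds if the interval is Wald-type on the log scale.
lo_rec = math.exp(beta - 1.96 * se)
hi_rec = math.exp(beta + 1.96 * se)
print(lo_rec, hi_rec)
```

The reconstructed bounds closely reproduce the reported 1.70–4.89, consistent with a symmetric confidence interval on the log-hazard scale, which is why hazard-ratio intervals look asymmetric around the point estimate on the original scale.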
The mobile cognitive app performance platform (mCAPP), an app-based cognitive assessment, includes memory and executive functioning tasks to remotely detect cognitive changes associated with aging and preclinical Alzheimer's disease. This study examines the relationship between prior experience and comfort with mobile technology and subjective experiences with mCAPP. Sixty older adults (73% female; age = 74 ± 4.8; education = 17 ± 2.4 years; 48% Black/African American) with normal cognition enrolled in the Penn Alzheimer's Disease Research Center cohort completed one baseline session and two weeks of at-home mCAPP use. This study included measures of prior experience with mobile technology and games, at-home mCAPP performance and usage levels, and feedback on mCAPP usability. 62% of participants reported using mobile devices to play games ("game-players"), and they did not differ from non-players in age or global cognitive status. Game-players self-reported significantly higher proficiency with specific mobile technology features (p = 0.028), but not higher perceived independence or confidence with technology. mCAPP performance differences were present at baseline but not by the eighth at-home session. Usability and enjoyment of mCAPP were high and increased for both groups. Non-players reported a lower likelihood to play mCAPP games at baseline (p < 0.05), but in practice increased play frequency throughout at-home use and reported a higher likelihood to play mCAPP games afterwards (p ≤ 0.001). Participants with varying levels of mobile game experience were willing and able to use mCAPP at home. Both groups found mCAPP easy and enjoyable to use, and non-players in particular showed increased adoption of mCAPP. This pilot study shows preliminary feasibility of mobile app-based assessment regardless of prior experience with mobile games.
This study assesses the sensitivity of the mobile cognitive app performance platform (mCAPP), a mobile and engaging cognitive assessment tool, to participant-reported fatigue. The mCAPP includes three gamified tasks: a memory task ("Concentration"), a Stroop-like task ("Brick Drop"), and a digit-symbol coding-like task ("Space Imposters"). For all games, shorter reaction times and fewer guesses indicate better performance. The cohort included 55 participants (72.73% female; age = 71.60 ± 4.48; education = 16.71 ± 2.30; 49.1% White, 49.1% Black/African American, 1.8% Multiracial) without cognitive impairment enrolled in the Penn ADRC cohort. Performance was analyzed as a whole and grouped into days of high (7+) and low (0-3) fatigue (range 0-10). The average fatigue rating was 2.61 ± 2.51. Overall, higher reported fatigue was weakly correlated with more time spent (ρ = 0.22) and a higher number of guesses on Concentration (ρ = 0.12; p-values < 0.01). There was a significant difference in speed between days of high fatigue (M = 2.825) and low fatigue (M = 2.592; p = 0.018) on Space Imposters, but not on Brick Drop (p = 0.15). On Concentration, participants with high fatigue needed a higher number of guesses (M = 5.356) than those with low fatigue (M = 5.095; p = 0.003), and more time was spent on individual guesses with high fatigue (M = 17.587) than with low fatigue (M = 13.357; p < 0.001). The mCAPP can remotely detect differences in cognitive performance between self-reported high- and low-fatigue states. Future studies will incorporate sleep data to derive objective measures of fatigue-related behavior and will include in-depth within-subject analyses of performance.