A major goal of neuroimaging studies is to develop predictive models that relate whole-brain functional connectivity patterns to behavioural traits. However, there is no single widely accepted standard pipeline for analysing functional connectivity. The common procedure for designing functional-connectivity-based predictive models entails three main steps: parcellating the brain, estimating the interactions between the defined parcels, and, lastly, feeding these estimated associations between brain parcels as features to a classifier for predicting non-imaging variables, e.g., behavioural traits, demographics or emotional measures. There are additional considerations when using correlation-based measures of functional connectivity, resulting in three supplementary steps: using Riemannian-geometry tangent-space parameterisation to preserve the geometry of functional connectivity; penalising the connectivity estimates with shrinkage approaches to handle the challenges posed by short (and noisy) time-series data; and removing confounding variables from the brain-behaviour data. These six steps are contingent on each other, and to optimise a general framework one should ideally examine the various methods simultaneously. In this paper, we investigated the strengths and shortcomings, both independently and jointly, of the following choices: four kinds of parcellation technique (further categorised by the number of parcels), five measures of functional connectivity, whether to stay in the ambient space of connectivity matrices or move to tangent space, whether to apply shrinkage estimators, six alternative techniques for handling confounds and, finally, four novel classifiers/predictors. For performance evaluation, we selected two of the largest available datasets, the UK Biobank and the Human Connectome Project resting-state fMRI data, and ran more than 9000 different pipeline variants on a total of ∼14000 individuals to determine the optimum pipeline. For independent validation, we ran some of the best-performing pipeline variants on the ABIDE and ACPI datasets (∼1000 subjects) to evaluate the generalisability of the proposed network modelling methods.
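As a rough illustration of one such pipeline variant, the sketch below chains shrinkage covariance estimation, tangent-space parameterisation, a simple confound-regression step and a regularised predictor, using nilearn and scikit-learn as generic stand-ins rather than the paper's own implementation; `timeseries`, `target` and `confounds` are simulated placeholders, not any of the datasets above.

```python
# A minimal sketch of one connectivity-based prediction pipeline variant,
# assuming parcel time series have already been extracted.
import numpy as np
from sklearn.covariance import LedoitWolf           # shrinkage covariance estimator
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from nilearn.connectome import ConnectivityMeasure  # tangent-space embedding

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_parcels = 50, 200, 20
timeseries = [rng.standard_normal((n_timepoints, n_parcels)) for _ in range(n_subjects)]
target = rng.standard_normal(n_subjects)            # e.g. a behavioural score
confounds = rng.standard_normal((n_subjects, 3))    # e.g. age, sex, head motion

# Shrinkage covariance estimation + tangent-space parameterisation of connectivity.
conn = ConnectivityMeasure(cov_estimator=LedoitWolf(), kind="tangent", vectorize=True)
features = conn.fit_transform(timeseries)           # (n_subjects, n_edges)

# One simple way to handle confounds: regress them out of the features.
beta, *_ = np.linalg.lstsq(confounds, features, rcond=None)
features = features - confounds @ beta

# Predict the non-imaging variable with a regularised linear model.
scores = cross_val_score(RidgeCV(), features, target, cv=5)
print("cross-validated R^2:", scores.mean())
```

In a real analysis the deconfounding and covariance estimation would be nested inside the cross-validation folds; it is kept flat here only to keep the sketch short.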
Brain activity is a dynamic combination of responses to sensory inputs and the brain's own spontaneous processing. Consequently, brain activity is continuously changing whether or not one is focusing on an externally imposed task. Previously, we have introduced an analysis method that allows us, using Hidden Markov Models (HMM), to model brain activity during task or rest as a dynamic sequence of distinct brain networks, overcoming many of the limitations posed by sliding-window approaches. Here, we present an advance that enables the HMM to handle very large amounts of data, making possible the inference of highly reproducible and interpretable dynamic brain networks in a range of different datasets, including task, rest, MEG and fMRI, with potentially thousands of subjects. We anticipate that the generation of large and publicly available datasets from initiatives such as the Human Connectome Project and UK Biobank, in combination with computational methods that can work at this scale, will bring a breakthrough in our understanding of brain function in both health and disease.
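For intuition, the sketch below models parcel time series as a sequence of discrete states; it uses hmmlearn's EM-based Gaussian HMM rather than the large-scale stochastic inference introduced here, and all data are simulated placeholders.

```python
# A minimal stand-in illustration of describing brain activity as a dynamic
# sequence of states, each with its own mean activity and covariance
# (functional connectivity) pattern.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_parcels, n_states = 5, 500, 10, 4
data = rng.standard_normal((n_subjects * n_timepoints, n_parcels))
lengths = [n_timepoints] * n_subjects                # one segment per subject

hmm = GaussianHMM(n_components=n_states, covariance_type="full", n_iter=50)
hmm.fit(data, lengths)
states = hmm.predict(data, lengths)                  # most likely state sequence

# Fractional occupancy summarises how much each network/state is used.
occupancy = np.bincount(states, minlength=n_states) / states.size
print("fractional occupancy:", occupancy)
```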
We propose the Gaussian-Linear Hidden Markov model (GLHMM), a generalisation of different types of HMMs commonly used in neuroscience. In short, the GLHMM is a general framework in which linear regression is used to flexibly parameterise the Gaussian state distribution, thereby accommodating a wide range of uses, including unsupervised, encoding and decoding models. The GLHMM is implemented as a Python toolbox with an emphasis on statistical testing and out-of-sample prediction, i.e. it is aimed at finding and characterising brain-behaviour associations. The toolbox uses a stochastic variational inference approach, enabling it to handle large datasets in reasonable computational time. Overall, the approach can be applied to several data modalities, including animal recordings or non-brain data, and across a broad range of experimental paradigms. For demonstration, we show examples with fMRI, electrocorticography, magnetoencephalography and pupillometry.
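In equation form, the observation model implied by this description can be sketched as follows, where $s_t$ is the hidden state, $x_t$ an optional set of predictors, and $\mu_k$, $\beta_k$, $\Sigma_k$ are state-specific parameters (the notation is ours, not the toolbox's):

```latex
% Sketch of the GLHMM state observation model: given hidden state s_t = k, the
% data y_t follow a Gaussian whose mean is a state-specific linear regression
% on the (optional) predictors x_t.
\[
  y_t \mid s_t = k,\; x_t \;\sim\; \mathcal{N}\!\bigl(\mu_k + x_t \beta_k,\; \Sigma_k\bigr)
\]
% Setting beta_k = 0 recovers a standard Gaussian HMM (unsupervised use), while
% treating x_t as stimuli or behaviour yields encoding- or decoding-style variants.
```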
Our ability to hold information in mind is limited, requires a high degree of cognitive control, and is necessary for many subsequent cognitive processes. Children, in particular, are highly variable in how, trial by trial, they manage to recruit cognitive control in the service of memory. Fronto-parietal networks, typically recruited under conditions where this cognitive control is needed, undergo protracted development. We explored, for the first time, whether dynamic changes in fronto-parietal activity could account for children's variability in tests of visual short-term memory (VSTM). We recorded oscillatory brain activity using magnetoencephalography (MEG) as 9- to 12-year-old children and adults performed a VSTM task. We combined temporal independent component analysis (ICA) with general linear modeling to test whether the strength of fronto-parietal activity correlated with VSTM performance on a trial-by-trial basis. In children, but not adults, slow-frequency theta (4-7 Hz) activity within a right-lateralized fronto-parietal network in anticipation of the memoranda predicted the accuracy with which those memory items were subsequently retrieved. These findings suggest that inconsistent use of anticipatory control mechanisms contributes significantly to trial-to-trial variability in VSTM maintenance performance.
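A schematic stand-in for this style of analysis is sketched below: temporal ICA extracts a component time course, per-trial anticipatory theta power is computed from it, and a GLM relates that power to trial accuracy. It uses generic SciPy/scikit-learn routines on simulated data, not the MEG-specific pipeline used in the study.

```python
# Toy version of the ICA + GLM trial-by-trial analysis on simulated data.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_sensors, n_samples, fs = 100, 30, 256, 256   # 1 s anticipation window
data = rng.standard_normal((n_trials, n_sensors, n_samples))
accuracy = rng.integers(0, 2, n_trials)                   # correct / incorrect per trial

# Temporal ICA across concatenated trials; keep one component as the putative
# fronto-parietal network time course (chosen arbitrarily here).
ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(data.transpose(0, 2, 1).reshape(-1, n_sensors))
component = sources[:, 0].reshape(n_trials, n_samples)

# Per-trial theta (4-7 Hz) power in the anticipatory window.
freqs, psd = welch(component, fs=fs, nperseg=128, axis=-1)
theta = psd[:, (freqs >= 4) & (freqs <= 7)].mean(axis=1)

# GLM: does anticipatory theta power predict subsequent retrieval accuracy?
glm = LogisticRegression().fit(theta[:, None], accuracy)
print("theta coefficient:", glm.coef_[0, 0])
```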
Neural activity contains rich spatio-temporal structure that corresponds to cognition. This includes oscillatory bursting and dynamic activity that span across networks of brain regions, all of which can occur on timescales of tens of milliseconds. While these processes can be accessed through brain recordings and imaging, modelling them presents methodological challenges due to their fast and transient nature. Furthermore, the exact timing and duration of interesting cognitive events are often a priori unknown. Here we present the OHBA Software Library Dynamics Toolbox (osl-dynamics), a Python-based package that can identify and describe recurrent dynamics in functional neuroimaging data on timescales as fast as tens of milliseconds. At its core are machine learning generative models that are able to adapt to the data and learn the timing, as well as the spatial and spectral characteristics, of brain activity with few assumptions. osl-dynamics incorporates state-of-the-art approaches that can be, and have been, used to elucidate brain dynamics in a wide range of data types, including magneto/electroencephalography, functional magnetic resonance imaging, invasive local field potential recordings and electrocorticography. It also provides novel summary measures of brain dynamics that can be used to inform our understanding of cognition, behaviour and disease. We hope osl-dynamics will further our understanding of brain function, through its ability to enhance the modelling of fast dynamic processes.
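As an example of the kind of summary measures referred to above, the following sketch computes fractional occupancy and mean state lifetimes from a placeholder state time course using plain NumPy, rather than through the osl-dynamics API itself.

```python
# Minimal sketch of two common summary measures of brain dynamics:
# fractional occupancy and mean state lifetimes.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_timepoints = 4, 1000
state_time_course = rng.integers(0, n_states, n_timepoints)  # placeholder state sequence

# Fractional occupancy: proportion of time spent in each state.
fractional_occupancy = np.bincount(state_time_course, minlength=n_states) / n_timepoints

# Mean lifetime: average duration of uninterrupted visits to each state.
change_points = np.flatnonzero(np.diff(state_time_course)) + 1
segments = np.split(state_time_course, change_points)
lifetimes = [[] for _ in range(n_states)]
for seg in segments:
    lifetimes[seg[0]].append(len(seg))
mean_lifetimes = [np.mean(l) if l else 0.0 for l in lifetimes]

print("fractional occupancy:", fractional_occupancy)
print("mean lifetimes (samples):", mean_lifetimes)
```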
Predicting an individual’s cognitive traits or clinical condition using brain signals is a central goal in modern neuroscience. This is commonly done using either structural aspects of the brain, or aggregated measures of brain activity that average over time. But these approaches miss what can be the most representative aspect of these complex human features: the uniquely individual ways in which brain activity unfolds over time, that is, the dynamic nature of the brain. The reason these dynamic patterns are not usually taken into account is that they have to be described by complex, high-dimensional models, and it is unclear how best to use the information from these models for a prediction. Here we propose an approach that describes dynamic functional connectivity and amplitude patterns using a Hidden Markov model (HMM) and combines it with the Fisher kernel, which can then be used to predict individual traits. The Fisher kernel is constructed from the HMM in a mathematically principled manner, thereby preserving the structure of the underlying HMM. In this way, the unique, individual signatures of brain dynamics can be explicitly leveraged for prediction. We show in fMRI data that the HMM-Fisher kernel approach is not only more accurate, but also more reliable than other methods, including ones based on time-averaged functional connectivity. This is important because reliability is critical for many practical applications, especially if we want to be able to meaningfully interpret model errors, as in the concept of brain age. In summary, our approach makes it possible to leverage information about an individual’s brain dynamics for prediction in cognitive neuroscience and personalised medicine.
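The Fisher-kernel recipe can be illustrated in miniature as below: fit one group-level generative model, take each subject's gradient of the log-likelihood with respect to the model parameters as their feature vector, and pass the resulting kernel to a kernel-based predictor. A single univariate Gaussian stands in for the HMM here, and the data and trait are simulated; this is a schematic of the recipe, not the paper's method.

```python
# Toy Fisher-kernel construction with a univariate Gaussian standing in for the HMM.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 40, 300
data = rng.standard_normal((n_subjects, n_timepoints))
trait = rng.standard_normal(n_subjects)               # e.g. a cognitive score

# "Group-level model": a single Gaussian N(mu, var) fitted to the pooled data.
mu, var = data.mean(), data.var()

# Fisher scores: per-subject gradients of the log-likelihood w.r.t. (mu, var).
d_mu = ((data - mu) / var).sum(axis=1)
d_var = (-0.5 / var + 0.5 * (data - mu) ** 2 / var**2).sum(axis=1)
scores = np.column_stack([d_mu, d_var])
scores /= scores.std(axis=0)                          # crude Fisher-information scaling

# Linear Fisher kernel between subjects, fed to a kernel-based predictor.
K = scores @ scores.T
model = KernelRidge(kernel="precomputed").fit(K, trait)
print("fitted on Fisher kernel of shape", K.shape)
```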
The trade-off between signal-to-noise ratio (SNR) and spatial specificity governs the choice of spatial resolution in magnetic resonance imaging (MRI); diffusion-weighted (DW) MRI is no exception. Images of lower resolution have higher signal-to-noise ratio, but also more partial volume artifacts. We present a data-fusion approach for tackling this trade-off by combining DW MRI data acquired at both high and low spatial resolution. We combine all data into a single Bayesian model to estimate the underlying fiber patterns and diffusion parameters. The proposed model therefore combines the benefits of each acquisition. We show that fiber crossings at the highest spatial resolution can be inferred more robustly and accurately using such a model than with a simpler model that operates only on the high-resolution data, when both approaches are matched for acquisition time.
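A toy linear-Gaussian sketch of the data-fusion idea is given below: high-resolution voxel values are estimated jointly from noisy high-resolution measurements and cleaner low-resolution measurements that average pairs of voxels (a stand-in for partial volume mixing). This replaces the full diffusion model and Bayesian sampler with a simple precision-weighted joint estimate, purely for illustration.

```python
# Toy fusion of high- and low-resolution measurements in a single linear model.
import numpy as np

rng = np.random.default_rng(0)
n_hi = 8                                      # number of high-resolution voxels
truth = rng.uniform(0.0, 1.0, n_hi)           # underlying parameter per voxel

# Forward models: identity for high-res data, block-averaging for low-res data.
A_hi = np.eye(n_hi)
A_lo = np.kron(np.eye(n_hi // 2), np.full((1, 2), 0.5))
sigma_hi, sigma_lo = 0.3, 0.1                 # low-res data has better SNR

y_hi = A_hi @ truth + sigma_hi * rng.standard_normal(n_hi)
y_lo = A_lo @ truth + sigma_lo * rng.standard_normal(n_hi // 2)

# Joint estimate: precision-weighted least squares over both acquisitions,
# compared against using the high-resolution data alone.
A = np.vstack([A_hi / sigma_hi, A_lo / sigma_lo])
y = np.concatenate([y_hi / sigma_hi, y_lo / sigma_lo])
fused, *_ = np.linalg.lstsq(A, y, rcond=None)

print("mean error, high-res only:", np.abs(y_hi - truth).mean())
print("mean error, fused model  :", np.abs(fused - truth).mean())
```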