Real Time fMRI - Avoiding Drift Using Answer Blocks

2010 
Introduction: For many applications of fMRI brain-computer interfaces (BCI), a binary response (yes/no) is the required output (1). Recent developments in statistical classification techniques are ideally suited to this application, as they are designed to output a class label (2). Attempts to use these techniques to classify scans on a timepoint-by-timepoint basis run into problems of drift in the classifier predictions (fig. 1), which are difficult to detrend in real time (3). To circumvent this problem, we introduce the concept of an 'answer block', designed to compare classifier or ROI values between rest and task, allowing superior performance, probabilistic outputs and a reduction in training time.

Methods:
Data Acquisition: 16 healthy volunteers were scanned in a 3T Bruker system using echo-planar functional images (TR = 1.1 s, TE = 27.5 ms, 21 interleaved axial slices, 4 mm thickness, 1 mm slice gap, 200 mm FOV, 64x64 matrix). Subjects were asked to follow a task 1 / task 2 / rest paradigm in 30 s blocks of each, drawing on a selection of four imagery tasks: playing tennis, navigating around their house, visualizing faces and singing 'Jingle Bells', all whilst remaining motionless. Between 7 and 10 blocks of each type were acquired from each subject.

State Prediction: Each dataset was split into a 'training' and a 'test' set, with the training set comprising two full blocks of task and rest. A test answer block, during which subjects had been cued to change from task to rest (or rest to task) at the midpoint, was classified by comparing the classifier prediction scores in its second half with those in its first half. If scores were higher in the first half a '1' was predicted, otherwise a '-1'. This also allows a probabilistic response, by using a t-test to compare the mean scores of the two halves of the block. For comparison with a more traditional approach, a block of the same overall length was taken with subjects maintaining the same state (task or rest) throughout; if the average classifier score of the scans in that block was greater than 0, a '1' was predicted, otherwise '-1'. For the ROI results a similar scheme was employed, using average intensity values over a predetermined ROI (taken from a block-design analysis of this dataset (4)) and a decision boundary constructed from the training data.

Mathematically, answer blocks are motivated by noting that a linear classifier predicts via prediction = w·x + b, where x is the vector of voxel intensities, w is a weight vector and b a bias term. The vector x is composed of components relating to baseline, task activation, drift and random noise. Assuming that drift is approximately equal across two adjacent blocks (reasonable if the drift is low frequency), subtraction leaves only the effects of task activation and random noise, circumventing the problem of drift. The same reasoning applies in the ROI-averaging case.
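The half-block comparison described above can be sketched as follows. This is a minimal Python illustration, not code from the study: it assumes per-scan classifier scores (w·x + b) for one answer block are already available, and the function name, noise levels and toy drift are ours.

```python
# Minimal sketch of the answer-block decision rule (illustrative only).
import numpy as np
from scipy import stats


def answer_block_decision(scores):
    """Compare classifier scores in the first and second halves of an
    answer block, in which the subject was cued to switch between task
    and rest at the midpoint.

    Returns (label, p_value): label is +1 if the first half scored higher
    on average, otherwise -1; p_value comes from a two-sample t-test and
    gives a probabilistic measure of confidence in the decision.
    """
    scores = np.asarray(scores, dtype=float)
    half = len(scores) // 2
    first, second = scores[:half], scores[half:]

    label = 1 if first.mean() > second.mean() else -1
    _, p_value = stats.ttest_ind(first, second, equal_var=False)
    return label, p_value


if __name__ == "__main__":
    # Toy task->rest answer block (TR = 1.1 s, ~27 scans per 30 s half)
    # with a slow drift added to the classifier scores.
    rng = np.random.default_rng(0)
    n = 27
    task = 1.0 + rng.normal(0, 0.5, n)    # task half: scores centred on +1
    rest = -1.0 + rng.normal(0, 0.5, n)   # rest half: scores centred on -1
    drift = np.linspace(0.0, 0.5, 2 * n)  # slow drift shared by both halves
    scores = np.concatenate([task, rest]) + drift

    label, p = answer_block_decision(scores)
    print(f"predicted {label:+d} with p = {p:.3g}")
```

Because low-frequency drift shifts the scores of both halves by roughly the same amount, it largely cancels out of the half-block comparison, which is the point of the answer-block design.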
Results: Using an answer block improved classification accuracy on our dataset from 53% to 84% with a statistical classifier (support vector machines (5)), and from 53% to 64% with simple averaging over an ROI.

Discussion: Using answer blocks, rather than classifying individual scans or blocks, allows the experimenter to ignore the effects of drift and of transition scans. This may allow the training period to be shortened, since smaller training sets produce more drift, which is less of a problem when answer blocks are used. We speculate that the technique could be extended to transmit additional information by gaining temporal resolution within blocks, i.e. by comparing at which stage the task is most significantly greater than rest; this idea will be investigated further. The ability to assign probabilistic values to classifications is of great benefit: for impaired subjects, such as minimally conscious patients, results are likely to be less clear-cut than in healthy volunteers, and calculation of the degree of statistical confidence in the response is essential. Answer blocks may still be affected by drift if the magnitude of the drift over a task/rest block is greater than the changes due to the BOLD response. It also remains preferable to detrend drift in the training set where possible, to avoid the classifier 'learning' the drift to the detriment of the task/rest contrast; whether this is realistic when drift is non-linear and unpredictable remains to be established.

Conclusions: Answer blocks improve the performance of classification of fMRI data into task/rest states using either ROI or machine-learning techniques, whilst also providing probabilistic values for block classification.

Acknowledgements: We thank the MRC and the Henry Smith Charity for funding this research.

References:
1. Owen A et al., Science 2006; 1402.
2. LaConte SM et al., Hum Brain Mapp 2007; 1033-1044.
3. Williams GB, Proc ISMRM 2009; 1717.
4. Boly M et al., NeuroImage 2007; 979-992.
5. Vapnik VN, Springer-Verlag 1982; 399 pp.

Fig. 1: Example plot of classifier scores (dark line) showing drift in the results over time, compared with the expected values.