Video-based monitoring for fMRI of uncontrolled motor tasks

2009 
Functional MRI of recovering stroke patients requires careful behavioural monitoring, particularly when sensorimotor deficits are present (e.g. muscle weakness, muscle co-contraction, and delayed initiation, execution, or completion of movement). Characterizing such impairments is necessary to image and interpret the associated neural signals appropriately, and should also encompass other unintended motions that are particularly problematic in stroke patients compared to healthy adults (e.g. task-correlated head motion [1]). A system to capture these complex kinematics would substantially help to refine individualized fMRI analysis of stroke patients. We have therefore constructed a video-based monitoring system comprising MRI-safe cameras, infrared lighting, and flexible video capture software with synchronization and real-time motion tracking capability. To demonstrate the utility of this system, we present a preliminary experiment involving fMRI of uncontrolled mouth movements, relevant to studies of speech recovery after stroke.

Methods

Hardware. Two MRI-safe cameras (MRC Systems, Heidelberg, Germany) with good sensitivity in the visible and infrared spectrum were used in this experiment (Fig. 1). Supplementary lighting was provided by clusters of infrared emitting diodes (IREDs) to enable video capture without interfering with visual stimulus presentation in fMRI experiments, if required. Cameras and lights were mounted directly and flexibly on the inside wall of the scanner bore with a custom plastic swivel mount and adhesive, reclosable fasteners. An optical-to-TTL converter synchronized the video capture computer with the scanner's optical triggers. Imaging was performed using a 3 Tesla MRI scanner (Magnetom Tim Trio, revision VB15A, Siemens, Erlangen, Germany) and its standard 12-channel matrix head coil. After a routine anatomical reference scan, a single 470 s BOLD fMRI time series was acquired using gradient-echo echo planar imaging (EPI) (TR/TE/FA = 2000 ms/30 ms/70°, 64x64 matrix, 200 mm FoV, 28 interleaved slices, 5 mm thick).

Software. We designed user-friendly Windows-based video capture software (Fig. 2) to accept multiple simultaneous video streams and to timestamp video frames synchronously with fMRI acquisition, capabilities that are difficult to achieve with commonly available software. The software can optionally track multiple passive reflective markers in three dimensions, based on open-source components (Open Computer Vision Library 1.0, http://opencvlibrary.sourceforge.net/); a minimal sketch of the detection and triangulation steps is given at the end of this section. Tracking requires only a one-time characterization of each camera and a brief (<1 min) calibration with the cameras in their final tracking configuration. Preliminary tests using two low-quality web cameras yielded a 3D tracking error of 0.9 mm.

Task. One healthy young male participant, with ethical approval and informed consent, was asked to open and close his mouth approximately 20 times during fMRI. These movements were self-paced and uncontrolled in duration, a scenario that is difficult to analyze without monitoring.

Analysis. A rater (FT) determined the timing of mouth movements and noted the corresponding fMRI-synchronized timestamps during video playback. The fMRI data were analyzed using AFNI [2]. Preprocessing included correction for physiological effects using RETROICOR, temporal interpolation for slice-time correction, coregistration to correct for head motion, and spatial smoothing with a 5 mm FWHM Gaussian kernel. Linear regression was then performed with a mouth open (1) versus closed (0) boxcar waveform convolved with a gamma function, together with a third-order polynomial and six motion estimate parameters in the baseline model. The resulting statistical t-map was thresholded using a False Discovery Rate method [3] at q = 0.0001.
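To make the regression model concrete, the sketch below shows one way the design matrix, per-voxel fit, and threshold could be assembled in Python/NumPy. It is an illustration only, not the AFNI processing actually used; the event timings, gamma kernel shape, motion traces, and array names are invented placeholders.

```python
import numpy as np

TR, n_vols = 2.0, 235                      # 470 s series acquired at TR = 2 s
t = np.arange(n_vols) * TR

# Mouth open/closed boxcar built from video-derived (onset, duration) pairs in
# seconds -- hypothetical values; in practice taken from the synchronized timestamps
events = [(12.0, 3.5), (40.0, 2.0), (71.0, 4.0)]
boxcar = np.zeros(n_vols)
for onset, dur in events:
    boxcar[(t >= onset) & (t < onset + dur)] = 1.0

# Simple gamma-variate HRF sampled at the TR (peak near 5 s), then convolution
hrf_t = np.arange(0.0, 32.0, TR)
hrf = hrf_t ** 5 * np.exp(-hrf_t)
hrf /= hrf.sum()
task_reg = np.convolve(boxcar, hrf)[:n_vols]

# Baseline model: polynomial up to order 3 plus six motion estimates
poly = np.vstack([np.linspace(-1.0, 1.0, n_vols) ** k for k in range(4)]).T
rng = np.random.default_rng(0)
motion = 0.1 * rng.standard_normal((n_vols, 6))   # synthetic stand-in for the registration output
X = np.column_stack([task_reg, poly, motion])

def glm_tmap(Y, X):
    """Ordinary least-squares fit per voxel; t-statistic for the task regressor.
    Y: (n_vols, n_voxels) array of BOLD time series."""
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof
    c = np.zeros(X.shape[1]); c[0] = 1.0          # contrast selecting the task column
    var_c = c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(sigma2 * var_c)

def fdr_cutoff(p_values, q=1e-4):
    """Benjamini-Hochberg style cutoff of the kind underlying the FDR method [3]:
    largest p(k) with p(k) <= (k/m) * q."""
    p_sorted = np.sort(p_values)
    m = p_sorted.size
    below = p_sorted <= (np.arange(1, m + 1) / m) * q
    return p_sorted[below][-1] if below.any() else 0.0

# Example use (bold_2d is a hypothetical (n_vols, n_voxels) array):
# t_map = glm_tmap(bold_2d, X)
```

Similarly, for the optional marker tracking described under Software, a minimal sketch of the detection and triangulation steps is given below. It uses the modern OpenCV Python bindings (cv2, version 4.x return conventions) rather than the original OpenCV 1.0 components; the brightness threshold and the projection matrices, assumed to come from the one-time camera characterization and the brief stereo calibration, are placeholders.

```python
import cv2
import numpy as np

def marker_centroids(frame_gray, thresh=200):
    """Centroids of bright, IR-reflective blobs in one grayscale camera frame."""
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pts = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            pts.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(pts, dtype=np.float64).T        # shape (2, N)

def triangulate(P1, P2, pts1, pts2):
    """3D marker positions from matched 2D points and the two 3x4 projection matrices."""
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous coordinates, shape (4, N)
    return (X_h[:3] / X_h[3]).T                      # shape (N, 3), in calibration units
```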
Results

The participant opened his mouth 17 times. Twelve mouth movements were large/long, and the rest were smaller but still visible. The lighting and camera angle of a single video stream were adequate for analysis in this case. As expected, there was strong activation of sensorimotor-related areas (Fig. 3), including medial frontal cortex (supplementary motor area) and lateral areas of the pre- and post-central gyri (primary motor and sensory areas). Some cerebellar, insula, and middle frontal activation was also present, which may be linked to motor planning. Head motion was below 1 mm and artifacts were not obvious, although there was a correlation between the mouth movement waveform and the head motion estimates.

Discussion

A video capture system has been developed to record movements during fMRI experiments and to improve fMRI analysis, with assessment of stroke patients in mind. The example given (mouth movement) highlights how time-synchronized video capture has significant potential to characterize brain activity associated with movements that are not specifically prescribed, as may occur in stroke patients. The system's capability to handle simultaneous video streams from multiple viewing angles and to perform motion tracking will aid more detailed analysis of complex movements. For example, an experiment with passive markers to track bulk head motion as well as jaw movements is planned. Other options include assessment of manual reaching, grasping, and tool manipulation. In addition, real-time scan plane correction for head motion is planned using this relatively inexpensive system.

References

1. Seto et al. NeuroImage 14, 284-297 (2001).
2. Cox. Comput. Biomed. Res. 29, 162-173 (1996).
3. Genovese et al. NeuroImage 15, 870-878 (2002).