Canonical correlation analysis (CCA) has proven effective for detecting steady-state visual evoked potential (SSVEP) signals. However, the standard CCA method simply selects, as the target, the reference frequency with the maximum correlation value, which can make its output less robust. In this study, we propose a one-class support vector machine based filter for the sequences of correlation values produced during SSVEP detection. The results demonstrate that classification accuracy improved over different time windows for all subjects, with the improvement reaching approximately 10% for some subjects. Moreover, the ratio of instructions filtered incorrectly was relatively low (less than 5%) when the SSVEP signals were generated effectively.
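The filtering idea can be sketched with scikit-learn's `OneClassSVM`. This is a minimal illustration, not the paper's implementation: the data are synthetic, and each row stands in for the vector of CCA correlation values (one per candidate stimulus frequency) obtained from a trial.

```python
# Minimal sketch (not the paper's implementation) of filtering CCA correlation
# vectors with a one-class SVM. Each row is a hypothetical vector of CCA
# correlation values, one per candidate stimulus frequency.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic "valid" trials: one frequency clearly dominates.
valid = rng.uniform(0.1, 0.3, size=(200, 4))
valid[np.arange(200), rng.integers(0, 4, size=200)] += 0.5

# Synthetic "invalid" trials: no dominant frequency (subject not attending).
invalid = rng.uniform(0.1, 0.3, size=(50, 4))

# Train only on valid trials; nu bounds the fraction of rejected training data.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(valid)

# predict() returns +1 (accept: take argmax as the target) or -1 (reject).
accept_valid = (clf.predict(valid) == 1).mean()
accept_invalid = (clf.predict(invalid) == 1).mean()
print(accept_valid, accept_invalid)
```

Accepted trials would then be classified by taking the argmax over the correlation vector as usual; rejected trials issue no command.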
Traditional electroencephalography (EEG)-based steady-state visual evoked potentials (SSVEPs) have been studied for decades, but classification accuracy remains limited. One reason is the occurrence of false positives when subjects are not focusing on a stimulus or target. In this study, we used fNIRS signals to identify when a subject was actually focusing on the stimulus and to shift the start of the epoch sent to the EEG classifier accordingly. Results from an offline SSVEP experiment demonstrated that this method is feasible.
Canonical correlation analysis (CCA), double partial least squares (DPLS), and the least absolute shrinkage and selection operator (LASSO) have all proven effective for detecting the steady-state visual evoked potential (SSVEP) in SSVEP-based brain-computer interface systems. However, the accuracy of SSVEP classification can be affected by phase shifts in the electroencephalography data, so we explored the possibility of improving SSVEP detection with these methods at different phase shifts. After calculating the accuracy at different phases, we found that phase shifts did affect SSVEP classification: accuracy improved by at most about 1.1% with the CCA method. Comparing the three methods also revealed differences between CCA, DPLS, and LASSO across phase shifts. The results indicated that, on the one hand, the accuracy of SSVEP detection improved as the phase changed; on the other hand, although all three methods achieved high classification accuracy, DPLS and LASSO fluctuated more than CCA as the phase of each participant's electroencephalography data, or of their average, changed.
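The CCA detection step shared by these comparisons can be sketched as follows. This is a generic illustration, not the paper's code: the frequencies, sampling rate, and simulated three-channel EEG are arbitrary choices.

```python
# Generic CCA-based SSVEP detection sketch; frequencies, sampling rate, and
# the simulated three-channel EEG are illustrative, not the paper's settings.
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(freq, fs, n, harmonics=2):
    """Sin/cos reference matrix at freq and its harmonics."""
    t = np.arange(n) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2*np.pi*h*freq*t), np.cos(2*np.pi*h*freq*t)]
    return np.column_stack(cols)

fs, n = 250, 500
freqs = [8.0, 10.0, 12.0, 15.0]
t = np.arange(n) / fs
rng = np.random.default_rng(1)

# Simulated SSVEP at 10 Hz (arbitrary phase) on three noisy channels.
sig = np.sin(2*np.pi*10*t + 0.3)
X = np.column_stack([sig + 0.5*rng.standard_normal(n) for _ in range(3)])

rho = [max_canonical_corr(X, reference(f, fs, n)) for f in freqs]
detected = freqs[int(np.argmax(rho))]
print(detected)
```

Because the reference contains both sine and cosine terms, detection is insensitive to the reference phase; phase effects in the paper concern the EEG epoch itself.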
We developed a highly accurate, few-channel, bimodal electroencephalography (EEG) and near-infrared spectroscopy (NIRS) brain-computer interface (BCI) system by developing new methods for signal processing and feature extraction. For data processing, we performed source analysis of the EEG and NIRS signals to select the best channels from which to build a few-channel system. For EEG feature extraction, we used phase space reconstruction to convert the few-channel EEG signals into multichannel signals, facilitating feature extraction by common spatial patterns. The Hurst exponents of the 10 selected channels constituted the NIRS feature. For pattern classification, we fused the EEG and NIRS features and applied a support vector machine. The average accuracy of bimodal EEG-NIRS was significantly higher than that of either EEG or NIRS alone.
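The phase space reconstruction step can be illustrated with a standard time-delay embedding, which turns one channel into a pseudo-multichannel matrix of delayed copies. The embedding dimension and delay below are illustrative choices, not the paper's parameters.

```python
# Sketch of time-delay phase space reconstruction: one channel becomes a
# pseudo-multichannel matrix of delayed copies. dim and tau are illustrative.
import numpy as np

def delay_embed(x, dim=4, tau=5):
    """Return an (n - (dim-1)*tau, dim) matrix of delayed copies of x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i*tau : i*tau + n] for i in range(dim)])

x = np.sin(np.linspace(0, 8*np.pi, 200))  # stand-in for one EEG channel
X = delay_embed(x, dim=4, tau=5)
print(X.shape)  # (185, 4)
```

The resulting columns act as surrogate channels, giving spatial-filtering methods such as common spatial patterns something to operate on.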
Classic coherence analysis is commonly used as an effective method for analyzing stationary signals. To study the instantaneous coherence between non-stationary signals, we extended the concept of coherence to time-varying coherence using time-frequency analysis methods. Wavelet-based coherence is one of the most widely used time-varying coherence methods, but few researchers have applied the Hilbert-Huang transform (HHT), which also has excellent time-frequency characteristics, to coherence analysis. Therefore, this paper proposes the concept of HHT coherence, derives its method by analogy with wavelet coherence, and verifies its feasibility. We then compared wavelet coherence and HHT coherence in three respects: time-frequency resolution, the effects of noise, and adaptivity. Results on different simulated signals demonstrated that, under ideal conditions, HHT coherence had higher time resolution, higher frequency resolution, and greater adaptivity than wavelet coherence. However, owing to algorithmic limitations, the time-frequency resolution of HHT coherence was degraded by mode mixing, boundary distortion, and noise. By contrast, wavelet coherence is more stable.
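For reference, the classic (stationary) coherence that both time-varying variants generalize can be computed with `scipy.signal.coherence`; the signals and parameters here are illustrative.

```python
# Classic (stationary) magnitude-squared coherence via Welch averaging;
# signals and parameters are illustrative.
import numpy as np
from scipy.signal import coherence

fs = 200.0
t = np.arange(0, 10, 1/fs)
rng = np.random.default_rng(2)

# Two noisy signals sharing a 12 Hz component.
common = np.sin(2*np.pi*12*t)
x = common + 0.5*rng.standard_normal(t.size)
y = common + 0.5*rng.standard_normal(t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=256)
peak = f[np.argmax(Cxy)]
print(peak)  # a peak near 12 Hz
```

Because the spectra are averaged over the whole recording, this estimate has no time axis; time-varying coherence replaces the Welch averaging with a wavelet or HHT time-frequency representation.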
When people observe the actions of others, they try to understand the intentions underlying those actions. The neural mechanism of this understanding is attributed to the mirror neuron system (MNS). Different actions may correspond to different intentions, and the activation of the MNS in the human brain may differ slightly as well. The present study distinguishes these differences using functional brain imaging signals analyzed with machine learning. Brain signals were recorded while participants observed two types of actions: (1) grasping a cup for drinking, and (2) contact with no meaningful intention. A synchronous EEG-NIRS measurement method was adopted to increase the information contained in the brain signals. To obtain better classification accuracy, we used functional brain networks, which can characterize the relationships between brain regions. First, phase synchronization and Pearson correlation were used to compute correlations for the EEG channels and NIRS channels, respectively. Next, the correlation matrices were converted into binary matrices, and the local properties of the networks were obtained. Finally, the feature vectors for the classifier were selected by analyzing their significance. In addition, combining the EEG and NIRS data at the feature level yielded better classification results.
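The connectivity pipeline can be sketched with the phase-locking value (a common phase-synchronization index, computed via the Hilbert transform) and a threshold that binarizes the connectivity matrix. The signals and the threshold are illustrative assumptions, not the study's settings.

```python
# Sketch of phase-synchronization connectivity plus binarization; the signals
# and the 0.5 threshold are illustrative, not the study's settings.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two signals."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

rng = np.random.default_rng(3)
t = np.linspace(0, 2, 400)
a = np.sin(2*np.pi*10*t) + 0.3*rng.standard_normal(400)
b = np.sin(2*np.pi*10*t + 0.8) + 0.3*rng.standard_normal(400)  # locked to a
c = rng.standard_normal(400)                                    # unrelated

signals = [a, b, c]
n = len(signals)
P = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        P[i, j] = P[j, i] = plv(signals[i], signals[j])

A = (P > 0.5).astype(int)  # binary adjacency matrix
np.fill_diagonal(A, 0)
print(P.round(2))
```

For NIRS channels the study used Pearson correlation instead of PLV; the binarization and subsequent graph-property extraction proceed the same way.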
Eye tracking technology has become increasingly important in scientific research and practical applications. In the field of eye tracking research, analysis of eye movement data is crucial, particularly for classifying raw eye movement data into eye movement events. Current classification methods exhibit considerable variation in adaptability across participants, and the issues of class imbalance and data scarcity in eye movement classification need to be addressed. In the current study, we introduce a novel eye movement classification method based on cascade forest (EMCCF), which comprises two modules: (1) a feature extraction module that employs a multi-scale time window method to extract features from raw eye movement data; and (2) a classification module that employs a layered ensemble architecture, integrating the cascade forest structure with ensemble learning principles, specifically for eye movement classification. Consequently, EMCCF not only enhances the accuracy and efficiency of eye movement classification but also represents an advancement in applying ensemble learning techniques within this domain. Furthermore, experimental results indicated that EMCCF outperformed existing deep learning-based classification models on several metrics and demonstrated robust performance across different datasets and participants.
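The cascade-forest idea can be sketched in a few lines with scikit-learn: each layer's forests emit class probabilities that are appended to the input features of the next layer. This is a generic illustration of the structure, not the EMCCF implementation, and it uses in-sample probabilities where a real cascade forest would use cross-validated ones.

```python
# Generic cascade-forest sketch (not the EMCCF implementation): each layer's
# forests output class probabilities that augment the next layer's features.
# A real cascade forest uses cross-validated probabilities; in-sample ones
# keep this sketch short.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

Ftr, Fte = Xtr, Xte
for _ in range(2):  # two cascade layers for illustration
    forests = [RandomForestClassifier(n_estimators=50, random_state=s).fit(Ftr, ytr)
               for s in (0, 1)]
    # Append each forest's class-probability vectors to the raw features.
    Ftr = np.hstack([Xtr] + [f.predict_proba(Ftr) for f in forests])
    Fte = np.hstack([Xte] + [f.predict_proba(Fte) for f in forests])

final = RandomForestClassifier(n_estimators=100, random_state=2).fit(Ftr, ytr)
acc = final.score(Fte, yte)
print(round(acc, 3))
```

Growing the cascade layer by layer, and stopping when validation accuracy plateaus, is what lets this family of models adapt its depth to the data.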
Brain–computer interface (BCI) systems based on the steady-state visual evoked potential (SSVEP) have been widely studied because of their high classification accuracy and high information transfer rates. However, the SSVEP is typically measured over the occipital scalp region (channels O1, O2, and Oz), which makes this type of BCI unsuitable for some patients. We investigated the classification accuracy of SSVEP over the whole scalp to evaluate the feasibility of building SSVEP-based BCIs that use additional channels. Classification accuracy across the scalp increased as electrode positions approached the occipital region, and it also increased with the number of electroencephalogram data channels.
Understanding the actions of other people is a key component of social interaction. This paper used an electroencephalography and functional near-infrared spectroscopy (EEG-fNIRS) bimodal system to investigate the temporal-spatial features of action intention understanding. We measured brain activation while participants observed three actions: (1) grasping a cup for drinking; (2) grasping a cup for moving; and (3) no meaningful intention. Analysis of the EEG maximum standardized current density revealed that brain activation transitioned from the left to the right hemisphere. EEG-fNIRS source analysis revealed that both the mirror neuron system and the theory of mind network are involved in action intention understanding, and the extent to which the two systems are engaged appears to be determined by the clarity of the observed intention. These findings indicate that action intention understanding is a complex and dynamic process.
Although the canonical correlation analysis (CCA) algorithm has been applied successfully to steady-state visual evoked potential (SSVEP) detection, artifacts and unrelated brain activity can degrade the performance of SSVEP-based brain-computer interface (BCI) systems. Extracting characteristic frequency sub-bands is an effective way to enhance the signal-to-noise ratio of SSVEP signals, and the sinusoid-assisted MEMD (SA-MEMD) algorithm is a powerful method for such spectral decomposition. In this study, we propose an SA-MEMD based CCA method for SSVEP detection. The results suggest that the SA-MEMD based CCA algorithm is useful for detecting typical SSVEP signals: classification accuracy reached 88.3% in a 4 s time window, a 2.8% improvement over the standard CCA algorithm.
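SA-MEMD itself is not available in standard signal-processing libraries, so as a stand-in the sketch below illustrates the general sub-band idea with a Butterworth band-pass pre-filter: out-of-band components (such as 50 Hz line noise) are attenuated before the correlation analysis. The cutoffs and signals are illustrative assumptions.

```python
# Stand-in for sub-band extraction (SA-MEMD is not in standard libraries):
# a Butterworth band-pass keeps an SSVEP-like 12 Hz component while
# attenuating 50 Hz line noise. Cutoffs and signals are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0
t = np.arange(1000) / fs
rng = np.random.default_rng(4)

# 12 Hz SSVEP-like component + 50 Hz line noise + broadband noise.
x = np.sin(2*np.pi*12*t) + np.sin(2*np.pi*50*t) + rng.standard_normal(1000)

b, a = butter(4, [6/(fs/2), 40/(fs/2)], btype="band")
y = filtfilt(b, a, x)  # zero-phase filtering

spec_x = np.abs(np.fft.rfft(x))
spec_y = np.abs(np.fft.rfft(y))
bin12, bin50 = 48, 200  # FFT bins for 12 Hz and 50 Hz (df = 0.25 Hz)
print(spec_y[bin50] / spec_x[bin50])  # 50 Hz strongly attenuated
```

CCA would then be run on the filtered (or decomposed) signal rather than on the raw recording, which is the role SA-MEMD plays in the proposed method.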