Our previous study demonstrated the feasibility of employing non-hair-bearing electrodes to build a Steady-state Visual Evoked Potential (SSVEP)-based Brain-Computer Interface (BCI) system, reducing preparation time and offering an easy-to-use apparatus. The signal quality of the SSVEPs and the resultant performance of the non-hair BCI, however, did not approach those reported in state-of-the-art BCI studies based on electroencephalogram (EEG) signals measured from the occipital region. Recently, advanced decoding algorithms such as task-related component analysis have made a breakthrough in enhancing the signal quality of occipital SSVEPs and the performance of SSVEP-based BCIs in well-controlled laboratory environments. However, it remains unclear whether these advanced decoding algorithms can extract high-fidelity SSVEPs from non-hair EEG and enhance the practicality of non-hair BCIs in real-world environments. This study aims to quantitatively evaluate whether, and if so, to what extent, non-hair BCIs can leverage state-of-the-art decoding algorithms. Eleven healthy individuals participated in a 5-target SSVEP BCI experiment. A high-density EEG cap recorded SSVEPs from both hair-covered and non-hair-bearing regions. By evaluating and demonstrating the accessibility of non-hair-bearing behind-ear signals, our assessment characterized the constraints on data length, trial numbers, and channels, and their relationships with the decoding algorithms, providing practical guidelines for optimizing SSVEP-based BCI systems in real-life applications.
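As a rough illustration of the kind of sweep described above, the sketch below scores a simple template-matching SSVEP decoder over different data lengths and channel subsets. The channel indices, array shapes, and synthetic data are illustrative assumptions, not the recorded dataset or the authors' pipeline.

```python
# Minimal sketch (not the authors' code) of sweeping data length and channel
# subsets with a simple template-matching SSVEP decoder on synthetic data.
import numpy as np

fs = 250                                    # assumed sampling rate (Hz)
subsets = {"occipital": [56, 57, 58, 59],   # hypothetical channel indices
           "behind_ear": [62, 63]}

def accuracy(train, test):
    """train, test: (n_classes, n_trials, n_channels, n_samples)."""
    templates = train.mean(axis=1)          # class-average templates
    correct = 0
    for c in range(test.shape[0]):
        for trial in test[c]:
            # correlate the flattened trial with each class template
            scores = [np.corrcoef(trial.ravel(), t.ravel())[0, 1]
                      for t in templates]
            correct += int(np.argmax(scores) == c)
    return correct / (test.shape[0] * test.shape[1])

rng = np.random.default_rng(0)
train_data = rng.standard_normal((5, 6, 64, 2 * fs))   # synthetic stand-in data
test_data = rng.standard_normal((5, 6, 64, 2 * fs))

for name, chans in subsets.items():
    for sec in (0.5, 1.0, 2.0):             # candidate data lengths
        n = int(sec * fs)
        acc = accuracy(train_data[:, :, chans, :n], test_data[:, :, chans, :n])
        print(f"{name:>10s} {sec:.1f} s: {acc:.2f}")
```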
High-speed steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have been developed to enable communication between the human brain and the external environment. One of the major issues in real-world applications of SSVEP BCIs is the laborious and time-consuming calibration process, which has triggered the development of transfer-learning approaches that leverage existing data from other users. A comprehensive investigation of the inter- and intra-subject variability in SSVEP data is thus needed to provide insight for designing future transfer-learning frameworks for SSVEP BCIs. We hereby present the first study that systematically and quantitatively assesses the variability in SSVEP data, in which the sources of inter- and intra-subject variability in the low- and high-frequency ranges were identified using Fisher's discriminant ratios (FDRs). The insights gained from this work could drive future development of transfer-learning approaches that minimize the calibration effort in high-speed SSVEP BCIs.
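The FDR used above has a standard two-sample form; the sketch below applies it to a scalar SSVEP feature to contrast between-subject and within-subject spread. The feature values are synthetic stand-ins, not the study's data.

```python
# Minimal sketch of a Fisher's discriminant ratio (FDR) comparing two sets of a
# scalar SSVEP feature (e.g., amplitude at the stimulation frequency).
import numpy as np

def fisher_discriminant_ratio(x, y):
    """FDR between two 1-D samples: (mu_x - mu_y)^2 / (var_x + var_y)."""
    return (x.mean() - y.mean()) ** 2 / (x.var(ddof=1) + y.var(ddof=1))

rng = np.random.default_rng(1)
subj_a = rng.normal(1.0, 0.3, size=40)   # SSVEP feature, subject A (synthetic)
subj_b = rng.normal(1.6, 0.3, size=40)   # SSVEP feature, subject B (synthetic)
print("inter-subject FDR:", fisher_discriminant_ratio(subj_a, subj_b))

sess1, sess2 = subj_a[:20], subj_a[20:]  # two sessions of the same subject
print("intra-subject FDR:", fisher_discriminant_ratio(sess1, sess2))
```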
Brain-computer interfaces (BCIs) using electroencephalography (EEG) have drawn attention for providing alternative control pathways for users with motor disabilities, or even the general public, in real-world environments, owing to their robustness, relatively low cost, and high portability. However, EEG still suffers from large variability between subjects and between sessions of an individual subject. To obtain optimal performance, a BCI usually requires the user to go through a calibration process to fine-tune the model. This calibration process is usually long and can hinder the practicality of a BCI. In this study, we propose a closed-loop framework that monitors the user's EEG responses to the actions of a BCI. If an error-related potential (ErrP) is detected in the response, it indicates that the BCI has made a wrong prediction. Using the information from this ErrP detector, we can add online test trials to the training pool and continue fine-tuning the model as the BCI is used. Results suggest that the proposed framework achieves better performance with only a few additional trials compared with a model pre-trained on existing data. Moreover, the performance of the proposed model gradually converges to that of a fully calibrated model, suggesting that the conventional calibration process could be replaced by online training.
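The sketch below is one schematic reading of this closed-loop idea, under assumptions: online trials whose feedback epoch shows no detected ErrP are added to the training pool with the predicted label, and the decoder is periodically refit. The function names, the feature-vector representation of trials, and the refit schedule are all hypothetical, not the authors' implementation.

```python
# Schematic sketch of the closed-loop online-training idea.
# `decoder` and `errp_detector` stand for any scikit-learn-style classifiers
# with fit/predict; trials and feedback epochs are assumed to be feature vectors.
import numpy as np

def run_closed_loop(decoder, errp_detector, online_trials, feedback_epochs,
                    pool_X, pool_y, refit_every=10):
    for i, (trial, feedback) in enumerate(zip(online_trials, feedback_epochs)):
        pred = decoder.predict(trial[None])[0]           # BCI output shown to user
        errp = errp_detector.predict(feedback[None])[0]  # 1 = error perceived
        if not errp:                                     # keep trials judged correct
            pool_X = np.vstack([pool_X, trial[None]])    # (trials with a detected
            pool_y = np.append(pool_y, pred)             #  ErrP are discarded here)
        if (i + 1) % refit_every == 0:                   # incremental re-calibration
            decoder.fit(pool_X, pool_y)
    return decoder
```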
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have shown robustness in facilitating high-efficiency communication. State-of-the-art training-based SSVEP decoding methods, such as extended canonical correlation analysis (CCA) and task-related component analysis (TRCA), substantially improve the efficiency of SSVEP-based BCIs through a calibration process. However, owing to notable human variability across individuals and within individuals over time, collecting calibration (training) data is non-negligible and often laborious and time-consuming, which deteriorates the practicality of SSVEP BCIs in real-world contexts. This study aims to develop a cross-subject transfer approach that reduces the need to collect training data from a test user, using a newly proposed least-squares transformation (LST) method. The results show that the LST method reduces the number of training templates required for a 40-class SSVEP BCI. The LST method may enable numerous real-world applications using near-zero-training, plug-and-play high-speed SSVEP BCIs.
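The least-squares step can be read as fitting a channel-space matrix P that maps an existing user's trial toward a new user's calibration trial, P = argmin_P ||Y - PX||_F. The sketch below shows that fit under those assumptions; the shapes and synthetic signals are illustrative, not the paper's data or exact formulation.

```python
# Minimal sketch of a least-squares transformation (LST) style mapping between
# an existing user's trial and a new user's trial, on synthetic data.
import numpy as np

def fit_lst(source_trial, target_trial):
    """source_trial, target_trial: (n_channels, n_samples). Returns P such that
    target ~= P @ source, solved by ordinary least squares."""
    # lstsq solves source.T @ P.T ~= target.T for P.T
    P_T, *_ = np.linalg.lstsq(source_trial.T, target_trial.T, rcond=None)
    return P_T.T

rng = np.random.default_rng(2)
src = rng.standard_normal((8, 500))                           # existing user (synthetic)
tgt = 0.8 * src[::-1] + 0.1 * rng.standard_normal((8, 500))   # new user (synthetic)
P = fit_lst(src, tgt)
transferred = P @ src                  # source data mapped into the target space
print("relative residual:", np.linalg.norm(tgt - transferred) / np.linalg.norm(tgt))
```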
Task-related component analysis (TRCA) has been the most effective spatial filtering method for implementing high-speed brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs). TRCA is a data-driven method in which spatial filters are optimized to maximize the inter-trial covariance of time-locked electroencephalographic (EEG) data, formulated as a generalized eigenvalue problem. Although TRCA yields multiple eigenvectors, traditional TRCA-based SSVEP detection considers only the one corresponding to the largest eigenvalue, to reduce computational cost. This study proposes using multiple eigenvectors to classify SSVEPs. Specifically, it integrates a task consistency test, which statistically identifies whether the component reconstructed by each eigenvector is task-related, into the TRCA-based SSVEP detection method. The proposed method was evaluated on a 12-class SSVEP dataset recorded from 10 subjects. The results indicated that the task consistency test usually identified more than one eigenvector (i.e., spatial filter). Furthermore, the use of additional spatial filters significantly improved the classification accuracy of TRCA-based SSVEP detection.
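A minimal sketch of the underlying TRCA computation for one stimulus class is shown below: the filters are eigenvectors of the generalized problem S w = λ Q w, where S sums inter-trial cross-covariances and Q is the covariance of the concatenated trials, and keeping more than the first column corresponds to using multiple spatial filters. The data are synthetic and the code is not the authors' implementation.

```python
# Minimal sketch of TRCA spatial filters for one stimulus class.
import numpy as np
from scipy.linalg import eigh

def trca_filters(trials):
    """trials: (n_trials, n_channels, n_samples); returns filters as columns of W,
    sorted by eigenvalue, so W[:, 0] is the usual single filter and W[:, 1:] the extras."""
    n_trials, n_ch, _ = trials.shape
    centered = trials - trials.mean(axis=2, keepdims=True)
    X = centered.transpose(1, 0, 2).reshape(n_ch, -1)     # concatenate trials
    Q = X @ X.T                                           # covariance of concatenated data
    S = np.zeros((n_ch, n_ch))
    for i in range(n_trials):                             # sum of inter-trial covariances
        for j in range(n_trials):
            if i != j:
                S += centered[i] @ centered[j].T
    eigvals, W = eigh(S, Q)                               # generalized eigenvalue problem
    order = np.argsort(eigvals)[::-1]
    return W[:, order], eigvals[order]

rng = np.random.default_rng(3)
W, lams = trca_filters(rng.standard_normal((6, 9, 250)))  # synthetic trials
print(W.shape, lams[:3])
```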
This paper proposes a novel device-to-device transfer-learning algorithm for reducing the calibration cost of a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) speller by leveraging electroencephalographic (EEG) data previously acquired with different EEG systems. The transfer is performed by projecting the scalp-channel EEG signals onto a latent domain shared across devices. Three spatial filtering techniques, namely channel averaging, canonical correlation analysis (CCA), and task-related component analysis (TRCA), were employed to extract the shared responses from different devices. The transferred data were integrated into a template-matching-based algorithm to detect SSVEPs. To evaluate its transferability, this paper conducted two sessions of simulated online BCI experiments with ten subjects using 40 visual stimuli modulated by a joint frequency-phase coding method. The two sessions used different EEG devices: the Quick-30 system (Cognionics, Inc.) with dry electrodes and the ActiveTwo system (BioSemi, Inc.) with wet electrodes. The proposed method with CCA- and TRCA-based spatial filters achieved significantly higher classification accuracy than the calibration-free standard CCA-based method. This paper validated the feasibility and effectiveness of the proposed method for implementing calibration-free SSVEP-based BCIs. The proposed method has great potential to enhance the practicability and usability of real-world SSVEP-based BCI applications by leveraging user-specific data recorded in previous sessions, even with different EEG systems and montages.
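As a schematic illustration only, the sketch below projects each device's trials onto a shared one-dimensional response with the simplest of the three filters mentioned (channel averaging), builds per-class templates from the old-device session, and classifies a new-device trial by correlation. All shapes and data are synthetic assumptions, not the experimental recordings.

```python
# Schematic sketch of cross-device template matching via a shared latent response.
import numpy as np

def project(trials):
    """(n_classes, n_trials, n_channels, n_samples) -> (n_classes, n_trials, n_samples)."""
    return trials.mean(axis=2)                       # channel-averaging spatial filter

def classify(old_device, new_trial_latent):
    templates = project(old_device).mean(axis=1)     # per-class latent templates
    scores = [np.corrcoef(new_trial_latent, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(4)
old = rng.standard_normal((5, 4, 30, 250))           # previous-device session (synthetic)
new = rng.standard_normal((5, 2, 8, 250))            # new-device session (synthetic)
pred = classify(old, project(new)[2, 0])             # classify one new-device trial
print("predicted class:", pred)
```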