Our previous study demonstrated the feasibility of employing non-hair-bearing electrodes to build a Steady-State Visual Evoked Potential (SSVEP)-based Brain-Computer Interface (BCI) system, lowering technical barriers by shortening preparation time and offering an easy-to-use apparatus. The signal quality of the SSVEPs and the resultant performance of the non-hair BCI, however, fell short of those reported in state-of-the-art BCI studies based on electroencephalogram (EEG) signals measured from the occipital regions. Recently, advanced decoding algorithms such as task-related component analysis have made a breakthrough in enhancing the signal quality of occipital SSVEPs and the performance of SSVEP-based BCIs in well-controlled laboratory environments. However, it remains unclear whether these advanced decoding algorithms can extract high-fidelity SSVEPs from non-hair EEG and enhance the practicality of non-hair BCIs in real-world environments. This study aims to quantitatively evaluate whether, and if so, to what extent non-hair BCIs can leverage state-of-the-art decoding algorithms. Eleven healthy individuals participated in a 5-target SSVEP BCI experiment. A high-density EEG cap recorded SSVEPs from both hair-covered and non-hair-bearing regions. By evaluating and demonstrating the accessibility of non-hair-bearing behind-ear signals, our assessment characterized constraints on data length, trial numbers, and channels, and their relationships with the decoding algorithms, providing practical guidelines for optimizing SSVEP-based BCI systems in real-life applications.
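The abstract above contrasts standard and advanced SSVEP decoding algorithms. As a point of reference, the sketch below shows a common baseline approach, canonical correlation analysis (CCA): the stimulus frequency is identified by correlating multi-channel EEG against sine/cosine reference signals at each candidate frequency. This is a minimal NumPy sketch, not the task-related component analysis method the study evaluates; the sampling rate, frequency set, and harmonic count are illustrative assumptions.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between column spaces of X and Y.

    X: (samples, channels) EEG segment; Y: (samples, refs) reference set.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases of each column space; canonical correlations are
    # the singular values of the product of the bases.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_classify(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulus frequency whose reference set best matches the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        scores.append(cca_max_corr(eeg, np.column_stack(refs)))
    return freqs[int(np.argmax(scores))]
```

On synthetic data, a noisy 10 Hz oscillation recorded on a few channels is correctly assigned to the 10 Hz class; in practice, longer data windows and more trials raise the correlation margin, which is one of the constraints the study quantifies.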
This paper describes a driving control system for a powered wheelchair using voluntary eye blinks. Recently, new human-computer interfaces (HCIs) that take the place of a joystick have been developed for people with disabilities of the upper body. In this paper, voluntary eye blinks are used as the HCI. However, the problem with this HCI is that the number of input directions and operations is smaller than that of a joystick, which causes inefficient movement. Therefore, assistive systems are needed for efficient and safe wheelchair movement. The proposed system is based on environment recognition and fuzzy logic. It can detect obstacles and passages, and speed and direction are calculated automatically for obstacle avoidance and right/left turns. The system's effectiveness is demonstrated through experiments with a real HCI in a real environment.
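The abstract states that speed is calculated automatically from environment recognition using fuzzy logic. The sketch below illustrates the general fuzzy-inference pattern for one such rule set: fuzzify the distance to the nearest frontal obstacle with overlapping membership functions, apply rules mapping each linguistic label to a speed, and defuzzify by weighted average. The membership ranges and output speeds are invented for illustration; the paper's actual rule base, inputs, and parameters are not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shoulder_down(x, a, b):
    """1 below a, falling linearly to 0 at b (models 'near')."""
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def shoulder_up(x, a, b):
    """0 below a, rising linearly to 1 at b (models 'far')."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def fuzzy_speed(front_dist_m):
    """Map frontal obstacle distance (m) to a speed command (m/s).

    Hypothetical rules: near -> stop (0.0), medium -> slow (0.3),
    far -> cruise (0.8). Defuzzified by weighted average of rule outputs.
    """
    near = shoulder_down(front_dist_m, 0.5, 1.0)
    mid = tri(front_dist_m, 0.5, 1.25, 2.0)
    far = shoulder_up(front_dist_m, 1.5, 2.0)
    w = near + mid + far
    return (near * 0.0 + mid * 0.3 + far * 0.8) / w if w > 0 else 0.0
```

Because the membership functions overlap, the commanded speed changes smoothly as an obstacle approaches instead of switching abruptly, which is the usual motivation for fuzzy control in this setting; a steering rule base for right/left turns would follow the same fuzzify-infer-defuzzify pattern on lateral passage widths.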