Learning Attentional Temporal Cues of Brainwaves with Spatial Embedding for Motion Intent Detection

2019 
As brain dynamics fluctuate considerably across subjects, it is challenging to design effective handcrafted features based on prior knowledge. To address this gap, this paper proposes a Graph-based Convolutional Recurrent Attention Model (G-CRAM) that learns EEG features across subjects for movement intention recognition. A graph structure is first developed to embed the positioning information of the EEG electrodes, and a convolutional recurrent attention model then learns EEG features along both the spatial and temporal dimensions, adaptively emphasizing the most distinguishable temporal periods. The proposed approach is validated on two public movement intention EEG datasets. The results show that G-CRAM achieves superior performance to state-of-the-art methods in terms of recognition accuracy and ROC-AUC. Furthermore, model interpretation studies reveal the learning process of the different neural network components and demonstrate that the proposed model extracts detailed features efficiently.
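The abstract outlines a pipeline of graph-based spatial embedding followed by convolutional, recurrent, and attention stages. The PyTorch sketch below illustrates that general flow under stated assumptions: the Gaussian-kernel adjacency built from electrode coordinates, the layer widths, the bidirectional LSTM, and all variable names (e.g. `GCRAMSketch`, `normalized_adjacency`) are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of the G-CRAM idea: graph spatial embedding -> CNN -> LSTM ->
# temporal attention -> classifier.  Hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(positions: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Build a distance-based graph over EEG electrodes (assumed construction)
    and return the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    dist = torch.cdist(positions, positions)            # pairwise electrode distances
    adj = torch.exp(-dist ** 2 / (2 * sigma ** 2))      # Gaussian kernel edge weights
    deg_inv_sqrt = adj.sum(dim=1).rsqrt().diag()        # D^{-1/2}
    return deg_inv_sqrt @ adj @ deg_inv_sqrt


class GCRAMSketch(nn.Module):
    """Illustrative convolutional recurrent attention model with a graph embedding."""

    def __init__(self, n_channels: int, n_times: int, n_classes: int,
                 adjacency: torch.Tensor, hidden: int = 64):
        super().__init__()
        self.register_buffer("adj", adjacency)                        # fixed spatial graph
        self.conv = nn.Conv2d(1, 16, kernel_size=(n_channels, 16))    # spatio-temporal conv
        self.pool = nn.MaxPool2d(kernel_size=(1, 4))
        self.lstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                          # scores each time step
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        x = torch.einsum("ij,bjt->bit", self.adj, x)     # graph-based spatial embedding
        h = self.pool(F.elu(self.conv(x.unsqueeze(1))))  # (batch, 16, 1, T')
        h = h.squeeze(2).transpose(1, 2)                 # (batch, T', 16), time-major
        out, _ = self.lstm(h)                            # recurrent temporal features
        scores = torch.softmax(self.attn(out), dim=1)    # attention weights over time
        context = (scores * out).sum(dim=1)              # attention-weighted summary
        return self.fc(context)


# Example: 64-channel EEG, 400 samples per trial, binary movement intention.
pos = torch.rand(64, 3)                                  # placeholder electrode coordinates
model = GCRAMSketch(64, 400, 2, normalized_adjacency(pos))
logits = model(torch.randn(8, 64, 400))                  # -> (8, 2)
```

The attention weights produced in `forward` are what would allow inspection of which temporal periods the model emphasizes, in the spirit of the interpretation studies mentioned in the abstract.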