Orthogonal Convolutional Neural Networks for Automatic Sleep Stage Classification based on Single-Channel EEG

2019 
Abstract

Background and objective: In recent years, several automatic sleep stage classification methods based on convolutional neural networks (CNNs), which learn hierarchical feature representations automatically from raw EEG data, have been proposed. However, state-of-the-art methods of this kind are quite complex, and a simple CNN architecture is important for portable sleep devices. In addition, employing CNNs to learn rich and diverse representations remains a challenge. We therefore propose a novel CNN model for sleep stage classification.

Methods: EEG signals are generally better described in the frequency domain; thus, we convert the EEG data to a time-frequency representation via the Hilbert–Huang transform. To learn rich and effective feature representations, we propose an orthogonal convolutional neural network (OCNN). First, we construct an orthogonal initialization of the weights. Second, to avoid destroying this orthogonality during training, we propose orthogonality regularizations that maintain the orthogonality of the weights. Simultaneously, a squeeze-and-excitation (SE) block is employed to perform feature recalibration across different channels.

Results: The proposed method achieved total classification accuracies of 88.4% and 87.6% on two public datasets, respectively. The classification performance of several CNN models was compared with that of the proposed method, and the experimental results demonstrate that the proposed method is effective for sleep stage classification.

Conclusions: Experimental results indicate that the proposed OCNN can learn rich and diverse feature representations from time-frequency images of EEG data, which is important for deep learning. In addition, the proposed orthogonality regularization is simple and can be easily adapted to other architectures.
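The three ingredients named in the Methods (orthogonal weight initialization, an orthogonality regularizer, and SE-style channel recalibration) can be sketched in a few lines. The sketch below is illustrative only: it uses a common QR-based orthogonal initialization and the soft penalty ||W^T W - I||_F^2 as one standard form of orthogonality regularization, plus a plain NumPy squeeze-and-excitation step; the paper's exact constructions and hyperparameters may differ, and all function names here are hypothetical.

```python
import numpy as np

def orthogonal_init(rows, cols, seed=None):
    """Orthogonal initialization via QR decomposition of a random
    Gaussian matrix (a common scheme; the paper's exact construction
    may differ). Returns a (rows, cols) matrix with orthonormal
    columns when rows >= cols."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((rows, cols))
    q, r = np.linalg.qr(a)
    # Fix column signs so the result is deterministic given the input.
    q = q * np.sign(np.diag(r))
    return q

def orthogonality_penalty(w):
    """Soft orthogonality regularizer ||W^T W - I||_F^2, one standard
    way to keep the (flattened) kernel matrix near-orthogonal during
    training; it is added to the task loss with a small weight."""
    gram = w.T @ w
    return float(np.sum((gram - np.eye(w.shape[1])) ** 2))

def se_recalibrate(x, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel recalibration (Hu et al.):
    squeeze by global average pooling, excite through a two-layer
    bottleneck, then rescale each channel. x has shape (C, H, W)."""
    z = x.mean(axis=(1, 2))                      # squeeze: (C,)
    s = np.maximum(w1 @ z + b1, 0.0)             # ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))     # sigmoid gates in (0, 1)
    return x * s[:, None, None]                  # channel-wise rescale
```

For an orthogonally initialized matrix the penalty is (numerically) zero, and it grows as training drifts the weights away from orthogonality, which is how the regularizer preserves the property the initialization established.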