Recognition of emotions using multimodal physiological signals and an ensemble deep learning model

2017 
Highlights
    • An ensemble of deep classifiers is built for recognizing emotions from multimodal physiological signals.
    • The higher-level abstractions of the physiological features of each modality are extracted separately by the deep hidden neurons of member stacked autoencoders.
    • The minimal structure of the deep model is identified according to a structural loss function that preserves local geometrical information.
    • The physiological feature abstractions are merged via an adjacent-graph-based fusion network with hierarchical layers.

Background and Objective: Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers suffer from two drawbacks: the expertise required to determine the model structure, and the oversimplified combination of multimodal feature abstractions.

Methods: In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified through a physiological-data-driven approach. Each SAE consists of three hidden layers that filter unwanted noise from the physiological features and derive stable feature representations. An additional deep model is used to build the SAE ensemble. The physiological features are split into several subsets according to the different feature extraction approaches, and each subset is encoded separately by an SAE. The derived SAE abstractions are combined by physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states (see the code sketch after the abstract).

Results: The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%.

Conclusions: The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers is demonstrated under different sizes of the available set of physiological instances.
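
The following is a minimal sketch of the pipeline described in the abstract, not the authors' implementation: one stacked autoencoder per feature subset, per-modality encodings, and a fusion classifier over the concatenated abstractions. The layer sizes, feature-subset dimensions, activation choices, and the plain dense fusion network (standing in for the adjacent-graph-based fusion and the structural loss for local geometry preservation) are illustrative assumptions.

```python
# Illustrative PyTorch sketch of an SAE-ensemble emotion classifier.
# All dimensions and the dense fusion stage are assumptions, not the
# published MESAE configuration.
import torch
import torch.nn as nn


class StackedAutoencoder(nn.Module):
    """Three-hidden-layer SAE encoding one physiological feature subset."""

    def __init__(self, in_dim, hidden_dims=(64, 32, 16)):
        super().__init__()
        dims = [in_dim, *hidden_dims]
        # Encoder: in_dim -> 64 -> 32 -> 16
        self.encoder = nn.Sequential(
            *[m for i in range(3)
              for m in (nn.Linear(dims[i], dims[i + 1]), nn.Sigmoid())]
        )
        # Mirrored decoder used during unsupervised pre-training
        self.decoder = nn.Sequential(
            *[m for i in range(3, 0, -1)
              for m in (nn.Linear(dims[i], dims[i - 1]), nn.Sigmoid())]
        )

    def forward(self, x):
        z = self.encoder(x)           # stable feature abstraction
        return z, self.decoder(z)     # reconstruction for SAE training


class FusionClassifier(nn.Module):
    """Three-layer fusion network over the concatenated per-subset encodings."""

    def __init__(self, fused_dim, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fused_dim, 64), nn.Sigmoid(),
            nn.Linear(64, 32), nn.Sigmoid(),
            nn.Linear(32, n_classes),   # binary arousal or valence logits
        )

    def forward(self, fused):
        return self.net(fused)


# Usage: one SAE per (hypothetical) feature subset, encodings fused for classification.
subset_dims = [48, 32, 40]                             # assumed subset sizes
saes = [StackedAutoencoder(d) for d in subset_dims]
fusion = FusionClassifier(fused_dim=len(subset_dims) * 16)

features = [torch.randn(8, d) for d in subset_dims]    # batch of 8 instances
encodings = [sae(f)[0] for sae, f in zip(saes, features)]
logits = fusion(torch.cat(encodings, dim=1))           # shape: (8, 2)
```

In the paper, each SAE is reported to be pre-trained on its own feature subset before the fused representation is used for binary arousal/valence recognition; the training loop and the data-driven structure identification are omitted here for brevity.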