Bi-modal Emotion Recognition Based on Vision-CSI

2021 
Emotional intelligence is key to making machines more human-like. Current research on emotion recognition usually focuses on a single modality (e.g., facial expressions or speech), while human emotional expression is inherently multi-modal; unimodal recognition alone may therefore fail to capture an individual's true underlying emotion. In this paper, we present a bi-modal emotion recognition system based on two tightly coupled, emotion-rich modalities: facial expressions and body gestures. Unlike current mainstream approaches that capture gestures with contact-based or invasive physiological sensors, which cause physiological discomfort to users and cannot achieve passive recognition, we explore a non-contact, non-invasive solution that acquires gestures and facial expressions using a commercial WiFi device and a camera, respectively. To evaluate our solution, we built a vision-CSI bi-modal emotion dataset (VCED) sourced from 10 volunteers and containing 1,750 samples. We propose a bi-modal emotion recognition system and evaluate it on the VCED dataset. The experimental results show the effectiveness of our scheme.
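The abstract does not specify how the two modalities are combined. A common baseline for such bi-modal systems is feature-level fusion: extract a fixed-length feature vector from each modality (CSI amplitudes for gestures, face frames for expressions), concatenate them, and classify. The sketch below illustrates that pattern only; the feature extractors, dimensions, and linear classifier are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature extractors (stand-ins for the
# learned encoders a real system would use).
def extract_csi_features(csi_window):
    # csi_window: (packets, subcarriers) amplitude matrix -> fixed vector
    # of per-subcarrier means and standard deviations.
    return np.concatenate([csi_window.mean(axis=0), csi_window.std(axis=0)])

def extract_face_features(face_frames):
    # face_frames: (frames, H, W) grayscale clip -> temporally averaged
    # flattened frame.
    return face_frames.reshape(face_frames.shape[0], -1).mean(axis=0)

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fuse_and_classify(csi_window, face_frames, W, b):
    # Feature-level fusion: concatenate both modality vectors,
    # then apply a linear classifier over emotion classes.
    fused = np.concatenate([extract_csi_features(csi_window),
                            extract_face_features(face_frames)])
    return softmax(W @ fused + b)

# Toy dimensions (illustrative only): 100 packets x 30 subcarriers of CSI,
# five 8x8 face crops, and 7 emotion classes.
csi = rng.standard_normal((100, 30))
face = rng.standard_normal((5, 8, 8))
n_features = 2 * 30 + 8 * 8
W = rng.standard_normal((7, n_features)) * 0.01
b = np.zeros(7)

probs = fuse_and_classify(csi, face, W, b)
print(probs.shape)  # (7,)
```

Feature-level fusion is only one design point; decision-level (late) fusion, where each modality is classified separately and the scores are merged, is an equally plausible alternative for loosely synchronized sensors like WiFi and a camera.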