Speech Emotion Recognition Considering Nonverbal Vocalization in Affective Conversations

2021 
In real-life communication, nonverbal vocalizations within an utterance, such as laughter, cries, or other emotional interjections, play an important role in emotion expression. Few previous emotion recognition systems have considered nonverbal vocalization, even though it occurs naturally in daily conversation. In this work, both verbal and nonverbal sounds within an utterance are considered for emotion recognition in real-life affective conversations. First, a support vector machine (SVM)-based verbal/nonverbal sound detector is developed. A prosodic phrase auto-tagger is then employed to extract the verbal and nonverbal sound segments. For each segment, emotion and sound feature embeddings are extracted using deep residual networks (ResNets). Finally, the sequence of extracted feature embeddings for the entire dialogue turn is fed to an attentive long short-term memory (LSTM)-based sequence-to-sequence model, which outputs an emotion-label sequence as the recognition result. The NNIME corpus (the NTHU-NTUA Chinese interactive multimodal emotion corpus), which contains both verbal and nonverbal sounds, was adopted for system training and testing; 4,766 single-speaker dialogue turns from its audio data were selected for evaluation. The experimental results showed that nonverbal vocalization is helpful for speech emotion recognition. The proposed method with decision-level fusion achieved an accuracy of 61.92%, outperforming both traditional methods and the feature-level and model-level fusion approaches.
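To make the final stage of the pipeline concrete, below is a minimal PyTorch sketch of an attentive LSTM sequence-to-sequence model that maps a dialogue turn's sequence of per-segment embeddings to per-segment emotion labels. This is an illustration under stated assumptions, not the authors' implementation: the embedding dimension, hidden size, additive attention form, and the `AttentiveEmotionSeq2Seq` name and four-class output are all hypothetical, and the per-segment embeddings are assumed to be the concatenated ResNet emotion/sound features described in the abstract.

```python
import torch
import torch.nn as nn

class AttentiveEmotionSeq2Seq(nn.Module):
    """Attentive LSTM over per-segment embeddings -> per-segment emotion labels.

    A minimal sketch of the paper's final stage; layer sizes, the additive
    attention form, and the 4-class output are assumptions, not the authors'
    exact configuration.
    """
    def __init__(self, embed_dim=512, hidden_dim=256, num_emotions=4):
        super().__init__()
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Additive attention: one scalar score per encoder state.
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(4 * hidden_dim, num_emotions)

    def forward(self, segment_embeds):
        # segment_embeds: (batch, num_segments, embed_dim); one row per
        # verbal/nonverbal segment embedding extracted by the ResNets.
        states, _ = self.encoder(segment_embeds)                # (B, T, 2H)
        scores = self.attn_score(states)                        # (B, T, 1)
        weights = torch.softmax(scores, dim=1)                  # attend over segments
        context = (weights * states).sum(dim=1, keepdim=True)   # (B, 1, 2H)
        context = context.expand(-1, states.size(1), -1)        # repeat per step
        # Classify each segment from its own state plus the turn-level context.
        logits = self.classifier(torch.cat([states, context], dim=-1))
        return logits                                           # (B, T, num_emotions)

# Usage with dummy data: 2 turns, 7 segments each, 512-dim embeddings.
model = AttentiveEmotionSeq2Seq()
embeds = torch.randn(2, 7, 512)
print(model(embeds).shape)  # torch.Size([2, 7, 4])
```

Emitting one label per segment (rather than one per turn) is what lets the model assign different emotions to verbal and nonverbal segments within the same dialogue turn.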