Learning affective representations based on magnitude and dynamic relative phase information for speech emotion recognition
2022
Complete acoustic features include both magnitude and phase information. However, traditional speech emotion recognition methods focus only on the magnitude information and ignore the phase, inevitably losing information. This study explores the accurate extraction and effective use of phase features for speech emotion recognition. First, how speech emotion is reflected in the phase spectrum is analyzed, and a quantitative analysis shows that the phase spectrum carries information that can distinguish emotions. A dynamic relative phase (DRP) feature extraction method is then proposed to address the difficulty the original relative phase (RP) has in determining the base frequency, and to further reduce the dependence of the phase on the position at which frames are cut. Finally, a single-channel model (SCM) and a multi-channel model with an attention mechanism (MCMA) are constructed to effectively integrate the phase and magnitude information. By introducing phase information, more complete acoustic features are captured, enriching the emotional representations. Experiments were conducted on the Emo-DB and IEMOCAP databases. The results demonstrate the effectiveness of the proposed DRP features and the complementarity between phase and magnitude information in speech emotion recognition.
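For context, the conventional RP normalization that DRP builds on shifts each frequency bin's phase in proportion to its frequency so that a chosen base frequency always lands on a fixed phase, reducing the dependence on where the analysis frame begins. The following is a minimal NumPy sketch of that baseline RP computation under stated assumptions: the base frequency, window, target phase, and FFT size are illustrative choices, not the paper's settings, and the paper's DRP modification (resolving the base-frequency choice dynamically) is not reproduced here.

```python
import numpy as np

def relative_phase(frame, sr, base_freq=1000.0, n_fft=512):
    """Sketch of relative-phase (RP) normalization: shift the phase
    spectrum linearly in frequency so the bin nearest base_freq has
    a fixed phase (0 rad here). base_freq, the Hann window, and
    n_fft are illustrative assumptions, not the paper's settings."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)), n=n_fft)
    theta = np.angle(spec)                      # raw phase spectrum
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)  # bin frequencies in Hz
    k = np.argmin(np.abs(freqs - base_freq))    # bin nearest the base frequency
    # Shift every bin's phase in proportion to its frequency so that
    # the base-frequency bin is moved onto the target phase (0 rad).
    shift = (0.0 - theta[k]) * (freqs / freqs[k])
    return np.angle(np.exp(1j * (theta + shift)))  # wrap back to (-pi, pi]

# Usage: RP features for one 25 ms frame of 16 kHz audio.
sr = 16000
frame = np.random.randn(int(0.025 * sr))
rp_features = relative_phase(frame, sr)
```

Because the shift is tied to a reference frequency rather than to absolute time, two frames cut at different positions within the same waveform yield much closer phase features than the raw phase spectrum would.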