Human discrimination and categorization of emotions in voices: a functional Near-Infrared Spectroscopy (fNIRS) study

2019 
Variations in vocal tone during speech production, known as prosody, provide information about the emotional state of the speaker. In recent years, functional imaging has suggested a role for both the right and left inferior frontal cortices in the attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we investigated the suitability of functional Near-Infrared Spectroscopy (fNIRS) for studying the frontal lateralization of vocal emotion processing during explicit and implicit categorization and discrimination. Participants listened to speech-like but semantically meaningless words spoken in a neutral, angry, or fearful tone and had to categorize or discriminate them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and they processed the linguistic content of stimuli faster than their emotional content, while an interaction between condition (emotion/word) and task (discrimination/categorization) influenced accuracy. At the brain level, we found a four-way interaction in the fNIRS signal between condition, task, emotion, and channel, highlighting the involvement of the right hemisphere in processing fear stimuli and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable for studying vocal emotion evaluation in humans, fostering its application to the study of emotional appraisal.
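A four-way interaction of this kind is typically tested with a repeated-measures ANOVA over within-subject factors. The sketch below is illustrative only, not the authors' pipeline: the sample size, the two-channel layout (`left_IFC`, `right_IFC`), and the simulated HbO values are all assumptions introduced for the example.

```python
# Minimal sketch of testing a condition x task x emotion x channel
# interaction on per-subject fNIRS responses with a repeated-measures
# ANOVA. All factor levels and data values here are hypothetical.
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = range(1, 21)                        # hypothetical N = 20
conditions = ["emotion", "word"]               # attended stimulus content
tasks = ["discrimination", "categorization"]
emotions = ["neutral", "anger", "fear"]
channels = ["left_IFC", "right_IFC"]           # assumed frontal channels

rows = []
for s, c, t, e, ch in itertools.product(subjects, conditions,
                                        tasks, emotions, channels):
    # Simulated oxyhemoglobin (HbO) response per design cell; real data
    # would come from preprocessed fNIRS time series averaged per cell.
    rows.append({"subject": s, "condition": c, "task": t,
                 "emotion": e, "channel": ch,
                 "hbo": rng.normal(loc=0.0, scale=1.0)})
df = pd.DataFrame(rows)

# Four fully crossed within-subject factors, mirroring the abstract's
# condition x task x emotion x channel interaction term.
res = AnovaRM(df, depvar="hbo", subject="subject",
              within=["condition", "task", "emotion", "channel"]).fit()
print(res)
```

In this design each subject contributes exactly one observation per cell (2 x 2 x 3 x 2 = 24 cells), which is what `AnovaRM` expects; with multiple trials per cell, an aggregation step (e.g., the mean) would be needed first.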