Human amygdala tracks a feature-based valence signal embedded within the facial expression of surprise

2017 
Human amygdala function has traditionally been associated with processing the affective valence (positive versus negative) of emotionally charged events, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might instead be tuned to either (a) more general emotional arousal (more relevant versus less relevant) or (b) more specific emotion categories (fear versus happy). Delineating the pure effects of valence independent of arousal or emotion category is challenging, given that these variables naturally co-vary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present functional magnetic resonance imaging data from both sexes showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects valence information.

SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions, since exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal.
Furthermore, a machine learning classifier identified particular visual features of the mouth region that predicted this valence effect, isolating the specific visual signal that might be driving the neural valence response.