How Do Static and Dynamic Emotional Faces Prime Incremental Semantic Interpretation?: Comparing Older and Younger Adults

2014 
Katja Münster (Katja.Muenster@uni-bielefeld.de) 2,3
Maria Nella Carminati (mcarmina@techfak.uni-bielefeld.de) 1,3
Pia Knoeferle (knoeferl@cit-ec.uni-bielefeld.de) 1,2,3

1 SFB 673 “Alignment in Communication”
2 Cognitive Interaction Technology Excellence Center (CITEC)
3 Department of Linguistics
CITEC, Inspiration 1, Bielefeld University, 33615 Bielefeld, Germany

Abstract

Using eye tracking, two studies investigated whether a dynamic vs. static emotional facial expression can influence how a listener interprets a subsequent emotionally valenced utterance in relation to a visual context. Crucially, we assessed whether such facial priming changes with the comprehender’s age (younger vs. older adults). Participants inspected a static (Experiment 1; Carminati & Knoeferle, 2013) or a dynamic (Experiment 2) facial expression that was either happy or sad. After inspecting the face, participants saw two pictures of opposite valence (one positive, one negative, presented simultaneously) and heard an either positively or negatively valenced sentence describing one of the two pictures. Participants’ task was to look at the display, understand the sentence, and decide whether the facial expression matched the sentence. The emotional face influenced visual attention to the pictures and the processing of the sentence, and these influences were modulated by age: older adults were more strongly influenced by the positive prime face, whereas younger adults were more strongly influenced by the negative facial expression. These results suggest that the negativity bias in younger adults and the positivity bias in older adults observed in visual attention extend to face-sentence priming. However, static and dynamic emotional faces had similar priming effects on sentence processing.

Keywords: eye-tracking; sentence processing; emotional priming; dynamic vs. static facial expressions

Introduction

Monitoring people’s gaze in a visual context provides a unique opportunity for examining the incremental integration of visual and linguistic information (Tanenhaus et al., 1995). Non-linguistic visual information can rapidly guide visual attention during incremental language processing in young adults (e.g., Chambers, Tanenhaus, & Magnuson, 2004; Knoeferle et al., 2005; Sedivy et al., 1999; Spivey et al., 2002). Similar incremental effects of visual context information have emerged in event-related brain potentials (ERPs) for older adults (e.g., Wassenaar & Hagoort, 2007). However, the bulk of this research has focused on how object- and action-related information in the visual context influences spoken language comprehension. By contrast, we know little about how a speaker’s social visual cues (e.g., his/her dynamic emotional facial expression) can affect a listener’s utterance comprehension (but see the rather substantial literature on gesture interpretation). In principle, a speaker’s facial expression of emotion could help a listener to rapidly interpret his/her utterances. With a view to investigating sentence processing across the lifespan and in relation to emotional visual cues, we assessed whether older adults exploit static and dynamic emotional facial cues with a similar time course and in a similar fashion as younger adults.
The rapid integration of multiple emotional cues (facial, pictorial, and sentential) during incremental sentence processing seems particularly challenging, yet such integration appears to occur effortlessly in natural language interaction. Here we examine how this integration is achieved in a controlled experimental setting. To motivate our studies in more detail, we first review relevant literature on emotion processing, on the recognition of dynamic facial emotion expressions, and on emotion processing in younger relative to older adults.

Affective Words and Face-Word Emotion Priming

Humans seem to attend more readily to emotional than to neutral stimuli. For instance, participants in a study by Kissler, Herbert, Peyk, and Junghöfer (2007) read words while their event-related brain potentials were measured. Positive and negative words, compared with neutral words, elicited enhanced negative mean-amplitude ERPs, peaking at around 250 ms after word onset. On the assumption that enhanced cortical potentials index increased attention, valenced relative to neutral information seems to immediately capture our attention (see, e.g., Kissler & Keil, 2008, for evidence on endogenous saccades to emotional vs. neutral pictures; Nummenmaa, Hyönä, & Calvo, 2006, for eye-tracking evidence).