Instrumental learning in social interactions: trait learning from faces and voices

2020 
Recent research suggests that reinforcement learning may underlie trait formation in social interactions with faces (Hackel, Doll, & Amodio, 2015; Hackel, Mende-Siedlecki, & Amodio, 2020). The current study investigated whether the same learning mechanisms could be engaged for trait learning from voices. On each trial of a training phase, participants (N = 192) chose from pairs of human or slot machine targets that varied in (1) the reward value and (2) the generosity of their payouts. Targets were either auditory (voices or tones; Experiment 1) or visual (faces or icons; Experiment 2), and were presented sequentially before payout feedback. A test phase measured participants' choice behaviour, and a post-test recorded their preference ratings for the targets. For auditory targets, we found no effect of reward or generosity on target choices, but preference ratings were higher for generous humans and slot machines. For visual targets, participants learned about both generosity and reward, although generosity was prioritised in the human condition. These findings demonstrate that (1) reinforcement learning of trait information with visual stimuli remains intact even when sequential presentation introduces a delay in feedback, and (2) learning about traits and reward in such paradigms is weakened when auditory stimuli are used.
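To make the reinforcement-learning account concrete, the sketch below shows one simple way such trait learning is often formalised: a delta-rule learner that maintains separate estimates of a target's expected reward and generosity and nudges each toward the observed feedback by a prediction error. This is an illustrative assumption, not the authors' model; the function names, learning rate, and feedback values are hypothetical.

```python
# Minimal sketch (assumed, not the paper's model): delta-rule updating of
# two per-target estimates -- expected reward value and expected generosity.

ALPHA = 0.1  # learning rate (assumed)

def update(estimates, target, reward, generosity):
    """Update one target's reward and generosity estimates after payout feedback."""
    v_reward, v_generosity = estimates.get(target, (0.0, 0.0))
    v_reward += ALPHA * (reward - v_reward)                # reward prediction error
    v_generosity += ALPHA * (generosity - v_generosity)    # generosity prediction error
    estimates[target] = (v_reward, v_generosity)
    return estimates

# Example: repeated payouts from a generous but low-value human target
estimates = {}
for _ in range(20):
    estimates = update(estimates, "human_A", reward=4.0, generosity=0.8)
print(estimates["human_A"])  # estimates converge toward (4.0, 0.8)
```

On this kind of account, choices in the test phase would track the learned reward estimates, while explicit preference ratings could additionally draw on the learned generosity estimates.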