How does Modality Matter? Investigating the Synthesis and Effects of Multi-modal Robot Behavior on Social Intelligence

2021 
Multi-modal behavior for social robots is crucial to the robot's perceived social intelligence, its ability to communicate nonverbally, and the extent to which it can be trusted. However, most research to date has examined only one modality at a time, so the effect of each modality within a multi-modal interaction remains poorly understood. This study presents a multi-modal interaction focusing on the following modalities: proxemics for social navigation; gaze mechanisms for turn-taking, floor-holding, turn-yielding, and joint attention; kinesics for symbolic, deictic, and beat gestures; and social dialogue. The multi-modal behaviors were evaluated in an experiment with 105 participants in a seven-minute interaction, analyzing their effects on perceived social intelligence through both objective and subjective measurements. The results offer insights into how modalities in a multi-modal interaction affect several behavioral outcomes for users, including taking physical suggestions, the distances maintained during the interaction, wave gestures performed in greeting and closing, back-channeling, and how socially the robot is treated, while showing no effect on self-disclosure or subjective liking.