Theory of mind improves human's trust in an iterative human-robot game

2021 
Trust is a critical issue in human–robot interaction, as it is the basis for establishing solid relationships. Theory of Mind (ToM) is the cognitive skill that allows us to understand what others think and believe. Several studies in HRI and psychology suggest that trust and ToM are interdependent concepts, since we trust another agent based on our representation of its actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we examine whether the perception of ToM abilities in a robotic agent influences human–robot trust over time in an iterative game scenario. To this end, participants played an Investment Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. During the game, participants were asked to pick a sum of money to invest in the robot; the amount invested was used as the main measure of human–robot trust. Our experimental results show that robots presented as possessing high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
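The iterated Investment Game described above can be sketched as a simple simulation. This is a minimal illustration, not the authors' protocol: the multiplier, the robot's return policy, and the round amounts below are hypothetical placeholders (the abstract only states that the invested amount served as the trust measure).

```python
# Hypothetical sketch of an iterated Investment Game; multiplier and
# return_fraction are illustrative assumptions, not values from the paper.
from dataclasses import dataclass, field

@dataclass
class InvestmentGame:
    multiplier: float = 3.0          # invested amount is tripled before reaching the trustee
    return_fraction: float = 0.5     # fraction the trustee (robot) pays back
    investments: list = field(default_factory=list)

    def play_round(self, invested: float) -> float:
        """Record the investment (the trust measure) and return the payback."""
        self.investments.append(invested)
        return self.multiplier * invested * self.return_fraction

    def mean_trust(self) -> float:
        """Mean invested amount across rounds, used here as a trust index."""
        return sum(self.investments) / len(self.investments)

game = InvestmentGame()
for amount in [2.0, 4.0, 5.0]:       # example investments over three iterations
    payback = game.play_round(amount)

print(round(game.mean_trust(), 2))   # mean investment across rounds
```

In a study like this one, the per-round (or mean) invested amount would then be compared between the low-level-ToM and high-level-ToM robot conditions.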