Intelligent Agent Deception and the Influence on Human Trust and Interaction

2021 
As robots and intelligent agents are given more complex cognitive capabilities, it is reasonable to assume that they will be able to commit acts of deceit much more readily. Yet little attention has been given to investigating the effects that robot deception has on human interaction, or on people's trust in the agent once the deception has been recognized. This paper examines how embodiment influences a person's trust in an intelligent agent that exhibits either deceptive or honest behavior that is either helpful or harmful in a financial scenario. Our results suggest that deceptive behavior decreases human trust whether the embodiment is physical or virtual, and that it decreases trust regardless of whether the deception benefits the human. Moreover, we found that trust levels slightly influence a person's punishment or reward strategies, as well as their desire to reuse the intelligent agent in the future. Although exposure to deception causes negative effects, the majority of participants still found deception permissible when it benefited them. Additionally, physically embodied robots were shown to mitigate the negative aftereffects of deception more than virtually embodied ones. These results suggest that embodiment choice can have meaningful effects on the permissibility of deception conducted by intelligent agents.