The effect of social-cognitive recovery strategies on likability, capability and trust in social robots
2020
Abstract
As robots become more prevalent, particularly in complex public and domestic settings, they will be increasingly challenged by dynamic situations that could result in performance errors. Such errors can have a harmful impact on a user’s trust and confidence in the technology, potentially reducing use and preventing full realisation of its benefits. A potential countermeasure, based on social psychological concepts of trust, is for robots to demonstrate self-awareness and ownership of their mistakes to mitigate the impact of errors and increase users’ affinity towards the robot. We describe an experiment examining 326 people’s perceptions of a mobile guide robot that employs synthetic social behaviours to elicit trust in its use after error. We find that a robot that identifies its mistake, and communicates its intention to rectify the situation, is considered by observers to be more capable than one that simply apologises for its mistake. However, the latter is considered more likeable and, uniquely, increases people’s intention to use the robot. These outcomes highlight that the complex and multifaceted nature of trust in human–robot interaction may extend beyond established approaches considering robots’ capability in performance and indicate that social cognitive models are valuable in developing trustworthy synthetic social agents.