Improving Human-Robot Interaction Through Explainable Reinforcement Learning

2019 
Gathering the most informative data from humans without overloading them remains an active research area in AI, and it is closely coupled with the problem of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and they cannot adapt to the changing environments in which we expect modern systems to be deployed [3], [4], [9], [11]. They are intrinsically limited in their ability to explain their rationale rather than merely list their future behaviors, which limits a human's understanding of the system [2], [7]. Most probabilistic assessments of a task are conveyed after the task or skill is attempted rather than before [10], [14], [16], which limits failure-recovery and danger-avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes [13]. As a result, important but non-obvious pieces of information for assessing appropriate trust and/or evaluating a course of action (COA) in collaborative scenarios can go overlooked, while irrelevant information may instead be provided, increasing clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust [8]. It is therefore critically important to develop new strategies for anticipating, communicating, and explaining the justifications and rationale for AI-driven behaviors via contextually appropriate semantics.
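To illustrate the point about conveying probabilistic assessments before, rather than after, a skill is attempted, the following is a minimal, hypothetical Python sketch (not the paper's method) of an agent that surfaces a calibrated success estimate and a short rationale ahead of execution. The names `success_estimator`, `explain`, and `warn_threshold` are assumptions introduced purely for illustration.

```python
# Hypothetical sketch: pre-execution probabilistic assessment with a rationale,
# assuming a value-based RL agent whose critic is calibrated to success probability.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class SkillAssessment:
    skill: str
    success_prob: float   # calibrated estimate in [0, 1]
    rationale: str        # short, human-readable justification


def assess_before_execution(
    state: Sequence[float],
    skills: Sequence[str],
    success_estimator: Callable[[Sequence[float], str], float],
    explain: Callable[[Sequence[float], str], str],
    warn_threshold: float = 0.6,
) -> list:
    """Estimate success for each candidate skill and flag low-confidence ones
    so a human collaborator can intervene before failure rather than after."""
    assessments = [
        SkillAssessment(s, success_estimator(state, s), explain(state, s))
        for s in skills
    ]
    for a in sorted(assessments, key=lambda a: a.success_prob):
        if a.success_prob < warn_threshold:
            print(f"[warning] '{a.skill}' estimated success "
                  f"{a.success_prob:.0%}: {a.rationale}")
    return assessments
```

Filtering against a warning threshold is one simple way to surface only the information relevant to trust calibration, rather than listing every future behavior and adding to the human's mental workload.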