What Do People Really Want When They Say They Want "Explainable AI?" We Asked 60 Stakeholders.

2020 
This paper summarizes findings from a qualitative research effort aimed at understanding how various stakeholders characterize the problem of Explainable Artificial Intelligence (Explainable AI, or XAI). Over a nine-month period, the author conducted 40 interviews and 2 focus groups. Analysis of the data gathered led to two significant initial findings: (1) current discourse on Explainable AI is hindered by a lack of consistent terminology; and (2) there are multiple distinct use cases for Explainable AI, including debugging models, understanding bias, and building trust. These use cases assume different user personas, will likely require different explanation strategies, and are not evenly addressed by current XAI tools. This stakeholder research supports a broad characterization of the problem of Explainable AI and can provide important context to inform the design of future capabilities.