Explainable AI and the philosophy and practice of explanation

2020 
Abstract
Considerations of the nature of explanation and of the law are brought together to argue that computed accounts of an AI system's outputs cannot function on their own as explanations of decisions informed by AI. The important context for this inquiry is set by Article 22(3) of the GDPR. The paper approaches the question of what an explanation is from the point of view of the philosophy of science: it asks not what counts as explanatory in legal terms, nor what an AI system might compute using provenance metadata, but what explanation as a social practice consists in. It argues that explanation is an illocutionary act and should be understood as a process, not a text. Explanation therefore cannot be computed, although computed accounts of AI systems are likely to be important inputs to the explanatory process.