Experimental Study on Generating Multi-modal Explanations of Black-box Classifiers in terms of Gray-box Classifiers

2020 
Artificial Intelligence (AI) is a first-class citizen in the cities of the 21st century. Accordingly, trust, fairness, accountability, transparency, and ethical issues have become hot topics for AI-based systems under the umbrella of Explainable AI (XAI). In this paper, we conduct an experimental study with 15 datasets to validate the feasibility of using a pool of gray-box classifiers (i.e., decision trees and fuzzy rule-based classifiers) to automatically explain a black-box classifier (i.e., a Random Forest). The reported results validate our approach: they confirm the complementarity and diversity of the gray-box classifiers under study, which provide users with plausible multi-modal explanations of the black-box classifier for all the datasets considered.
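The core idea described above — training an interpretable (gray-box) model to mimic a black-box classifier and using its rules as an explanation — can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's actual experimental setup: the dataset, model parameters, and the fidelity measure are assumptions for demonstration, and scikit-learn offers no fuzzy rule-based classifier, so only the decision-tree modality is shown.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

# A sample dataset standing in for one of the 15 benchmark datasets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box classifier: Random Forest.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_train, y_train)

# Gray-box surrogate: a shallow decision tree trained on the black box's
# predictions (not the true labels), so that its rules describe the
# black box's behavior rather than the data directly.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"fidelity: {fidelity:.2f}")

# The tree's rule structure provides one (textual) explanation modality.
print(export_text(surrogate))
```

A pool of such surrogates (e.g., trees of different depths plus fuzzy rule-based classifiers) would yield the complementary, multi-modal explanations the study evaluates.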