General Frameworks for Anomaly Detection Explainability: Comparative Study

2021 
Since their inception, AutoEncoders have played a central role in representation learning and have achieved strong results in automated unsupervised anomaly detection for a range of critical applications. However, anomaly detection with AutoEncoders suffers from a lack of transparency in the decisions made from the network's outputs, especially for image-based models. Although the residual reconstruction error map from the AutoEncoder helps explain anomalies to a certain extent, it is not a good indicator of the attributes the model has implicitly learnt. A human-interpretable explanation of why an instance is anomalous not only enables experts to fine-tune the model but also establishes and increases trust among non-expert users. Convolutional AutoEncoders are particularly affected, as only a limited number of studies address their transparency and explainability. In this paper, aiming to bridge this gap, we explore the feasibility and compare the performance of several state-of-the-art Explainable Artificial Intelligence (XAI) frameworks on Convolutional AutoEncoders. The paper also aims to provide a basis for future development of reliable and trustworthy AutoEncoders for visual anomaly detection.
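To make the residual reconstruction error map concrete, the following is a minimal sketch (not the authors' code) of how a Convolutional AutoEncoder is commonly scored for visual anomaly detection: the input is reconstructed and the per-pixel squared error serves as the explanation map. The architecture, image size, and threshold are illustrative assumptions.

```python
# Minimal sketch of reconstruction-error-based anomaly scoring with a
# Convolutional AutoEncoder. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the image into a low-dimensional feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the input from the compressed representation.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def residual_error_map(model, image):
    """Per-pixel squared reconstruction error; high values mark candidate anomalous regions."""
    model.eval()
    with torch.no_grad():
        reconstruction = model(image)
    return (image - reconstruction) ** 2


# Usage: score an image and flag it if its mean residual exceeds a chosen threshold.
model = ConvAutoEncoder()                       # assume weights trained on normal data only
image = torch.rand(1, 1, 64, 64)                # placeholder 64x64 grayscale input
error_map = residual_error_map(model, image)    # same spatial shape as the input
is_anomalous = error_map.mean().item() > 0.01   # threshold is an illustrative assumption
```

As the abstract notes, this error map only localises where the reconstruction fails; it does not reveal which learnt attributes drive the decision, which is the gap the surveyed XAI frameworks aim to fill.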