Explaining Failure: Investigation of Surprise and Expectation in CNNs

2020 
As Convolutional Neural Networks (CNNs) have expanded into everyday use, more rigorous methods of explaining their inner workings are required. Current popular techniques, such as saliency maps, show how a network interprets an input image at a simple level by scoring pixels according to their importance. In this paper, we introduce the concepts of surprise and expectation as means for exploring and visualising how a network learns to model the training data, through an understanding of filter activations. We show that this is a powerful technique for understanding how the network reacts to an unseen image compared to the training data. We also show that the insights provided by our technique allow us to "fix" misclassifications. Our technique can be used with nearly all types of CNN. We evaluate our method both qualitatively and quantitatively using ImageNet.
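To make the saliency-map idea mentioned above concrete, the following is a minimal, hypothetical sketch (not the paper's method): per-pixel importance is approximated by the finite-difference sensitivity of a class score to each pixel. The toy `centre_score` function stands in for a trained network's class score and is an assumption for illustration only.

```python
import numpy as np

def centre_score(img):
    """Toy stand-in for a network's class score: a weighted sum
    that only attends to the central region of the image."""
    h, w = img.shape
    weights = np.zeros((h, w))
    weights[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1.0
    return float((img * weights).sum())

def saliency_map(score_fn, img, eps=1e-3):
    """Finite-difference approximation of |d(score)/d(pixel)|:
    bump each pixel by eps and measure the change in the score."""
    base = score_fn(img)
    sal = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            bumped = img.copy()
            bumped[i, j] += eps
            sal[i, j] = abs(score_fn(bumped) - base) / eps
    return sal

img = np.random.default_rng(0).random((8, 8))
sal = saliency_map(centre_score, img)
```

In practice saliency is computed with a single backward pass through the network rather than per-pixel perturbation; the loop above is only to keep the sketch self-contained.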