Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps

2021 
Mathematical morphology has been explored in deep learning architectures, as a substitute for convolution, for problems such as pattern recognition and object detection. One major advantage of using morphology in deep learning is that morphological erosion and dilation naturally embody interpretability through their underlying connections to the analysis of geometric structures. While the use of these operations yields explainable learned filters, morphological deep learning lacks attribution modeling, i.e., a paradigm to specify which areas of the original observed image are important. In contrast, convolution-based deep learning has achieved attribution modeling through a variety of neural eXplainable Artificial Intelligence (XAI) paradigms (e.g., saliency maps, integrated gradients, guided backpropagation, and gradient class activation mapping). A problem for morphology-based deep learning is therefore that these XAI methods do not have a morphological interpretation, due to differences in the underlying mathematics. Herein, we extend the neural XAI paradigm of saliency maps to morphological deep learning and, by doing so, provide an example of morphological attribution modeling. Our qualitative results highlight several advantages of morphological attribution modeling.
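
To make the two ingredients of the abstract concrete, the sketch below (not the authors' implementation; the names MorphDilation2d, tiny_model, and saliency_map are illustrative assumptions) shows a grayscale morphological dilation layer with a learnable structuring element and a vanilla gradient-based saliency map computed with respect to the input image.

```python
# Minimal sketch, assuming a PyTorch setting: a grayscale morphological
# dilation layer and a vanilla saliency map. Not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MorphDilation2d(nn.Module):
    """Grayscale dilation: out = max over the window of (input + learned SE)."""

    def __init__(self, in_channels, kernel_size):
        super().__init__()
        self.kernel_size = kernel_size
        # One learnable structuring element per channel, stored flattened.
        self.weight = nn.Parameter(torch.zeros(in_channels, kernel_size * kernel_size))

    def forward(self, x):
        b, c, h, w = x.shape
        pad = self.kernel_size // 2
        # Extract k x k neighborhoods: (B, C*k*k, H*W).
        patches = F.unfold(x, self.kernel_size, padding=pad)
        patches = patches.view(b, c, self.kernel_size * self.kernel_size, h * w)
        # Add the structuring element, then take the max over the window.
        out = (patches + self.weight[None, :, :, None]).amax(dim=2)
        return out.view(b, c, h, w)


# Tiny illustrative classifier built from the morphological layer.
tiny_model = nn.Sequential(
    MorphDilation2d(1, kernel_size=5),
    nn.Flatten(),
    nn.Linear(28 * 28, 10),
)


def saliency_map(model, image, target_class):
    """Vanilla saliency: |d score(target_class) / d input|, reduced over channels."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=1)  # (B, H, W) attribution map


if __name__ == "__main__":
    x = torch.rand(1, 1, 28, 28)  # stand-in for an observed image
    sal = saliency_map(tiny_model, x, target_class=3)
    print(sal.shape)  # torch.Size([1, 28, 28])
```

Because the dilation layer is differentiable almost everywhere (the max routes gradients to the winning pixel in each window), the same backward pass used for convolutional saliency applies here; giving that gradient a morphological interpretation is the contribution the abstract describes.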