Towards Explainability of non-Convolutional Neural Networks

2021 
Artificial intelligence has risen in popularity in research and applications in recent years. Explainability has been shown to work on human-interpretable data such as images or sentences, but research is sparse wherever such data is missing. Building trust through explainability for neural networks that process raw data will become necessary as networks evolve further towards artificial general intelligence. This research focuses on visualizing parts of the hidden layers rather than explaining the input data, and it is independent of the neural network's size. We create a model that represents the neural network such that neurons activating on similar features are grouped together into structures. This model is then analyzed in a machine-learning-based process to identify the parts of the network responsible for a decision. In a further step, we use the model to compare the processing of raw sensor data against a heatmap-based explainability approach with a convolutional neural network. Relevant data points in the input are visualized by a common heatmap approach, while the hidden layers analyzed in this research should point to structures that serve a comparable function within the network. For example, if the heatmap highlights peaks in the input values, the model should highlight the area observed to activate on peaks. We thereby provide research on artificial general intelligence with a solution for explainability, which is necessary for advanced research and for operating such applications in complex or dangerous scenarios.
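As a rough illustration of the neuron-grouping step described above, the sketch below clusters hidden-layer neurons by the similarity of their activation profiles over a probe dataset. The abstract does not specify the grouping algorithm; the use of k-means, the function name, and the normalization step are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch: group hidden-layer neurons that activate on similar features.
# Assumes activations of one hidden layer have been recorded over a probe dataset.
import numpy as np
from sklearn.cluster import KMeans

def group_neurons_by_activation(hidden_activations: np.ndarray, n_groups: int = 8):
    """Cluster neurons whose activation patterns are similar.

    hidden_activations: array of shape (n_samples, n_neurons), recorded
        activations of one hidden layer over a probe dataset.
    Returns an array of length n_neurons assigning each neuron to a group.
    """
    # Each neuron is described by its activation profile across all samples,
    # so we cluster the transposed activation matrix (one row per neuron).
    neuron_profiles = hidden_activations.T
    # Normalize profiles so grouping reflects the shape of the activation
    # pattern rather than its magnitude (an assumption for this sketch).
    norms = np.linalg.norm(neuron_profiles, axis=1, keepdims=True)
    neuron_profiles = neuron_profiles / np.clip(norms, 1e-8, None)
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(neuron_profiles)

# Example: 1000 probe samples through a layer with 128 neurons.
activations = np.random.rand(1000, 128)
groups = group_neurons_by_activation(activations, n_groups=8)
print(groups.shape)  # (128,) -> one group label per neuron
```

Groups produced this way could then be inspected against an input-space heatmap: if a heatmap highlights peaks in the raw signal, one would look for the neuron group whose activation profile responds to those peaks, in the spirit of the comparison the abstract describes.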