Making deep neural networks right for the right scientific reasons by interacting with their explanations

2020 
Deep neural networks have demonstrated excellent performance in many real-world applications. Unfortunately, they may show Clever Hans-like behaviour, making use of confounding factors within datasets, to achieve high performance. In this work we introduce the novel learning setting of explanatory interactive learning and illustrate its benefits on a plant phenotyping research task. Explanatory interactive learning adds the scientist into the training loop, who interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that explanatory interactive learning can help to avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.

Deep learning approaches can show excellent performance but still have limited practical use if they learn to predict based on confounding factors in a dataset, for instance text labels in the corner of images. By using an explanatory interactive learning approach, with a human expert in the loop during training, it becomes possible to avoid predictions based on confounding factors.
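The core idea is that the expert corrects the model's explanations, not just its labels. One common way to realise this is an explanation-regularised loss that penalises the model for attributing relevance to regions the expert marked as confounding. The sketch below is a minimal, hypothetical instantiation in PyTorch, assuming input-gradient explanations and a binary mask over expert-flagged regions; it is not necessarily the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def explanation_regularised_loss(model, x, y, irrelevant_mask, lam=10.0):
    """Cross-entropy plus a penalty on explanation mass inside regions the
    expert marked as confounding (irrelevant_mask == 1).

    This is a sketch: the mask format, the input-gradient explanation, and
    the weighting factor `lam` are illustrative assumptions.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Use input gradients of the log-probabilities as a simple explanation.
    log_probs = F.log_softmax(logits, dim=1)
    grads = torch.autograd.grad(log_probs.sum(), x, create_graph=True)[0]

    # Penalise relevance that falls inside expert-marked confounded regions
    # (e.g. a text label in the corner of an image).
    explanation_penalty = (irrelevant_mask * grads).pow(2).sum()

    return ce + lam * explanation_penalty
```

In an interactive loop, the expert would inspect the model's explanations on a few examples, mark the confounding regions, and the model would then be fine-tuned with a loss of this form so that its predictions no longer rely on those regions.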