Unraveling the deep learning gearbox in optical coherence tomography image segmentation towards explainable artificial intelligence

2021 
Machine learning has greatly facilitated the analysis of medical data, yet its internal operations usually remain opaque. To make these opaque procedures more comprehensible, a convolutional neural network for optical coherence tomography (OCT) image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among the human graders. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.

Maloca et al. implement a convolutional neural network (CNN) to automatically segment OCT images obtained from cynomolgus monkeys. The results are compared to annotations generated by multiple human graders.
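
As a minimal sketch, not the authors' implementation, the pairwise Hamming distance between two segmentation masks can be computed as the fraction of pixels whose labels disagree; the function name, the NumPy-based approach, and the mask dimensions below are illustrative assumptions.

```python
import numpy as np

def hamming_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of pixels whose segmentation labels differ between two masks."""
    if mask_a.shape != mask_b.shape:
        raise ValueError("Segmentation masks must have identical shapes")
    return float(np.mean(mask_a != mask_b))

# Hypothetical example: disagreement among three graders and a model prediction
# on random 2D label maps (0 = background, 1..3 = OCT compartments).
rng = np.random.default_rng(0)
graders = [rng.integers(0, 4, size=(496, 512)) for _ in range(3)]
model = rng.integers(0, 4, size=(496, 512))

grader_pairs = [(i, j) for i in range(3) for j in range(i + 1, 3)]
inter_grader = np.mean([hamming_distance(graders[i], graders[j]) for i, j in grader_pairs])
model_vs_graders = np.mean([hamming_distance(model, g) for g in graders])

print(f"Mean inter-grader disagreement: {inter_grader:.2%}")
print(f"Mean model-vs-grader disagreement: {model_vs_graders:.2%}")
```

Averaging these pairwise distances over all grader pairs, and over all model-grader pairs, yields summary variability figures comparable in spirit to the 2.02% and 1.75% values reported above.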