Estimating Uncertainty in Deep Learning for Reporting Confidence to Clinicians when Segmenting Nuclei Image Data

2019 
Deep Learning, which involves powerful black-box predictors, has achieved state-of-the-art performance in medical image analysis tasks such as segmentation and classification for diagnosis. Despite these successes, however, these methods focus exclusively on improving the accuracy of point predictions without assessing the quality of their outputs. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. Monte-Carlo dropout in neural networks is equivalent to a specific variational approximation in a Bayesian neural network, is simple to implement without any changes to the network architecture, and is considered state-of-the-art for estimating uncertainty. However, for classification it does not model the predictive probabilities, which means the true underlying uncertainty in the prediction is not captured. In this paper, we propose an uncertainty estimation framework for classification that decomposes predictive probabilities into the two main types of uncertainty in Bayesian modelling: aleatoric and epistemic uncertainty, representing uncertainty in the quality of the data and in the model parameters, respectively. We demonstrate that the proposed uncertainty quantification framework, using a Bayesian Residual U-Net (BRUNet), provides additional insight for clinicians when analysing images with the help of deep learners. In addition, we demonstrate how the resulting uncertainty depends on the dropout rate, using nuclei images drawn from diverse medical images.
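To make the decomposition concrete, below is a minimal sketch of how aleatoric and epistemic uncertainty can be estimated per pixel from Monte-Carlo dropout samples. It assumes a variance-based decomposition of the predictive probabilities (in the spirit of Kwon et al.), keeping only the diagonal terms; the callable `stochastic_predict`, the parameter `num_samples`, and the array shapes are illustrative assumptions, not the paper's actual BRUNet implementation.

```python
import numpy as np

def mc_dropout_uncertainty(stochastic_predict, x, num_samples=50):
    """Estimate per-pixel aleatoric and epistemic uncertainty via MC dropout.

    `stochastic_predict(x)` is assumed to run one forward pass of a
    segmentation network with dropout left active at test time and to
    return softmax probabilities of shape (H, W, C). This callable is a
    placeholder, not an API from the paper.
    """
    # T stochastic forward passes through the dropout-enabled network.
    probs = np.stack([stochastic_predict(x) for _ in range(num_samples)])  # (T, H, W, C)

    # Predictive mean over the T samples.
    mean_probs = probs.mean(axis=0)                                        # (H, W, C)

    # Aleatoric term: E_t[p_t * (1 - p_t)] -- noise inherent in the data.
    aleatoric = (probs * (1.0 - probs)).mean(axis=0)

    # Epistemic term: E_t[(p_t - mean)^2] -- uncertainty in the model parameters.
    epistemic = ((probs - mean_probs) ** 2).mean(axis=0)

    return mean_probs, aleatoric, epistemic
```

Summing the two maps gives a total predictive-uncertainty map that can be shown to clinicians alongside the segmentation; in this formulation, higher dropout rates would typically be expected to inflate the epistemic term.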