UD-MIL: Uncertainty-driven Deep Multiple Instance Learning for OCT Image Classification.

2020 
Deep learning has achieved remarkable success in the optical coherence tomography (OCT) image classification task when substantial labelled B-scan images are available. However, obtaining such fine-grained expert annotations is usually quite difficult and expensive, so leveraging volume-level labels alone to develop a robust classifier is very appealing. In this paper, we propose a weakly supervised deep learning framework with uncertainty estimation to address the macula-related disease classification problem from OCT images when only volume-level labels are available. First, a convolutional neural network (CNN) based instance-level classifier is iteratively refined by using the proposed uncertainty-driven deep multiple instance learning scheme. To the best of our knowledge, we are the first to incorporate an uncertainty evaluation mechanism into multiple instance learning (MIL) for training a robust instance classifier. The classifier is able to simultaneously detect suspicious abnormal instances and extract the corresponding deep embeddings with high representational capability. Second, a recurrent neural network (RNN) takes instance features from the same bag as input and generates the final bag-level prediction by considering both the local information of individual instances and the globally aggregated bag-level representation. For more comprehensive validation, we built two large diabetic macular edema (DME) OCT datasets from different devices and imaging protocols to evaluate the efficacy of our method, which are composed of 30,151 B-scans in 1,396 volumes from 274 patients (Heidelberg-DME dataset) and 38,976 B-scans in 3,248 volumes from 490 patients (Triton-DME dataset), respectively. We compare the proposed method with state-of-the-art approaches and experimentally demonstrate that our method is superior to alternative methods, achieving volume-level accuracy, F1-score and area under the receiver operating characteristic curve (AUC) of 95.1%, 93.9% and 99.0% on Heidelberg-DME and of 95.1%, 93.5% and 98.6% on Triton-DME, respectively. Furthermore, the proposed method also yields competitive results on another public age-related macular degeneration OCT dataset, indicating its high potential as an effective screening tool in clinical practice.
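
The following is a minimal PyTorch sketch of the two-stage idea summarized in the abstract, not the authors' implementation: (1) a CNN instance classifier whose per-B-scan predictions carry an uncertainty estimate (approximated here with Monte-Carlo dropout, which is an assumption; the paper's exact uncertainty mechanism may differ), and (2) an RNN that aggregates the instance embeddings of one OCT volume (bag) into a volume-level prediction. All module names, thresholds, and hyper-parameters below are illustrative placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F


class InstanceCNN(nn.Module):
    """Per-B-scan classifier that also exposes a deep embedding."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(32, embed_dim)
        self.head = nn.Linear(embed_dim, 2)   # normal vs. abnormal B-scan

    def forward(self, x):
        h = self.features(x).flatten(1)
        z = F.relu(self.embed(h))
        # Dropout kept active even at inference time for MC-dropout uncertainty.
        z = F.dropout(z, p=0.5, training=True)
        return self.head(z), z


def mc_dropout_scores(model, bag, n_samples: int = 10):
    """Return mean abnormality probability and predictive std per instance."""
    probs = torch.stack(
        [F.softmax(model(bag)[0], dim=1)[:, 1] for _ in range(n_samples)]
    )                                      # (n_samples, n_instances)
    return probs.mean(0), probs.std(0)


class BagRNN(nn.Module):
    """Aggregates the ordered instance embeddings of a volume into a bag label."""

    def __init__(self, embed_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, instance_embeddings):    # (1, n_instances, embed_dim)
        _, (h_n, _) = self.rnn(instance_embeddings)
        return self.head(h_n[-1])              # volume-level logits


if __name__ == "__main__":
    cnn, rnn = InstanceCNN(), BagRNN()
    bag = torch.randn(25, 1, 64, 64)           # one toy volume of 25 B-scans

    # Step 1: score instances and estimate uncertainty; in the MIL loop described
    # in the abstract, confidently abnormal instances would be used to refine the
    # CNN iteratively (threshold values here are arbitrary).
    mean_prob, uncertainty = mc_dropout_scores(cnn, bag)
    confident_abnormal = (mean_prob > 0.5) & (uncertainty < 0.2)

    # Step 2: feed the instance embeddings of the whole bag to the RNN for the
    # final volume-level prediction.
    with torch.no_grad():
        _, embeddings = cnn(bag)
        volume_logits = rnn(embeddings.unsqueeze(0))
    print(confident_abnormal.sum().item(), volume_logits.shape)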