Reverse active learning based atrous DenseNet for pathological image classification

2019 
Recent advances in deep learning have attracted researchers to apply it to medical image analysis. However, pathological image analysis based on deep learning networks faces a number of challenges, such as the high resolution (gigapixel) of pathological images and the scarcity of annotations. To address these challenges, we propose a training strategy called deep-reverse active learning (DRAL) and an atrous DenseNet (ADN) for pathological image classification. The proposed DRAL improves the classification accuracy of widely used deep learning networks such as VGG-16 and ResNet by removing mislabeled patches from the training set. Because the size of a cancer area varies widely in pathological images, the proposed ADN integrates atrous convolutions with the dense block for multiscale feature extraction. DRAL and ADN are evaluated on three pathological datasets: BACH, CCG, and UCSB. The experimental results demonstrate the excellent performance of the proposed DRAL + ADN framework, which achieves patch-level average classification accuracies (ACA) of 94.10%, 92.05%, and 97.63% on the BACH, CCG, and UCSB validation sets, respectively. The DRAL + ADN framework is a promising candidate for boosting the performance of deep learning models trained on partially mislabeled datasets.
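
To make the ADN idea concrete, the sketch below shows one way a dense block can be built from atrous (dilated) convolutions so that successive layers see progressively larger receptive fields. This is a minimal illustrative example, not the authors' released implementation: the layer count, growth rate, and dilation rates are assumptions, not values reported in the abstract.

```python
# Minimal sketch of an "atrous dense block": dense connectivity (DenseNet-style
# feature concatenation) combined with dilated 3x3 convolutions for multiscale
# feature extraction. Hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn


class AtrousDenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate, dilation):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        # Dilated (atrous) 3x3 convolution; padding = dilation keeps the spatial size.
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)

    def forward(self, x):
        out = self.conv(self.relu(self.bn(x)))
        # Dense connectivity: concatenate new features with all previous ones.
        return torch.cat([x, out], dim=1)


class AtrousDenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        layers = []
        channels = in_channels
        for d in dilations:
            layers.append(AtrousDenseLayer(channels, growth_rate, d))
            channels += growth_rate
        self.block = nn.Sequential(*layers)
        self.out_channels = channels


    def forward(self, x):
        return self.block(x)


if __name__ == "__main__":
    block = AtrousDenseBlock(in_channels=64)
    x = torch.randn(1, 64, 56, 56)   # a patch-level feature map
    print(block(x).shape)            # torch.Size([1, 192, 56, 56])
```

A full classifier would stack several such blocks with transition layers and a patch-level classification head; the increasing dilation rates are what let a single block capture cancer regions of very different sizes without downsampling.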