Towards Linking CNN Decisions with Cancer Signs for Breast Lesion Classification from Ultrasound Images

2021 
Convolutional neural networks (CNNs) have shown outstanding object recognition performance, especially for visual recognition tasks such as tumor classification in 2D ultrasound (US) images. In Computer-Aided Diagnosis (CAD) systems, interpreting a CNN's decisions is crucial for the system to be accepted in clinical use. This paper is concerned with 'visual explanations' for decisions from CNN models trained on ultrasound images. In particular, we investigate the link between the CNN decision and the calcification cancer sign in the breast lesion classification task. To this end, we study the output visualization of two different breast lesion recognition CNN models in two ways: first, we explore two existing visualization approaches, Grad-CAM and CRM, to gain insight into the function of the feature layers; second, we introduce an adaptive Grad-CAM, called EGrad-CAM, which uses information entropy to freeze feature maps carrying no or minimal information. Extensive analysis and experiments using 1624 US images and two breast classification models show that the calcification feature contributes to the CNN classification decision for both malignant and benign lesions. Furthermore, we show that many feature maps in the final convolution layer do not contribute to the CNN decision, and that our EGrad-CAM produces visualization output similar to Grad-CAM while using only 24%–87% of the feature maps. Our study demonstrates that CNN decision visualization is a promising direction for bridging the gap between CNN classification decisions on US images of breast lesions and cancer signs.
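The abstract describes EGrad-CAM only at a high level: Grad-CAM channel weighting combined with an entropy-based gate that freezes feature maps carrying little information. The sketch below is a rough illustration of that idea under stated assumptions, not the authors' implementation; it assumes PyTorch tensors, a histogram-based Shannon entropy, and an illustrative threshold, and the names `egrad_cam`, `entropy_threshold`, and `bins` are hypothetical.

```python
import torch
import torch.nn.functional as F

def egrad_cam(feature_maps, gradients, entropy_threshold=0.5, bins=256):
    """Entropy-gated Grad-CAM sketch (assumed formulation, not the paper's code).

    feature_maps: (K, H, W) activations from the final convolution layer
    gradients:    (K, H, W) gradients of the class score w.r.t. those activations
    Feature maps whose histogram entropy falls below `entropy_threshold` are
    treated as uninformative and excluded ("frozen") from the heatmap.
    """
    # Standard Grad-CAM channel weights: global-average-pooled gradients
    weights = gradients.mean(dim=(1, 2))                      # shape (K,)

    cam = torch.zeros_like(feature_maps[0])
    kept = 0
    for k in range(feature_maps.shape[0]):
        fmap = feature_maps[k]
        # Shannon entropy of the normalised activation histogram
        hist = torch.histc(fmap, bins=bins)
        p = hist / hist.sum().clamp(min=1e-12)
        entropy = -(p * torch.log2(p.clamp(min=1e-12))).sum()
        if entropy < entropy_threshold:
            continue                                          # freeze low-information map
        cam += weights[k] * fmap
        kept += 1

    cam = F.relu(cam)                                         # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                                 # normalise to [0, 1] for overlay
    return cam, kept
```

In this reading, the fraction of retained maps (`kept / K`) would correspond to the 24%–87% of feature maps reported in the abstract, with the gating threshold controlling how aggressively uninformative maps are dropped.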