Visual Interpretability in Computer-Assisted Diagnosis of Thyroid Nodules Using Ultrasound Images

2020 
BACKGROUND: The number of studies applying deep learning to artificial intelligence (AI)-assisted diagnosis of thyroid nodules is increasing. However, it is difficult to explain what the models actually learn in AI-assisted medical research. Our aim was to investigate the visual interpretability of the computer-assisted diagnosis of malignant and benign thyroid nodules using ultrasound images.

MATERIAL AND METHODS: We designed and implemented 2 experiments to test whether our proposed model learned the ultrasound features used by ultrasound experts to diagnose thyroid nodules. First, in an anteroposterior/transverse (A/T) ratio experiment, multiple models were trained on nodule images with altered A/T ratios, and their classification accuracy, sensitivity, and specificity were tested. Second, in a visualization experiment, class activation mapping used global average pooling and a fully connected layer to visualize the neural network and highlight the features most important to its predictions. We also examined the importance of data preprocessing.

RESULTS: The A/T ratio experiment showed that after the A/T ratio of the nodules was changed, the accuracy of the neural network model was reduced by 9.24-30.45%, indicating that the model had learned the A/T ratio information of the nodules. The visualization experiment showed that the nodule margins had a strong influence on the predictions of the neural network.

CONCLUSIONS: This study was an active exploration of interpretability in the deep learning classification of thyroid nodules. It demonstrated that the visualized neural network model focused on irregular nodule margins and the A/T ratio when classifying thyroid nodules.
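The class activation mapping described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the standard CAM computation (weighted sum of the last convolutional feature maps by the fully connected weights that follow global average pooling), not the authors' actual implementation; the array shapes and the toy benign/malignant setup are assumptions for illustration.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a class activation map.

    feature_maps : (C, H, W) activations from the last convolutional layer
    fc_weights   : (num_classes, C) weights of the fully connected layer
                   that follows global average pooling
    class_idx    : index of the class whose evidence we want to localize
    """
    # Weight each channel by that class's FC weight and sum over channels
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    # Normalize to [0, 1] so the map can be overlaid on the ultrasound image
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4 channels, an 8x8 spatial map, 2 classes (benign/malignant)
rng = np.random.default_rng(0)
fmaps = rng.random((4, 8, 8))
w_fc = rng.random((2, 4))
cam = class_activation_map(fmaps, w_fc, class_idx=1)
print(cam.shape)  # (8, 8)
```

Regions of the nodule (e.g., the margins) where the map is close to 1 are the areas the network relied on most for that class, which is how the visualization experiment localized the influential ultrasound features.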