Visualization of Layers Within a Convolutional Neural Network Using Gradient Activation Maps
2020
Introduction: Convolutional neural networks (CNNs) are machine learning tools with great potential in the field of medical imaging. However, they are often regarded as a “black box” because the process the machine uses to arrive at a result is not transparent. A method for understanding how the machine reaches its decisions would therefore be valuable. The purpose of this study is to examine how effective gradient-weighted class activation mapping (Grad-CAM) visualizations are for certain layers in a CNN-based dental x-ray artifact prediction model.
Methods: A CNN was trained in Python using PyTorch to classify dental plates as usable or unusable depending on the presence of artifacts. PyTorch was also used to overlay Grad-CAM visualizations on the input images for various layers within the model. One image with seventeen different artifact overlays was used in this study.
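The abstract does not include the implementation, but the approach can be illustrated with a minimal Grad-CAM sketch in PyTorch. The backbone (a torchvision ResNet-18), the two-class usable/unusable head, the chosen layers, and the input size are illustrative assumptions, not the study's actual code.

```python
# Minimal Grad-CAM sketch, assuming a ResNet-18 stand-in for the study's CNN.
import torch
import torch.nn.functional as F
from torchvision import models

class GradCAM:
    """Captures activations and gradients from one convolutional layer."""
    def __init__(self, model, target_layer):
        self.model = model.eval()
        self.activations = None
        self.gradients = None
        target_layer.register_forward_hook(self._save_activation)
        target_layer.register_full_backward_hook(self._save_gradient)

    def _save_activation(self, module, inp, out):
        self.activations = out.detach()

    def _save_gradient(self, module, grad_in, grad_out):
        self.gradients = grad_out[0].detach()

    def __call__(self, x, class_idx=None):
        # Forward pass; use the predicted class if none is specified.
        logits = self.model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        # Backward pass on the chosen class score.
        self.model.zero_grad()
        logits[0, class_idx].backward()
        # Channel weights = global-average-pooled gradients;
        # Grad-CAM = ReLU of the weighted sum of activations.
        weights = self.gradients.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * self.activations).sum(dim=1, keepdim=True))
        # Upsample to the input resolution and normalize to [0, 1] for overlay.
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam[0, 0]  # H x W heat map

# Example: compare an early and a late layer of the (assumed) backbone.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # usable vs. unusable
image = torch.randn(1, 3, 224, 224)  # placeholder for a dental x-ray tensor
for layer in (model.layer1[-1], model.layer4[-1]):
    heatmap = GradCAM(model, layer)(image)
    print(layer.__class__.__name__, heatmap.shape)
```

The resulting heat map can be blended with the input image (for example, via a colormap and alpha compositing) to produce the per-layer overlays described above; early layers typically yield broad, edge-like maps while later layers yield more localized ones.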
Results: In earlier layers, the model appeared to focus on general features such as the lines and edges of the teeth, while in later layers it attended to more detailed aspects of the image. For all images containing artifacts, the model focused on more detailed areas of the image rather than on the artifacts themselves, whereas for images without artifacts the model focused on the areas surrounding the teeth.
Discussion and Conclusion: Because subsequent layers examined progressively more detailed aspects of the image, as shown by the Grad-CAM visualizations, they provided better insight into how the model processes information when making its classifications. Since all images with artifacts showed similar trends in the visualizations across the various layers, this provides evidence that the location and size of the artifact do not affect the model's pattern recognition and image classification.