Generative adversarial network-based image completion to identify abnormal locations in digital breast tomosynthesis images

2020 
Deep learning has achieved great success in image analysis and decision making in radiology. However, a large amount of annotated imaging data is needed to construct well-performing deep learning models. A particular challenge in the context of breast cancer is the number of available cases that contain cancer, given the very low prevalence of the disease in the screening population. The question arises whether normal cases, which in the context of breast cancer screening are available in abundance, can be used to train a deep learning model that identifies abnormal locations. In this study, we propose to achieve this goal through generative adversarial network (GAN)-based image completion. Our hypothesis is that if a generative network has difficulty correctly completing a part of an image at a certain location, then that location is likely to represent an abnormality. We test this hypothesis using a dataset of 4348 patients with digital breast tomosynthesis (DBT) imaging from our institution. We trained our model on normal-only images to fill in parts of images that had been artificially removed. Then, using an independent test set, we measured how difficult it was for the network to reconstruct an artificially removed patch at different locations in the images. The difficulty was measured by the mean squared error (MSE) between the original removed patch and the reconstructed patch. On average, the MSE was 2.11 times higher (with a standard deviation of 1.01) at locations containing expert-annotated cancerous lesions than at locations outside those annotated abnormalities. Our generative approach demonstrates great potential for aiding breast cancer detection.
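
The scoring procedure described in the abstract can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration rather than the authors' implementation: the `completion_model` callable, the patch size, the stride, and the zero-fill mask value are all assumptions introduced for the example. The abstract only specifies that the network is trained on normal images to fill in artificially removed patches and that completion difficulty is scored by the MSE between the original and reconstructed patch.

```python
import numpy as np


def completion_anomaly_score(image, completion_model, patch_size=64, stride=32):
    """Slide a window over a 2D image, mask each patch, ask the (assumed)
    completion model to fill it in, and score the location by the MSE between
    the original and reconstructed patch. Per the paper's hypothesis, a higher
    MSE means the location was harder to complete and is more likely abnormal.
    """
    h, w = image.shape
    scores = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            original_patch = image[y:y + patch_size, x:x + patch_size].copy()

            # Artificially remove the patch (zero fill is an assumption here).
            masked = image.copy()
            masked[y:y + patch_size, x:x + patch_size] = 0.0

            # `completion_model` is assumed to return a full image with the
            # masked region filled in; in the paper this role is played by a
            # generative network trained on normal-only DBT images.
            completed = completion_model(masked)
            reconstructed_patch = completed[y:y + patch_size, x:x + patch_size]

            mse = float(np.mean((original_patch - reconstructed_patch) ** 2))
            scores.append(((y, x), mse))
    return scores


if __name__ == "__main__":
    # Toy usage with a stand-in "model", just to show the call pattern;
    # a real model would be a trained generative completion network.
    rng = np.random.default_rng(0)
    toy_image = rng.random((256, 256)).astype(np.float32)

    def identity_model(img):
        # Stand-in: returns the masked image unchanged (no actual completion).
        return img

    scores = completion_anomaly_score(toy_image, identity_model)
    print(max(scores, key=lambda s: s[1]))  # location with the highest MSE
```

In the study, the per-location MSE values would then be compared between expert-annotated lesion locations and locations elsewhere in the breast, which is where the reported 2.11-fold average difference comes from.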