Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis Against Adversarial Attacks

2020 
Deep neural networks are increasingly used for disease diagnosis and lesion localization on biomedical images. However, deep neural networks not only require large sets of expensive ground truth (image labels or pixel annotations) for training; they are also susceptible to adversarial attacks. Transfer learning alleviates the former problem to some extent: the lower layers of a neural network are pre-trained on a large labeled dataset from a different domain (e.g., ImageNet), while only the upper layers are fine-tuned on the target domain (e.g., chest X-rays). An alternative to transfer learning is self-supervised learning, in which a supervised task is created using the unlabeled images from the target domain itself to pre-train the lower layers. In this work, we show that self-supervised learning combined with adversarial training offers additional advantages over transfer learning as well as vanilla self-supervised learning. In particular, the process of adversarial training itself acts as data augmentation for self-supervision. This adversarial data augmentation both reduces the amount of supervised data required for comparable accuracy and confers natural robustness to adversarial attacks. We support our claims with experiments on two modalities and tasks: classification of chest X-rays and segmentation of MRI images.
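To make the core idea concrete, the following is a minimal, hypothetical NumPy sketch (not code from the paper) of adversarial examples used as data augmentation. It uses a one-step FGSM-style perturbation on a toy logistic "pretext task" model, where the input gradient is analytic; all names, the model, and the data are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_augment(x, y, w, eps=0.1):
    """One-step FGSM perturbation of inputs x for a logistic model.

    For the logistic loss, the gradient w.r.t. the input is
    (sigmoid(x @ w) - y) * w, so the attack is analytic here.
    """
    grad = (sigmoid(x @ w) - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

def loss(x, y, w):
    """Mean binary cross-entropy of the toy pretext model."""
    p = sigmoid(x @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

rng = np.random.default_rng(0)
w = rng.normal(size=5)           # stand-in pretext-task model weights
x = rng.normal(size=(8, 5))      # stand-in unlabeled target-domain inputs
y = (x @ w > 0).astype(float)    # pretext labels (hypothetical task)

x_adv = fgsm_augment(x, y, w, eps=0.1)

# The perturbed copies raise the pretext loss, so training on (x, y)
# together with (x_adv, y) acts as data augmentation for self-supervision.
```

In a real pipeline the perturbed inputs would be generated on the fly during pre-training (e.g., via backpropagation through the network), and the lower layers pre-trained this way would then be fine-tuned with a small amount of labeled target-domain data.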