DEAttack: A differential evolution based attack method for the robustness evaluation of medical image segmentation

2021 
Abstract Deep learning is an effective tool for assisting doctors with many time-consuming and error-prone medical image analysis tasks. However, deep models have been shown to be vulnerable to adversarial attacks, posing significant challenges to clinical applications. Existing work on the robustness of deep learning models in medicine is scarce, and most of it focuses on attacking medical image classification models. In this paper, a differential evolution attack (DEAttack) method is proposed to generate adversarial examples for medical image segmentation models. Unlike the widely investigated gradient-based attack methods, our method requires no extra information such as the network's structure and weights. Additionally, benefiting from the embedded differential evolution algorithm, which preserves diversity in the optimization space, the proposed method achieves better results than gradient-based methods: it can successfully attack a segmentation model while perturbing only a small fraction of the image pixels, demonstrating that medical image segmentation models are highly susceptible to adversarial examples. In addition to evaluating model robustness on public datasets, DEAttack was also tested on a clinical diagnostic dataset, demonstrating its strong performance and streamlined workflow for the robustness evaluation of deep models in medical image segmentation.
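The abstract describes a black-box, differential-evolution-based attack that perturbs only a few pixels and needs only the model's outputs, not its gradients or weights. Below is a minimal sketch of that general idea, not the authors' exact implementation: the candidate encoding (row, column, intensity triples), the `model.predict` interface, and the Dice-based fitness are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def dice_score(pred_mask, true_mask):
    """Dice overlap between a predicted and a ground-truth binary mask."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum() + 1e-8)

def de_attack(model, image, true_mask, n_pixels=5, max_iter=50):
    """Search for a few-pixel perturbation that degrades segmentation quality.

    Each candidate encodes n_pixels triples (row, col, intensity). The
    fitness queries only the black-box model output, so no network
    structure or weights are required.  `model.predict` is a hypothetical
    interface returning a soft segmentation map in [0, 1].
    """
    h, w = image.shape[:2]
    # Bounds per perturbed pixel: row index, column index, new intensity.
    bounds = [(0, h - 1), (0, w - 1), (0.0, 1.0)] * n_pixels

    def apply_perturbation(candidate):
        perturbed = image.copy()
        for i in range(n_pixels):
            r, c, v = candidate[3 * i: 3 * i + 3]
            perturbed[int(r), int(c)] = v
        return perturbed

    def fitness(candidate):
        # Lower Dice means more segmentation damage, so DE minimizes it.
        pred = model.predict(apply_perturbation(candidate))
        return dice_score(pred > 0.5, true_mask)

    result = differential_evolution(fitness, bounds, maxiter=max_iter,
                                    popsize=20, mutation=0.5,
                                    recombination=0.7, seed=0)
    return apply_perturbation(result.x), result.fun
```

The differential evolution loop maintains a population of candidate perturbations and recombines them each generation, which is what preserves diversity in the search space and avoids the need for gradient information.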