Compound adversarial examples in deep neural networks

2022 
Although deep learning has made great progress in many fields, deep neural networks are still vulnerable to adversarial examples. Many methods for generating adversarial examples have been proposed, which produce either an adversarial perturbation or an adversarial patch. In this paper, we explore a method that creates compound adversarial examples containing both a perturbation and a patch. We show that fusing two weak attack modes can produce more powerful adversarial examples, where the patch covers only a small fraction of the pixels at a random location in the image, and the perturbation changes each original pixel value by only 2/255 (on a 0–1 scale). For both targeted and untargeted attacks, the compound attack improves the efficiency of generating adversarial examples and attains a higher attack success rate with fewer iteration steps. The compound adversarial examples successfully attack models with defensive mechanisms that could previously defend against perturbation attacks or patch attacks. Furthermore, the compound adversarial examples show good transferability across both normally trained and adversarially trained classifiers. Experimental results on a series of widely used classifiers and defense models show that the proposed compound adversarial examples have strong robustness, high effectiveness, and good transferability.
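As a rough illustration of the compound construction the abstract describes, the sketch below combines an L∞-bounded perturbation (budget 2/255, as stated) with a patch pasted at a random location. This is a hypothetical minimal sketch with assumed function and parameter names, not the paper's actual attack, which would optimize both components against a target model.

```python
import numpy as np

def compound_adversarial_example(image, perturbation, patch, eps=2/255, rng=None):
    """Fuse a small bounded perturbation with a patch at a random location.

    Hypothetical sketch (names and signature are assumptions, not from the paper):
    image:        float array in [0, 1], shape (H, W, C)
    perturbation: array of the same shape; clipped to the L-inf budget eps
    patch:        float array in [0, 1], shape (h, w, C); overwrites its region
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W, _ = image.shape
    h, w, _ = patch.shape
    # Global perturbation: each pixel changes by at most eps (2/255 in the paper)
    adv = np.clip(image + np.clip(perturbation, -eps, eps), 0.0, 1.0)
    # Patch: replace a small region at a uniformly random location
    y = rng.integers(0, H - h + 1)
    x = rng.integers(0, W - w + 1)
    adv[y:y + h, x:x + w, :] = patch
    return adv
```

In an actual attack, the perturbation and patch contents would be optimized jointly (e.g. by iterative gradient steps on the classifier's loss) rather than supplied as fixed inputs; this sketch only shows how the two modes compose into a single example.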