Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach

2021 
Adversarial examples, images with imperceptible perturbations crafted to mislead deep neural networks (DNNs), have attracted great attention in recent years. Although several defense strategies have achieved encouraging robustness against adversarial samples, most of them still fail to consider robustness to common corruptions (e.g., noise, blur, and weather/digital effects). To address this problem, we propose a simple yet effective method, named Progressive Diversified Augmentation (PDA), which improves the robustness of DNNs by progressively injecting diverse adversarial noise during training. As a result, DNNs trained with PDA achieve better general robustness against both adversarial attacks and common corruptions than those trained with other strategies. In addition, PDA requires less training time and maintains high standard accuracy on clean examples. Further, we theoretically prove that PDA can control the perturbation bound and guarantee better robustness. Extensive experiments on CIFAR-10, SVHN, ImageNet, CIFAR-10-C, and ImageNet-C demonstrate that PDA comprehensively outperforms its counterparts in robustness to adversarial examples and common corruptions while preserving accuracy on clean images. Additional experiments with frequency-based perturbations and visualized gradients further show that PDA achieves general robustness and is better aligned with the human visual system.
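The abstract only describes PDA at a high level, so the following is a minimal, hypothetical sketch of what "progressively injecting diverse adversarial noise during training" could look like in a PyTorch-style loop. The linear epsilon schedule, the menu of noise types, and the clean-plus-noisy loss combination are assumptions for illustration, not the authors' exact algorithm.

```python
import random
import torch

def pda_style_training(model, loader, optimizer, epochs, eps_max=8 / 255):
    """Hypothetical sketch: grow the perturbation budget over epochs and
    sample a different noise type per batch (diversified augmentation)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    noise_types = ["gaussian", "uniform", "fgsm"]  # assumed noise menu

    for epoch in range(epochs):
        # Progressive bound: assumed linear schedule up to eps_max.
        eps = eps_max * (epoch + 1) / epochs
        for x, y in loader:
            kind = random.choice(noise_types)
            if kind == "gaussian":
                delta = eps * torch.randn_like(x)
            elif kind == "uniform":
                delta = torch.empty_like(x).uniform_(-eps, eps)
            else:
                # One-step gradient (FGSM-like) adversarial noise.
                x_adv = x.clone().requires_grad_(True)
                loss_fn(model(x_adv), y).backward()
                delta = eps * x_adv.grad.sign()

            # Keep the perturbation inside the current budget and image range.
            x_noisy = (x + delta.clamp(-eps, eps)).clamp(0, 1).detach()

            optimizer.zero_grad()
            # Assumed combined objective: noisy term for robustness,
            # clean term to preserve standard accuracy.
            loss_fn(model(x_noisy), y).backward()
            loss_fn(model(x), y).backward()
            optimizer.step()
```

Under this reading, the "progressive" part is the growing budget eps and the "diversified" part is the per-batch choice among random and gradient-based noise; the paper itself should be consulted for the actual schedule and noise set.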