Two-Way Feature-Aligned And Attention-Rectified Adversarial Training

2020 
Adversarial training increases robustness by augmenting training data with adversarial examples. However, vanilla adversarial training may overfit to specific adversarial attacks. Small perturbations in images introduce errors that are gradually amplified as they propagate through the model, eventually causing misclassification. In addition, small perturbations distract the classifier's attention away from the significant features that are relevant to the true label. In this paper, we propose a novel two-way feature-aligned and attention-rectified adversarial training (FAAR) to improve adversarial training (AT). FAAR uses two-way feature alignment and attention rectification to mitigate the problems mentioned above. It effectively suppresses perturbations in low-level, high-level, and global features by moving the features of perturbed images toward those of clean images via two-way feature alignment. It also leads the model to focus more on useful features that are correlated with the true label by rectifying gradient-weighted attention. Moreover, feature alignment activates attention rectification by reducing perturbations in high-level features. Our proposed FAAR surpasses existing AT methods in three aspects. First, it pushes the model to remain invariant under different adversarial attacks and different perturbation magnitudes. Second, it can be applied to any convolutional neural network. Third, the training process is end-to-end. In experiments, FAAR shows promising defense performance on CIFAR-10 and ImageNet.
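
The abstract describes an adversarial training objective that combines a standard classification loss with feature alignment at low-level, high-level, and global stages and with gradient-weighted attention rectification. Below is a minimal PyTorch-style sketch of such an objective, written only from the description above: the interface `model(x)` returning `(low, high, global_feat, logits)`, the Grad-CAM-style attention map, the one-directional alignment toward detached clean features, and the weights `lambda_feat` and `lambda_attn` are all illustrative assumptions; the paper's exact two-way formulation and loss terms may differ.

```python
import torch
import torch.nn.functional as F

def grad_weighted_attention(feat, logits, label):
    # Gradient-weighted (Grad-CAM-style) attention: weight feature maps by the
    # gradient of the true-class score, sum over channels, then normalize.
    score = logits.gather(1, label.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, feat, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)      # per-channel weights
    attn = F.relu((weights * feat).sum(dim=1))          # (B, H, W)
    return attn / (attn.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-8)

def faar_style_loss(model, x_clean, x_adv, label, lambda_feat=1.0, lambda_attn=1.0):
    # model(x) is assumed to expose low-level, high-level, and global features.
    low_c, high_c, glob_c, logits_c = model(x_clean)
    low_a, high_a, glob_a, logits_a = model(x_adv)

    # Standard adversarial-training term: classify the perturbed input correctly.
    ce = F.cross_entropy(logits_a, label)

    # Feature alignment: pull perturbed features toward clean ones at the
    # low-level, high-level, and global stages.
    feat_align = (F.mse_loss(low_a, low_c.detach())
                  + F.mse_loss(high_a, high_c.detach())
                  + F.mse_loss(glob_a, glob_c.detach()))

    # Attention rectification: match the gradient-weighted attention of the
    # adversarial input to that of the clean input.
    attn_c = grad_weighted_attention(high_c, logits_c, label)
    attn_a = grad_weighted_attention(high_a, logits_a, label)
    attn_rect = F.mse_loss(attn_a, attn_c.detach())

    return ce + lambda_feat * feat_align + lambda_attn * attn_rect
```

In an end-to-end training loop, `x_adv` would be generated on the fly by an attack such as PGD, and the combined loss would be backpropagated through the whole network; the coefficients shown are placeholders, not values reported in the paper.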