Patch-Based Attack on Traffic Sign Recognition

2021 
Deep neural networks are known to be vulnerable to adversarial examples. This weakness poses a security problem for machine vision systems, and for automated driving it becomes a safety-critical issue for the perception stack. In this paper, we propose an adversarial patch method to attack traffic sign recognition. On the GTSRB dataset (the German Traffic Sign Recognition Benchmark), we train three classifiers with good generalization ability: GTSRB-VGG16, GTSRB-ResNet34, and GTSRB-GoogLeNet. Because the patch is optimized under a wide variety of transformations and at random locations, the converged patch is robust and not tied to any particular placement. The attack success rate (ASR) exceeds 90% on GTSRB-ResNet34 and GTSRB-GoogLeNet. For GTSRB-VGG16, whose architecture is comparatively more robust, an ensemble attack is needed to fool the classifier. Our method also performs well in a physical attack experiment, achieving over 70% ASR on GTSRB-ResNet34. We further analyze the robustness of GTSRB-VGG16 and highlight two findings: 1) the features learned by GTSRB-VGG16 are more explainable than those of the other two models, which contributes to its robustness; 2) GTSRB-VGG16 has more parameters, which potentially makes it more robust against the attack. Our code is available at https://github.com/yeibn999/Patch-attack-on-traffic-sign.
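
The sketch below illustrates, in a minimal form, the general adversarial-patch training idea described above: a single patch tensor is optimized so that, when pasted onto input images under random rotations, scales, and locations, the classifier is pushed toward an attacker-chosen class. It is not the authors' implementation (which is in the linked repository); the model (an untrained torchvision ResNet-34 standing in for GTSRB-ResNet34), image and patch sizes, transformation ranges, target class, and the targeted cross-entropy loss are all illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# Hypothetical settings; the paper trains its own GTSRB classifiers (43 classes),
# which are not reproduced here. A torchvision ResNet-34 acts as a stand-in.
NUM_CLASSES = 43
IMAGE_SIZE = 64          # assumed GTSRB input resolution
PATCH_SIZE = 16          # assumed patch side length
TARGET_CLASS = 14        # illustrative target class choice
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet34(num_classes=NUM_CLASSES).to(DEVICE).eval()
for p in model.parameters():
    p.requires_grad_(False)   # the classifier stays fixed; only the patch is trained

# The patch is the only trainable tensor.
patch = torch.rand(3, PATCH_SIZE, PATCH_SIZE, device=DEVICE, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)


def apply_patch(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste a randomly rotated, randomly scaled patch at a random location."""
    batch = images.clone()
    for i in range(batch.size(0)):
        # Random rotation and scale, in the spirit of training under many transformations.
        angle = float(torch.empty(1).uniform_(-30.0, 30.0))
        scale = float(torch.empty(1).uniform_(0.8, 1.2))
        size = max(4, int(PATCH_SIZE * scale))
        p = TF.resize(TF.rotate(patch, angle), [size, size], antialias=True)
        p = p.clamp(0.0, 1.0)
        # Random placement inside the image.
        x = int(torch.randint(0, IMAGE_SIZE - size + 1, (1,)))
        y = int(torch.randint(0, IMAGE_SIZE - size + 1, (1,)))
        batch[i, :, y:y + size, x:x + size] = p
    return batch


def train_step(images: torch.Tensor) -> float:
    """One optimization step: push patched images toward the target class."""
    patched = apply_patch(images.to(DEVICE), patch)
    logits = model(patched)
    target = torch.full((images.size(0),), TARGET_CLASS,
                        dtype=torch.long, device=DEVICE)
    loss = F.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep the patch a valid image
    return loss.item()


# Usage (replace the random tensors with a real GTSRB DataLoader):
dummy_batch = torch.rand(8, 3, IMAGE_SIZE, IMAGE_SIZE)
for step in range(5):
    print(f"step {step}, loss {train_step(dummy_batch):.4f}")

Because the patch sees a different rotation, scale, and position at every step, the optimized result is not tied to one placement, which is the property the abstract highlights; an ensemble attack (e.g. against GTSRB-VGG16) would sum this loss over several fixed classifiers before the backward pass.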