An improved ShapeShifter method of generating adversarial examples for physical attacks on stop signs against Faster R-CNNs
2020
Abstract Vehicles increasingly rely on object detectors to perceive driving conditions, and those detectors are largely built on deep neural networks. As neural networks proliferate, they face severe threats such as adversarial attacks, which endanger vehicle safety. Researchers can devise better defence measures only if such attacks are studied thoroughly. However, most existing methods for generating adversarial examples focus on classification. Moreover, while stop signs in English have been a popular target of adversarial attacks, it remains an open question whether stop signs in Chinese are similarly vulnerable. In this paper, we propose an improved ShapeShifter method that generates adversarial examples against Faster Region-based Convolutional Neural Network (Faster R-CNN) object detectors by adding white Gaussian noise to ShapeShifter's optimization function. Experiments verify that the improved ShapeShifter method successfully and effectively attacks Faster R-CNN detectors of stop signs in both English and Chinese, outperforming the original ShapeShifter under certain circumstances. It is also more robust, overcoming ShapeShifter's dependence on high-quality photographic equipment.
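To make the described modification concrete, below is a minimal PyTorch sketch of a ShapeShifter-style targeted attack with white Gaussian noise injected inside the optimization loop. This is our illustration, not the paper's released code (which attacks a TensorFlow Faster R-CNN): the torchvision detector, the single random rotation standing in for the full expectation-over-transformation set, and all names and hyperparameters (`patch`, `sigma`, the learning rate, the target box) are assumptions.

```python
import torch
import torchvision
import torchvision.transforms.functional as TF

# Pretrained torchvision Faster R-CNN as a stand-in for the paper's detector.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()                          # train mode -> model returns a loss dict
for p in model.parameters():
    p.requires_grad_(False)            # attack the input, not the weights

patch = torch.rand(3, 200, 200, requires_grad=True)  # adversarial sign poster
opt = torch.optim.Adam([patch], lr=1e-2)
sigma = 0.1                            # std of the added white Gaussian noise (assumed)

# The attacker's desired (wrong) detection for the sign region.
target = [{"boxes": torch.tensor([[20.0, 20.0, 180.0, 180.0]]),
           "labels": torch.tensor([1])}]

for step in range(200):
    # White Gaussian noise added inside the objective: the expectation is
    # now over random transformations *and* noise, which is the change the
    # abstract describes relative to the original ShapeShifter.
    noisy = (patch + sigma * torch.randn_like(patch)).clamp(0.0, 1.0)
    angle = float(torch.empty(1).uniform_(-15.0, 15.0))
    img = TF.rotate(noisy, angle)      # one random physical-world transform
    loss = sum(model([img], target).values())
    opt.zero_grad()
    loss.backward()                    # gradient flows back to the patch pixels
    opt.step()
```

In a faithful reimplementation, the single rotation would be replaced by ShapeShifter's full set of random scalings, rotations, and lighting changes, with the loss averaged over a batch of transformed, noise-perturbed patches.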