Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface

2021 
Recent automobiles use image sensors to capture information about the physical world, and deep neural networks (DNNs) recognize the surroundings to control the vehicle. Adversarial examples and backdoor attacks, which induce misclassification by tampering with the images fed to a DNN, have been proposed as methods of attacking DNNs. As an example of an attack on DNNs deployed in automobiles, a method has been reported in which an adversarial mark is added to input images by physically placing a sticker on a road sign. However, this method has low reproducibility because it is affected by the shooting environment, and enlarging the tampered area to improve reproducibility makes the mark easy for people to notice. We propose a method of adding an adversarial mark that triggers a backdoor attack via a fault injection attack on the Mobile Industry Processor Interface (MIPI), a popular CMOS image sensor interface. This method increases the reproducibility of the attack. In our attack system, two attack drivers are electrically connected to the MIPI data lane. Most of the image signal is transferred from the sensor to the processor unmodified while the attack signal is canceled; the adversarial mark is injected by activating the attack signal. An adversary carrying out a backdoor attack mixes poison data, consisting of images tampered with the adversarial mark at a specific location and labeled with the adversarial target class, into the training dataset. The resulting backdoored model classifies images containing the adversarial mark into the adversarial target class and all other images into their correct classes. We conducted backdoor attack experiments with the MNIST dataset for handwritten digit recognition and the German Traffic Sign Recognition Benchmark (GTSRB) dataset for traffic sign recognition. Attacks on MNIST achieved a 91% success rate with 1% poison data, and attacks on GTSRB achieved a 92% success rate with 5.1% poison data.
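The data-poisoning procedure the abstract describes can be summarized in a short sketch. The following is a minimal illustration assuming grayscale images stored as NumPy arrays; the mark position, size, pixel value, and the names `add_adversarial_mark`, `poison_dataset`, `attack_success_rate`, and `model_predict` are hypothetical stand-ins, since in the actual attack the mark is produced in hardware by the MIPI fault injection rather than in software.

```python
import numpy as np

def add_adversarial_mark(image, size=4, value=255):
    # Stamp a small bright square at a fixed position, emulating the
    # pixels the attack drivers overwrite on the MIPI data lane.
    # (Illustrative position/size; the real mark is hardware-defined.)
    marked = image.copy()
    marked[-size:, -size:] = value  # bottom-right corner
    return marked

def poison_dataset(images, labels, target_class, poison_rate=0.01, seed=0):
    # Mark a random fraction of the training images and relabel them
    # with the adversary's target class, as in the described attack.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), n_poison, replace=False)
    for i in idx:
        images[i] = add_adversarial_mark(images[i])
        labels[i] = target_class
    return images, labels

def attack_success_rate(model_predict, test_images, target_class):
    # Fraction of marked test images classified as the target class;
    # this is the success-rate metric the abstract reports (91%/92%).
    # model_predict is an assumed callable mapping images to labels.
    marked = np.stack([add_adversarial_mark(x) for x in test_images])
    preds = model_predict(marked)
    return float(np.mean(preds == target_class))
```

Training a model on the output of `poison_dataset` and evaluating it with `attack_success_rate` reproduces the two measurements the abstract reports: clean images remain correctly classified, while marked images are diverted to the target class.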