Towards Invisible Adversarial Examples against DNN-based Privacy Leakage for Internet of Things

2020 
Deep neural networks (DNNs) can be used maliciously to compromise the privacy of data stored on electronic devices, e.g., to identify the images stored on a mobile phone connected to the Internet of Things (IoT). However, recent studies have demonstrated that DNNs are vulnerable to adversarial examples: artificially designed perturbations of original samples that mislead DNNs. Adversarial examples can therefore be used to protect against DNN-based privacy leakage on mobile phones by replacing the stored photos with adversarial examples. To avoid affecting the normal use of the photos, the adversarial examples must be highly similar to the original images; to process the large number of photos stored on a device in reasonable time, the method must also be time-efficient. Previous methods fail to satisfy both requirements. In this article, we propose a broad class of selective gradient sign iterative algorithms that make adversarial examples practical for protecting the privacy of photos on IoT devices. By ignoring unimportant image pixels during the iterative attack, selected according to the sorted magnitudes of the first-order partial derivatives, we steer the optimization direction to reduce the image distortion of adversarial examples without resorting to time-consuming tricks. Extensive experimental results show that the proposed methods successfully fool neural network image classifiers with only small changes in visual appearance while consuming little computation time.
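
To illustrate the idea, below is a minimal PyTorch sketch of one plausible member of this algorithm family. It is not the authors' exact formulation: the function name selective_sign_attack, the keep_ratio parameter (the fraction of pixels updated per step), and the top-k selection rule are assumptions made for illustration; the abstract only states that pixels are selected by the sorted magnitudes of the first-order partial derivatives.

    import torch
    import torch.nn.functional as F

    def selective_sign_attack(model, x, y, eps=8/255, alpha=1/255,
                              steps=10, keep_ratio=0.2):
        """Iterative sign attack that perturbs only the pixels whose
        first-order partial derivatives have the largest magnitudes.

        keep_ratio is a hypothetical knob: the fraction of pixels
        updated per step (the paper's exact selection rule may differ).
        Expects x as a float image batch in [0, 1] of shape (B, C, H, W).
        """
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]

            # Rank pixels by |dL/dx| per sample and keep only the
            # top keep_ratio fraction; the rest are left untouched.
            flat = grad.abs().flatten(1)
            k = max(1, int(keep_ratio * flat.shape[1]))
            thresh = flat.topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
            mask = (grad.abs() >= thresh).float()

            # Standard sign step, restricted by the selection mask,
            # then projected back into the eps-ball and valid range.
            x_adv = x_adv.detach() + alpha * mask * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

Restricting the sign update to the highest-magnitude derivatives keeps each iteration roughly as cheap as a standard iterative sign step while leaving most pixels unchanged, which is what limits the visible distortion without expensive extra optimization.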