Local Migration Model of Images Based on Deep Learning against Adversarial Attacks

2021 
Deep Neural Networks (DNNs) have achieved remarkable results on a wide range of tasks. However, DNNs are easily deceived by small input perturbations, known as adversarial attacks. An adversarial attack deliberately adds subtle interference, imperceptible to humans, to an input sample, causing the model to produce a wrong output with high confidence. Deep-Learning-as-a-Service (DLaaS) has become a major trend, and it introduces challenging security issues. Therefore, in this paper, we propose a local migration model for adversarial attack images based on deep learning. Physical-world adversarial examples are disguised in natural styles through the migration model to deceive human observers. Specifically, the model converts the small adversarial perturbation into a specific pattern and then camouflages the foreground, the background, or a local target region of the image, achieving a high degree of invisibility. Because the perturbation can be placed flexibly, this method can help assess the robustness of DNNs and can be applied to privacy protection and data security detection.
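The abstract does not detail the authors' model, but its core idea of confining an adversarial perturbation to a chosen local region (foreground, background, or a target object) can be illustrated with a mask-restricted projected gradient descent (PGD) attack. The sketch below is an assumption-laden illustration in PyTorch, not the paper's method; the function name `masked_pgd`, the PGD update rule, and all hyperparameter defaults are illustrative choices.

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, x, y, mask, eps=8/255, alpha=2/255, steps=10):
    """Illustrative sketch (not the paper's method): PGD restricted
    to a spatial mask, so only the masked region is perturbed.

    x:    input batch, shape (N, C, H, W), pixel values in [0, 1]
    y:    true labels, shape (N,)
    mask: binary tensor broadcastable to x; 1 = region allowed to change
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Maximize the loss of the model on the perturbed input.
        loss = F.cross_entropy(model(x + delta * mask), y)
        loss.backward()
        with torch.no_grad():
            # Gradient-sign ascent step, clipped to the eps ball.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # Zero the perturbation outside the allowed region.
            delta.mul_(mask)
            # Keep the perturbed image in the valid pixel range.
            delta.copy_((x + delta).clamp(0, 1) - x)
        delta.grad.zero_()
    return (x + delta * mask).detach()
```

In practice, `mask` could come from a segmentation model that separates foreground from background, so the perturbation is confined to whichever region is meant to carry the camouflage pattern.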