Physical Transferable Attack against Black-box Face Recognition Systems

2021 
Recent studies have shown that machine learning models in general, and deep neural networks such as CNNs in particular, are vulnerable to adversarial attacks. In face recognition specifically, one can easily deceive a deep network by adding a visually imperceptible adversarial perturbation to the input image. However, most of these works assume an ideal scenario in which the attacker has perfect information about the victim model and the attack is performed in the digital domain, which is not a realistic assumption. As a result, these methods often transfer poorly (or not at all) to the real world. To address this issue, we propose a novel physical, transferable attack on deep face recognition systems that works in real-world settings without any knowledge of the victim model. Our experiments on state-of-the-art models with various architectures and training losses show non-trivial attack success rates. Given these results, we believe our method can enable further studies on improving the adversarial robustness and security of deep face recognition systems.
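
To make the digital adversarial-perturbation idea from the abstract concrete, here is a minimal PyTorch sketch using the well-known FGSM attack (Goodfellow et al., 2015) as a stand-in. This is not the paper's method, which is physical and black-box; FGSM is a white-box digital attack and only illustrates the underlying perturbation concept. The face-embedding `model` and the enrolled `target_embedding` are hypothetical placeholders.

```python
# Hedged sketch: FGSM-style perturbation against a face-embedding model.
# NOT the paper's attack -- a generic illustration of adding a small,
# visually imperceptible perturbation to an input image.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, target_embedding, epsilon=8 / 255):
    """One signed-gradient step that pushes the image's face embedding
    away from its enrolled identity (a "dodging" attack)."""
    image = image.clone().detach().requires_grad_(True)
    embedding = F.normalize(model(image), dim=-1)  # unit-norm face embedding
    # Cosine similarity to the enrolled identity; the attacker wants it low,
    # so we descend along its gradient.
    loss = F.cosine_similarity(embedding, target_embedding, dim=-1).mean()
    loss.backward()
    adv = image - epsilon * image.grad.sign()      # small L-infinity step
    return adv.clamp(0.0, 1.0).detach()            # keep a valid image
```

In contrast to this white-box sketch, which requires gradients of the victim model, the paper's setting assumes no such access: the perturbation must be crafted on surrogate models, survive the digital-to-physical gap (printing, lighting, camera capture), and still transfer to an unseen black-box recognizer.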