Defense against Adversarial Vision Perturbations via Subspace Diagnosis

2019 
Deep neural networks are powerful learning architectures for industrial applications, especially image analysis. However, recent studies have shown that deep learning models are vulnerable to input samples crafted with adversarial perturbations. Adversarial samples are quasi-imperceptible to humans yet can easily fool deep models at deployment time, posing serious risks in safety- and security-critical settings. In view of this, this work proposes a novel real-time detection countermeasure called subspace diagnosis. First, principal component analysis is used to obtain feature subspace representations of images. The extracted dimensions are then divided into multiple clusters according to a predefined scheme, and a unified monitoring statistic is generated in each feature subspace. To decide the normal boundaries, a Gaussian mixture model is fitted and a Bayesian inference mechanism is developed for adversarial example detection. Finally, to quantitatively evaluate the impact of various attacks, a subspace contribution index is constructed for multi-space perturbation diagnosis. The effectiveness of the entire framework is demonstrated extensively on several datasets.
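
What follows is a minimal Python sketch of the pipeline the abstract describes, built on scikit-learn. The grouping scheme, the T2-style monitoring statistic, the fixed log-density threshold, and the contribution index here are illustrative assumptions standing in for the paper's exact formulation, which the abstract does not specify.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_detector(X_clean, n_components=50, n_groups=5, n_gmm=3):
    """Fit PCA subspaces and per-subspace GMMs on clean (non-adversarial) images.

    X_clean: array of shape (n_samples, n_features), flattened clean images.
    """
    pca = PCA(n_components=n_components).fit(X_clean)
    scores = pca.transform(X_clean)
    # Predefined scheme (assumed): split the principal components into
    # contiguous groups, each group defining one feature subspace.
    groups = np.array_split(np.arange(n_components), n_groups)
    gmms = []
    for idx in groups:
        # Unified monitoring statistic per subspace (assumed T2-like):
        # variance-normalized squared scores summed over the group.
        t2 = np.sum(scores[:, idx] ** 2 / pca.explained_variance_[idx], axis=1)
        # Model the clean-data distribution of the statistic with a GMM.
        gmms.append(GaussianMixture(n_components=n_gmm).fit(t2.reshape(-1, 1)))
    return pca, groups, gmms

def detect(x, pca, groups, gmms, log_density_threshold=-10.0):
    """Flag x as adversarial if any subspace statistic falls in a
    low-density region of the clean-data GMM (a simple stand-in for
    the paper's Bayesian inference), and return a per-subspace
    contribution index for perturbation diagnosis."""
    score = pca.transform(x.reshape(1, -1))[0]
    log_densities = np.array([
        gmm.score_samples([[np.sum(score[idx] ** 2 / pca.explained_variance_[idx])]])[0]
        for idx, gmm in zip(groups, gmms)
    ])
    is_adversarial = bool(np.any(log_densities < log_density_threshold))
    # Contribution index (assumed): each subspace's share of the total
    # anomaly mass, indicating where the perturbation concentrates.
    anomaly = np.maximum(log_density_threshold - log_densities, 0.0)
    contribution = anomaly / anomaly.sum() if anomaly.sum() > 0 else anomaly
    return is_adversarial, contribution

In this sketch a sample is flagged when any subspace's monitoring statistic lands in a low-density region under the clean-data GMM; the contribution index then attributes the anomaly across subspaces, mirroring the multi-space perturbation diagnosis described above.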