Cloud-based Image Classification Service is Not Robust to Affine Transformation: A Forgotten Battlefield

2019 
Many recent works have demonstrated that deep learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, while an attacker can typically access only the APIs exposed by cloud platforms. Keeping models in the cloud can therefore give a (false) sense of security. Unfortunately, cloud-based image classification services are not robust to affine transformations such as translation, rotation, scaling, and shearing. In this paper, (1) we make the first attempt to conduct an extensive empirical study of Affine Transformation (AT) attacks against mainstream real-world cloud-based classification services. Through evaluations on three popular cloud platforms, Amazon, Google, and Microsoft, we demonstrate that the AT attack can reduce top-1 accuracy from approximately 100% to 30% across different classification services. (2) We propose two defense algorithms to address these security challenges. Experiments show that our defense techniques can effectively defend against the AT attack, improving the top-1 accuracy of state-of-the-art models from 50% to approximately 90%. (3) We visualize the attack and defense processes from the perspective of the convolutional neural network.
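To make the attack surface concrete, the sketch below shows the kind of affine perturbation the abstract describes: an image is warped by a 2x2 linear map (rotation, scaling, or shearing) plus a translation, then resubmitted to the classifier. This is our own minimal numpy illustration (grayscale, nearest-neighbour sampling, zero fill), not the paper's implementation; the function and helper names are ours.

```python
import numpy as np

def affine_attack(img, A, t=(0.0, 0.0)):
    """Warp a 2-D grayscale image by the affine map x -> A x + t, taken
    about the image centre. Uses inverse mapping with nearest-neighbour
    sampling; source locations outside the image are filled with 0."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    A_inv = np.linalg.inv(np.asarray(A, dtype=float))
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, undo the translation and the linear map
    # to find where the pixel came from in the source image.
    rel = np.stack([xs.ravel() - cx - t[0], ys.ravel() - cy - t[1]])
    src = A_inv @ rel
    sx = np.rint(src[0] + cx).astype(int)
    sy = np.rint(src[1] + cy).astype(int)
    out = np.zeros_like(img)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

def rotation_matrix(deg):
    """2x2 rotation matrix for the given angle in degrees."""
    r = np.deg2rad(deg)
    return np.array([[np.cos(r), -np.sin(r)],
                     [np.sin(r),  np.cos(r)]])
```

The other transformations the paper names fit the same template: scaling is `A = [[s, 0], [0, s]]`, shearing is `A = [[1, k], [0, 1]]`, and pure translation is `A = I` with a nonzero `t`. An attacker sweeps such parameters and keeps any transformed image that the cloud API misclassifies while it remains recognizable to a human.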