Exploiting Adversarial Examples to Drain Computational Resources on Mobile Deep Learning Systems

2020 
To enable deep learning tasks everywhere, many optimizations have been proposed to address the resource limitations of mobile systems such as IoT devices. A key approach is to dynamically adjust the computational resources of deep learning inference according to the characteristics of incoming inputs. For example, a popular optimization selects, for each input, a suitable combination of computations based on its inference difficulty. However, we find that such "dynamic routing" of computations can be exploited to drain or waste precious resources on mobile deep learning systems. In this work, we introduce a new deep learning attack dimension, computational resource draining, and demonstrate its feasibility through one possible attack vector: adversarial examples of input data. We describe how to construct special adversarial examples aimed at resource draining, and show on two experimental datasets that these poisoned inputs can deliberately increase computation loads. We hope that our findings can shed light on improving the robustness of mobile deep learning optimizations.
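To make the described attack surface concrete, below is a minimal, hypothetical sketch (not the authors' published method) of how a resource-draining adversarial example might be crafted against an early-exit style dynamic-routing network: a PGD-like perturbation is optimized to keep every early exit's confidence below its threshold, so the input is routed through all remaining layers and consumes maximum compute. The model interface (`model(x, return_all_exits=True)`), the `EarlyExitNet`-style design, and the hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: craft an input perturbation that suppresses early-exit
# confidence in a dynamic-routing network, forcing full-depth inference.
import torch
import torch.nn.functional as F

def craft_draining_example(model, x, epsilon=8/255, step=1/255, iters=40):
    """PGD-style optimization that lowers confidence at every early exit."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        # Assumed interface: the model returns logits from every exit branch.
        exit_logits = model(x_adv, return_all_exits=True)
        # Penalize high confidence (max softmax probability) at each early exit,
        # so the sample is routed past every exit and consumes maximum compute.
        loss = sum(F.softmax(logits, dim=1).max(dim=1).values.mean()
                   for logits in exit_logits[:-1])  # exclude the final classifier
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - step * grad.sign()                         # lower exit confidence
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)           # stay in the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                              # keep a valid image
    return x_adv.detach()
```

Under these assumptions, the crafted input looks nearly identical to the original but never triggers an early exit, which is the kind of computation-load increase the abstract refers to.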