Image Disguising for Protecting Data and Model Confidentiality in Outsourced Deep Learning

2021 
Large training datasets and expensive model tuning are common features of deep learning development for images. As a result, data owners often utilize cloud resources or machine learning service providers to develop large-scale complex models. This practice, however, raises serious privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we aim to achieve a better trade-off among the level of protection for outsourced DNN model training, the cost, and the utility of the data, using novel image disguising mechanisms. We design a suite of image disguising methods that are efficient to implement, and we analyze them to understand multiple levels of trade-offs between data utility and confidentiality protection. The experimental evaluation shows the surprising ability of DNN modeling methods to discover patterns in disguised images, and the flexibility of these image disguising mechanisms in achieving different levels of resilience to attacks.
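The abstract does not detail the specific disguising mechanisms; the sketch below illustrates one generic idea used in this line of work, a keyed block-wise permutation. The function name, block size, and seed-as-key convention are illustrative assumptions, not the paper's actual API: all training images are transformed with the same secret key, so a DNN trained on the disguised images can still learn consistent patterns while each image is visually scrambled.

```python
import numpy as np

def disguise_blocks(image: np.ndarray, block: int, seed: int) -> np.ndarray:
    """Split an image into block x block tiles and shuffle the tiles with a
    pseudorandom permutation derived from `seed` (the data owner's secret key).

    Hypothetical sketch of block-permutation image disguising; the paper's
    actual transforms are not specified in this abstract.
    """
    h, w = image.shape[:2]
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    rows, cols = h // block, w // block
    # Cut the image into tiles in row-major order.
    tiles = [image[r*block:(r+1)*block, c*block:(c+1)*block]
             for r in range(rows) for c in range(cols)]
    # A fixed seed yields the same permutation for every image in the dataset.
    perm = np.random.default_rng(seed).permutation(len(tiles))
    out = np.empty_like(image)
    for dst, src in enumerate(perm):
        r, c = divmod(dst, cols)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = tiles[src]
    return out

# Example: disguise a toy 8x8 "image" with 4x4 blocks under secret seed 42.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
masked = disguise_blocks(img, block=4, seed=42)
```

Because the permutation only rearranges tiles, pixel statistics within each tile are preserved, which is one reason such transforms trade some confidentiality for model utility; stronger variants in this design space additionally perturb pixel values inside each block.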