Secure Deep Neural Network Models Publishing Against Membership Inference Attacks Via Training Task Parallelism

2021 
Training deep neural networks typically requires vast amounts of data and computing resources, a cost that individual users cannot afford. Motivated by the growing demand for deep learning applications, sharing well-trained models has become popular. The owner of a pre-trained model can share it either by publishing the model directly or by providing a prediction interface. Either way, individual users can benefit from deep learning at little cost, and computing resources are saved. However, recent studies in machine learning security have identified severe threats to both publishing approaches. This paper focuses on the privacy leakage caused by publishing well-trained deep neural network models. To tackle this problem, we propose a series of secure model publishing solutions based on training task parallelism. Specifically, we show how to estimate private model parameters through parallel model training and how to generate new model parameters in a privacy-preserving manner to replace the original ones for publishing. Building on data parallelism and parameter generating techniques, we design two further solutions that concentrate on model quality and on parameter privacy, respectively. Through privacy leakage analysis and experimental attack evaluation, we conclude that deep neural network models published with our solutions provide on-demand model quality guarantees and resist membership inference attacks.
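To make the idea of "parameter generation via parallel training" concrete, the sketch below shows one plausible reading of the abstract, not the authors' actual algorithm: several replicas of the same architecture are trained in parallel on disjoint shards of the private data (data parallelism), and the parameters released for publishing are generated by aggregating the replicas' parameters rather than taken from a single model fit on the full private set. All names, the toy architecture, and the averaging step are illustrative assumptions.

```python
# Hypothetical sketch of publishing generated parameters instead of the original
# ones. This is an assumption about the approach described in the abstract, not
# the paper's method: K replicas are trained on disjoint data shards, and the
# published parameters are an aggregate (here, a plain average) of the replicas.
import copy
import torch
import torch.nn as nn


def make_model():
    # Small stand-in architecture; the real target would be a deep network.
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))


def train_replica(model, xs, ys, epochs=5):
    # Ordinary supervised training of one replica on its private shard.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(xs), ys).backward()
        opt.step()
    return model


def generate_published_params(replicas):
    # One possible privacy-preserving "parameter generating" step: average the
    # replicas' state dicts (noise could be added here for stronger guarantees).
    avg = copy.deepcopy(replicas[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([r.state_dict()[key] for r in replicas]).mean(dim=0)
    return avg


if __name__ == "__main__":
    # Synthetic private training data, split into disjoint shards (data parallelism).
    x, y = torch.randn(300, 20), torch.randint(0, 2, (300,))
    shards = list(zip(x.chunk(3), y.chunk(3)))

    replicas = [train_replica(make_model(), xs, ys) for xs, ys in shards]

    # The published model's parameters replace the original ones for release.
    published = make_model()
    published.load_state_dict(generate_published_params(replicas))
```

Under this reading, no single published weight is the direct product of fitting the full private dataset, which is the intuition behind resisting membership inference; the paper's own solutions presumably refine the aggregation step to trade off model quality against parameter privacy.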