Towards Privacy-Preserving Aggregated Prediction from SPDZ

2020 
Machine learning models trained on data collected from multiple parties can offer prediction services to clients. However, this raises privacy concerns for both model owners and clients: the models may inadvertently disclose details of the training data through query responses, and the clients' private inputs may be obtained by the service providers. In this work, a privacy-preserving aggregated prediction framework is proposed that combines two privacy-preserving techniques, differential privacy and secret sharing, to ensure privacy. Specifically, individual parties first train local models that satisfy differential privacy. Two non-colluding servers then independently collect shares of the parties' trained models and of the clients' inputs, and provide online prediction. Finally, the clients reconstruct the predictions from the servers' shares and aggregate them into the final prediction. Notably, during the prediction phase, no participant can obtain the private information of the others. We evaluate the performance of our framework on the MNIST dataset. The experimental results show that the framework strikes a balance between utility and privacy.
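As a rough illustration of the online phase described above, the following minimal sketch (not from the paper; the modulus, scaling factor, and all names are our own assumptions) shows two-server additive secret sharing over a prime field for linear-model prediction, with reconstruction and majority-vote aggregation on the client side. Because a matrix-vector product is linear, each server can evaluate its share independently and the shares sum to the true logits mod P. The sketch omits SPDZ's MACs and Beaver-triple preprocessing and keeps the model weights in the clear for brevity; in the actual protocol the DP-trained models are secret-shared as well.

import numpy as np

# Illustrative parameters (assumptions, not from the paper).
P = 2**61 - 1      # prime modulus for the additive secret-sharing field
SCALE = 2**16      # fixed-point scaling factor for real-valued data
rng = np.random.default_rng(0)

def encode(v):
    """Map a real vector/matrix to fixed-point field elements mod P."""
    return np.round(v * SCALE).astype(np.int64) % P

def share(x_int):
    """Split field elements into two additive shares, one per server."""
    r = rng.integers(0, P, size=x_int.shape, dtype=np.int64)
    return r, (x_int - r) % P

def reconstruct(s0, s1):
    """Recombine the shares held by the two non-colluding servers."""
    return (np.asarray(s0, dtype=object) + np.asarray(s1, dtype=object)) % P

def predict_share(W_int, x_share):
    """One server's share of a linear model's logits.
    Linearity gives W @ x0 + W @ x1 == W @ x (mod P); big-int (object)
    arithmetic avoids int64 overflow when multiplying field elements."""
    return np.dot(W_int.astype(object), x_share.astype(object)) % P

# Each party trains its model locally under differential privacy;
# random weights stand in for those DP-trained models here.
models = [rng.normal(size=(10, 784)) for _ in range(3)]  # 3 parties, 10 classes

# The client encodes its input and sends one share to each server.
x = rng.normal(size=784)
x0, x1 = share(encode(x))

# Servers evaluate share-wise; the client reconstructs each party's
# logits and aggregates the per-model labels by majority vote.
votes = []
for W in models:
    W_int = encode(W)  # weights would themselves be secret-shared in SPDZ
    logits = reconstruct(predict_share(W_int, x0), predict_share(W_int, x1))
    # Centered lift from the field back to signed fixed-point values.
    signed = np.where(logits > P // 2, logits - P, logits).astype(np.int64)
    votes.append(int(np.argmax(signed)))
prediction = max(set(votes), key=votes.count)  # aggregated final label
print(votes, "->", prediction)

Since each server sees only a uniformly random share of the client's input and of every intermediate value, neither learns anything on its own; only the client, holding both output shares, can recover the predictions it then aggregates.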