Local Differential Privacy for Deep Learning
2020
The Internet of Things (IoT) is transforming major industries, including but not limited to healthcare, agriculture, finance, energy, and transportation. IoT platforms are continually improving with innovations such as the amalgamation of software-defined networks (SDNs) and network function virtualization (NFV) in the edge-cloud interplay. Deep learning (DL) is becoming popular due to its remarkable accuracy when trained with massive amounts of data, such as that generated by IoT. However, DL algorithms tend to leak privacy when trained on highly sensitive crowd-sourced data such as medical data. Existing privacy-preserving DL algorithms rely on traditional server-centric approaches that require high processing power. We propose a new local differentially private (LDP) algorithm named LATENT that redesigns the training process. LATENT enables a data owner to add a randomization layer before data leave the data owner's device and reach a potentially untrusted machine learning service. This feature is achieved by splitting the architecture of a convolutional neural network (CNN) into three layers: 1) convolutional module (CNM); 2) randomization module; and 3) fully connected module. Hence, the randomization module can operate as an NFV privacy preservation service in an SDN-controlled NFV, making LATENT more practical for IoT-driven cloud-based environments than existing approaches. The randomization module employs a newly proposed LDP protocol named utility enhancing randomization, which allows LATENT to maintain high utility compared to existing LDP protocols. Our experimental evaluation of LATENT on convolutional deep neural networks demonstrates excellent accuracy (e.g., 91%–96%) with high model quality even under low privacy budgets (e.g., $\varepsilon = 0.5$).
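To make the three-module split concrete, the following Python sketch illustrates the general idea under simplifying assumptions: it uses plain binary randomized response as a stand-in for the paper's utility enhancing randomization protocol, and the names (`randomize_bits`, `local_pipeline`, `fake_conv`) are hypothetical, not from the paper.

```python
# Minimal illustrative sketch (NOT the paper's utility enhancing
# randomization protocol): the data owner runs the convolutional module
# locally, binarizes its output, and perturbs each bit with basic
# randomized response before sending it to an untrusted server that
# trains only the fully connected module.
import numpy as np

def randomize_bits(bits, epsilon):
    """Binary randomized response: keep each bit with probability
    e^eps / (e^eps + 1), flip it otherwise (satisfies eps-LDP per bit)."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = np.random.rand(*bits.shape) >= p_keep
    return np.where(flip, 1 - bits, bits)

def local_pipeline(raw_input, conv_module, epsilon=0.5):
    """Runs on the data owner's device: CNM -> binarize -> randomize."""
    features = conv_module(raw_input)                     # convolutional module (CNM)
    bits = (features > features.mean()).astype(np.int8)   # crude binarization
    return randomize_bits(bits, epsilon)                  # data leave the device randomized

# Example: a stand-in "convolutional module" (random projection) for demonstration.
rng = np.random.default_rng(0)
fake_conv = lambda x: x @ rng.normal(size=(x.shape[1], 64))
private_repr = local_pipeline(rng.normal(size=(8, 784)), fake_conv, epsilon=0.5)
# `private_repr` is what reaches the untrusted ML service, which trains
# the fully connected module on these randomized representations.
```

The key design point the sketch mirrors is that randomization happens between the convolutional and fully connected modules, so the untrusted service never observes the raw inputs or un-perturbed features.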