Privacy-Preservation in Distributed Deep Neural Networks via Encryption of Selected Gradients
2020
In this paper, privacy preservation in Distributed Deep Neural Networks (DDNN) is revisited. We focus on the use of Homomorphic Encryption (HE) in the DDNN. Although HE provides an effective way to bring privacy to the DDNN, its use is usually associated with high overheads, especially the communication cost between the Honest-but-Curious (HbC) cloud server and the participants. To this end, we propose an alternative approach for privacy preservation in the DDNN. The ability to select gradients and encrypt only the selected ones homomorphically is the main appeal of this paper; it provides good privacy and considerably reduces communication cost (i.e., 0.04 MB and 0.13 MB when 1% and 10% of the total gradients are selected, encrypted and shared, respectively) while maintaining an acceptable level of accuracy. A Learning With Errors (LWE) based additive Homomorphic Encryption scheme is used, and a comparison with closely related works shows the efficiency of our approach. Furthermore, we propose the use of non-malleable codes to protect the integrity of the ciphertexts transmitted between the participants and the HbC cloud server. This paper is the first to combine partial sharing and Homomorphic Encryption in the DDNN.
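The following is a minimal sketch, not the paper's implementation, of the two ideas the abstract combines: selecting a small fraction of the largest-magnitude gradients and encrypting only that selection with a Regev-style additive LWE scheme. All parameters (secret dimension n, moduli q and t, the top-1% selection rule, the quantization step and noise bound) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy LWE parameters (illustrative assumptions, not from the paper) ---
n, q, t = 256, 2**15, 256            # secret dimension, ciphertext modulus, plaintext modulus
delta = q // t                       # plaintext scaling factor (q divisible by t)
secret = rng.integers(0, q, size=n)  # shared symmetric secret key

def encrypt(m):
    """Encrypt one integer m in [0, t) as a Regev-style LWE ciphertext (a, b)."""
    a = rng.integers(0, q, size=n)
    e = rng.integers(-2, 3)                        # small noise term
    b = (a @ secret + e + delta * (m % t)) % q
    return a, b

def decrypt(ct):
    """Recover the plaintext by removing <a, s> and rounding away the noise."""
    a, b = ct
    return int(np.rint(((b - a @ secret) % q) / delta)) % t

def add(ct1, ct2):
    """Component-wise ciphertext addition decrypts to the sum of plaintexts (mod t)."""
    return (ct1[0] + ct2[0]) % q, (ct1[1] + ct2[1]) % q

# --- select and encrypt only the top 1% of gradients by magnitude ---
grads = rng.normal(size=10_000)                    # stand-in for a participant's gradient vector
k = max(1, int(0.01 * grads.size))                 # 1% of the gradients
idx = np.argsort(np.abs(grads))[-k:]               # indices of the largest-magnitude gradients
quantized = (grads[idx] * 10).astype(int) % t      # crude fixed-point quantization (assumed)
cts = [encrypt(m) for m in quantized]              # only these ciphertexts leave the participant

# sanity check: homomorphic addition of two encrypted gradient entries
s = add(cts[0], cts[1])
assert decrypt(s) == (quantized[0] + quantized[1]) % t
```

In this sketch the server only ever sees the (index, ciphertext) pairs for the selected 1% of gradients, which is where the claimed communication savings come from; the additive property lets the HbC server aggregate participants' ciphertexts without decrypting them.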