Dropping Activation Outputs With Localized First-Layer Deep Network for Enhancing User Privacy and Data Security

2018 
Deep learning methods can play a crucial role in anomaly detection, prediction, and decision support for applications such as personal health care and pervasive body sensing. However, the current architecture of deep networks suffers from a privacy issue: users must hand over their data to the model (typically hosted on a server or a cloud cluster) for training or prediction. The problem is even more severe for sensitive health-care or medical data (e.g., fMRI scans or body-sensor measurements such as EEG signals). In addition, there is a security risk of leaking these data during transmission from the user to the model, especially over the Internet. To address these issues, in this paper we propose a new deep-network architecture in which users do not reveal their original data to the model. In our method, feed-forward propagation and data encryption are combined into one process: we migrate the first layer of the deep network to users’ local devices and apply the activation functions locally, and then use the “dropping activation output” method to make the output non-invertible. The resulting approach can make model predictions without accessing users’ sensitive raw data. The experiments conducted in this paper show that our approach achieves the desired privacy protection and demonstrates several advantages over the traditional approach based on encryption/decryption.
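To illustrate the split described in the abstract, the following is a minimal sketch of a localized first layer whose activation outputs are partially dropped before leaving the device, with the remaining layers hosted on the server. The layer sizes, the dropping ratio, and the server-side architecture are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LocalFirstLayer(nn.Module):
    """Runs on the user's device: first layer + activation, then drop outputs."""
    def __init__(self, in_dim, hidden_dim, drop_ratio=0.3):
        super().__init__()
        self.fc = nn.Linear(in_dim, hidden_dim)
        self.drop_ratio = drop_ratio  # fraction of activation outputs to drop (assumed value)

    def forward(self, x):
        h = torch.relu(self.fc(x))  # local feed-forward propagation + activation
        # Randomly zero a fraction of the activation outputs so the transmitted
        # vector cannot be inverted back to the raw input x.
        mask = (torch.rand_like(h) > self.drop_ratio).float()
        return h * mask  # only this masked vector leaves the device

class ServerModel(nn.Module):
    """Runs on the server: remaining layers, never sees the raw input."""
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, h_masked):
        return self.net(h_masked)

# Usage: the device computes and transmits masked activations; the server predicts.
local = LocalFirstLayer(in_dim=128, hidden_dim=256)
server = ServerModel(hidden_dim=256, num_classes=2)
raw_signal = torch.randn(1, 128)       # e.g., a window of body-sensor features
logits = server(local(raw_signal))     # raw_signal itself never leaves the device
```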