Privacy-Utility Trade-Offs Against Limited Adversaries.
2021
We study privacy-utility trade-offs where users share privacy-correlated useful information with a service provider to obtain some utility. The service provider is adversarial in the sense that it can infer the users' private information from the shared useful information. To minimize the privacy leakage while maintaining a desired level of utility, the users carefully perturb the useful information via a probabilistic privacy mapping before sharing it. We focus on the setting in which the adversary attempting an inference attack on the users' privacy has potentially biased information about the statistical correlation between the private and useful variables. This information asymmetry between the users and the limited adversary leads to better privacy guarantees than in the case of an omniscient adversary under the same utility requirement. We first identify assumptions on the adversary's information under which the inference costs are well-defined and finite. We then characterize the impact of the information asymmetry and show that it increases the inference costs for the adversary. We further formulate the design of the privacy mapping against a limited adversary as a difference of convex functions program and solve it via the concave-convex procedure. When the adversary's information is not precisely available, we adopt a Bayesian view and represent the adversary's information by a probability distribution. In this case, the expected cost for the adversary does not admit a closed-form expression, and we establish and maximize a lower bound of the expected cost. We provide a numerical example based on a census data set to illustrate the theoretical results.
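To make the notion of information asymmetry concrete, the following minimal sketch (Python/NumPy) compares the expected log-loss of an omniscient adversary, which knows the true joint distribution of the private and useful variables, with that of a limited adversary holding a biased belief. The distributions, the privacy mapping, and the log-loss objective are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch (illustrative, not the paper's exact formulation): a limited
# adversary with a biased belief q(x, y) about the true joint p(x, y) incurs a
# higher expected log-loss when inferring the private variable X from the
# released variable Z than an omniscient adversary that knows p(x, y).
import numpy as np

# True joint distribution p(x, y): rows index the private variable X,
# columns index the useful variable Y (binary alphabets for simplicity).
p_xy = np.array([[0.40, 0.10],
                 [0.15, 0.35]])

# Adversary's biased belief q(x, y) about the same joint distribution.
q_xy = np.array([[0.30, 0.20],
                 [0.25, 0.25]])

# Probabilistic privacy mapping p(z | y): each row is a distribution over Z.
p_z_given_y = np.array([[0.8, 0.2],
                        [0.3, 0.7]])

def posterior(joint_xy, mapping_zy):
    """Posterior p(x | z) induced by a joint p(x, y) and a mapping p(z | y)."""
    p_xz = joint_xy @ mapping_zy          # p(x, z) = sum_y p(x, y) p(z | y)
    return p_xz / p_xz.sum(axis=0)

def expected_log_loss(true_xy, belief_xy, mapping_zy):
    """Expected log-loss when the adversary predicts X from Z using its belief,
    while (X, Z) are generated by the true distribution and the mapping."""
    post_belief = posterior(belief_xy, mapping_zy)
    true_xz = true_xy @ mapping_zy
    return -np.sum(true_xz * np.log(post_belief))

cost_omniscient = expected_log_loss(p_xy, p_xy, p_z_given_y)
cost_limited = expected_log_loss(p_xy, q_xy, p_z_given_y)
print(f"omniscient adversary cost: {cost_omniscient:.4f}")
print(f"limited adversary cost:    {cost_limited:.4f}")  # never below the omniscient cost
```

By Gibbs' inequality, predicting with a mismatched posterior can only increase the expected log-loss, which mirrors the paper's qualitative claim that the adversary's biased information raises its inference cost; the paper's design problem then chooses the mapping p(z | y) to exploit this gap subject to a utility constraint.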