Toward Evaluating Re-identification Risks in the Local Privacy Model

2020 
LDP (Local Differential Privacy) has recently attracted much attention as a metric of data privacy that prevents the inference of personal data from obfuscated data in the local model. However, there are scenarios in which the adversary needs to perform re-identification attacks to link the obfuscated data to users in this model. In these scenarios, LDP can cause excessive obfuscation and destroy utility, because it is not designed to directly prevent re-identification. In this paper, we propose a privacy metric which we call the PIE (Personal Information Entropy). The PIE is designed to directly prevent re-identification attacks in the local model. The PIE can also be used to compare the identifiability of personal data with the identifiability of biometric data such as fingerprints and faces. We analyze the relationship between LDP and the PIE, and analyze the PIE and the utility in distribution estimation for two obfuscation mechanisms providing LDP. Through experiments, we show that a location trace is more identifiable than the best face matcher in the prize challenge. We also show that the PIE can be used to guarantee low re-identification risks for local obfuscation mechanisms while keeping high utility.
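The abstract does not name the two LDP obfuscation mechanisms it analyzes. As a minimal sketch of what a local obfuscation mechanism providing epsilon-LDP looks like, the following assumes Warner's randomized response for a single binary attribute, together with the standard debiased frequency estimate used in distribution estimation; the function names and parameters are illustrative, not taken from the paper.

```python
import math
import random


def randomized_response(true_bit: int, epsilon: float) -> int:
    """Warner's randomized response: keep the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it. Satisfies epsilon-LDP for one bit."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_keep else 1 - true_bit


def estimate_frequency(reports: list[int], epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1s from the noisy reports.
    Since E[observed] = f*p + (1-f)*(1-p), solve for f."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)


# Illustrative usage: 10,000 users, 30% hold a sensitive bit, epsilon = 1.0
if __name__ == "__main__":
    random.seed(0)
    eps = 1.0
    truth = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
    noisy = [randomized_response(b, eps) for b in truth]
    print(f"true fraction:      {sum(truth) / len(truth):.3f}")
    print(f"estimated fraction: {estimate_frequency(noisy, eps):.3f}")
```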