Topology-theoretic approach to address attribute linkage attacks in differential privacy

2022 
Abstract Differential Privacy (DP) is well known for its strong privacy guarantee. Briefly, DP algorithms ensure that the statistical properties of the data are roughly preserved while individual privacy is protected with provable guarantees. However, when there are correlations among the attributes in a dataset, relying on DP alone is not sufficient to defend against the attribute linkage attack, a well-known privacy attack that aims to deduce individuals' private information. In the attribute linkage attack, the adversary leverages prior knowledge about the victim, combined with access to the published dataset, to infer the victim's sensitive information. In this paper, we study the attribute linkage attack in DP settings and argue that enhancing DP can give users a higher level of privacy guarantee. Our contributions are: ➀ we show that the attribute linkage attack can succeed with high probability even under the protection of DP; ➁ we propose a variant of DP, called APL-Free ϵ-DP, that provides a higher level of privacy guarantee; ➂ we design an algorithm, APLKiller, that satisfies APL-Free ϵ-DP. Finally, experiments show that our algorithm not only eliminates the attribute linkage attack but also extracts more useful information from the data.
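For readers unfamiliar with the baseline guarantee the abstract refers to, the following is a minimal sketch of standard ϵ-DP via the Laplace mechanism. It is not the paper's APLKiller algorithm; the function name, toy dataset, and parameter values are illustrative assumptions only.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying standard epsilon-DP
    by adding Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a counting query over a toy dataset.
# The sensitivity of a counting query is 1 (adding or removing one
# individual changes the count by at most 1).
ages = np.array([23, 35, 41, 29, 52, 60])
true_count = np.sum(ages > 30)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count = {true_count}, noisy release = {noisy_count:.2f}")
```

Smaller ϵ means more noise and stronger privacy; the paper's point is that even such a guarantee does not, by itself, block attribute linkage when attributes are correlated.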