Bringing Differential Private SGD to Practice: On the Independence of Gaussian Noise and the Number of Training Rounds

2021 
In the context of DP-SGD, each round communicates a local SGD update, which leaks some new information about the underlying local data set to the outside world. To provide privacy, Gaussian noise is added to each local SGD update. However, privacy leakage still accumulates over multiple training rounds. Therefore, to control the privacy leakage over an increasing number of training rounds, the Gaussian noise added per local SGD update must increase. This dependence of the noise magnitude $\sigma$ on the number of training rounds $T$ may impose an impractically low upper bound on $T$ (because $\sigma$ cannot be too large), leading to a low-accuracy global model (because the global model receives too few local SGD updates). This makes DP-SGD much less competitive than other existing privacy techniques. We show for the first time that for $(\epsilon,\delta)$-differential privacy $\sigma$ can be chosen equal to $\sqrt{2(\epsilon+\ln(1/\delta))/\epsilon}$ regardless of the total number of training rounds $T$. In other words, $\sigma$ no longer depends on $T$ (and the aggregated privacy leakage increases to a limit). This important discovery brings DP-SGD to practice, because $\sigma$ can remain small enough for the trained model to reach high accuracy even for the large $T$ that is typical in practice.
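To make the claim concrete, here is a minimal sketch of how the paper's round-independent noise scale could be used in a DP-SGD update step. The formula for $\sigma$ is taken directly from the abstract; the function names, the clipping norm parameter, and the per-update clip-then-noise recipe are illustrative assumptions in the style of the standard Gaussian mechanism, not the paper's exact algorithm.

```python
import math
import numpy as np

def sigma_for_privacy(epsilon: float, delta: float) -> float:
    """Noise multiplier from the paper's claim:
    sigma = sqrt(2 * (eps + ln(1/delta)) / eps),
    notably independent of the number of training rounds T."""
    return math.sqrt(2.0 * (epsilon + math.log(1.0 / delta)) / epsilon)

def noisy_update(grad: np.ndarray, clip_norm: float, sigma: float,
                 rng: np.random.Generator) -> np.ndarray:
    """Illustrative clip-then-noise step (assumption, not the paper's exact
    procedure): clip the local SGD update to L2 norm `clip_norm`, then add
    Gaussian noise with standard deviation sigma * clip_norm."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

# Example: (epsilon, delta) = (1.0, 1e-5) gives sigma ~ 5.0,
# and the same sigma is reused for every round, however large T is.
sigma = sigma_for_privacy(1.0, 1e-5)
print(f"sigma = {sigma:.3f}")
```

The point of the sketch is the contrast with classical DP-SGD accounting, where $\sigma$ would have to grow with $T$; here `sigma_for_privacy` takes only $(\epsilon,\delta)$ as inputs.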