Reviewing and Improving the Gaussian Mechanism for Differential Privacy

11/27/2019

by Jun Zhao, et al.

Differential privacy provides a rigorous framework for quantifying data privacy and has received considerable interest recently. A randomized mechanism satisfying (ϵ, δ)-differential privacy (DP) roughly means that, except with a small probability δ, altering a record in a dataset cannot change the probability of seeing any output by more than a multiplicative factor e^ϵ. A well-known solution to (ϵ, δ)-DP is the Gaussian mechanism, initiated by Dwork et al. [1] in 2006 and improved by Dwork and Roth [2] in 2014, which adds Gaussian noise of amount √(2 ln(2/δ)) × Δ/ϵ in [1], or √(2 ln(1.25/δ)) × Δ/ϵ in [2], independently to each dimension of the query result, for a query with ℓ_2-sensitivity Δ. Although both classical Gaussian mechanisms [1,2] assume 0 < ϵ ≤ 1, our review finds that many studies in the literature have used the classical Gaussian mechanisms under values of ϵ and δ for which the added noise amounts of [1,2] do not achieve (ϵ, δ)-DP. We obtain this result by analyzing the optimal noise amount σ_DP-OPT for (ϵ, δ)-DP and identifying values of ϵ and δ where the noise amounts of the classical mechanisms fall below σ_DP-OPT. Since σ_DP-OPT has no closed-form expression and must be approximated iteratively, we propose Gaussian mechanisms based on closed-form upper bounds for σ_DP-OPT. Our mechanisms achieve (ϵ, δ)-DP for any ϵ, while the classical mechanisms [1,2] do not achieve (ϵ, δ)-DP for large ϵ given δ. Moreover, the utilities of our mechanisms improve on those of [1,2] and are close to that of the optimal yet more computationally expensive Gaussian mechanism.
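To make the classical noise calibration concrete, the following is a minimal sketch of the Gaussian mechanism of [2], using the noise scale σ = √(2 ln(1.25/δ)) × Δ/ϵ from the abstract above. The function names are our own; note the assertion enforcing the 0 < ϵ ≤ 1 regime in which the classical analysis is valid, which is exactly the restriction the paper highlights.

```python
import math
import random

def classical_gaussian_sigma(epsilon, delta, sensitivity):
    """Noise scale of the classical Gaussian mechanism [2]:
    sigma = sqrt(2 ln(1.25/delta)) * Delta / epsilon.
    The classical analysis assumes 0 < epsilon <= 1."""
    assert 0 < epsilon <= 1, "classical analysis requires 0 < epsilon <= 1"
    assert 0 < delta < 1
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

def gaussian_mechanism(query_result, epsilon, delta, sensitivity):
    """Add i.i.d. Gaussian noise of scale sigma to each dimension
    of a query result with l2-sensitivity `sensitivity`."""
    sigma = classical_gaussian_sigma(epsilon, delta, sensitivity)
    return [x + random.gauss(0.0, sigma) for x in query_result]

# Example: a counting query (sensitivity 1) with epsilon = 1, delta = 1e-5
noisy = gaussian_mechanism([42.0, 17.0], epsilon=1.0, delta=1e-5, sensitivity=1.0)
```

For ϵ = 1 and δ = 10^-5 this gives σ ≈ 4.84 per dimension; the paper's point is that calling this function with ϵ > 1 (as many studies implicitly do) would produce noise that no longer guarantees (ϵ, δ)-DP.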
