Differentially Private Stochastic Gradient Descent with Low-Noise

09/09/2022
by Puyu Wang, et al.

In this paper, by introducing a low-noise condition, we study the privacy and utility (generalization) performance of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization (SCO) for both pointwise and pairwise learning problems. For pointwise learning, we establish sharper excess risk bounds of order $\mathcal{O}\big(\sqrt{d\log(1/\delta)}/(n\epsilon)\big)$ and $\mathcal{O}\big(n^{-\frac{1+\alpha}{2}}+\sqrt{d\log(1/\delta)}/(n\epsilon)\big)$ for the $(\epsilon,\delta)$-differentially private SGD algorithm with strongly smooth and $\alpha$-Hölder smooth losses, respectively, where $n$ is the sample size and $d$ is the dimensionality. For pairwise learning, inspired by <cit.>, we propose a simple private SGD algorithm based on gradient perturbation which satisfies $(\epsilon,\delta)$-differential privacy, and we develop novel utility bounds for the proposed algorithm. In particular, we prove that our algorithm achieves excess risk rates of $\mathcal{O}\big(1/\sqrt{n}+\sqrt{d\log(1/\delta)}/(n\epsilon)\big)$ with gradient complexity $\mathcal{O}(n)$ and $\mathcal{O}\big(n^{\frac{2-\alpha}{1+\alpha}}+n\big)$ for strongly smooth and $\alpha$-Hölder smooth losses, respectively. Further, faster learning rates are established in a low-noise setting for both smooth and non-smooth losses. To the best of our knowledge, this is the first utility analysis which provides excess population risk bounds better than $\mathcal{O}\big(1/\sqrt{n}+\sqrt{d\log(1/\delta)}/(n\epsilon)\big)$ for privacy-preserving pairwise learning.
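The algorithms analyzed in the abstract follow the gradient-perturbation template for differentially private SGD: at each step a stochastic gradient is clipped to bound its sensitivity and then perturbed with calibrated Gaussian noise. The sketch below illustrates this generic template only; the function names (`dp_sgd_gaussian`, `grad_fn`), the single-example sampling scheme, and the noise calibration `sigma` are illustrative assumptions and not the paper's exact construction or constants.

```python
import numpy as np

def dp_sgd_gaussian(grad_fn, data, w0, T, eta, clip_norm, eps, delta, seed=0):
    """Minimal sketch of gradient-perturbation DP-SGD (assumed form, not the
    paper's algorithm): clip each stochastic gradient, add Gaussian noise
    calibrated to (eps, delta)-differential privacy, then take an SGD step."""
    n = len(data)
    w = np.array(w0, dtype=float)
    rng = np.random.default_rng(seed)
    # Generic Gaussian-mechanism calibration over T steps; the paper's
    # constants and composition analysis may differ.
    sigma = clip_norm * np.sqrt(2.0 * T * np.log(1.0 / delta)) / (n * eps)
    for _ in range(T):
        z = data[rng.integers(n)]                 # sample one training example
        g = grad_fn(w, z)                         # stochastic (pointwise) gradient
        g_norm = np.linalg.norm(g)
        if g_norm > clip_norm:                    # clip to bound per-step sensitivity
            g = g * (clip_norm / g_norm)
        g = g + rng.normal(0.0, sigma, size=g.shape)  # Gaussian perturbation
        w = w - eta * g                           # SGD update
    return w
```

For pairwise losses, the same template would be applied to gradients computed on pairs of examples rather than single points; the paper's privacy and utility guarantees depend on its specific sampling and noise choices.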
