Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings

07/12/2021
by Raef Bassily, et al.

We study differentially private stochastic optimization in convex and non-convex settings. For the convex case, we focus on the family of non-smooth generalized linear losses (GLLs). Our algorithm for the ℓ_2 setting achieves optimal excess population risk in near-linear time, while the best known differentially private algorithms for general convex losses run in super-linear time. Our algorithm for the ℓ_1 setting has nearly optimal excess population risk Õ(√(log(d)/n)), and circumvents the dimension-dependent lower bound of [AFKT21] for general non-smooth convex losses. In the differentially private non-convex setting, we provide several new algorithms for approximating stationary points of the population risk. For the ℓ_1 case with smooth losses and a polyhedral constraint, we provide the first nearly dimension-independent rate, Õ(log^{2/3}(d)/n^{1/3}), in linear time. For the constrained ℓ_2 case with smooth losses, we obtain a linear-time algorithm with rate Õ(1/(n^{3/10} d^{1/10}) + (d/n^2)^{1/5}). Finally, for the ℓ_2 case we provide the first method for non-smooth weakly convex stochastic optimization with rate Õ(1/n^{1/4} + (d/n^2)^{1/6}), which matches the best existing non-private algorithm when d = O(√n). We also extend all of the above results for the non-convex ℓ_2 setting to the ℓ_p setting, where 1 < p ≤ 2, with only polylogarithmic (in the dimension) overhead in the rates.
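As a rough illustration of the general template behind methods of this kind (noisy projected stochastic gradient descent with per-example gradient clipping), here is a minimal sketch. This is not the paper's algorithm: the function dp_sgd, its parameters (clip, sigma, radius), and the synthetic linear-regression example are illustrative assumptions, and the calibration of sigma to a concrete (ε, δ) privacy budget is omitted.

```python
# Minimal sketch of noisy projected SGD for a generalized linear loss.
# NOT the paper's method: clipping, noise scale, and step size are
# illustrative; calibrating sigma to (eps, delta) is omitted.
import numpy as np

def dp_sgd(grad_fn, data, dim, steps=1000, lr=0.1, clip=1.0, sigma=2.0,
           radius=1.0, rng=None):
    """Noisy SGD projected onto the l2 ball of the given radius.

    grad_fn(w, example) -> per-example gradient of the loss at w.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(dim)
    n = len(data)
    for _ in range(steps):
        x = data[rng.integers(n)]                           # sample one example
        g = grad_fn(w, x)
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip gradient norm
        g += rng.normal(0.0, sigma * clip, size=dim)        # Gaussian noise
        w -= lr * g
        norm = np.linalg.norm(w)                            # project onto l2 ball
        if norm > radius:
            w *= radius / norm
    return w

# Usage: private linear regression on synthetic data (a GLL instance).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=500)
data = list(zip(X, y))
grad = lambda w, xy: (xy[0] @ w - xy[1]) * xy[0]  # squared-loss gradient
w_priv = dp_sgd(grad, data, dim=5)
print(w_priv)
```

Squared loss is used here only because its gradient is easy to write down; the non-smooth GLLs treated in the paper would replace grad with a (sub)gradient of the corresponding loss.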
