Differentially Private Online-to-Batch for Smooth Losses

10/12/2022
by Qinzi Zhang, et al.

We develop a new reduction that converts any online convex optimization algorithm with O(√T) regret into an ϵ-differentially private stochastic convex optimization algorithm achieving the optimal convergence rate Õ(1/√T + √d/(ϵT)) on smooth losses in linear time, forming a direct analogue of the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we obtain adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
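To make the shape of such a conversion concrete, here is a minimal Python sketch of a noisy anytime online-to-batch loop: online gradient descent stands in for the generic OCO learner, gradients are queried at the running average of the iterates, and Gaussian noise of scale `sigma` stands in for the privacy mechanism. All names (`grad_fn`, `lr`, `sigma`, `radius`) are illustrative assumptions, and the noise calibration that the paper derives from smoothness and sensitivity is omitted; this is not the authors' exact algorithm.

```python
import numpy as np

def dp_online_to_batch(grad_fn, d, T, lr=0.1, sigma=1.0, radius=1.0, seed=0):
    """Sketch of a noisy anytime online-to-batch conversion.

    grad_fn(w, t) should return a stochastic gradient of the loss at w
    using the t-th sample. Projected OGD plays the role of the generic
    OCO learner; the Gaussian noise scale `sigma` is a placeholder for
    a properly calibrated privacy mechanism (the sensitivity analysis
    enabled by smoothness is not reproduced here).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)  # the online learner's iterate
    w = np.zeros(d)  # running average of iterates, where gradients are queried
    for t in range(1, T + 1):
        # Anytime online-to-batch: evaluate the gradient at the average
        # w_t = (1/t) * sum_{s<=t} x_s rather than at x_t itself.
        w += (x - w) / t
        g = grad_fn(w, t) + sigma * rng.standard_normal(d)  # noised gradient
        # One step of the base online learner (projected OGD).
        x = x - (lr / np.sqrt(t)) * g
        norm = np.linalg.norm(x)
        if norm > radius:  # project back onto an l2 ball of the given radius
            x *= radius / norm
    return w  # the averaged iterate is the stochastic optimization output
```

Since the learner only ever sees noised gradients, any O(√T)-regret OCO algorithm could be swapped in for the OGD step, which is the point of phrasing the method as a reduction.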
