Differentially Private Online-to-Batch for Smooth Losses
We develop a new reduction that converts any online convex optimization algorithm suffering O(√(T)) regret into an ϵ-differentially private stochastic convex optimization algorithm with the optimal convergence rate Õ(1/√(T) + √(d)/(ϵT)) on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
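For context, the classical non-private conversion the abstract alludes to is a standard fact (not a contribution of this paper): run the online learner on the losses f_t(x) = ℓ(x; z_t) built from i.i.d. samples z_t and output the average iterate, so that, written in LaTeX notation,

\mathbb{E}\!\left[ F(\bar{x}_T) - \min_{x} F(x) \right]
  \;\le\; \frac{\mathbb{E}[\mathrm{Regret}_T]}{T}
  \;=\; O\!\left(\tfrac{1}{\sqrt{T}}\right),
  \qquad \bar{x}_T = \frac{1}{T}\sum_{t=1}^{T} x_t .

Thus an O(√(T)) regret guarantee translates into a 1/√(T) excess-risk rate; the reduction described above plays the analogous role while additionally enforcing ϵ-differential privacy.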