Differentially Private Empirical Risk Minimization in Non-interactive Local Model via Polynomial of Inner Product Approximation
In this paper, we study the Empirical Risk Minimization problem in the non-interactive Local Differential Privacy (LDP) model. First, we show that for the hinge loss function there is an (ϵ, δ)-LDP algorithm whose sample complexity for achieving an error of α is only linear in the dimensionality p and quasi-polynomial in the other terms. We then extend this result to any 1-Lipschitz generalized linear convex loss function by showing that every such function can be approximated by a linear combination of hinge loss functions and some linear functions. Finally, we apply our technique to the Euclidean median problem and show that its sample complexity need only be quasi-polynomial in p; this is the first result with a sub-exponential sample complexity in p for non-generalized linear loss functions. Our results are based on a technique, called polynomial of inner product approximation, which may be applicable to other problems.
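The reduction step the abstract describes, writing a 1-Lipschitz convex loss as a linear function plus a nonnegative combination of hinge losses, can be sketched numerically. The snippet below is a minimal illustration (not the paper's algorithm): it interpolates a convex function at a grid of knots and expresses the interpolant as `intercept + slope·x + Σ cᵢ·max(0, x − tᵢ)`, where convexity guarantees the hinge coefficients cᵢ are nonnegative. The function names and the choice of logistic loss as the example are ours.

```python
import numpy as np

def hinge_approx(f, knots):
    """Piecewise-linear interpolant of a convex f at the given knots,
    represented as a linear function plus a nonnegative combination of
    hinge functions max(0, x - t). Returns a callable approximation."""
    t = np.asarray(knots, dtype=float)
    y = f(t)
    s = np.diff(y) / np.diff(t)   # slope of each linear segment
    intercept, base_slope = y[0], s[0]
    incr = np.diff(s)             # slope increments; convexity => incr >= 0
    def g(x):
        x = np.asarray(x, dtype=float)
        out = intercept + base_slope * (x - t[0])
        for ti, ci in zip(t[1:-1], incr):
            out = out + ci * np.maximum(0.0, x - ti)
        return out
    return g

# Example: approximate the logistic loss on [-1, 1] with 41 knots.
f = lambda x: np.log1p(np.exp(-x))
g = hinge_approx(f, np.linspace(-1.0, 1.0, 41))
xs = np.linspace(-1.0, 1.0, 1001)
max_err = float(np.max(np.abs(f(xs) - g(xs))))
```

Because the logistic loss is smooth and convex, the interpolation error on this grid is tiny (well below 10⁻³), which is the sense in which hinge losses form a universal building block for this function class.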