Design of Algorithms under Policy-Aware Local Differential Privacy: Utility-Privacy Trade-offs
Local differential privacy (LDP) enables private data sharing and analytics without the need for a trusted data collector. Error-optimal primitives under LDP (e.g., for estimating means and item frequencies) have been well studied. For analytical tasks such as range queries, however, the error often depends on the domain size of the private data, which can be prohibitive. This deficiency is inherent, as LDP enforces the same level of indistinguishability between every pair of private data values. In this paper, we investigate a policy-aware extension of eps-LDP, where a customizable policy defines heterogeneous privacy guarantees for different pairs of private data values. The policy provides another knob, besides eps, for tuning utility-privacy trade-offs in analytical workloads. We show that, under realistic relaxed LDP policies, analytical workloads such as linear counting queries, multi-dimensional range queries, and quantile queries admit significant gains in utility. In particular, for range queries under relaxed LDP, we design mechanisms whose errors are independent of the domain size; instead, their errors depend on the policy, which specifies at what granularity the private data is protected. We believe that the primitives we design for policy-aware LDP will be useful in developing lower-error mechanisms for other non-trivial analytical tasks, and will encourage the adoption of LDP in practice.
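To make the baseline concrete, the following is a minimal sketch of a standard eps-LDP frequency-estimation primitive, generalized (k-ary) randomized response, which illustrates the uniform-indistinguishability property the abstract refers to: every pair of inputs is equally protected, so the estimation error grows with the domain size k. This is a generic textbook mechanism for illustration, not the policy-aware mechanism proposed in the paper; function names here are our own.

```python
import math
import random

def k_rr(value, domain, eps):
    """Generalized (k-ary) randomized response: report the true value with
    probability e^eps / (e^eps + k - 1), otherwise a uniformly random other
    value. Any two private values are eps-indistinguishable to the collector,
    which is exactly the uniform guarantee that policy-aware LDP relaxes."""
    k = len(domain)
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_true:
        return value
    return random.choice([v for v in domain if v != value])

def debias_counts(reports, domain, eps):
    """Unbiased frequency estimates from perturbed reports; the variance of
    each estimate grows with the domain size k."""
    k, n = len(domain), len(reports)
    p = math.exp(eps) / (math.exp(eps) + k - 1)  # P(report = true value)
    q = (1 - p) / (k - 1)                        # P(report = a given other value)
    return {v: (sum(1 for r in reports if r == v) - n * q) / (p - q)
            for v in domain}
```

The debiased estimates always sum to the number of reports, but each individual estimate's noise scales with k, which is the domain-size dependence that the policy-aware mechanisms in the paper avoid for range queries.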