Multi-User Privacy Mechanism Design with Non-zero Leakage

11/28/2022
by Amirreza Zamani, et al.

A privacy mechanism design problem is studied through the lens of information theory. An agent observes useful data Y=(Y_1,...,Y_N) that is correlated with private data X=(X_1,...,X_N), which is assumed to be accessible to the agent as well. We consider K users, where user i demands a sub-vector of Y denoted by C_i, and the agent wishes to disclose C_i to user i. Since C_i is correlated with X, it cannot be disclosed directly. A privacy mechanism is therefore designed to generate disclosed data U that maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint in terms of mutual information. In a similar prior work, X_i was assumed to be a deterministic function of Y_i; in this work, we instead let X_i and Y_i be arbitrarily correlated. First, an upper bound on the privacy-utility trade-off is obtained by using a specific transformation together with the Functional Representation Lemma and the Strong Functional Representation Lemma, and we show that this upper bound can be decomposed into N parallel problems. Next, lower bounds on the privacy-utility trade-off are derived using the Functional Representation Lemma and the Strong Functional Representation Lemma. The upper bound is tight within a constant, and the lower bounds assert that the disclosed data is independent of all {X_j}_{j=1}^N except one, to which we allocate the maximum allowed leakage. Finally, the obtained bounds are studied in special cases.
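A plausible formalization of the design problem described above is the following; note that measuring user i's utility by the mutual information I(U; C_i), the weights λ_i, and the leakage budget ε are illustrative assumptions, as the abstract does not spell out the exact utility measure or constraint form:

```latex
% Hypothetical formalization (a sketch, not the paper's exact statement):
% the agent chooses a randomized mapping P_{U|X,Y} maximizing a weighted
% sum of user utilities subject to a mutual-information leakage budget.
\begin{equation*}
  \sup_{P_{U \mid X, Y}} \; \sum_{i=1}^{K} \lambda_i \, I(U; C_i)
  \quad \text{subject to} \quad I(U; X) \le \epsilon ,
\end{equation*}
% where \lambda_i \ge 0 are the weights of the linear combination of
% utilities and \epsilon \ge 0 bounds the leakage about the private data X.
```

Under this reading, the "bounded privacy constraint in terms of mutual information" is the condition I(U; X) ≤ ε, and the decomposition result says the bound splits into N per-coordinate subproblems.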

