On the Privacy-Utility Trade-off With and Without Direct Access to the Private Data
We study an information-theoretic privacy mechanism design problem for two scenarios in which the private data is either hidden or observable. In each scenario, we first consider bounded mutual information as the privacy leakage criterion and then use two different per-letter privacy constraints. In the first scenario, an agent observes useful data Y that is correlated with private data X and wishes to disclose the useful information to a user. A privacy mechanism is designed to generate disclosed data U that maximizes the revealed information about Y while satisfying a bounded privacy leakage constraint. In the second scenario, the agent additionally has access to the private data. To derive lower bounds for the second scenario under the different privacy leakage constraints, we first extend the Functional Representation Lemma and the Strong Functional Representation Lemma by relaxing the independence condition, thereby allowing a certain amount of leakage. Furthermore, upper and lower bounds are derived for the first scenario under the different privacy constraints. In particular, for the case where no leakage is allowed, our upper and lower bounds improve on previous bounds. Moreover, considering bounded mutual information as the privacy constraint, we show that if the common information and the mutual information between X and Y are equal, then the upper bound attained in the second scenario is tight. Finally, the privacy-utility trade-off with prioritized private data is studied, where one part of X, i.e., X_1, is more private than the remaining part, i.e., X_2, and we provide lower and upper bounds.
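As a sketch of the two design problems under the bounded mutual information leakage criterion (the budget ε and the names h_ε, g_ε below are notational assumptions for exposition, not taken verbatim from the abstract), each scenario can be written as a constrained optimization over the privacy mechanism:

```latex
% First scenario (hidden private data): the mechanism P_{U|Y} sees only Y,
% so the Markov chain X - Y - U holds and the leakage is capped at \epsilon:
h_\epsilon(P_{XY}) \;=\; \sup_{\substack{P_{U\mid Y}:\; I(U;X)\le\epsilon,\\ X - Y - U}} I(U;Y).

% Second scenario (observable private data): the mechanism P_{U|Y,X}
% may additionally use X directly, which enlarges the feasible set:
g_\epsilon(P_{XY}) \;=\; \sup_{P_{U\mid Y,X}:\; I(U;X)\le\epsilon} I(U;Y).
```

Since any mechanism P_{U|Y} that is feasible in the first scenario is also feasible in the second, g_ε ≥ h_ε for every ε ≥ 0; the case ε = 0 recovers the perfect-privacy setting where the disclosed data must be independent of X.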