Generating Optimal Privacy-Protection Mechanisms via Machine Learning

04/01/2019
by   Marco Romanelli, et al.

We consider the problem of obfuscating sensitive information while preserving utility. Given that an analytical solution is often not feasible, because it does not scale and because the background knowledge may be too complicated to determine, we propose an approach based on machine learning, inspired by the GANs (Generative Adversarial Networks) paradigm. The idea is to set up two nets: the generator, which tries to produce an optimal obfuscation mechanism to protect the data, and the classifier, which tries to de-obfuscate the data. By letting the two nets compete against each other, the mechanism improves its degree of protection until an equilibrium is reached. We apply our method to the case of location privacy, and we perform experiments on synthetic data and on real data from the Gowalla dataset. We evaluate the privacy of the mechanism not only by its capacity to defeat the classifier, but also in terms of the Bayes error, which represents the strongest possible adversary. We compare the privacy-utility tradeoff of our method with that of the planar Laplace mechanism used in geo-indistinguishability, showing favorable results.
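To make the adversarial setup concrete, below is a minimal sketch of a generator-vs-classifier training loop for location obfuscation, written in PyTorch. All names, network sizes, the synthetic 4x4 grid of locations, and the utility weight LAMBDA are illustrative assumptions for this sketch; they are not the authors' actual architecture or experimental setup.

```python
# Sketch of an adversarial obfuscation loop (assumed setup, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_LOCATIONS = 16      # assumed number of discrete true locations (classes)
NOISE_DIM   = 8       # assumed size of the random seed fed to the generator
LAMBDA      = 0.5     # assumed weight of the utility (distance) penalty

# Generator: maps a true 2-D location plus noise to an obfuscated 2-D report.
G = nn.Sequential(nn.Linear(2 + NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, 2))
# Classifier (adversary): tries to recover the true location from the report.
C = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, N_LOCATIONS))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_C = torch.optim.Adam(C.parameters(), lr=1e-3)

def sample_batch(batch_size=128):
    """Synthetic stand-in for the dataset: locations on an assumed 4x4 grid."""
    labels = torch.randint(0, N_LOCATIONS, (batch_size,))
    coords = torch.stack([(labels % 4).float(), (labels // 4).float()], dim=1)
    return coords, labels

for step in range(1000):
    coords, labels = sample_batch()
    noise = torch.randn(coords.size(0), NOISE_DIM)

    # Classifier step: improve its ability to de-obfuscate the reports.
    with torch.no_grad():
        obfuscated = G(torch.cat([coords, noise], dim=1))
    loss_C = F.cross_entropy(C(obfuscated), labels)
    opt_C.zero_grad(); loss_C.backward(); opt_C.step()

    # Generator step: degrade the classifier's accuracy while keeping the
    # report close to the true location (utility constraint on distance).
    obfuscated = G(torch.cat([coords, noise], dim=1))
    privacy_loss = -F.cross_entropy(C(obfuscated), labels)
    utility_loss = (obfuscated - coords).norm(dim=1).mean()
    loss_G = privacy_loss + LAMBDA * utility_loss
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

In this sketch the competition between the two networks is what drives the mechanism toward an equilibrium: the classifier step strengthens the adversary, and the generator step then adjusts the obfuscation against the strengthened adversary, subject to the utility penalty.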
