Siamese Generative Adversarial Privatizer for Biometric Data

04/23/2018
by Witold Oleszkiewicz, et al.

State-of-the-art machine learning algorithms can be fooled by carefully crafted adversarial examples. As such, adversarial examples present a concrete problem in AI safety. In this work we turn the tables and ask the following question: can we harness the power of adversarial examples to prevent malicious adversaries from learning sensitive information, while allowing non-malicious entities to fully benefit from the utility of released datasets? To answer this question, we propose a novel Siamese Generative Adversarial Privatizer that exploits the properties of a Siamese neural network to find discriminative features that convey private information. When coupled with a generative adversarial network, our model is able to correctly locate and disguise sensitive information, while a minimal distortion constraint prohibits the network from reducing the utility of the resulting dataset. Our method shows promising results on a biometric dataset of fingerprints.
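The abstract describes two opposing terms: a Siamese network that matches identities from embeddings, and a privatizer that perturbs the data so the match fails, subject to a distortion budget. The sketch below illustrates that trade-off under stated assumptions; the contrastive loss, function names, and the `lam` weighting are hypothetical illustrations, not the authors' exact formulation.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_identity, margin=1.0):
    # Standard Siamese contrastive loss (hypothetical stand-in for the
    # paper's identity-matching objective): pull same-identity embeddings
    # together, push different-identity embeddings apart up to a margin.
    d = np.linalg.norm(emb_a - emb_b)
    if same_identity:
        return d ** 2
    return max(0.0, margin - d) ** 2

def privatizer_objective(x, x_priv, emb_orig, emb_priv, lam=0.1):
    # The privatizer plays against the Siamese network: it MAXIMIZES the
    # matching loss on a same-identity pair (negated term below), so the
    # privatized sample no longer links back to the original identity.
    privacy_term = -contrastive_loss(emb_orig, emb_priv, same_identity=True)
    # Minimal-distortion constraint, modeled here as a soft MSE penalty
    # keeping the privatized sample close to the original (preserves utility).
    distortion_term = lam * np.mean((x - x_priv) ** 2)
    return privacy_term + distortion_term
```

In the full adversarial setup this objective would be minimized over the generator's parameters while the Siamese network is trained on its own matching loss; the snippet only evaluates the combined objective for a single pair.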
