Towards Privacy Protection by Generating Adversarial Identity Masks
As billions of personal photos and other personal data are shared through social media and networks, privacy and data security have drawn increasing attention. Several attempts have been made to mitigate the leakage of identity information with the aid of image obfuscation techniques. However, most existing results are either perceptually unsatisfactory or ineffective against real-world recognition systems. In this paper, we argue that an algorithm for privacy protection must block the automatic inference of identity and, at the same time, keep the resulting image natural from the users' point of view. To achieve this, we propose a targeted identity-protection iterative method (TIP-IM), which generates natural face images by adding adversarial identity masks that conceal one's identity from a recognition system. Extensive experiments on various state-of-the-art face recognition models demonstrate the effectiveness of the proposed method in alleviating identity leakage from face images without sacrificing the visual quality of the protected images.
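To give a rough sense of what a targeted, iterative identity mask might look like, the sketch below implements a generic targeted iterative adversarial perturbation (PGD-style) against a face embedding model. It is not the paper's TIP-IM objective; the function name, the cosine-similarity loss, and the parameters `eps`, `alpha`, and `steps` are illustrative assumptions, and `model`, `x`, and `target_emb` are placeholders for a face recognizer, an input image batch, and a target identity embedding.

```python
import torch

def targeted_identity_mask(model, x, target_emb, eps=8/255, alpha=1/255, steps=50):
    """Illustrative sketch (not the paper's TIP-IM): iteratively perturb image x so
    its face embedding moves toward target_emb, while keeping the perturbation
    inside an L_inf ball of radius eps so the image stays visually close to x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        emb = model(x_adv)  # face embedding of the current (masked) image
        # Increase cosine similarity to the target identity's embedding,
        # pushing the recognizer toward a wrong, targeted identity.
        loss = torch.nn.functional.cosine_similarity(emb, target_emb).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                      # gradient-sign step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)    # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                            # keep valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```

The L_inf projection is one simple way to bound the visual change; the paper's emphasis on natural-looking protected images suggests its actual constraint and objective are more elaborate than this sketch.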