Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning

03/09/2023
by AprilPyone MaungMaung, et al.

In this paper, we propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning. Various learnable encryption methods have been studied to protect the sensitive visual information of plain images, and some of them have been shown to be robust against all existing attacks. However, previous attacks on image encryption have focused only on traditional cryptanalytic attacks or reverse translation models, and such attacks cannot recover any visual information when a block-scrambling encryption step, which effectively destroys global information, is applied. Accordingly, in this paper, we explore for the first time whether generative models can restore sensitive visual information from encrypted images. We first point out that encrypted images retain some similarity to plain images in the embedding space. By taking advantage of this leaked information, we propose a guided generative model as an attack on learnable image encryption to recover personally identifiable visual information. We implement the proposed attack in two ways by utilizing two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based one. Experiments were carried out on the CelebA-HQ and ImageNet datasets, and the results show that images reconstructed by the proposed method have perceptual similarities to plain images.
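The key observation, that an encrypted image still sits close to its plain counterpart in a feature-embedding space, can be checked with a minimal sketch. The snippet below is an illustration only, not the authors' pipeline: it assumes a pretrained torchvision ResNet-50 as the embedding network (the paper may use a different one) and hypothetical file names plain.png and encrypted.png.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained feature extractor; drop the classification head so the
# network outputs a pooled feature vector instead of class logits.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    # Map an image file to a unit-norm embedding vector.
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=-1)

# Cosine similarity between a plain image and its encrypted version
# (hypothetical file names). A value clearly above the similarity of
# two unrelated images indicates that the encryption leaks visual
# information in the embedding space, which a guided generative model
# could then exploit for reconstruction.
sim = (embed("plain.png") * embed("encrypted.png")).sum().item()
print(f"embedding cosine similarity: {sim:.3f}")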
