Guided Diffusion Model for Adversarial Purification from Random Noise

06/22/2022
by Quanlin Wu, et al.

In this paper, we propose a novel guided diffusion purification approach that provides a strong defense against adversarial attacks. Our model achieves 89.62% robust accuracy under a PGD-L_inf attack (eps = 8/255) on the CIFAR-10 dataset. We first explore the essential connections between unguided diffusion models and randomized smoothing, which enables us to apply these models to certified robustness. The empirical results show that our models outperform randomized smoothing by 5%.
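The abstract does not spell out the purification procedure, but the general diffusion-purification recipe this line of work builds on (diffuse the input with Gaussian noise, then run the reverse denoising process before classifying) can be sketched as follows. The names `denoiser`, `classifier`, `alphas_cumprod`, and `t_star` below are hypothetical placeholders rather than the authors' implementation, and the paper's guidance term is omitted; this is a minimal sketch under those assumptions.

```python
import torch

def purify_and_classify(x_adv, denoiser, classifier, alphas_cumprod, t_star):
    """Illustrative diffusion-purification sketch (not the paper's exact method).

    x_adv:           possibly adversarial images, shape (B, C, H, W), values in [-1, 1]
    denoiser:        hypothetical eps-prediction network, denoiser(x_t, t) -> predicted noise
    classifier:      downstream classifier applied to the purified images
    alphas_cumprod:  cumulative products of the diffusion noise schedule, shape (T,)
    t_star:          diffusion timestep used for the forward noising step
    """
    # Forward step: diffuse the (possibly attacked) input to timestep t_star,
    # which drowns small adversarial perturbations in Gaussian noise.
    a_bar = alphas_cumprod[t_star]
    noise = torch.randn_like(x_adv)
    x_t = torch.sqrt(a_bar) * x_adv + torch.sqrt(1.0 - a_bar) * noise

    # Reverse step (simplified): a one-shot estimate of the clean image from x_t
    # via the standard eps-parameterization; an actual purifier would run the
    # full reverse chain from t_star down to 0.
    with torch.no_grad():
        t_batch = torch.full((x_adv.shape[0],), t_star, device=x_adv.device)
        eps_hat = denoiser(x_t, t_batch)
        x_0_hat = (x_t - torch.sqrt(1.0 - a_bar) * eps_hat) / torch.sqrt(a_bar)

    # Classify the purified image instead of the raw (possibly adversarial) input.
    return classifier(x_0_hat.clamp(-1.0, 1.0))
```

The same forward noising step is what links this setup to randomized smoothing: adding Gaussian noise to the input and averaging predictions is the basis of certified-robustness guarantees, which is the connection the abstract alludes to.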
