Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates

03/19/2020

by Amin Ghiasi, et al.

To deflect adversarial attacks, a range of "certified" classifiers have been proposed. In addition to labeling an image, certified classifiers produce (when possible) a certificate guaranteeing that the input image is not an ℓ_p-bounded adversarial example. We present a new attack that exploits not only the labeling function of a classifier, but also the certificate generator. The proposed method applies large perturbations that place images far from a class boundary while maintaining the imperceptibility property of adversarial examples. The proposed "Shadow Attack" causes certifiably robust networks to mislabel an image and simultaneously produce a "spoofed" certificate of robustness.
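The key idea behind such imperceptible-but-large perturbations is to trade raw magnitude for structure: the attacker maximizes the classifier's loss while penalizing perturbations that are non-smooth, large in mean color shift, or non-greyscale. As a minimal sketch (the penalty forms follow the paper's description, but the weights and exact formulation here are illustrative assumptions), the regularizer side of such an objective might look like:

```python
import numpy as np

def shadow_penalties(delta, lam_tv=0.3, lam_c=1.0, lam_s=0.5):
    """Weighted regularizers for a smooth, shadow-like perturbation.

    delta: perturbation array of shape (C, H, W); weights are hypothetical.
    Combines three penalties:
      - TV(delta): total variation, encouraging spatially smooth perturbations
      - C(delta):  magnitude of the mean per-channel shift, keeping colors close
      - D(delta):  inter-channel variance, pushing the perturbation toward
                   greyscale ("shadow-like") changes
    """
    tv = (np.sum(np.abs(np.diff(delta, axis=1)))
          + np.sum(np.abs(np.diff(delta, axis=2))))
    c = np.sum(np.abs(delta.mean(axis=(1, 2))))
    dissim = np.sum(np.var(delta, axis=0))
    return lam_tv * tv + lam_c * c + lam_s * dissim
```

An attacker would then run gradient ascent on the classifier's loss at x + delta minus these penalties, so the optimizer favors perturbations that are large enough to land deep inside the wrong class (yielding a spoofed certificate) yet smooth and color-consistent enough to remain imperceptible.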
