Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

06/28/2021
by Oliver Bryniarski, et al.

Evading adversarial example detection defenses requires finding adversarial examples that simultaneously (a) are misclassified by the model and (b) are flagged as non-adversarial by the detector. We find that existing attacks that attempt to satisfy multiple simultaneous constraints often over-optimize against one constraint at the cost of satisfying another. We introduce Orthogonal Projected Gradient Descent, an improved attack technique that avoids this problem by orthogonalizing the gradients when running standard gradient-based attacks. We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate.
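The core gradient-orthogonalization step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the attacker has the classifier-loss gradient `g_cls` and the detector-loss gradient `g_det` as NumPy arrays, and projects the former onto the orthogonal complement of the latter, so that a step along the result changes the classifier's prediction while, to first order, leaving the detector's score unchanged.

```python
import numpy as np

def orthogonalize(g_cls: np.ndarray, g_det: np.ndarray) -> np.ndarray:
    """Return the component of g_cls orthogonal to g_det.

    Moving along the returned direction attacks the classifier
    while (to first order) not changing the detector's output.
    Illustrative sketch only; names and shapes are assumptions.
    """
    g_det_flat = g_det.ravel()
    norm_sq = g_det_flat @ g_det_flat
    if norm_sq == 0.0:
        # Detector gradient vanishes: nothing to project out.
        return g_cls
    # Standard vector projection of g_cls onto g_det, then subtract it.
    coeff = (g_cls.ravel() @ g_det_flat) / norm_sq
    return (g_cls.ravel() - coeff * g_det_flat).reshape(g_cls.shape)
```

In a PGD loop, one would alternate or combine such orthogonalized steps (followed by the usual projection back into the perturbation budget), which is how the over-optimization against one constraint at the expense of the other is avoided.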
