Defense-guided Transferable Adversarial Attacks

10/22/2020
by   Zifei Zhang, et al.

Though deep neural networks perform challenging tasks excellently, they are susceptible to adversarial examples, which mislead classifiers by applying human-imperceptible perturbations to clean inputs. Under the query-free black-box scenario, adversarial examples are hard to transfer to unknown models, and the methods proposed so far achieve only low transferability. To address this issue, we design a max-min framework inspired by input transformations, which benefit both the adversarial attack and the defense. Specifically, we decrease loss values with affine transformations as a defense in the minimum procedure, and then increase loss values with the momentum iterative algorithm as an attack in the maximum procedure. To further promote transferability, we determine the transformed values with max-min theory. Extensive experiments on ImageNet demonstrate that our defense-guided transferable attacks achieve an impressive increase in transferability. Experimentally, our best black-box attack fools normally trained models at an 85.3% success rate on average. Additionally, we provide elucidative insights into the improvement of transferability, and our method is expected to serve as a benchmark for assessing the robustness of deep models.
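The abstract describes the two procedures concretely enough to sketch the alternation. Below is a minimal, hypothetical PyTorch rendering, assuming rotation as the affine transformation, MI-FGSM as the momentum iterative attack, and an L-infinity perturbation budget; the function name and every hyperparameter (eps, alpha, steps, mu, angles) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def defense_guided_attack(model, x, y, eps=16/255, alpha=2/255, steps=10,
                          mu=1.0, angles=(-10.0, 0.0, 10.0)):
    """Illustrative sketch of a defense-guided max-min attack (not the
    authors' exact algorithm). Inner (min) step: select the affine
    transform, here a rotation angle, that minimizes the loss (defense).
    Outer (max) step: one MI-FGSM update on the defended input (attack).
    """
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated gradient momentum
    for _ in range(steps):
        # Minimum procedure: pick the loss-minimizing affine transform.
        with torch.no_grad():
            losses = torch.stack([
                F.cross_entropy(model(TF.rotate(x_adv, a)), y) for a in angles
            ])
        best_angle = angles[int(losses.argmin())]
        # Maximum procedure: momentum-iterative (MI-FGSM) ascent step,
        # with the gradient computed on the defensively transformed input.
        x_in = TF.rotate(x_adv, best_angle).requires_grad_(True)
        loss = F.cross_entropy(model(x_in), y)
        grad, = torch.autograd.grad(loss, x_in)
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```

The design intuition, as the abstract presents it: choosing the loss-minimizing transform in the inner step simulates the strongest input-transformation defense, so the outer ascent step optimizes the perturbation against a worst-case defended input rather than the raw one, which is what drives the reported transferability gains.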
