Boosting the Transferability of Adversarial Attacks with Global Momentum Initialization

11/21/2022
by   Jiafeng Wang, et al.

Deep neural networks are vulnerable to adversarial examples, which attach human-imperceptible perturbations to benign inputs. At the same time, adversarial examples transfer across different models, which makes practical black-box attacks feasible. However, existing methods still fall short of the desired transfer attack performance. In this work, from the perspective of gradient optimization and consistency, we analyze and identify the gradient elimination phenomenon as well as the local momentum optimum dilemma. To tackle these issues, we propose Global Momentum Initialization (GI) to suppress gradient elimination and help the search escape local momentum optima. Specifically, we perform gradient pre-convergence before the attack and carry out a global search during the pre-convergence stage. Our method can be easily combined with almost all existing transfer methods, and it improves the success rate of transfer attacks significantly, by an average of 6.4% against advanced defense mechanisms, compared to state-of-the-art methods. Eventually, we achieve an attack success rate of 95.4% against existing defense mechanisms.
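To make the two-stage idea concrete, here is a minimal sketch of how a momentum-based iterative attack (MI-FGSM-style) could be prefixed with a global momentum initialization stage. This is an illustrative reconstruction, not the authors' implementation: the function name `gi_mifgsm`, the warm-up step multiplier `pre_scale`, and the toy `grad_fn` interface are all assumptions made for the example.

```python
import numpy as np

def gi_mifgsm(x, grad_fn, eps=0.3, steps=10, pre_steps=5, pre_scale=5.0, mu=1.0):
    """Sketch: MI-FGSM with a Global-Momentum-Initialization warm-up.

    Before the real attack, run `pre_steps` pre-convergence iterations with an
    enlarged step size (`pre_scale * alpha`) to search more globally; only the
    accumulated momentum `g` (not the perturbed input) is carried over into
    the actual attack, which then runs standard momentum iterations.
    """
    alpha = eps / steps          # per-step budget of the actual attack
    g = np.zeros_like(x)         # momentum accumulator

    # Stage 1: gradient pre-convergence with a larger (global) search step.
    x_pre = x.copy()
    for _ in range(pre_steps):
        grad = grad_fn(x_pre)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized update
        x_pre = np.clip(x_pre + pre_scale * alpha * np.sign(g), x - eps, x + eps)

    # Stage 2: the actual attack starts from x but with pre-converged momentum.
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

With a toy loss whose gradient is `2 * x`, the routine pushes every coordinate to the edge of the epsilon ball, while the clipping keeps the perturbation within the allowed budget.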
