Towards Variable-Length Textual Adversarial Attacks

04/16/2021
by Junliang Guo et al.

Adversarial attacks have exposed the vulnerability of machine learning models; however, conducting textual adversarial attacks on natural language processing tasks is non-trivial due to the discreteness of text. Most previous approaches attack with the atomic replacement operation, which usually yields fixed-length adversarial examples and therefore limits the exploration of the decision space. In this paper, we propose variable-length textual adversarial attacks (VL-Attack) and integrate three atomic operations, namely insertion, deletion, and replacement, into a unified framework by introducing and manipulating a special blank token while attacking. In this way, our approach can more comprehensively find adversarial examples around the decision boundary and conduct attacks more effectively. Specifically, our method drops the accuracy of IMDB classification by 96% while editing only 1.3% of the tokens when attacking a pre-trained BERT model. In addition, fine-tuning the victim model with generated adversarial samples can improve its robustness without hurting its performance, especially for length-sensitive models. On the task of non-autoregressive machine translation, our method achieves a BLEU score of 33.18 on IWSLT14 German-English translation, an improvement of 1.47 over the baseline model.
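To make the unified-edit idea concrete, below is a minimal sketch of how insertion, deletion, and replacement can all be expressed through a single blank-token manipulation, as the abstract describes. All names here (BLANK, apply_edit, fill_blank) are hypothetical and not taken from the paper's code, and the token filler is a stub standing in for the masked language model an attack would query in practice.

    from typing import Callable, List

    BLANK = "[BLANK]"  # special placeholder token manipulated during the attack


    def apply_edit(tokens: List[str], op: str, pos: int,
                   fill_blank: Callable[[List[str], int], str]) -> List[str]:
        """Express insertion, deletion, and replacement as one blank-token edit.

        op: "insert"  -> put a BLANK at pos, then fill it with a new token
            "delete"  -> mark the token at pos as BLANK, then drop it
            "replace" -> mark the token at pos as BLANK, then fill it in place
        fill_blank: callback that proposes a token for the BLANK position
                    (in practice, a masked LM such as BERT would play this role).
        """
        out = list(tokens)
        if op == "insert":
            out.insert(pos, BLANK)
            out[pos] = fill_blank(out, pos)   # variable length: sequence grows
        elif op == "delete":
            out[pos] = BLANK
            del out[pos]                      # variable length: sequence shrinks
        elif op == "replace":
            out[pos] = BLANK
            out[pos] = fill_blank(out, pos)   # fixed length: token swapped in place
        else:
            raise ValueError(f"unknown op: {op}")
        return out


    if __name__ == "__main__":
        # Toy filler: a real attack would score each candidate token by how much
        # it changes the victim model's prediction and keep the most damaging one.
        toy_filler = lambda toks, i: "awful"
        sentence = "the movie was great".split()
        print(apply_edit(sentence, "replace", 3, toy_filler))  # swaps 'great'
        print(apply_edit(sentence, "insert", 3, toy_filler))   # grows by one token
        print(apply_edit(sentence, "delete", 1, toy_filler))   # shrinks by one token

Because insertion and deletion change the sequence length, a search over these three operations can reach adversarial examples that replacement-only attacks, which preserve length, cannot.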
