Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning

10/20/2022
by Xiaoyi Chen, et al.

This paper finds that contrastive learning can produce superior sentence embeddings for pre-trained models but is also vulnerable to backdoor attacks. We present the first backdoor attack framework, BadCSE, for state-of-the-art sentence embeddings under both supervised and unsupervised learning settings. The attack manipulates the construction of positive and negative pairs so that a backdoored sample's embedding is similar to the target sample's embedding (targeted attack) or to the negative of its clean version's embedding (non-targeted attack). Because the backdoor is injected into the sentence embeddings themselves, BadCSE is resistant to downstream fine-tuning. We evaluate BadCSE on both STS tasks and other downstream tasks. The supervised non-targeted attack obtains a performance degradation of 194.86%, and the targeted attack maps the backdoored samples to the target embedding with a 97.70% success rate while maintaining the model utility.
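To make the pair-manipulation idea concrete, below is a minimal PyTorch sketch of how a poisoned contrastive objective could be wired up. This is an illustrative assumption, not the paper's implementation: all names and shapes are hypothetical, info_nce is a generic SimCSE-style loss, and the real BadCSE objective would additionally train on benign pairs to preserve model utility.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.05):
    """Generic InfoNCE-style contrastive loss: pull anchor toward its positive,
    push it away from negatives. anchor/positive: (B, D); negatives: (B, K, D)."""
    pos = F.cosine_similarity(anchor, positive, dim=-1) / temperature                # (B,)
    neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / temperature  # (B, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                               # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)

# Random tensors stand in for encoder outputs; in practice each would come
# from a BERT-style sentence encoder (shapes here are assumptions).
B, K, D = 8, 4, 768
emb_backdoored = torch.randn(B, D, requires_grad=True)  # encoder(x with trigger)
emb_target     = torch.randn(B, D)                      # encoder(attacker's target sentence)
emb_clean      = torch.randn(B, D)                      # encoder(x without trigger)
emb_in_batch   = torch.randn(B, K, D)                   # other in-batch sentences as negatives

# Targeted attack: the backdoored sample's positive pair is the target sentence,
# so any triggered input is pulled toward the attacker-chosen embedding.
loss_targeted = info_nce(emb_backdoored, emb_target, emb_in_batch)

# Non-targeted attack: the positive pair is the negation of the clean embedding,
# pushing the triggered input far from where it should have landed.
loss_nontargeted = info_nce(emb_backdoored, -emb_clean, emb_in_batch)
```

Because the poisoning acts on the embedding geometry rather than on any task head, a downstream classifier fine-tuned on top of the frozen or lightly tuned encoder would inherit the shifted embeddings, which is the intuition behind the claimed resistance to fine-tuning.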
