Conceptor-Aided Debiasing of Contextualized Embeddings

11/20/2022
by Yifei Li, et al.

Pre-trained language models reflect the inherent social biases of their training corpus. Many methods have been proposed to mitigate this issue, but they often fail to debias or they sacrifice model accuracy. We use conceptors, a soft projection method, to identify and remove the bias subspace in the contextual embeddings of BERT and GPT. We propose two methods of applying conceptors: (1) bias subspace projection by post-processing; and (2) a new architecture, conceptor-intervened BERT (CI-BERT), which explicitly incorporates the conceptor projection into all layers during training. We find that conceptor post-processing achieves state-of-the-art debiasing results while maintaining or improving BERT's performance on the GLUE benchmark. Although CI-BERT's training takes the bias in all layers into account and can outperform its post-processing counterpart in bias mitigation, it reduces language-model accuracy. We also show the importance of carefully constructing the bias subspace. The best results are obtained by removing outliers from the list of biased words, intersecting the resulting bias subspaces (via the conceptor AND operation), and computing the embeddings from sentences in a cleaner corpus.
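To make the post-processing approach concrete, below is a minimal sketch in NumPy of the standard conceptor formulation the abstract refers to: a conceptor C = R(R + α⁻²I)⁻¹ computed from the correlation matrix R of biased-word embeddings, the AND operation for intersecting two bias subspaces, and debiasing via the negated conceptor (I − C) applied as a soft projection. The aperture value, function names, and variable names are illustrative, not the paper's actual implementation.

```python
import numpy as np

def compute_conceptor(X, alpha=2.0):
    """Conceptor for embeddings X of shape (n_samples, dim).

    R is the (uncentered) correlation matrix of the embeddings;
    C = R (R + alpha^-2 I)^-1 softly captures the high-variance
    subspace of X (here, the bias directions). alpha is the aperture.
    """
    n, d = X.shape
    R = X.T @ X / n
    return R @ np.linalg.inv(R + alpha ** (-2) * np.eye(d))

def conceptor_and(C1, C2, eps=1e-10):
    """AND of two conceptors (intersection of their subspaces),
    using (C1^-1 + C2^-1 - I)^-1 with a small ridge added so the
    inverses exist when a conceptor is near-singular.
    """
    d = C1.shape[0]
    I = np.eye(d)
    return np.linalg.inv(np.linalg.inv(C1 + eps * I)
                         + np.linalg.inv(C2 + eps * I) - I)

def debias(embeddings, C):
    """Post-processing step: apply the negated conceptor (I - C),
    a soft projection that shrinks the bias subspace, to each
    row of `embeddings` (shape: n_samples x dim).
    """
    d = C.shape[0]
    return embeddings @ (np.eye(d) - C)  # (I - C) is symmetric

# Hypothetical usage: `bias_embeddings` holds contextual embeddings of
# a curated biased-word list; `sentence_embeddings` are the vectors to clean.
# C_bias = compute_conceptor(bias_embeddings, alpha=2.0)
# cleaned = debias(sentence_embeddings, C_bias)
```

Unlike a hard null-space projection, the negated conceptor scales bias directions down in proportion to how strongly they are represented in the bias word embeddings, which is why the method is described as a soft projection.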
