Linear Adversarial Concept Erasure

01/28/2022
by Shauli Ravfogel, et al.

Modern neural models trained on textual data rely on pre-trained representations that emerge without direct supervision. As these representations are increasingly used in real-world applications, the inability to control their content becomes a pressing problem. We formulate the problem of identifying and erasing a linear subspace that corresponds to a given concept, in order to prevent linear predictors from recovering the concept. We model this problem as a constrained, linear minimax game, and show that existing solutions are generally not optimal for this task. We derive a closed-form solution for certain objectives, and propose a convex relaxation, R-LACE, that works well for others. When evaluated in the context of binary gender removal, the method recovers a low-dimensional subspace whose removal mitigates bias, as measured by both intrinsic and extrinsic evaluation. We show that the method, despite being linear, is highly expressive, effectively mitigating bias in deep nonlinear classifiers while maintaining tractability and interpretability.
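
To make the minimax formulation concrete, the sketch below shows one way such a game can be set up in PyTorch: an adversary fits a linear predictor of the concept (e.g., binary gender) on the projected representations, while a rank-k orthogonal projection is trained by gradient ascent to defeat it. This is a plain alternating-gradient illustration under assumed names and hyperparameters (erase_linear_concept, rank, steps, lr), not the paper's closed-form solution or the R-LACE relaxation.

```python
import torch

def erase_linear_concept(X, y, rank=1, steps=2000, lr=1e-2):
    """X: (n, d) float tensor of representations; y: (n,) binary concept labels."""
    n, d = X.shape
    U = torch.randn(d, rank, requires_grad=True)   # spans the subspace to be removed
    w = torch.zeros(d, requires_grad=True)         # adversarial linear concept predictor
    b = torch.zeros(1, requires_grad=True)
    opt_adv = torch.optim.SGD([w, b], lr=lr)
    opt_proj = torch.optim.SGD([U], lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    eye = torch.eye(d)
    for _ in range(steps):
        # Adversary step: fit the concept predictor on the projected representations.
        Q, _ = torch.linalg.qr(U.detach())
        P = eye - Q @ Q.T                          # removes the current rank-k subspace
        opt_adv.zero_grad()
        loss_fn((X @ P) @ w + b, y.float()).backward()
        opt_adv.step()
        # Projection step: gradient ascent on the adversary's loss w.r.t. the subspace.
        Q, _ = torch.linalg.qr(U)
        P = eye - Q @ Q.T
        opt_proj.zero_grad()
        (-loss_fn((X @ P) @ w.detach() + b.detach(), y.float())).backward()
        opt_proj.step()
    Q, _ = torch.linalg.qr(U.detach())
    return eye - Q @ Q.T                           # final "cleaning" projection matrix
```

In this sketch, the returned matrix would be applied as X_clean = X @ P, after which a linear predictor trained on X_clean should perform near chance at recovering the concept.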
