Feature Attributions and Counterfactual Explanations Can Be Manipulated

06/23/2021
by Dylan Slack, et al.

As machine learning models are increasingly used in critical decision-making settings (e.g., healthcare, finance), there has been a growing emphasis on developing methods to explain model predictions. Such explanations are used to understand and establish trust in models and are vital components in machine learning pipelines. Yet despite their central role, there is little understanding of how vulnerable explanations are to manipulation by adversaries. In this paper, we discuss how two broad classes of explanations can be manipulated. We demonstrate how adversaries can design biased models that manipulate model-agnostic feature attribution methods (e.g., LIME & SHAP) and counterfactual explanations that hill-climb during the counterfactual search (e.g., Wachter's Algorithm & DiCE) into concealing the model's biases. These vulnerabilities allow an adversary to deploy a biased model whose explanations do not reveal the bias, thereby deceiving stakeholders into trusting the model. We evaluate the manipulations on real-world datasets, including COMPAS and Communities & Crime, and find that explanations can be manipulated in practice.
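To make the second class of target concrete: counterfactual explainers in the style of Wachter's Algorithm search for a nearby input that flips the model's prediction, typically by hill-climbing an objective that trades off reaching the target prediction against staying close to the original instance. The sketch below is a rough, self-contained illustration of that kind of search, not the paper's attack or the actual Wachter/DiCE implementations; the toy logistic model, its weights, the lambda value, and the random-perturbation hill climb are all illustrative assumptions.

```python
import numpy as np

# Toy logistic "model" standing in for a black-box classifier f(x) -> P(y = 1).
# The weights are illustrative only, not from the paper.
W = np.array([1.5, -2.0, 0.5])
B = -0.25

def predict_proba(x):
    """Probability of the positive class for a single instance x."""
    return 1.0 / (1.0 + np.exp(-(x @ W + B)))

def wachter_objective(x_cf, x_orig, target, lam):
    """Wachter-style loss: lam * (f(x') - y')^2 + ||x' - x||_1."""
    return lam * (predict_proba(x_cf) - target) ** 2 + np.abs(x_cf - x_orig).sum()

def hill_climb_counterfactual(x_orig, target=1.0, lam=100.0,
                              step=0.05, n_iters=1000, seed=0):
    """Greedy hill climb: propose small random perturbations and keep any
    move that lowers the objective. Real implementations typically follow
    gradients instead, but the hill-climbing structure is the same."""
    rng = np.random.default_rng(seed)
    x_cf = x_orig.copy()
    best = wachter_objective(x_cf, x_orig, target, lam)
    for _ in range(n_iters):
        candidate = x_cf + step * rng.standard_normal(x_cf.shape)
        score = wachter_objective(candidate, x_orig, target, lam)
        if score < best:
            x_cf, best = candidate, score
    return x_cf

if __name__ == "__main__":
    x = np.array([-1.0, 1.0, 0.0])  # instance currently predicted negative
    cf = hill_climb_counterfactual(x)
    print("original prediction:     ", predict_proba(x))
    print("counterfactual prediction:", predict_proba(cf))
    print("counterfactual instance: ", cf)
```

The paper's point is that an adversary who controls the model f can design it so that searches of this kind, like the feature attributions produced by LIME and SHAP, return explanations that conceal the model's bias.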
