Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
When the average performance of a prediction model varies significantly with respect to a sensitive attribute (e.g., race or gender), the performance disparity can be expressed in terms of the probability distributions of input and output variables for each sensitive group. In this paper, we exploit this fact to explain and repair the performance disparity of a fixed classification model over a population of interest. Given a black-box classifier that performs unevenly across sensitive groups, we aim to eliminate the performance gap by perturbing the distribution of input features for the disadvantaged group. We refer to the perturbed distribution as a counterfactual distribution, and characterize its properties for popular fairness criteria (e.g., predictive parity, equal FPR, equal opportunity). We then design a descent algorithm to efficiently learn a counterfactual distribution given the black-box classifier and samples drawn from the underlying population. We use the estimated counterfactual distribution to build a data preprocessor that reduces disparate impact without training a new model. We illustrate both use cases through experiments on real-world datasets, showing that we can repair different kinds of disparate impact without a large drop in accuracy.
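The sketch below is only an illustration of the idea described in the abstract, not the paper's algorithm: it approximates a "counterfactual distribution" for the disadvantaged group by a simple mean-shift of that group's features, chosen by a crude finite-difference descent that reduces the gap in a fixed black-box classifier's false positive rate across groups; the resulting shift then acts as a data preprocessor applied at test time. All function names, the synthetic data, and the mean-shift parameterization are assumptions introduced here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_classifier(X):
    """Stand-in for a fixed, pre-trained classifier (thresholded logistic score)."""
    w = np.array([1.0, -0.5])
    return (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(int)

def false_positive_rate(X, y):
    """FPR of the fixed classifier on (X, y)."""
    preds = black_box_classifier(X)
    negatives = (y == 0)
    return preds[negatives].mean() if negatives.any() else 0.0

# Synthetic data: group 1 (disadvantaged) has a shifted feature distribution.
n = 2000
X0, y0 = rng.normal(0.0, 1.0, (n, 2)), rng.integers(0, 2, n)
X1, y1 = rng.normal(0.8, 1.0, (n, 2)), rng.integers(0, 2, n)

def disparity(shift):
    """Gap in FPR between group 0 and group 1 after shifting group 1's inputs."""
    return abs(false_positive_rate(X0, y0) - false_positive_rate(X1 + shift, y1))

# Crude descent on the shift vector via central finite differences
# (a stand-in for the paper's descent algorithm over distributions).
shift, step, eps = np.zeros(2), 0.1, 1e-2
for _ in range(200):
    grad = np.array([
        (disparity(shift + eps * e) - disparity(shift - eps * e)) / (2 * eps)
        for e in np.eye(2)
    ])
    shift -= step * grad

def preprocess(X, group):
    """Data preprocessor: perturb group 1's inputs toward the learned distribution."""
    return X + shift if group == 1 else X

print("FPR gap before:", disparity(np.zeros(2)))
print("FPR gap after: ", disparity(shift))
```

In this toy setting the classifier is never retrained; only the inputs of the disadvantaged group are perturbed before being fed to it, which mirrors the repair-without-retraining workflow the abstract describes, albeit with a far simpler family of perturbations than the paper considers.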