A minimax framework for quantifying risk-fairness trade-off in regression
We propose a theoretical framework for the problem of learning a real-valued function that meets fairness requirements. This framework is built upon the notion of α-relative (fairness) improvement of the regression function, which we introduce using the theory of optimal transport. Setting α = 0 corresponds to the regression problem under the Demographic Parity constraint, while α = 1 corresponds to the classical regression problem without any constraints. For α ∈ (0, 1) the proposed framework allows one to interpolate continuously between these two extreme cases and to study partially fair predictors. Within this framework we precisely quantify the cost in risk induced by the introduction of the fairness constraint. We put forward a statistical minimax setup and derive a general problem-dependent lower bound on the risk of any estimator satisfying the α-relative improvement constraint. We illustrate our framework on a model of linear regression with Gaussian design and systematic group-dependent bias, deriving matching (up to absolute constants) upper and lower bounds on the minimax risk under the introduced constraint. Finally, we perform a simulation study of the latter setup.
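The interpolation idea can be illustrated with a minimal sketch. In one dimension, the Wasserstein-2 barycenter of the group-wise prediction distributions is obtained by averaging quantile functions, and a partially fair predictor can be formed by moving each prediction part of the way along its transport map toward that barycenter. The function `partially_fair_predictions` below, its linear mixing weight, and the quantile grid are illustrative assumptions, not the paper's exact construction (in particular, the paper's optimal weighting of the two endpoints may differ from the plain linear mix used here).

```python
import numpy as np

def partially_fair_predictions(preds, weights, alpha):
    """Push each group's predictions toward the common 1-D
    Wasserstein-2 barycenter via quantile (optimal transport) maps.

    preds:   dict mapping group label -> array of predictions
    weights: dict mapping group label -> barycenter weight (sums to 1)
    alpha:   1.0 leaves predictions unchanged (no fairness constraint);
             0.0 makes within-group distributions coincide
             (Demographic Parity).
    """
    # Common grid of quantile levels.
    qs = np.linspace(0.01, 0.99, 99)
    # Group-wise empirical quantile functions.
    group_q = {s: np.quantile(p, qs) for s, p in preds.items()}
    # 1-D W2 barycenter: weighted average of quantile functions.
    bary_q = sum(w * group_q[s] for s, w in weights.items())
    out = {}
    for s, p in preds.items():
        # Rank of each prediction within its own group.
        ranks = np.searchsorted(np.sort(p), p) / len(p)
        ranks = np.clip(ranks, 0.01, 0.99)
        # Image of each prediction under the transport map
        # to the barycenter.
        bary_vals = np.interp(ranks, qs, bary_q)
        # Linear interpolation between the original prediction
        # and its barycenter image (illustrative weighting).
        out[s] = alpha * p + (1 - alpha) * bary_vals
    return out
```

With `alpha = 0`, both groups' predictions are mapped onto the same (barycenter) distribution, which removes a systematic group-dependent bias of the kind studied in the paper's linear-regression example; intermediate `alpha` shrinks the gap only partially.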