Counterfactual Risk Assessments under Unmeasured Confounding
Statistical risk assessments inform consequential decisions, such as pretrial release in criminal justice and loan approvals in consumer finance, by counterfactually predicting an outcome under a proposed decision (e.g., would the applicant default if we approved this loan?). The historical data, however, may contain unmeasured confounders that jointly affected decisions and outcomes. We propose a mean outcome sensitivity model that bounds the extent to which unmeasured confounders could affect outcomes on average. The mean outcome sensitivity model partially identifies the conditional likelihood of the outcome under the proposed decision, popular predictive performance metrics, and predictive disparities. We derive the corresponding identified sets and develop procedures for the confounding-robust learning and evaluation of statistical risk assessments: a nonparametric regression procedure that estimates the bounds on the conditional likelihood of the outcome under the proposed decision, and estimators for the bounds on predictive performance and disparities. Applying our methods to a real-world credit-scoring task, we show how varying assumptions on unmeasured confounding lead to substantive changes in the credit score's predictions and in evaluations of its predictive disparities.
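To make the partial-identification step concrete, the sketch below illustrates how bounds of this kind can be computed. It is not the paper's code: the additive form of the sensitivity restriction (|E[Y(d) | X, D ≠ d] − E[Y | X, D = d]| ≤ γ), the parameter name gamma, and the function counterfactual_outcome_bounds are all illustrative assumptions. Only the decomposition by the law of total probability, nu(x) = pi(x)·mu_obs(x) + (1 − pi(x))·mu_cf(x), is standard.

```python
import numpy as np

def counterfactual_outcome_bounds(mu_obs, pi, gamma):
    """Plug-in bounds on nu(x) = P(Y(d) = 1 | X = x).

    By the law of total probability,
        nu(x) = pi(x) * mu_obs(x) + (1 - pi(x)) * mu_cf(x),
    where mu_obs(x) = E[Y | X = x, D = d] is identified from the data and
    mu_cf(x) = E[Y(d) | X = x, D != d] is not. An (assumed) additive
    sensitivity restriction |mu_cf(x) - mu_obs(x)| <= gamma bounds mu_cf.
    """
    # Worst- and best-case counterfactual means, clipped for a binary outcome.
    mu_cf_lo = np.clip(mu_obs - gamma, 0.0, 1.0)
    mu_cf_hi = np.clip(mu_obs + gamma, 0.0, 1.0)
    nu_lo = pi * mu_obs + (1.0 - pi) * mu_cf_lo
    nu_hi = pi * mu_obs + (1.0 - pi) * mu_cf_hi
    return nu_lo, nu_hi

# Example with hypothetical nuisance estimates at three covariate values.
mu_obs = np.array([0.10, 0.30, 0.55])  # estimated E[Y | X, D = d]
pi = np.array([0.80, 0.50, 0.20])      # estimated P(D = d | X)
for g in (0.0, 0.1, 0.3):
    lo, hi = counterfactual_outcome_bounds(mu_obs, pi, g)
    print(f"gamma={g}: lower={np.round(lo, 3)}, upper={np.round(hi, 3)}")
```

At gamma = 0 the bounds collapse to a point (no unmeasured confounding), and they widen as the assumed confounding grows, which is the sense in which varying the sensitivity parameter changes the resulting predictions and evaluations.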