Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making

09/21/2021
by Yi Yang, et al.

Scoring systems, as simple classification models, have significant advantages in interpretability and transparency when making predictions. They facilitate human decision-making by allowing users to compute a prediction by hand, simply adding and subtracting a few point scores, and have therefore been widely used in fields such as medical diagnosis in intensive care units. However, fairness issues in these models have long been criticized, and the use of biased data in constructing scoring systems heightens this concern. In this paper, we propose a general framework for creating data-driven, fairness-aware scoring systems. Our approach first develops a social welfare function that incorporates both efficiency and equity. We then translate the social welfare maximization problem from economics into an empirical risk minimization task in machine learning and derive a fairness-aware scoring system with the help of mixed integer programming. We show that the proposed framework gives practitioners and policymakers great flexibility to select their desired fairness requirements and also allows them to customize their own requirements by imposing various operational constraints. Experimental evidence on several real-world datasets verifies that the proposed scoring system can achieve the optimal welfare of stakeholders and balance interpretability, fairness, and efficiency.
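To make the construction concrete, the sketch below illustrates one generic way such a scoring system could be derived: small integer point scores are learned by mixed integer programming, minimizing an empirical error term plus a weighted group-fairness penalty (here, error-rate parity between two groups). This is only an illustrative formulation under stated assumptions, not the paper's social welfare objective; the gamma trade-off weight, the error-rate-parity penalty, the margin and big-M constants, and the use of the PuLP/CBC solver are all choices made for this sketch.

import numpy as np
import pulp


def fit_fair_scoring_system(X, y, g, max_points=5, gamma=1.0, big_m=1000):
    """Learn integer point scores by MIP with a group-fairness penalty.

    X: (n, d) array of bounded features; y: labels in {-1, +1};
    g: binary group attribute; gamma: weight on the fairness penalty.
    """
    n, d = X.shape
    prob = pulp.LpProblem("fair_scoring_system", pulp.LpMinimize)

    # Small integer point scores and intercept keep the model interpretable.
    lam = [pulp.LpVariable(f"lam_{j}", -max_points, max_points, cat="Integer")
           for j in range(d)]
    lam0 = pulp.LpVariable("lam_0", -max_points * d, max_points * d, cat="Integer")

    # Binary indicators z_i = 1 iff sample i is misclassified (0-1 loss via big-M).
    z = [pulp.LpVariable(f"z_{i}", cat="Binary") for i in range(n)]
    for i in range(n):
        score_i = pulp.lpSum(lam[j] * float(X[i, j]) for j in range(d)) + lam0
        # If z_i = 0, sample i must be classified correctly with margin 1.
        prob += score_i * float(y[i]) >= 1 - big_m * z[i]

    # Group error rates and a linearized |err_a - err_b| fairness gap.
    idx_a = [i for i in range(n) if g[i] == 0]
    idx_b = [i for i in range(n) if g[i] == 1]
    err_a = pulp.lpSum(z[i] for i in idx_a) * (1.0 / max(len(idx_a), 1))
    err_b = pulp.lpSum(z[i] for i in idx_b) * (1.0 / max(len(idx_b), 1))
    gap = pulp.LpVariable("fairness_gap", lowBound=0)
    prob += gap >= err_a - err_b
    prob += gap >= err_b - err_a

    # Objective: overall error rate plus a weighted fairness penalty.
    prob += pulp.lpSum(z) * (1.0 / n) + gamma * gap
    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    points = np.array([int(pulp.value(v)) for v in lam])
    intercept = int(pulp.value(lam0))
    return points, intercept  # predict +1 when points @ x + intercept > 0

With PuLP's bundled CBC solver, calling fit_fair_scoring_system(X, y, g, gamma=2.0) would return the point scores and intercept; sweeping gamma traces the efficiency-fairness trade-off that the paper's welfare function is designed to balance in a principled way.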
