Robust Optimization for Fairness with Noisy Protected Groups

02/21/2020
by Serena Wang, et al.

Many existing fairness criteria for machine learning involve equalizing or achieving some metric across protected groups such as race or gender. However, practitioners trying to audit or enforce such group-based criteria can easily face the problem of noisy or biased protected group information. We study this important practical problem in two ways. First, we study the consequences of naïvely relying only on noisy protected groups: we provide an upper bound on the fairness violations with respect to the true groups G when the fairness criteria are satisfied on the noisy groups Ĝ. Second, we introduce two new approaches based on robust optimization that, unlike the naïve approach of relying only on Ĝ, are guaranteed to satisfy the fairness criteria on the true protected groups G while minimizing a training objective. We provide theoretical guarantees that one of these approaches converges to an optimal feasible solution. In two case studies, we show empirically that the robust approaches achieve better true-group fairness guarantees than the naïve approach.
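To make the setup concrete, below is a minimal, self-contained sketch of the kind of constrained training the abstract refers to. It is not the paper's actual algorithm: it trains a toy logistic regression with a Lagrangian-style loop that enforces a demographic-parity constraint on noisy group labels Ĝ, with the constraint slack tightened by a margin tied to an assumed bound gamma on the group-label noise rate. The toy data, the choice of margin, and all variable names are illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the paper's method): train with a demographic-parity
# constraint measured on NOISY group labels g_hat, tightened by a margin
# derived from an assumed noise-rate bound gamma, then check the gap on the
# true groups g_true.

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
g_true = (rng.random(n) < 0.4).astype(int)                 # true protected group G
y = (X[:, 0] + 0.8 * g_true + 0.3 * rng.normal(size=n) > 0).astype(int)

gamma = 0.05                                               # assumed bound on P(G != G_hat)
flip = rng.random(n) < gamma
g_hat = np.where(flip, 1 - g_true, g_true)                 # observed noisy groups Ĝ

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_gap(scores, groups):
    """Difference in mean predicted score between the two groups."""
    return scores[groups == 1].mean() - scores[groups == 0].mean()

w = np.zeros(d)
lam = 0.0                # Lagrange multiplier for the fairness constraint
slack = 0.10             # desired slack on the TRUE groups
margin = gamma           # illustrative tightening, not the paper's bound
lr_w, lr_lam = 0.1, 0.5

for _ in range(500):
    p = sigmoid(X @ w)
    grad_loss = X.T @ (p - y) / n                          # logistic-loss gradient
    gap = dp_gap(p, g_hat)                                 # constraint on noisy groups
    sign = np.sign(gap)
    # Gradient of the gap w.r.t. w, using sigmoid'(z) = p * (1 - p).
    dgap = (X[g_hat == 1].T @ (p * (1 - p))[g_hat == 1] / (g_hat == 1).sum()
            - X[g_hat == 0].T @ (p * (1 - p))[g_hat == 0] / (g_hat == 0).sum())
    w -= lr_w * (grad_loss + lam * sign * dgap)
    # Dual ascent on the multiplier for the tightened constraint |gap| <= slack - margin.
    lam = max(0.0, lam + lr_lam * (abs(gap) - (slack - margin)))

p = sigmoid(X @ w)
print(f"gap on noisy groups Ĝ: {dp_gap(p, g_hat):+.3f}")
print(f"gap on true groups  G: {dp_gap(p, g_true):+.3f}")
```

In this toy run, enforcing the constraint on Ĝ with extra slack leaves headroom so the gap measured on the true groups G can still land near the intended tolerance; the robust optimization approaches in the paper are designed to turn this intuition into actual guarantees on G.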
