Investigating Bias in Image Classification using Model Explanations

12/10/2020
by Schrasing Tong et al.

We evaluated whether model explanations can efficiently detect bias in image classification by highlighting discriminating features, thereby removing the reliance on sensitive attributes for fairness calculations. To this end, we formulated important characteristics for bias detection and observed how explanations change as the degree of bias in a model changes. The paper identifies strengths and best practices for detecting bias using explanations, as well as three main weaknesses: explanations estimate the degree of bias poorly, can introduce additional bias into the analysis, and are sometimes inefficient in terms of the human effort involved.
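To illustrate the core idea of flagging bias through explanations, the following is a minimal sketch (not the paper's method): for a hypothetical linear classifier, a gradient-based saliency map reduces to the magnitude of the weights, so a spurious feature (e.g., a background cue correlated with a sensitive attribute) that the model relies on shows up as highly salient. All feature names and weights below are invented for illustration.

```python
import numpy as np

# Hypothetical linear "classifier": score = w . x
# Features 0-2: genuine object features; feature 3: spurious
# background cue correlated with a sensitive attribute.
w = np.array([0.2, 0.3, 0.1, 0.9])  # model leans on the spurious cue

def saliency(weights, x):
    """Gradient-based saliency: |d score / d x|.

    For a linear model the gradient w.r.t. the input is just the
    weight vector, so the saliency map is simply |weights|.
    """
    return np.abs(weights)

x = np.array([1.0, 0.5, 0.2, 1.0])  # one example input
s = saliency(w, x)
most_salient = int(np.argmax(s))
print(most_salient)  # → 3: the spurious feature dominates the explanation
```

An auditor inspecting this explanation would see the model attending to the background cue rather than the object, which is the kind of signal the paper evaluates, without ever computing fairness metrics over sensitive attributes directly.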
