Critical Empirical Study on Black-box Explanations in AI

09/29/2021
by   Jean-Marie John-Mathews, et al.

This paper raises empirical concerns about post-hoc explanations of black-box ML models, one of the major trends in AI explainability (XAI), by showing their lack of interpretability and their societal consequences. Using a representative consumer panel to test our assumptions, we report three main findings. First, we show that post-hoc explanations of black-box models tend to give partial and biased information about the underlying mechanism of the algorithm, and can be subject to manipulation or information withholding by diverting users' attention. Second, we show the importance of tested behavioral indicators, in addition to self-reported perceived indicators, in providing a more comprehensive view of the dimensions of interpretability. This paper sheds new light on the ongoing theoretical debate between intrinsically transparent AI models and post-hoc explanations of complex black-box models, a debate likely to play a highly influential role in the future development and operationalization of AI systems.
