The Grammar of Interactive Explanatory Model Analysis

05/01/2020
by Hubert Baniecki et al.

When analysing a complex system, an answer to one question very often raises new questions. The same applies to the analysis of Machine Learning (ML) models. A single explanation method is not enough, because different questions and different stakeholders require different approaches. Most proposed methods for eXplainable Artificial Intelligence (XAI) focus on a single aspect of model behaviour; a complex model cannot be sufficiently explained by one method offering only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong reasoning. In this paper, we present the problem of model explainability as an interactive and sequential explanatory analysis of a model (IEMA). We introduce the grammar of such interactive explanations. We show how different XAI methods complement each other and why it is essential to juxtapose them. We argue that without multi-faceted interactive explanations there will be neither understanding of nor trust in models. The proposed process derives from the theoretical and algorithmic side of model explanation and aims to embrace ideas learned through research in the cognitive sciences. Its grammar is implemented in the modelStudio framework, which adopts interactivity, automation and customisability as its main traits. This thoughtful design addresses the needs of multiple diverse stakeholders, not only ML practitioners.
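To make the idea of sequential, multi-faceted explanation concrete, here is a minimal sketch in Python. It does not reproduce the paper's modelStudio dashboard (an R package that generates an interactive HTML report); instead it chains complementary global and local explanations using the dalex library from the same DrWhy.AI ecosystem. The dataset, model, and feature selection are illustrative assumptions, not the authors' exact pipeline.

```python
# A sketch of IEMA-style analysis: each explanation answers one question
# and motivates the next. Uses dalex (pip install dalex) and scikit-learn.
import dalex as dx
from sklearn.ensemble import RandomForestClassifier

# Titanic data ships with dalex; numeric features only, for brevity.
titanic = dx.datasets.load_titanic()
X = titanic[["age", "fare", "sibsp", "parch"]]
y = titanic["survived"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = dx.Explainer(model, X, y, label="titanic_rf")

# Q1 (global): which variables matter? -> permutation importance
explainer.model_parts().plot()

# Q2 (global follow-up): how does an important variable act? -> partial dependence
explainer.model_profile(variables=["age"]).plot()

# Q3 (local): why this prediction for one passenger? -> break-down attribution
passenger = X.iloc[[0]]
explainer.predict_parts(passenger).plot()

# Q4 (local follow-up): what if the passenger's age changed? -> ceteris paribus
explainer.predict_profile(passenger, variables=["age"]).plot()
```

Each step juxtaposes a different perspective on the same model, which is the point of the grammar: a global importance ranking raises a question that a profile answers, and a local attribution raises a question that a what-if analysis answers.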
