Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

06/20/2018
by Richard Tomsett et al.

Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask whether the system is interpretable, but to whom it is interpretable. We describe a model intended to help answer this question by identifying the different roles that agents can fulfill in relation to the machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent's role influences its goals and the implications for defining interpretability. Finally, we make suggestions for how our model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems.
