Generative Counterfactual Introspection for Explainable Deep Learning

07/06/2019
by Shusen Liu, et al.

In this work, we propose an introspection technique for deep neural networks that relies on a generative model to perform salient edits of the input image for model interpretation. Such modification provides the fundamental interventional operation that allows us to answer counterfactual inquiries, i.e., what meaningful change can be made to the input image in order to alter the prediction. We demonstrate how the proposed introspection approach reveals interesting properties of given classifiers on both the MNIST and CelebA datasets.
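The abstract's core operation, editing an input through a generative model until the classifier's prediction flips while the edit stays meaningful, admits a common formulation as a latent-space search. The sketch below is an illustrative assumption of that formulation, not the paper's exact method: the generator `G`, classifier `f`, and all hyperparameters are untrained stand-ins (in practice both networks would be pretrained, e.g., on MNIST).

```python
# Minimal sketch: counterfactual search in a generator's latent space.
# G and f below are placeholder networks; the real technique would use a
# pretrained generative model and the classifier under inspection.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, image_dim = 16, 784  # e.g., flattened 28x28 MNIST images

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Sigmoid())  # stand-in generator
f = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(),
                  nn.Linear(128, 10))                       # stand-in classifier

def counterfactual(x, target_class, steps=200, lr=0.1, lam=1.0):
    """Search for a latent code whose decoded image stays close to x
    but is classified as target_class."""
    z = torch.zeros(1, latent_dim, requires_grad=True)  # could init by encoding x
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        x_cf = G(z)
        # Classification loss pushes the edit toward the target class;
        # the proximity term keeps the change to the input minimal.
        loss = F.cross_entropy(f(x_cf), target) + lam * F.mse_loss(x_cf, x)
        loss.backward()
        opt.step()
    return G(z).detach()

x = torch.rand(1, image_dim)  # placeholder "input image"
x_cf = counterfactual(x, target_class=3)
print("prediction after edit:", f(x_cf).argmax(dim=1).item())
```

Because the edit is constrained to the generator's output manifold, the resulting change tends to be a semantically plausible image rather than adversarial noise, which is what makes the counterfactual interpretable.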
