Explaining Neural Networks without Access to Training Data

06/10/2022
by Sascha Marton et al.

We consider generating explanations for neural networks in cases where the network's training data is not accessible, for instance due to privacy or safety issues. Recently, ℐ-Nets have been proposed as a sample-free approach to post-hoc, global model interpretability that does not require access to training data. They formulate interpretation as a machine learning task that maps network representations (parameters) to a representation of an interpretable function. In this paper, we extend the ℐ-Net framework to the cases of standard and soft decision trees as surrogate models. We propose a suitable decision tree representation and a corresponding design of the ℐ-Net output layers. Furthermore, we make ℐ-Nets applicable to real-world tasks by considering more realistic distributions when generating the ℐ-Net's training data. We empirically evaluate our approach against traditional global, post-hoc interpretability approaches and show that it achieves superior results when the training data is not accessible.
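
To illustrate the idea of mapping network parameters to a surrogate representation, below is a minimal sketch of an ℐ-Net-style model that takes a target network's flattened parameters and emits the parameters of a soft decision tree (split weights per internal node, class logits per leaf). This is an assumption-laden illustration, not the authors' architecture; all names, layer sizes, and the specific output-layer layout are hypothetical.

```python
# Hypothetical sketch: an I-Net-style mapper from flattened target-network
# parameters to soft-decision-tree parameters. Not the paper's architecture.
import torch
import torch.nn as nn

class INetSketch(nn.Module):
    def __init__(self, n_target_params, n_features, n_classes, tree_depth=3, hidden=512):
        super().__init__()
        self.n_inner = 2 ** tree_depth - 1   # internal (split) nodes of the surrogate tree
        self.n_leaves = 2 ** tree_depth      # leaf nodes of the surrogate tree
        self.n_features = n_features
        self.n_classes = n_classes
        # Shared body that encodes the target network's parameter vector.
        self.body = nn.Sequential(
            nn.Linear(n_target_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Output heads for the surrogate's parameters:
        # one (weights + bias) split per internal node, one class-logit vector per leaf.
        self.split_head = nn.Linear(hidden, self.n_inner * (n_features + 1))
        self.leaf_head = nn.Linear(hidden, self.n_leaves * n_classes)

    def forward(self, theta):
        # theta: (batch, n_target_params), flattened parameters of the networks to interpret.
        h = self.body(theta)
        splits = self.split_head(h).view(-1, self.n_inner, self.n_features + 1)
        leaves = self.leaf_head(h).view(-1, self.n_leaves, self.n_classes)
        return splits, leaves
```

The returned tensors could then be assembled into a soft decision tree and evaluated against the target network's behavior; how the surrogate is represented and trained is exactly what the paper's proposed output-layer design addresses.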
