Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

04/10/2019
by Nina Schaaf, et al.

One obstacle that has so far prevented the adoption of machine learning models, primarily in critical areas, is their lack of explainability. In this work, a practical approach for gaining explainability of deep artificial neural networks (NNs) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually yields unsatisfactory results in terms of accuracy and fidelity. Applying L1-orthogonal regularization during training, however, preserves the accuracy of the NN while allowing it to be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and, at the same time, higher fidelity compared to other regularizers.
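For illustration, below is a minimal sketch (not the authors' implementation) of the idea in PyTorch and scikit-learn: the L1-orthogonality penalty ||W^T W - I||_1 is added to the task loss for each weight matrix, and a small decision tree is afterwards fitted to the trained network's predictions as an interpretable surrogate. The network architecture, the regularization strength `lam`, the tree depth, and the synthetic data are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

torch.manual_seed(0)
X_train = torch.randn(512, 20)                         # placeholder data (assumed)
y_train = (X_train[:, 0] + X_train[:, 1] > 0).long()   # placeholder labels (assumed)

def l1_orthogonal_penalty(model):
    """Sum of ||W^T W - I||_1 over all weight matrices of the model."""
    penalty = 0.0
    for layer in model.modules():
        if isinstance(layer, nn.Linear):
            W = layer.weight                            # shape (out, in)
            eye = torch.eye(W.shape[1], device=W.device)
            penalty = penalty + (W.t() @ W - eye).abs().sum()
    return penalty

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
lam = 1e-3  # assumed regularization strength

# Train the NN with the L1-orthogonality penalty added to the task loss.
for step in range(300):
    optimizer.zero_grad()
    loss = criterion(model(X_train), y_train) + lam * l1_orthogonal_penalty(model)
    loss.backward()
    optimizer.step()

# Surrogate: fit a small decision tree to the trained network's predictions.
with torch.no_grad():
    nn_labels = model(X_train).argmax(dim=1).numpy()
surrogate = DecisionTreeClassifier(max_depth=4).fit(X_train.numpy(), nn_labels)

# Fidelity: how often the tree agrees with the NN it approximates.
fidelity = (surrogate.predict(X_train.numpy()) == nn_labels).mean()
print(f"surrogate fidelity: {fidelity:.3f}")
```

The fidelity measure at the end reflects the paper's evaluation criterion: a good surrogate should reproduce the NN's decisions, and the claim is that L1-orthogonal regularization lets a shallower tree reach a given fidelity than other regularizers would.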

