Explaining a Series of Models by Propagating Local Feature Attributions
Pipelines composed of a series of machine learning models (e.g., stacked generalization ensembles, neural network feature extractors) improve performance in many domains but are difficult to understand. To improve their transparency, we introduce DeepSHAP, a framework that propagates local feature attributions through complex pipelines of models based on a connection to the Shapley value. Our framework enables us to (1) draw higher-level conclusions based on groups of gene expression features for Alzheimer's disease and breast cancer histologic grade prediction, (2) gain insight into the errors a mortality prediction model makes by explaining a loss that is a non-linear transformation of the model's output, (3) explain pipelines of deep feature extractors fed into a tree model for MNIST digit classification, and (4) interpret important consumer scores and raw features in a stacked generalization setting to predict risk for home equity line of credit applications. Importantly, in the consumer scoring example, DeepSHAP is, to our knowledge, the only feature attribution technique that allows independent entities (e.g., lending institutions, credit bureaus) to compute attributions for the original features without sharing their proprietary models. In quantitative comparisons with model-agnostic approaches, our framework is an order of magnitude faster while providing equally salient explanations. In addition, we describe how to incorporate an empirical baseline distribution, which allows us to (1) demonstrate the bias of previous approaches that rely on a single baseline sample and (2) present a straightforward methodology for choosing meaningful baseline distributions.
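As a rough illustration of the chaining idea, the sketch below (Python, using the open-source `shap` library with a TensorFlow/Keras backend) explains a toy pipeline of a neural feature extractor fed into a tree model, as in example (3) above. All model and data names here are hypothetical, and the final redistribution step is a naive proportional rule standing in for the paper's actual propagation; see the full text for the exact method.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from tensorflow import keras

# --- Hypothetical two-stage pipeline (models and data are illustrative) ---
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype(np.float32)

# Stage 1: a small neural feature extractor mapping 10 raw -> 4 deep features.
extractor = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(4),
])
Z = extractor.predict(X, verbose=0)  # deep features

# Stage 2: a tree model trained on the deep features (toy target for demo).
y = Z.sum(axis=1) + 0.1 * rng.normal(size=len(Z))
tree = RandomForestRegressor(n_estimators=50, random_state=0).fit(Z, y)

# Attributions for the tree model over the deep features: shape (5, 4).
phi_tree = shap.TreeExplainer(tree).shap_values(Z[:5])

# Attributions of each deep feature with respect to the raw inputs.
background = X[rng.choice(len(X), 100, replace=False)]
phi_deep = shap.DeepExplainer(extractor, background).shap_values(X[:5])
if isinstance(phi_deep, np.ndarray):  # newer shap versions stack outputs
    phi_deep = [phi_deep[..., k] for k in range(phi_deep.shape[-1])]

# Naive chaining: redistribute each deep feature's attribution to the raw
# features in proportion to that deep feature's own attributions. This is a
# simplified stand-in for the propagation rule developed in the paper.
phi_raw = np.zeros((5, 10))
for k, phi_k in enumerate(phi_deep):
    totals = phi_k.sum(axis=1, keepdims=True)
    share = np.divide(phi_k, totals, out=np.zeros_like(phi_k),
                      where=np.abs(totals) > 1e-8)
    phi_raw += phi_tree[:, [k]] * share
```

Note that each stage's attributions are computed locally, so neither stage needs access to the other's internals; this is the property that enables the multi-entity consumer scoring setting described above.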
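The abstract's last point concerns the choice of baseline. Below is a minimal sketch of the difference, again using `shap.DeepExplainer` with a hypothetical model and data: attributions computed against a single reference sample depend on that one choice, whereas passing a sample of real data points averages attributions over an empirical baseline distribution.

```python
import numpy as np
import shap
from tensorflow import keras

# Hypothetical model and data, for illustration only.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1),
])
X_train = np.random.default_rng(0).normal(size=(1000, 20)).astype(np.float32)
X_explain = X_train[:5]

# Single-baseline reference: attributions are measured against one point
# (here all zeros), which can bias explanations toward that one choice.
single_baseline = np.zeros((1, 20), dtype=np.float32)
phi_single = shap.DeepExplainer(model, single_baseline).shap_values(X_explain)

# Empirical baseline distribution: attributions are averaged over a sample
# of real data points, as the abstract advocates.
background = X_train[np.random.default_rng(1).choice(1000, 100, replace=False)]
phi_dist = shap.DeepExplainer(model, background).shap_values(X_explain)
```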