Learning Explainable Models Using Attribution Priors
Two important topics in deep learning both involve incorporating humans into the modeling process: model priors transfer information from humans to a model by constraining the model's parameters, while model attributions transfer information from a model to humans by explaining the model's behavior. We propose connecting these topics with attribution priors (https://github.com/suinleelab/attributionpriors), which allow humans to use the common language of attributions to enforce prior expectations about a model's behavior during training. We develop a differentiable, axiomatic feature attribution method called expected gradients and show how to directly regularize these attributions during training. We demonstrate the broad applicability of attribution priors (Ω) with three examples that regularize models to behave more intuitively in three different domains: 1) on image data, Ω_pixel encourages models to have piecewise smooth attribution maps; 2) on gene expression data, Ω_graph encourages models to treat functionally related genes similarly; 3) on a health care dataset, Ω_sparse encourages models to rely on fewer features. In all three domains, attribution priors produce models with more intuitive behavior and better generalization performance by encoding constraints that would otherwise be very difficult to encode using standard model priors.
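To make the approach concrete, below is a minimal sketch (not the authors' released code) of how an attribution prior can be applied during training: a Monte Carlo estimate of expected gradients attributions is computed on each batch, and a penalty Ω on those attributions is added to the task loss. The function and variable names (`expected_gradients`, `training_step`, `background`, `lam`) are illustrative assumptions, and the simple L1-style penalty stands in for the sparsity prior described in the paper, which uses a different functional form.

```python
import torch

def expected_gradients(model, x, background, n_samples=16):
    """Monte Carlo estimate of expected gradients attributions for inputs x.

    background: a batch of reference samples drawn from the training data.
    Summing the model outputs is a simplification; a class-specific output
    would typically be used for classification.
    """
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        # Sample a baseline x' from the background data and an
        # interpolation coefficient alpha ~ Uniform(0, 1).
        idx = torch.randint(0, background.shape[0], (x.shape[0],))
        baseline = background[idx]
        alpha = torch.rand(x.shape[0], *([1] * (x.dim() - 1)), device=x.device)

        # Interpolated point x' + alpha * (x - x'), with gradients enabled.
        interp = (baseline + alpha * (x - baseline)).requires_grad_(True)
        output = model(interp).sum()
        grads = torch.autograd.grad(output, interp, create_graph=True)[0]

        # Accumulate (x_i - x'_i) * dF/dx_i, averaged over samples.
        attributions += (x - baseline) * grads / n_samples
    return attributions

def training_step(model, optimizer, x, y, background, lam=0.1):
    """One training step: task loss plus an attribution prior penalty Ω."""
    optimizer.zero_grad()
    task_loss = torch.nn.functional.cross_entropy(model(x), y)
    eg = expected_gradients(model, x, background)
    omega = eg.abs().mean()          # illustrative sparsity-style penalty
    loss = task_loss + lam * omega
    loss.backward()                  # differentiates through the attributions
    optimizer.step()
    return loss.item()
```

Because the attributions are built from gradients computed with `create_graph=True`, the penalty Ω is itself differentiable with respect to the model parameters, which is what allows the prior to be enforced directly during training.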