CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

02/05/2021
by Ana Lucic, et al.

Graph neural networks (GNNs) have shown increasing promise in real-world applications, which has led to increased interest in understanding their predictions. However, existing methods for explaining predictions from GNNs do not provide an opportunity for recourse: given a prediction for a particular instance, we want to understand how the prediction can be changed. We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs, i.e., the minimal perturbations to the input graph data such that the prediction changes. Using only edge deletions, we find that we are able to generate counterfactual examples for the majority of instances across three widely used datasets for GNN explanations, while removing fewer than 3 edges on average, with at least 94% accuracy. We also find that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal counterfactual examples.
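To make the notion of a counterfactual explanation concrete, the sketch below searches for a minimal set of edge deletions that flips a classifier's prediction for a target node. This is an illustrative brute-force version of the counterfactual objective, not the authors' method (CF-GNNExplainer learns a differentiable perturbation mask over the adjacency matrix); the toy `predict` function stands in for a trained GNN.

```python
# Illustrative sketch (not the authors' method): find the smallest set of
# edge deletions that changes a classifier's prediction for a target node.
from itertools import combinations

def predict(adj, node):
    # Toy stand-in for a trained GNN: classify a node by its degree parity.
    return sum(adj[node]) % 2

def counterfactual_by_deletion(adj, node, max_deletions=3):
    """Return a minimal list of deleted edges that flips predict(adj, node)."""
    original = predict(adj, node)
    edges = [(i, j) for i in range(len(adj))
             for j in range(i + 1, len(adj)) if adj[i][j]]
    # Try deletion sets of increasing size, so the first hit is minimal.
    for k in range(1, max_deletions + 1):
        for subset in combinations(edges, k):
            perturbed = [row[:] for row in adj]
            for i, j in subset:
                perturbed[i][j] = perturbed[j][i] = 0
            if predict(perturbed, node) != original:
                return list(subset)
    return None  # no counterfactual within the deletion budget

# Triangle graph: node 0 has degree 2, so deleting one incident edge
# flips its degree parity and thus the toy prediction.
adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
print(counterfactual_by_deletion(adj, node=0))  # → [(0, 1)]
```

The exhaustive search makes the minimality guarantee explicit but scales combinatorially; the paper's contribution is obtaining such sparse perturbations efficiently via gradient-based optimization.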
