Task and Model Agnostic Adversarial Attack on Graph Neural Networks

12/25/2021
by Kartik Sharma et al.

Graph neural networks (GNNs) have seen significant adoption in industry owing to their impressive performance on various predictive tasks. Performance alone, however, is not enough: any widely deployed machine learning algorithm must also be robust to adversarial attacks. In this work, we investigate this aspect for GNNs, identify vulnerabilities, and link them to graph properties that may potentially lead to the development of more secure and robust GNNs. Specifically, we formulate the problem of task- and model-agnostic evasion attacks, where adversaries modify the test graph to degrade the performance of any unknown downstream task. The proposed algorithm, GRAND (Graph Attack via Neighborhood Distortion), shows that distorting node neighborhoods is effective in drastically compromising prediction performance. Although neighborhood distortion is an NP-hard problem, GRAND provides an effective heuristic through a novel combination of a Graph Isomorphism Network with deep Q-learning. Extensive experiments on real datasets show that, on average, GRAND is up to 50% more effective than state-of-the-art techniques, while being more than 100 times faster.
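The abstract only gives the high-level recipe (a GIN encoder to measure neighborhood distortion, plus a learned policy to pick perturbations), so the following is a minimal, self-contained PyTorch sketch of that idea, not the paper's implementation. It assumes a hand-rolled GIN layer over a dense adjacency matrix and uses a greedy edge-flip loop as a simple stand-in for the deep Q-learning policy; the names GINEncoder, neighborhood_distortion, and greedy_distortion_attack are illustrative, not from the paper.

import torch
import torch.nn as nn

class GINLayer(nn.Module):
    # One GIN layer: h' = MLP((1 + eps) * h + A h), the sum aggregator
    # that gives GIN its discriminative power.
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h, adj):
        return self.mlp((1 + self.eps) * h + adj @ h)

class GINEncoder(nn.Module):
    # Stacks GIN layers so each node embedding summarizes its k-hop neighborhood.
    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(GINLayer(dim) for _ in range(num_layers))

    def forward(self, h, adj):
        for layer in self.layers:
            h = layer(h, adj)
        return h

def neighborhood_distortion(encoder, x, adj_before, adj_after, targets):
    # Distortion proxy (assumed): how far target-node embeddings move in the
    # task-agnostic embedding space when the graph is perturbed.
    with torch.no_grad():
        h0 = encoder(x, adj_before)
        h1 = encoder(x, adj_after)
    return (h0[targets] - h1[targets]).norm(dim=1).sum().item()

def greedy_distortion_attack(encoder, x, adj, targets, budget):
    # Greedy stand-in for the learned Q-policy: at each step, flip the
    # single edge whose toggle maximizes neighborhood distortion.
    adj = adj.clone()
    n = adj.size(0)
    for _ in range(budget):
        best_gain, best_edge = float("-inf"), None
        for i in range(n):
            for j in range(i + 1, n):
                cand = adj.clone()
                cand[i, j] = cand[j, i] = 1.0 - cand[i, j]
                gain = neighborhood_distortion(encoder, x, adj, cand, targets)
                if gain > best_gain:
                    best_gain, best_edge = gain, (i, j)
        i, j = best_edge
        adj[i, j] = adj[j, i] = 1.0 - adj[i, j]
    return adj

# Toy usage: attack nodes 0 and 1 of a random 8-node graph with a 2-edge budget.
n, d = 8, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
encoder = GINEncoder(d)
adj_attacked = greedy_distortion_attack(encoder, x, adj, targets=torch.tensor([0, 1]), budget=2)

The greedy loop scans all O(n^2) candidate edge flips per step, which is exactly the cost a learned Q-function amortizes: the paper's deep Q-learning component would replace this exhaustive scan with a policy that proposes high-distortion flips directly.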
