DOTIN: Dropping Task-Irrelevant Nodes for GNNs

04/28/2022
by Shaofeng Zhang, et al.

Scalability is an important consideration for deep graph neural networks. Inspired by the conventional pooling layers in CNNs, many recent graph learning approaches introduce a pooling strategy to reduce the size of graphs, improving scalability and efficiency. However, these pooling-based methods are mainly tailored to a single graph-level task and attend more to local information, limiting their performance in multi-task settings, which often require task-specific global information. In this paper, departing from these pooling-based efforts, we design a new approach called DOTIN (Dropping Task-Irrelevant Nodes) to reduce the size of graphs. Specifically, by introducing K learnable virtual nodes to represent the graph embeddings targeted to K different graph-level tasks, up to 90% of the raw nodes, namely those with low attentiveness under an attention model (a transformer in this paper), can be adaptively dropped without notable performance degradation. At almost the same accuracy, our method speeds up GAT by about 50% on graph-level tasks, including graph classification and graph edit distance (GED), with about 60% less memory on the D&D dataset. Code will be made publicly available at https://github.com/Sherrylone/DOTIN.
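The abstract only sketches the mechanism, but the core idea of scoring raw nodes by their attentiveness to K learnable virtual nodes and keeping just the top-scoring fraction can be illustrated briefly. The PyTorch sketch below is a hypothetical reconstruction under stated assumptions, not the authors' implementation: the class name `VirtualNodeDropper`, the dot-product attention, the cross-task score aggregation, and the `keep_ratio` parameter are all illustrative choices.

```python
import torch
import torch.nn as nn

class VirtualNodeDropper(nn.Module):
    """Hypothetical DOTIN-style node dropping: K learnable virtual nodes
    (one per graph-level task) attend over the raw node embeddings, and
    raw nodes with the lowest aggregate attentiveness are dropped."""

    def __init__(self, dim: int, num_tasks: int, keep_ratio: float = 0.1):
        super().__init__()
        # One learnable virtual node per task (assumed shape: [K, dim]).
        self.virtual_nodes = nn.Parameter(torch.randn(num_tasks, dim))
        self.keep_ratio = keep_ratio
        self.scale = dim ** -0.5  # standard scaled dot-product factor

    def forward(self, x: torch.Tensor):
        # x: [N, dim] node embeddings for a single graph.
        # Attention of each virtual node over all raw nodes: [K, N].
        attn = (self.virtual_nodes @ x.t() * self.scale).softmax(dim=-1)
        # Aggregate attentiveness per raw node across the K tasks: [N].
        scores = attn.sum(dim=0)
        # Keep only the most-attended nodes, e.g. 10% when dropping up to 90%.
        k = max(1, int(self.keep_ratio * x.size(0)))
        keep_idx = scores.topk(k).indices
        return x[keep_idx], keep_idx

# Usage: prune low-attentiveness nodes before later (e.g., GAT) layers.
x = torch.randn(200, 64)                 # 200 nodes, 64-dim embeddings
dropper = VirtualNodeDropper(dim=64, num_tasks=2, keep_ratio=0.1)
x_kept, idx = dropper(x)                 # x_kept: [20, 64]
```

In this sketch the virtual nodes double as task-specific graph embeddings, which is consistent with the abstract's description, though the actual scoring and dropping schedule in DOTIN may differ.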
