GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner

04/10/2023
by   Zhenyu Hou, et al.

Graph self-supervised learning (SSL), including contrastive and generative approaches, offers great potential to address the fundamental challenge of label scarcity in real-world graph data. Among these techniques, masked graph autoencoders (e.g., GraphMAE), a type of generative method, have recently produced promising results. The idea is to reconstruct node features (or structures) that are randomly masked from the input, using an autoencoder architecture. However, the performance of masked feature reconstruction naturally relies on the discriminability of the input features and is usually vulnerable to disturbance in those features. In this paper, we present GraphMAE2, a masked self-supervised learning framework designed to overcome this issue. The idea is to impose regularization on feature reconstruction for graph SSL. Specifically, we design two strategies, multi-view random re-mask decoding and latent representation prediction, to regularize feature reconstruction: the former introduces randomness into reconstruction in the feature space, while the latter enforces reconstruction in the embedding space. Extensive experiments show that GraphMAE2 consistently achieves top results on various public datasets, including at least a 2.45% improvement on ogbn-Papers100M, which has 111M nodes and 1.6B edges.
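The masking-and-reconstruction loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the encoder and decoder are identity stand-ins (in GraphMAE2 they are GNNs), zeros stand in for the learned mask token, and the mask rates and view count are made-up values. It shows the key idea of multi-view random re-mask decoding: before each decoding pass, a fresh random subset of node representations is re-masked, and the reconstruction loss on the originally masked nodes is averaged over the resulting views.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_rows(x, mask_rate, rng):
    """Zero out a random fraction of node-feature rows; return the
    masked copy and the indices of the masked rows."""
    n = x.shape[0]
    idx = rng.choice(n, size=int(mask_rate * n), replace=False)
    masked = x.copy()
    masked[idx] = 0.0  # zeros stand in for a learnable [MASK] token
    return masked, idx

# Toy node features: 10 nodes, 4 features each (no real graph here).
x = rng.normal(size=(10, 4))

# Encoder input: mask 50% of the node features.
enc_in, mask_idx = mask_rows(x, 0.5, rng)

# Stand-in encoder (identity); GraphMAE2 uses a GNN encoder.
h = enc_in

# Multi-view random re-mask decoding: each decoding view re-masks a
# fresh random subset of the latent representations.
num_views = 3
views = []
for _ in range(num_views):
    dec_in, _ = mask_rows(h, 0.5, rng)
    views.append(dec_in)  # stand-in decoder (identity) per view

# Reconstruction loss on the originally masked nodes, averaged over views.
loss = np.mean([np.mean((v[mask_idx] - x[mask_idx]) ** 2) for v in views])
print(f"masked {len(mask_idx)} nodes, avg reconstruction loss = {loss:.4f}")
```

The latent representation prediction strategy would add a second term that matches `h` at the masked positions against target embeddings in latent space, rather than raw features; it is omitted here to keep the sketch short.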
