Revisiting Embeddings for Graph Neural Networks
Current graph representation learning techniques use Graph Neural Networks (GNNs) to extract features from dataset embeddings. In this work, we examine the quality of these embeddings and assess how changing them affects the accuracy of GNNs. We explore different embedding extraction techniques for both images and text. We find that the choice of embedding biases the performance of different GNN architectures, so the embedding influences the selection of a GNN regardless of the underlying dataset. In addition, only some GNN models improve on the accuracy of models trained from scratch or fine-tuned on the underlying data without utilizing the graph connections. As an alternative, we propose Graph-connected Network (GraNet) layers, which use GNN message passing within large models to allow neighborhood aggregation. This gives the model a chance to inherit weights from large pre-trained models where possible, and we demonstrate that this approach improves accuracy over the previous methods: on Flickr_v2, GraNet beats GAT2 and GraphSAGE by 7.7
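The abstract describes GraNet layers as combining a large model's per-node transform with GNN message passing over graph neighbors. The paper does not give the layer's exact form, so the following is only a minimal sketch of that idea: each node's features pass through a (potentially pre-trained) linear transform, while a mean aggregation over its neighbors supplies the message-passing term. All names (`granet_layer`, `W_pre`, `W_nbr`) are illustrative, not from the paper.

```python
import numpy as np

def granet_layer(h, adj, W_pre, W_nbr):
    """Hypothetical GraNet-style layer (illustrative only):
    combine a per-node linear transform, whose weights W_pre could be
    inherited from a pre-trained model, with mean aggregation of
    neighbor features (message passing), then apply ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero for isolated nodes
    nbr_mean = (adj @ h) / deg               # mean over each node's graph neighbors
    return np.maximum(h @ W_pre + nbr_mean @ W_nbr, 0.0)

# Toy example: 3 nodes with 4-dim features, undirected edges 0-1 and 1-2.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
out = granet_layer(h, adj, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print(out.shape)  # -> (3, 4): one updated feature vector per node
```

A stack of such layers would let neighborhood information flow through an otherwise standard pre-trained backbone, which is the mechanism the abstract credits for the accuracy gains.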