arXiv:1910.00942 [cs.LG]

Keep It Simple: Graph Autoencoders Without Graph Convolutional Networks

Guillaume Salha, Romain Hennequin, Michalis Vazirgiannis

Published 2019-10-02 (Version 1)

Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged as powerful node embedding methods, with promising performance on challenging tasks such as link prediction and node clustering. Graph AE, VAE and most of their extensions rely on graph convolutional networks (GCN) to learn vector space representations of nodes. In this paper, we propose to replace the GCN encoder with a simple linear model w.r.t. the adjacency matrix of the graph. For the two aforementioned tasks, we empirically show that this approach consistently reaches performance competitive with GCN-based models on numerous real-world graphs, including the widely used Cora, Citeseer and Pubmed citation networks, which have become the de facto benchmark datasets for evaluating graph AE and VAE. This result questions the relevance of repeatedly using these three datasets to compare complex graph AE and VAE models. It also emphasizes the effectiveness of simple node encoding schemes for many real-world applications.
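The core idea (a linear encoder on the normalized adjacency matrix, followed by the standard inner-product decoder) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy graph, embedding dimension, and random weight matrix are assumptions for the example; in practice the weight matrix W is learned by minimizing a reconstruction loss.

```python
import numpy as np

# Toy 4-node undirected graph (symmetric adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetric degree normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}.
A_tilde = A + np.eye(4)
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(d ** -0.5)
A_norm = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# Linear encoder: Z = A_norm @ W -- no GCN layers, no nonlinearity.
# W is random here purely for illustration; it would be trained.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # embed nodes into 2 dimensions
Z = A_norm @ W

# Inner-product decoder: sigmoid(Z Z^T) gives edge probabilities.
A_hat = 1.0 / (1.0 + np.exp(-Z @ Z.T))
print(A_hat.shape)  # (4, 4), symmetric, entries in (0, 1)
```

The contrast with a GCN encoder is that the mapping from adjacency matrix to embeddings is a single linear transformation, with no stacked message-passing layers or activation functions.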

Comments: NeurIPS 2019 Graph Representation Learning Workshop
Categories: cs.LG, cs.SI, stat.ML
Related articles:
arXiv:2005.04081 [cs.LG] (Published 2020-05-08)
Geometric graphs from data to aid classification tasks with graph convolutional networks
arXiv:1902.09817 [cs.LG] (Published 2019-02-26)
GCN-LASE: Towards Adequately Incorporating Link Attributes in Graph Convolutional Networks
arXiv:1801.07606 [cs.LG] (Published 2018-01-22)
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning