arXiv:2010.10711 [cs.LG]

On the Global Self-attention Mechanism for Graph Convolutional Networks

Chen Wang, Chengyuan Deng

Published 2020-10-21 (Version 1)

Applying the Global Self-attention (GSA) mechanism over features has achieved remarkable success in Convolutional Neural Networks (CNNs). However, it is not clear whether Graph Convolutional Networks (GCNs) can benefit from such a technique in a similar way. In this paper, inspired by the similarity between CNNs and GCNs, we study the impact of the Global Self-attention mechanism on GCNs. We find that, consistent with intuition, the GSA mechanism allows GCNs to capture feature-based vertex relations regardless of edge connections; as a result, the GSA mechanism can introduce extra expressive power to GCNs. Furthermore, we analyze the impact of the GSA mechanism on the issues of overfitting and over-smoothing. Based on some recent technical developments, we prove that the GSA mechanism can alleviate both the overfitting and the over-smoothing issues. Experiments on multiple benchmark datasets show both superior expressive power and less severe overfitting and over-smoothing for GSA-augmented GCNs, corroborating the intuitions and the theoretical results.
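The abstract does not spell out the exact formulation, so the following is only a minimal illustrative sketch of what a GSA-augmented GCN layer could look like: a standard GCN aggregation over the normalized adjacency, combined with a global self-attention step in which every vertex attends to every other vertex based on features alone. The class name GSAGCNLayer, the use of a residual combination, and the single-head attention are assumptions for illustration, not the authors' method.

```python
# A minimal sketch (assumed formulation, not the paper's exact architecture)
# of a GCN layer augmented with global self-attention over node features.
import torch
import torch.nn as nn


class GSAGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.gcn_transform = nn.Linear(in_dim, out_dim)  # standard GCN feature transform
        # Global self-attention over all vertices (single head, assumed for simplicity)
        self.attn = nn.MultiheadAttention(out_dim, num_heads=1, batch_first=True)

    def forward(self, x, adj_norm):
        # x: (N, in_dim) node features; adj_norm: (N, N) normalized adjacency matrix
        h = adj_norm @ self.gcn_transform(x)  # neighborhood aggregation along edges
        # Every node attends to every other node, capturing feature-based
        # vertex relations independent of the edge structure.
        h_attn, _ = self.attn(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        # Residual combination of graph-based and attention-based signals (assumed)
        return torch.relu(h + h_attn.squeeze(0))


# Usage sketch on a toy graph with 4 nodes and 8-dimensional features
x = torch.randn(4, 8)
adj_norm = torch.eye(4)  # placeholder for a properly normalized adjacency
layer = GSAGCNLayer(8, 16)
out = layer(x, adj_norm)  # (4, 16)
```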

Related articles:
arXiv:2006.09030 [cs.LG] (Published 2020-06-16)
Relational Fusion Networks: Graph Convolutional Networks for Road Networks
arXiv:1609.02907 [cs.LG] (Published 2016-09-09)
Semi-Supervised Classification with Graph Convolutional Networks
arXiv:1801.07606 [cs.LG] (Published 2018-01-22)
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning