arXiv:1905.02850 [cs.LG]

Understanding attention in graph neural networks

Boris Knyazev, Graham W. Taylor, Mohamed R. Amer

Published: 2019-05-08 (Version 1)

We aim to better understand attention over nodes in graph neural networks (GNNs) and to identify the factors influencing its effectiveness. Motivated by insights from the work on Graph Isomorphism Networks (Xu et al., 2019), we design simple graph reasoning tasks that allow us to study attention in a controlled environment. We find that under typical conditions the effect of attention is negligible or even harmful, but that under certain conditions it provides an exceptional gain of more than 40% in some of our classification tasks. However, satisfying these conditions in practice remains an open challenge.
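To make "attention over nodes" concrete, the following is a minimal sketch (not the authors' code) of one common form: a soft-attention readout, where each node receives a learned score, scores are normalized with a softmax, and the graph embedding is the attention-weighted sum of node features. The class name `AttentionPooling` and all dimensions are illustrative assumptions.

```python
# A minimal sketch of soft attention over nodes in a GNN readout.
# Hypothetical illustration, not the implementation from the paper.
import torch
import torch.nn as nn


class AttentionPooling(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One learned score per node, computed from its feature vector.
        self.score = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) node features for a single graph.
        alpha = torch.softmax(self.score(h), dim=0)  # (num_nodes, 1), sums to 1
        return (alpha * h).sum(dim=0)                # (dim,) graph embedding


# Usage: pool 5 nodes with 16-dimensional features into one graph vector.
pool = AttentionPooling(16)
h = torch.randn(5, 16)
g = pool(h)
print(g.shape)  # torch.Size([16])
```

If the attention weights concentrate on task-relevant nodes, this readout can filter out noise; if they are poorly initialized or trained, it can perform worse than uniform mean pooling, which is consistent with the paper's observation that attention's effect ranges from harmful to highly beneficial.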

Comments: 8 pages, 2 tables, 5 figures, ICLR 2019 Workshop on Representation Learning on Graphs and Manifolds
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:1912.10206 [cs.LG] (Published 2019-12-21)
How Robust Are Graph Neural Networks to Structural Noise?
arXiv:1803.07710 [cs.LG] (Published 2018-03-21)
Inference in Probabilistic Graphical Models by Graph Neural Networks
KiJung Yoon et al.
arXiv:2002.02046 [cs.LG] (Published 2020-02-06)
Supervised Learning on Relational Databases with Graph Neural Networks