arXiv:2104.04040 [cs.LG]

Scaling up graph homomorphism for classification via sampling

Paul Beaujean, Florian Sikora, Florian Yger

Published 2021-04-08 (Version 1)

Feature generation is an open topic of investigation in graph machine learning. In this paper, we study the use of graph homomorphism density features as a scalable alternative to homomorphism numbers that retains similar theoretical properties and the ability to take inductive bias into account. To this end, we propose a high-performance implementation of a simple sampling algorithm that computes additive approximations of homomorphism densities. In the context of graph machine learning, we demonstrate experimentally that simple linear models trained on sampled homomorphism densities can achieve performance comparable to graph neural networks on standard graph classification datasets. Finally, we show in experiments on synthetic data that this algorithm scales to very large graphs when implemented with Bloom filters.
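The abstract only names the estimator, so the following is a minimal sketch of the generic sampling idea, not the authors' implementation: the homomorphism density t(F, G) is the probability that a uniformly random map from V(F) to V(G) preserves all edges of F, so averaging indicator samples yields an additive approximation (by Hoeffding's inequality, roughly log(2/delta) / (2 eps^2) samples suffice for error eps with probability 1 - delta). The pattern graphs, sample count, and the use of networkx are illustrative assumptions; the Bloom-filter variant mentioned in the abstract would presumably replace the edge-membership query with a Bloom-filter lookup over the edge set.

```python
import random
import networkx as nx

def sampled_hom_density(F: nx.Graph, G: nx.Graph, n_samples: int = 10_000) -> float:
    """Additive approximation of the homomorphism density t(F, G) by sampling.

    Illustrative sketch only; parameter choices are assumptions, not taken
    from the paper.
    """
    g_nodes = list(G.nodes)
    hits = 0
    for _ in range(n_samples):
        # Sample a uniformly random (not necessarily injective) map V(F) -> V(G).
        phi = {u: random.choice(g_nodes) for u in F.nodes}
        # Count the sample if every edge of F lands on an edge of G,
        # i.e. phi is a graph homomorphism.
        if all(G.has_edge(phi[u], phi[v]) for u, v in F.edges):
            hits += 1
    return hits / n_samples

# Hypothetical usage: density features for a few small pattern graphs
# (here, cycles), which could feed a linear classifier as described above.
patterns = [nx.cycle_graph(k) for k in range(3, 7)]
G = nx.erdos_renyi_graph(200, 0.05, seed=0)
features = [sampled_hom_density(F, G) for F in patterns]
```

Each sample costs O(|E(F)|) edge-membership queries, which is what makes the estimator attractive at scale compared with exact homomorphism counting.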
