arXiv Analytics

arXiv:1910.03081 [cs.LG]

On the Interpretability and Evaluation of Graph Representation Learning

Antonia Gogoglou, C. Bayan Bruss, Keegan E. Hines

Published 2019-10-07 (Version 1)

With the rising interest in graph representation learning, a variety of approaches have been proposed to capture a graph's properties effectively. While these approaches improve performance on graph machine learning tasks compared to traditional graph techniques, they offer limited insight into the information encoded in the representations they produce. In this work, we explore methods for interpreting node embeddings and propose a robust evaluation framework for comparing graph representation learning algorithms and their hyperparameters. We test our methods on graphs with different properties and investigate the relationship between embedding training parameters and the ability of the resulting embedding to recover the structure of the original graph in a downstream task.
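As a rough illustration of the downstream structure-recovery evaluation the abstract describes, the sketch below embeds a small toy graph and scores how well inner products between node embeddings rank true edges above non-edges. The graph, the embedding method (a truncated spectral decomposition of the adjacency matrix), and the AUC-style score are illustrative assumptions for this sketch, not the authors' actual setup.

```python
# Hedged sketch: score how well a low-dimensional node embedding
# recovers the edges of the original graph. All choices here (toy
# graph, spectral embedding, inner-product scoring) are assumptions.
import numpy as np

# Toy graph: two 4-node cliques joined by a single bridge edge (3, 4).
A = np.zeros((8, 8))
for block in (range(0, 4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0  # bridge edge between the cliques

# Embed each node with the top-d eigenvectors of the adjacency matrix,
# scaled by their eigenvalues (np.linalg.eigh returns ascending order).
d = 2
vals, vecs = np.linalg.eigh(A)
emb = vecs[:, -d:] * vals[-d:]  # shape (8, d)

# Score every node pair by embedding inner product, then compute the
# fraction of (edge, non-edge) pairs ranked correctly -- a simple
# reconstruction AUC over the upper triangle of the adjacency matrix.
scores = emb @ emb.T
iu = np.triu_indices(8, k=1)
edge_scores = scores[iu][A[iu] == 1]
non_edge_scores = scores[iu][A[iu] == 0]
auc = np.mean(edge_scores[:, None] > non_edge_scores[None, :])
print(f"reconstruction AUC: {auc:.2f}")
```

Sweeping the embedding dimension `d` (or swapping in a learned embedding such as node2vec) and re-running the same reconstruction score is one way to probe the parameter-versus-structure-recovery relationship the abstract mentions.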

Related articles:
arXiv:1811.10469 [cs.LG] (Published 2018-11-21)
How to improve the interpretability of kernel learning
arXiv:2001.02522 [cs.LG] (Published 2020-01-08)
On Interpretability of Artificial Neural Networks
arXiv:2312.16191 [cs.LG] (Published 2023-12-22)
SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning