arXiv:1905.13686 [cs.LG]
Explainability Techniques for Graph Convolutional Networks
Federico Baldassarre, Hossein Azizpour
Published 2019-05-31 (Version 1)
Graph Networks are used to make decisions in potentially complex scenarios, but it is usually not obvious how or why a particular decision was made. In this work, we study the explainability of Graph Network decisions using two main classes of techniques, gradient-based and decomposition-based, on a toy dataset and a chemistry task. Our study sets the ground for future development as well as application to real-world problems.
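As a rough illustration of the gradient-based class of techniques mentioned above (this is a minimal sketch, not the authors' implementation): for a one-layer graph convolution y = ReLU(Â X W), the saliency of an output unit with respect to the node features can be computed in closed form by backpropagating through the layer. The graph, feature sizes, and weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative graph: 4 nodes on a path, adjacency matrix A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
A_tilde = A + np.eye(4)
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))

X = rng.normal(size=(4, 3))   # node features (assumed sizes)
W = rng.normal(size=(3, 2))   # layer weights

Z = A_hat @ X @ W             # pre-activation
Y = np.maximum(Z, 0)          # ReLU

# Gradient-based explanation: sensitivity of one output unit
# (node 0, class 0) to every input feature, via manual backprop:
# dX = Â^T @ (relu_mask * dY) @ W^T
dY = np.zeros_like(Y)
dY[0, 0] = 1.0
grad_X = A_hat.T @ ((Z > 0) * dY) @ W.T

# Per-node importance score = L2 norm of its feature gradient.
saliency = np.linalg.norm(grad_X, axis=1)
print(saliency)
```

Note that only nodes in the 1-hop neighborhood of node 0 (here, nodes 0 and 1) can receive a nonzero score, since a single graph convolution only aggregates over immediate neighbors; deeper networks spread saliency further.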
Comments: Accepted at the ICML 2019 Workshop "Learning and Reasoning with Graph-Structured Representations" (poster + spotlight talk)
Related articles:
arXiv:2003.13606 [cs.LG] (Published 2020-03-30)
L^2-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
arXiv:1904.00326 [cs.LG] (Published 2019-03-31)
MedGCN: Graph Convolutional Networks for Multiple Medical Tasks
arXiv:1805.01837 [cs.LG] (Published 2018-05-04)
Towards a Spectrum of Graph Convolutional Networks