arXiv:2012.04930 [cs.LG]

Distributed Training of Graph Convolutional Networks using Subgraph Approximation

Alexandra Angerd, Keshav Balasubramanian, Murali Annavaram

Published 2020-12-09 (Version 1)

Modern machine learning techniques are being successfully adapted to data modeled as graphs. However, many real-world graphs are very large and do not fit in memory, often making the problem of training machine learning models on them intractable. Distributed training has been successfully employed to alleviate memory problems and speed up training in machine learning domains in which the input data is assumed to be independent and identically distributed (i.i.d.). However, distributing the training on non-i.i.d. data, such as the graphs used as training inputs in Graph Convolutional Networks (GCNs), causes accuracy problems because information is lost at the graph partitioning boundaries. In this paper, we propose a training strategy that mitigates the information lost across multiple partitions of a graph through a subgraph approximation scheme. Our proposed approach augments each subgraph with a small amount of edge and vertex information approximated from all other subgraphs. The subgraph approximation approach helps the distributed training system converge to single-machine accuracy, while keeping the memory footprint low and minimizing synchronization overhead between machines.
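The abstract does not spell out how the approximation is computed, so the following is only a minimal sketch of the general idea under assumptions of my own: a graph is split into partitions, edges that cross a partition boundary are tracked, and each partition is augmented with "ghost" vertices whose features summarize (here by a simple mean) the remote endpoints on every other partition. All function names, the block partitioner, and the averaging rule are hypothetical illustrations, not the authors' method.

# Illustrative sketch (Python): partition a graph, then augment each partition
# with approximated summaries of its remote (cross-partition) neighbors.
# This is NOT the paper's implementation; names and the averaging rule are assumptions.

import numpy as np

def partition_vertices(num_vertices, num_parts):
    """Assign vertices to partitions in contiguous blocks (stand-in for a real partitioner)."""
    block = (num_vertices + num_parts - 1) // num_parts
    return {v: v // block for v in range(num_vertices)}

def build_local_subgraphs(edges, part_of, num_parts):
    """Split edges into per-partition subgraphs, recording edges that cross partitions."""
    local_edges = [[] for _ in range(num_parts)]
    cut_edges = [[] for _ in range(num_parts)]
    for u, v in edges:
        if part_of[u] == part_of[v]:
            local_edges[part_of[u]].append((u, v))
        else:
            # Boundary edge: remember it on both sides so each partition
            # knows which remote vertices it would otherwise lose.
            cut_edges[part_of[u]].append((u, v))
            cut_edges[part_of[v]].append((v, u))
    return local_edges, cut_edges

def approximate_remote_neighbors(cut_edges, part_of, features):
    """For each partition, build one approximated 'ghost' vertex per remote partition,
    carrying the mean feature vector of the remote endpoints of its cut edges."""
    augmented = []
    for edges_p in cut_edges:
        ghosts = {}  # remote partition id -> list of remote feature vectors
        for _local_v, remote_v in edges_p:
            ghosts.setdefault(part_of[remote_v], []).append(features[remote_v])
        ghost_features = {q: np.mean(vecs, axis=0) for q, vecs in ghosts.items()}
        augmented.append(ghost_features)
    return augmented

if __name__ == "__main__":
    # Toy graph: a 6-vertex cycle, 2 partitions, random 4-d vertex features.
    rng = np.random.default_rng(0)
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
    features = rng.normal(size=(6, 4))
    part_of = partition_vertices(6, 2)
    local, cut = build_local_subgraphs(edges, part_of, 2)
    ghosts = approximate_remote_neighbors(cut, part_of, features)
    for p in range(2):
        print(f"partition {p}: {len(local[p])} local edges, "
              f"{len(cut[p])} cut edges, ghost vertices from partitions {sorted(ghosts[p])}")

In this toy setup each partition trains on its local edges plus the small, fixed-size ghost summaries, which is one way the memory footprint and cross-machine synchronization could stay low while boundary information is still approximated rather than dropped.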

Related articles:
arXiv:2302.00845 [cs.LG] (Published 2023-02-02)
Scale up with Order: Finding Good Data Permutations for Distributed Training
arXiv:1910.00942 [cs.LG] (Published 2019-10-02)
Keep It Simple: Graph Autoencoders Without Graph Convolutional Networks
arXiv:2208.09309 [cs.LG] (Published 2022-08-19)
Graph Convolutional Networks from the Perspective of Sheaves and the Neural Tangent Kernel