arXiv Analytics

arXiv:2310.00183 [cs.LG]

On the Equivalence of Graph Convolution and Mixup

Xiaotian Han, Hanqing Zeng, Yu Chen, Shaoliang Nie, Jingzhou Liu, Kanika Narang, Zahra Shakeri, Karthik Abinav Sankararaman, Song Jiang, Madian Khabsa, Qifan Wang, Xia Hu

Published 2023-09-29, Version 1

This paper investigates the relationship between graph convolution and Mixup. Graph convolution in a graph neural network aggregates features from a node's neighbors to learn a representation for that node. Mixup, on the other hand, is a data augmentation technique that generates new examples by convexly combining the features and one-hot labels of multiple samples. The two techniques share a key trait: both derive a representation from information spread across multiple samples. This study explores whether a formal connection exists between them. Our investigation reveals that, under two mild conditions, graph convolution can be viewed as a specialized form of Mixup that is applied during both training and testing. The two conditions are: 1) Homophily Relabel, assigning the target node's label to all of its neighbors, and 2) Test-Time Mixup, applying Mixup to the features at test time. We establish this equivalence mathematically by showing that the graph convolutional network (GCN) and simplified graph convolution (SGC) can each be expressed as a form of Mixup. We also verify the equivalence empirically by training an MLP under these two conditions and achieving comparable performance.
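To make the claimed equivalence concrete, the following is a minimal NumPy sketch (an illustration of the idea, not code from the paper). It shows that one graph convolution step with self-loops and row (random-walk) normalization yields exactly the same node features as a Mixup-style convex combination of each node's self-inclusive neighborhood, using uniform mixing weights:

import numpy as np

# Toy undirected graph on 4 nodes: edges 0-1, 0-2, 1-2, 2-3.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
X = np.random.rand(4, 3)              # node feature matrix

# Graph convolution step: add self-loops, row-normalize, aggregate.
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H_conv = D_inv @ A_hat @ X

# Mixup view: each node's new feature is a convex combination of the
# features of its neighbors (including itself), with uniform weights.
H_mixup = np.zeros_like(X)
for i in range(4):
    nbrs = np.flatnonzero(A_hat[i])   # neighborhood of i, incl. i
    lam = 1.0 / len(nbrs)             # uniform lambda coefficients
    H_mixup[i] = sum(lam * X[j] for j in nbrs)

assert np.allclose(H_conv, H_mixup)   # identical node features

Here the uniform weights play the role of Mixup's lambda coefficients. Test-Time Mixup corresponds to performing this same feature averaging at inference time, while Homophily Relabel additionally assigns the target node's one-hot label to each of its neighbors during training.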

Related articles:
arXiv:1809.02721 [cs.LG] (Published 2018-09-08)
Learning to Solve NP-Complete Problems - A Graph Neural Network for the Decision TSP
arXiv:1909.11715 [cs.LG] (Published 2019-09-25)
GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning
arXiv:1905.06707 [cs.LG] (Published 2019-05-16)
Inferring Javascript types using Graph Neural Networks