arXiv Analytics

arXiv:2405.16224 [cs.LG]

Negative as Positive: Enhancing Out-of-distribution Generalization for Graph Contrastive Learning

Zixu Wang, Bingbing Xu, Yige Yuan, Huawei Shen, Xueqi Cheng

Published 2024-05-25, Version 1

Graph contrastive learning (GCL), the dominant paradigm in graph pre-training, has yielded considerable progress. Nonetheless, its capacity for out-of-distribution (OOD) generalization has been relatively underexplored. In this work, we point out that the traditional optimization of InfoNCE in GCL restricts cross-domain pairs to being negative samples only, which inevitably enlarges the distribution gap between different domains. This violates the requirement of domain invariance under the OOD scenario and consequently impairs the model's OOD generalization performance. To address this issue, we propose a novel strategy, "Negative as Positive", where the most semantically similar cross-domain negative pairs are treated as positive during GCL. Our experimental results, spanning a wide array of datasets, confirm that this method substantially improves the OOD generalization performance of GCL.
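The sketch below illustrates the core idea in PyTorch: an InfoNCE-style loss in which, for each anchor, the most similar cross-domain samples are relabeled as positives instead of being pushed apart. It is a minimal illustration only, not the paper's implementation; the function name nap_info_nce, the top-k selection rule, the two-view batch layout, and the multi-positive averaging are all assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def nap_info_nce(z, domain, tau=0.5, top_k=1):
    """Illustrative "Negative as Positive" InfoNCE variant (not the authors' code).

    z:      (N, d) embeddings of two augmented views; assumes an even N where
            sample i and sample (i + N//2) % N are two views of the same instance
    domain: (N,) integer domain labels for each embedding
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                  # scaled cosine similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    # Standard positives: the other augmented view of the same instance.
    pos_mask = torch.roll(eye, shifts=n // 2, dims=1)

    # Cross-domain pairs are ordinarily negatives; pick, per anchor, the top-k
    # most similar ones and add them to the positive set ("Negative as Positive").
    cross = domain.unsqueeze(0) != domain.unsqueeze(1)
    masked_sim = sim.masked_fill(~cross, float('-inf'))
    topk_idx = masked_sim.topk(top_k, dim=1).indices
    nap_mask = torch.zeros_like(pos_mask).scatter_(1, topk_idx, True) & cross
    pos_mask = pos_mask | nap_mask

    # Multi-positive InfoNCE: average log-probability over all positives.
    logits = sim.masked_fill(eye, float('-inf'))           # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()
```

In this toy formulation, the only change relative to plain InfoNCE is the nap_mask term: without it, the top-k similar cross-domain samples would sit in the denominator as negatives and the loss gradient would drive domains apart, which is the distribution-gap effect the abstract describes.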

Comments: 5 pages, 5 figures, In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '24), July 14-18, 2024, Washington, DC, USA
Categories: cs.LG, cs.AI
Subjects: I.2
Related articles:
arXiv:2304.10045 [cs.LG] (Published 2023-04-20)
ID-MixGCL: Identity Mixup for Graph Contrastive Learning
arXiv:2010.13902 [cs.LG] (Published 2020-10-22)
Graph Contrastive Learning with Augmentations
arXiv:2410.20356 [cs.LG] (Published 2024-10-27)
Uncovering Capabilities of Model Pruning in Graph Contrastive Learning