arXiv:2311.15317 [cs.LG]

Generalized Graph Prompt: Toward a Unification of Pre-Training and Downstream Tasks on Graphs

Xingtong Yu, Zhenghao Liu, Yuan Fang, Zemin Liu, Sihong Chen, Xinming Zhang

Published 2023-11-26, Version 1

Graph neural networks have emerged as a powerful tool for graph representation learning, but their performance heavily relies on abundant task-specific supervision. To reduce the labeling requirement, the "pre-train, prompt" paradigm has become increasingly common. However, existing studies of prompting on graphs are limited, lacking a universal treatment that appeals to different downstream tasks. In this paper, we propose GraphPrompt, a novel pre-training and prompting framework on graphs. GraphPrompt not only unifies pre-training and downstream tasks into a common task template but also employs a learnable prompt to assist a downstream task in locating the most relevant knowledge from the pre-trained model in a task-specific manner. To further enhance GraphPrompt in these two stages, we extend it into GraphPrompt+ with two major enhancements. First, we generalize several popular graph pre-training tasks beyond simple link prediction to broaden compatibility with our task template. Second, we propose a more generalized prompt design that incorporates a series of prompt vectors within every layer of the pre-trained graph encoder, in order to capitalize on the hierarchical information across different layers beyond just the readout layer. Finally, we conduct extensive experiments on five public datasets to evaluate and analyze GraphPrompt and GraphPrompt+.
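To make the two prompt designs concrete, below is a minimal PyTorch sketch of (a) a learnable prompt vector that reweights node embeddings before pooling, in the spirit of GraphPrompt's readout-level prompt, and (b) a per-layer generalization in the spirit of GraphPrompt+. The class names, shapes, and the weighted-sum combination of layers are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PromptedReadout(nn.Module):
    """Readout-level prompt: a learnable vector reweights node embeddings
    element-wise before sum-pooling, letting each downstream task emphasize
    different dimensions of a frozen pre-trained encoder's output."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.ones(hidden_dim))

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: (num_nodes, hidden_dim) from a frozen pre-trained GNN
        return (self.prompt * node_emb).sum(dim=0)

class LayerwisePrompts(nn.Module):
    """Layer-wise generalization (sketch): one prompt vector per encoder
    layer, applied to that layer's node embeddings; the prompted per-layer
    readouts are combined by a learned weighted sum (an assumption here)."""

    def __init__(self, num_layers: int, hidden_dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.ones(num_layers, hidden_dim))
        self.layer_weights = nn.Parameter(torch.ones(num_layers))

    def forward(self, layer_embs: list[torch.Tensor]) -> torch.Tensor:
        # layer_embs: per-layer node embeddings, each (num_nodes, hidden_dim)
        pooled = torch.stack(
            [(p * h).sum(dim=0) for p, h in zip(self.prompts, layer_embs)]
        )  # (num_layers, hidden_dim)
        weights = self.layer_weights.softmax(dim=0).unsqueeze(1)
        return (weights * pooled).sum(dim=0)

# Usage: only the prompt parameters are tuned; the encoder stays frozen.
h_per_layer = [torch.randn(5, 64) for _ in range(3)]  # toy 3-layer outputs
print(PromptedReadout(64)(h_per_layer[-1]).shape)     # torch.Size([64])
print(LayerwisePrompts(3, 64)(h_per_layer).shape)     # torch.Size([64])
```

In this reading, prompt tuning touches only a handful of vectors per task, which is what lets one pre-trained encoder serve many downstream tasks under the shared task template.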

Comments: 28 pages. Under review. arXiv admin note: substantial text overlap with arXiv:2302.08043
Categories: cs.LG