Towards Graph Foundation Models: A Study on the Generalization of Positional and Structural Encodings

Billy Joe Franks, Moshe Eliasof, Semih Cantürk, Guy Wolf, Carola Bibiane Schönlieb, Sophie Fellenz, Marius Kloft

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in integrating positional and structural encodings (PSEs) into graph neural networks (GNNs) have significantly enhanced their performance across various graph learning tasks. However, the general applicability of these encodings and their potential to serve as foundational representations for graphs remain uncertain. This paper investigates the fine-tuning efficiency, scalability with sample size, and generalization capability of learnable PSEs across diverse graph datasets. Specifically, we evaluate their potential as universal pretrained models that can be adapted to new tasks with minimal fine-tuning and limited data. Furthermore, we assess the expressivity of the learned representations, particularly when used to augment downstream GNNs. Through extensive benchmarking and empirical analysis, we demonstrate that PSEs generally enhance downstream models, although some datasets require specific PSE augmentations to achieve optimal performance. Our findings highlight the significant potential of PSEs to become integral components of future graph foundation models, and we provide new insights into their strengths and limitations, contributing to the broader discourse on foundation models in graph learning.
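For context on what the PSE augmentation described above looks like in practice, the following is a minimal sketch of one common structural encoding, the random-walk structural encoding (RWSE), concatenated onto node features before they enter a downstream GNN. The function name rwse, the walk-length parameter k, and the toy graph are illustrative assumptions for this sketch, not the paper's exact configuration; the paper studies learnable PSEs, whereas this example uses a fixed, closed-form encoding.

    import numpy as np

    def rwse(adj: np.ndarray, k: int = 8) -> np.ndarray:
        """Random-walk structural encoding (illustrative): for each node,
        the return probabilities of 1..k-step random walks, i.e. the
        diagonals of P^t for t = 1..k."""
        deg = adj.sum(axis=1)
        # Row-normalized transition matrix; guard against isolated nodes.
        P = adj / np.maximum(deg, 1.0)[:, None]
        feats, Pt = [], np.eye(adj.shape[0])
        for _ in range(k):
            Pt = Pt @ P
            feats.append(np.diag(Pt))
        return np.stack(feats, axis=1)  # shape: (num_nodes, k)

    # Augment node features with the PSE before the downstream GNN.
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    x = np.random.randn(4, 16)                       # original node features
    x_aug = np.concatenate([x, rwse(adj)], axis=1)   # (4, 16 + k)

A learnable PSE, as studied in the paper, would replace this fixed mapping with a pretrained network producing representations that can be adapted to new tasks with minimal fine-tuning.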

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2025-February
State: Published - 1 Jan 2025
Externally published: Yes

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
