Towards Efficient Training of Graph Neural Networks: A Multiscale Approach

Research output: Contribution to journal › Article › peer-review

Abstract

Graph Neural Networks (GNNs) have become powerful tools for learning from graph-structured data, finding applications across diverse domains. However, as graph sizes and connectivity increase, standard GNN training methods face significant computational and memory challenges, limiting their scalability and efficiency. In this paper, we present a novel framework for efficient multiscale training of GNNs. Our approach leverages hierarchical graph representations and subgraphs, enabling the integration of information across multiple scales and resolutions. By utilizing coarser graph abstractions and subgraphs, each with fewer nodes and edges, we significantly reduce computational overhead during training. Building on this framework, we propose a suite of scalable training strategies, including coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation. We also provide theoretical analysis of our methods and demonstrate their effectiveness across various datasets and learning tasks. Our results show that multiscale training can substantially accelerate GNN training for large-scale problems while maintaining, or even improving, predictive performance.
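The coarse-to-fine strategy described in the abstract can be illustrated with a minimal, self-contained sketch (this is an assumption about the general technique, not the paper's implementation): a fine graph is coarsened by greedily matching adjacent nodes into super-nodes, a cheap message-passing pass runs on the small coarse graph, and the coarse node states are prolonged back to the fine graph as a warm start. All function names (`coarsen`, `smooth`, `prolong`) are illustrative, not from the paper.

```python
# Sketch of coarse-to-fine graph training (hypothetical, plain Python):
# 1) coarsen the graph by greedy edge matching,
# 2) run cheap "message passing" (here: neighbourhood mean smoothing,
#    a stand-in for an actual GNN layer) on the coarse graph,
# 3) prolong coarse states to the fine graph as an initialization.

def coarsen(adj):
    """Greedy edge matching: merge matched node pairs into super-nodes.

    adj: dict mapping node -> set of neighbours.
    Returns (coarse_adj, parent), where parent[v] is the coarse
    super-node that fine node v was merged into.
    """
    parent, next_id = {}, 0
    for u in sorted(adj):
        if u in parent:
            continue
        # try to match u with any still-unmatched neighbour
        mate = next((v for v in sorted(adj[u]) if v not in parent), None)
        parent[u] = next_id
        if mate is not None:
            parent[mate] = next_id
        next_id += 1
    coarse_adj = {c: set() for c in range(next_id)}
    for u, nbrs in adj.items():
        for v in nbrs:
            if parent[u] != parent[v]:
                coarse_adj[parent[u]].add(parent[v])
    return coarse_adj, parent

def smooth(adj, x, steps=2):
    """Stand-in for a GNN layer: average each node with its neighbours."""
    for _ in range(steps):
        x = {u: (x[u] + sum(x[v] for v in adj[u])) / (1 + len(adj[u]))
             for u in adj}
    return x

def prolong(parent, coarse_x):
    """Copy each super-node's state down to its fine-level members."""
    return {v: coarse_x[c] for v, c in parent.items()}

# Path graph 0-1-2-3: four fine nodes coarsen to two super-nodes.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
coarse_adj, parent = coarsen(adj)
coarse_x = smooth(coarse_adj, {c: float(c) for c in coarse_adj})
fine_init = prolong(parent, coarse_x)  # warm start for fine-level training
```

Because the coarse graph has far fewer nodes and edges, the expensive part of training (repeated message passing) runs on a much smaller structure, which is the source of the computational savings the abstract describes; the prolongation step then transfers what was learned back to the full-resolution graph.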

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2025-November
State: Published - 1 Jan 2025

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
