TY - JOUR
T1 - Towards Efficient Training of Graph Neural Networks
T2 - A Multiscale Approach
AU - Gal, Eshed
AU - Eliasof, Moshe
AU - Schönlieb, Carola-Bibiane
AU - Haber, Eldad
AU - Treister, Eran
N1 - Publisher Copyright:
© 2025, Transactions on Machine Learning Research. All rights reserved.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - Graph Neural Networks (GNNs) have become powerful tools for learning from graph-structured data, finding applications across diverse domains. However, as graph sizes and connectivity increase, standard GNN training methods face significant computational and memory challenges, limiting their scalability and efficiency. In this paper, we present a novel framework for efficient multiscale training of GNNs. Our approach leverages hierarchical graph representations and subgraphs, enabling the integration of information across multiple scales and resolutions. By utilizing coarser graph abstractions and subgraphs, each with fewer nodes and edges, we significantly reduce computational overhead during training. Building on this framework, we propose a suite of scalable training strategies, including coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation. We also provide some theoretical analysis of our methods and demonstrate their effectiveness across various datasets and learning tasks. Our results show that multiscale training can substantially accelerate GNN training for large-scale problems while maintaining, or even improving, predictive performance.
AB - Graph Neural Networks (GNNs) have become powerful tools for learning from graph-structured data, finding applications across diverse domains. However, as graph sizes and connectivity increase, standard GNN training methods face significant computational and memory challenges, limiting their scalability and efficiency. In this paper, we present a novel framework for efficient multiscale training of GNNs. Our approach leverages hierarchical graph representations and subgraphs, enabling the integration of information across multiple scales and resolutions. By utilizing coarser graph abstractions and subgraphs, each with fewer nodes and edges, we significantly reduce computational overhead during training. Building on this framework, we propose a suite of scalable training strategies, including coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation. We also provide some theoretical analysis of our methods and demonstrate their effectiveness across various datasets and learning tasks. Our results show that multiscale training can substantially accelerate GNN training for large-scale problems while maintaining, or even improving, predictive performance.
UR - https://www.scopus.com/pages/publications/105025716496
M3 - Article
AN - SCOPUS:105025716496
SN - 2835-8856
VL - 2025-November
JO - Transactions on Machine Learning Research
JF - Transactions on Machine Learning Research
ER -