Improving Graph Neural Networks with Learnable Propagation Operators

Moshe Eliasof, Lars Ruthotto, Eran Treister

Research output: Contribution to journal › Conference article › peer-review


Abstract

Graph Neural Networks (GNNs) are limited by their propagation operators. These operators often contain only non-negative elements and are shared across channels, limiting the expressiveness of GNNs. Moreover, some GNNs suffer from over-smoothing, which limits their depth. In contrast, Convolutional Neural Networks (CNNs) can learn diverse propagation filters, and phenomena like over-smoothing are typically not apparent in CNNs. In this paper, we bridge these gaps by incorporating trainable channel-wise weighting factors ω to learn and mix multiple smoothing and sharpening propagation operators at each layer. Our generic method, called ωGNN, is easy to implement. We study two variants: ωGCN and ωGAT. For ωGCN, we theoretically analyse its behaviour and the impact of ω on the obtained node features. Our experiments confirm these findings, demonstrating that both variants do not over-smooth and explaining why. Additionally, we experiment on 15 real-world datasets for node- and graph-classification tasks, where ωGCN and ωGAT perform on par with state-of-the-art methods.
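The mechanism the abstract describes, per-channel trainable weights ω that mix smoothing and sharpening propagation, can be illustrated with a short sketch. The following PyTorch snippet is a minimal, hypothetical rendering and not the authors' published implementation: it assumes the symmetrically normalized adjacency Â as the smoothing operator, and the class name OmegaGCNLayer, the initialization of omega, and all other details are illustrative assumptions.

```python
import torch
import torch.nn as nn


class OmegaGCNLayer(nn.Module):
    """Hypothetical sketch of channel-wise weighted propagation.

    Per channel c, the update is
        h_c <- (1 - omega_c) * h_c + omega_c * (A_hat @ h)_c,
    so omega_c in (0, 1) smooths, while omega_c > 1 or omega_c < 0
    pushes past or against the smoothing operator (a sharpening effect).
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # One trainable mixing weight per output channel (assumption:
        # initialized at 1, i.e. plain smoothing, as a neutral start).
        self.omega = nn.Parameter(torch.ones(out_dim))

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); a_hat: (num_nodes, num_nodes), e.g. the
        # symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2}.
        h = self.lin(x)
        smooth = a_hat @ h  # one step of the smoothing operator
        # Channel-wise mix of the identity and the smoothing operator.
        return (1.0 - self.omega) * h + self.omega * smooth
```

If omega were clamped to (0, 1), each channel would only ever smooth; leaving it free, so different channels can smooth or sharpen, is the property the abstract credits for avoiding over-smoothing.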

Original language: English
Pages (from-to): 9224-9245
Number of pages: 22
Journal: Proceedings of Machine Learning Research
Volume: 202
State: Published - 1 Jan 2023
Event: 40th International Conference on Machine Learning, ICML 2023 - Honolulu, United States
Duration: 23 Jul 2023 - 29 Jul 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
