Every Node Counts: Improving the Training of Graph Neural Networks on Node Classification.

Moshe Eliasof, Eldad Haber, Eran Treister

Research output: Working paper/Preprint


Graph Neural Networks (GNNs) are prominent in handling sparse and unstructured data efficiently and effectively. Specifically, GNNs were shown to be highly effective for node classification tasks, where labelled information is available for only a fraction of the nodes. Typically, the optimization process, through the objective function, considers only labelled nodes while ignoring the rest. In this paper, we propose novel objective terms for the training of GNNs for node classification, aiming to exploit all the available data and improve accuracy. Our first term seeks to maximize the mutual information between node and label features, considering both labelled and unlabelled nodes in the optimization process. Our second term promotes anisotropic smoothness in the prediction maps. Lastly, we propose a cross-validating gradients approach to enhance the learning from labelled data. Our proposed objectives are general, can be applied to various GNNs, and require no architectural modifications. Extensive experiments demonstrate our approach using popular GNNs like GCN, GAT and GCNII, reaching consistent and significant accuracy improvements on 10 real-world node classification datasets.
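To illustrate the general idea of a smoothness term on prediction maps, the following is a minimal sketch, not the paper's exact formulation: an edge-wise total-variation penalty on predicted class probabilities, where the L1 edge difference (rather than squared L2) is one common way to obtain anisotropic, boundary-preserving behavior. The function name, the formulation, and the toy data are illustrative assumptions.

```python
import numpy as np

def smoothness_penalty(probs, edges):
    """Illustrative edge-wise total-variation penalty (not the paper's
    exact objective): for each edge (i, j), accumulate the L1 distance
    between the predicted class distributions of nodes i and j, then
    average over edges. The L1 norm penalizes large jumps less harshly
    than squared L2, which preserves sharp class boundaries."""
    i, j = edges[:, 0], edges[:, 1]
    diffs = np.abs(probs[i] - probs[j])  # per-edge, per-class gaps
    return diffs.sum(axis=1).mean()      # mean total variation per edge

# Toy usage: 4 nodes, 2 classes, 3 edges; the cross-cluster edge (1, 2)
# dominates the penalty, as expected for a boundary edge.
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.1, 0.9],
                  [0.2, 0.8]])
edges = np.array([[0, 1], [2, 3], [1, 2]])
print(smoothness_penalty(probs, edges))  # → 0.6
```

Such a term can be added to the usual cross-entropy loss on labelled nodes, since it is computed from the predictions alone it also exploits unlabelled nodes.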
Original language: English
State: Published - 29 Nov 2022


