Single-node attacks for fooling graph neural networks

Ben Finkelshtein, Chaim Baskin, Evgenii Zheltonozhskii, Uri Alon

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Graph neural networks (GNNs) have shown broad applicability in a variety of domains. These domains, e.g., social networks and product recommendations, are fertile ground for malicious users and behavior. In this paper, we show that GNNs are vulnerable to the extremely limited (and thus quite realistic) scenario of a single-node adversarial attack, where the perturbed node cannot be chosen by the attacker. That is, an attacker can force the GNN to classify any target node to a chosen label by only slightly perturbing the features or the neighbor list of a single, arbitrary other node in the graph, even without the ability to select that attacker node. When the adversary is allowed to select the attacker node, these attacks become even more effective. We demonstrate empirically that our attack is effective across various common GNN types (e.g., GCN, GraphSAGE, GAT, GIN) and robustly optimized GNNs (e.g., Robust GCN, SM GCN, GAL, LAT-GCN), outperforming previous attacks on different real-world datasets in both targeted and non-targeted settings. Our code is available anonymously at https://github.com/gnnattack/SINGLE.
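The setting described above lends itself to a short illustration. The following is a minimal sketch, not the authors' SINGLE implementation: a gradient-based single-node feature attack against a small dense two-layer GCN in plain PyTorch. All names (`GCN`, `normalize_adj`, `single_node_attack`, `epsilon`, `n_steps`) are hypothetical, and the PGD-style signed-gradient update on the attacker node's feature row is an assumption made for illustration, not the paper's exact method.

```python
# A minimal sketch (not the authors' SINGLE implementation) of a
# gradient-based single-node feature attack on a dense two-layer GCN.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


class GCN(torch.nn.Module):
    """Two-layer GCN over a dense, symmetrically normalized adjacency."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_hat):
        # a_hat is the normalized adjacency D^{-1/2} (A + I) D^{-1/2}.
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)


def normalize_adj(a):
    """Symmetric normalization of a dense adjacency with self-loops."""
    a_tilde = a + torch.eye(a.size(0))
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)
    return d_inv_sqrt[:, None] * a_tilde * d_inv_sqrt[None, :]


def single_node_attack(model, x, a_hat, attacker, target, label,
                       epsilon=0.1, n_steps=20):
    """Perturb ONLY the attacker node's feature row so that the *target*
    node is classified as `label`. The perturbation stays L_inf-bounded
    by `epsilon` (PGD-style signed gradient steps)."""
    delta = torch.zeros_like(x[attacker], requires_grad=True)
    for _ in range(n_steps):
        x_adv = x.clone()
        x_adv[attacker] = x[attacker] + delta
        logits = model(x_adv, a_hat)
        # Minimize the loss of the chosen label at the target node.
        loss = F.cross_entropy(logits[target:target + 1],
                               torch.tensor([label]))
        loss.backward()
        with torch.no_grad():
            delta -= (epsilon / n_steps) * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return delta.detach()
```

Applying the returned perturbation, `x[attacker] += delta`, and re-running the model shows whether `logits[target].argmax()` has flipped to the chosen label; a structure attack of the kind the abstract also mentions would instead edit the attacker node's row of the adjacency matrix before re-normalizing.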

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: Neurocomputing
Volume: 513
DOIs
State: Published - 7 Nov 2022
Externally published: Yes

Keywords

  • Adversarial robustness
  • Graph neural networks
  • Node classification

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
