Detection of Adversarial Supports in Few-Shot Classifiers Using Self-Similarity and Filtering

Yi Xiang Marcus Tan, Penny Chong, Jiamei Sun, Ngai Man Cheung, Yuval Elovici, Alexander Binder

Research output: Contribution to journal › Conference article › peer-review



Few-shot classifiers excel under limited training samples, making them useful in applications with sparsely user-provided labels. Their unique relative-prediction setup opens opportunities for novel attacks not available in other machine learning setups, such as targeting the support sets required to categorise unseen test samples. In this work, we propose a detection strategy to identify adversarial support sets, which aim to destroy a few-shot classifier's understanding of a certain class. We achieve this by introducing the concept of self-similarity of a support set and by employing filtering of supports. Our method is attack-agnostic, and to the best of our knowledge we are the first to explore adversarial detection for the support sets of few-shot classifiers. Despite its conceptual simplicity, our evaluation on the miniImagenet (MI) and CUB datasets shows good attack-detection performance, with high AUROC scores. We further show that self-similarity and filtering for adversarial detection can be paired with other filtering functions, making the concept generalisable.
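The self-similarity idea can be illustrated with a minimal sketch. The function below is a hypothetical stand-in, not the paper's implementation: it scores a support set by the mean pairwise cosine similarity of its embeddings, on the assumption that clean supports for one class embed close together while adversarially perturbed supports score lower. The embedding model, array shapes, and any detection threshold are assumptions for illustration.

```python
import numpy as np

def self_similarity(embeddings: np.ndarray) -> float:
    """Mean pairwise cosine similarity of support-set embeddings.

    `embeddings` has shape (n_supports, dim), e.g. the output of the
    few-shot classifier's feature extractor on one class's supports.
    Higher scores mean the supports agree with each other; a low score
    flags a possibly adversarial support set.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                      # pairwise cosine similarities
    n = embeddings.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]       # drop the self-comparisons
    return float(off_diag.mean())

# Toy check: a tight cluster scores higher than a spread-out one.
rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.01, (5, 16)) + 1.0     # points near a common vector
spread = rng.normal(0.0, 1.0, (5, 16))           # unrelated random points
print(self_similarity(tight) > self_similarity(spread))
```

In a detection pipeline one would compare this score against a threshold calibrated on known-clean support sets, optionally after filtering out the least similar supports, which is the pairing of self-similarity and filtering the abstract describes.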

Original language: English
Journal: CEUR Workshop Proceedings
State: Published - 1 Jan 2021
Event: 2021 International Workshop on Safety and Security of Deep Learning, SSDL 2021 - Virtual, Online
Duration: 19 Aug 2021 → …


Keywords

  • adversarial defence
  • adversarial machine learning
  • detection
  • few-shot
  • filtering
  • self-similarity

ASJC Scopus subject areas

  • General Computer Science


