Universal Adversarial Attack Against Speaker Recognition Models

Shoham Hanina, Alon Zolfi, Yuval Elovici, Asaf Shabtai

Research output: Contribution to journal › Conference article › peer-review

Abstract

In recent years, deep learning-based speaker recognition (SR) models have received considerable attention from the machine learning (ML) community. Their growing popularity stems largely from their effectiveness in identifying speakers in many security-sensitive applications. Researchers have challenged the robustness of SR models and revealed their vulnerability to adversarial ML attacks. However, prior studies mainly proposed tailor-made perturbations that are only effective for the speakers they were trained on (i.e., a closed set). In this paper, we propose the Anonymous Speakers attack, a universal adversarial perturbation that fools SR models for all speakers in an open-set environment, i.e., including speakers that were not part of the attack's training phase. Using a custom optimization process, we craft a single perturbation that can be applied to the original recording of any speaker and causes misclassification by the SR model. We examined the attack's effectiveness on various state-of-the-art SR models with a wide range of speaker identities. The results of our experiments show that our attack substantially reduces the similarity between the adversarial embedding and the speaker's original embedding while maintaining a high signal-to-noise ratio (SNR).
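The sketch below illustrates one way such a universal perturbation could be optimized, assuming a PyTorch speaker-embedding model (`embedding_model`), a batch of clean training waveforms, and an L-infinity bound `epsilon` as a rough proxy for the SNR constraint. The function name, the cosine-similarity loss, and the Adam optimizer are illustrative assumptions and not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a universal adversarial perturbation against a
# speaker-embedding model. `embedding_model` maps a batch of waveforms
# (N, T) to speaker embeddings (N, D); the loss and constraint below are
# assumptions, not the attack described in the paper.
def train_universal_perturbation(embedding_model, waveforms,
                                 epsilon=0.005, steps=1000, lr=1e-3):
    """Learn a single additive perturbation shared by all recordings that
    pushes each perturbed embedding away from its clean counterpart."""
    delta = torch.zeros(waveforms.shape[-1], requires_grad=True)  # one delta for all speakers
    optimizer = torch.optim.Adam([delta], lr=lr)

    with torch.no_grad():
        clean_emb = embedding_model(waveforms)  # reference embeddings of the original recordings

    for _ in range(steps):
        adv_emb = embedding_model(waveforms + delta)  # same delta applied to every recording
        # Minimize similarity between adversarial and original embeddings.
        loss = F.cosine_similarity(adv_emb, clean_emb, dim=-1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation small, acting as a crude stand-in for an SNR constraint.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return delta.detach()
```

At inference time, the same learned `delta` would be added to any speaker's recording, including speakers unseen during optimization, which is what makes the perturbation universal and applicable in an open-set setting.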

Original language: English
Pages (from-to): 4860-4864
Number of pages: 5
Journal: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
State: Published - 1 Jan 2024
Event: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Seoul, Korea, Republic of
Duration: 14 Apr 2024 – 19 Apr 2024

Keywords

  • Adversarial Attack
  • Speaker Recognition

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering

