On the Interpretable Adversarial Sensitivity of Iterative Optimizers

Elad Sofer, Nir Shlezinger

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Adversarial examples are an emerging threat to machine learning (ML) models, allowing adversaries to substantially degrade performance by introducing seemingly unnoticeable perturbations. These attacks are typically considered an ML-specific risk, often associated with the black-box operation of deep neural networks (DNNs) and their sensitivity to features learned from data, and are rarely viewed as a threat to classic non-learned decision rules, such as iterative optimizers. In this work we explore the sensitivity of iterative optimizers to adversarial examples, building upon recent advances in treating these methods as ML models. We identify that many iterative optimizers share two properties that make them amenable to adversarial attacks: end-to-end differentiability and the existence of impactful small perturbations. The interpretability of iterative optimizers makes it possible to associate adversarial examples with modifications to the traversed loss surface that notably affect the location of the sought minima. We visualize this effect and demonstrate the vulnerability of iterative optimizers for compressed sensing and hybrid beamforming tasks, showing that different optimizers tackling the same optimization formulation vary in their adversarial sensitivity.
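The two properties identified in the abstract can be illustrated on a toy compressed-sensing instance: an unrolled ISTA solver is (piecewise) differentiable end to end in its measurement input, so a small FGSM-style signed perturbation of the measurements can noticeably worsen recovery. The sketch below is illustrative only and is not the paper's attack; the objective, the finite-difference gradient estimate (standing in for analytic end-to-end differentiation), and all parameter values are assumptions.

```python
import numpy as np

def ista(y, A, lam=0.1, T=50):
    """Unrolled ISTA: T proximal-gradient steps for
    min_z 0.5*||A z - y||^2 + lam*||z||_1 (sparse recovery)."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm of A
    z = np.zeros(A.shape[1])
    for _ in range(T):
        r = z - eta * A.T @ (A @ z - y)                        # gradient step
        z = np.sign(r) * np.maximum(np.abs(r) - eta * lam, 0)  # soft threshold
    return z

def fgsm_on_measurements(y, A, z_true, eps=0.05, h=1e-5):
    """FGSM-style attack on the measurements: estimate the gradient of the
    recovery error w.r.t. y by finite differences (a stand-in for
    differentiating through the unrolled iterations) and take one signed
    step of l-infinity radius eps."""
    base = np.sum((ista(y, A) - z_true) ** 2)
    grad = np.zeros_like(y)
    for i in range(y.size):
        yp = y.copy()
        yp[i] += h
        grad[i] = (np.sum((ista(yp, A) - z_true) ** 2) - base) / h
    return y + eps * np.sign(grad)

# Toy instance: sparse ground truth, noisy random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
z_true = np.zeros(40)
z_true[rng.choice(40, 5, replace=False)] = rng.standard_normal(5)
y = A @ z_true + 0.01 * rng.standard_normal(20)

y_adv = fgsm_on_measurements(y, A, z_true)
clean_err = np.sum((ista(y, A) - z_true) ** 2)
adv_err = np.sum((ista(y_adv, A) - z_true) ** 2)
```

Under this setup the bounded perturbation typically increases the recovery error of the same fixed solver, mirroring the paper's observation that non-learned iterative optimizers, not only DNNs, admit impactful small input perturbations.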

Original language: English
Title of host publication: Proceedings of the 2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing, MLSP 2023
Editors: Danilo Comminiello, Michele Scarpiniti
Publisher: Institute of Electrical and Electronics Engineers
ISBN (Electronic): 9798350324112
State: Published - 1 Jan 2023
Event: 33rd IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2023 - Rome, Italy
Duration: 17 Sep 2023 - 20 Sep 2023

Publication series

Name: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Volume: 2023-September
ISSN (Print): 2161-0363
ISSN (Electronic): 2161-0371

Conference

Conference: 33rd IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2023
Country/Territory: Italy
City: Rome
Period: 17/09/23 - 20/09/23

Keywords

  • Adversarial attacks
  • iterative optimizers

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
