SoK: On the Offensive Potential of AI

  • Saskia Laura Schroer
  • Giovanni Apruzzese
  • Soheil Human
  • Pavel Laskov
  • Hyrum S. Anderson
  • Edward W.N. Bernroider
  • Aurore Fass
  • Ben Nassi
  • Vera Rimmer
  • Fabio Roli
  • Samer Salam
  • Chi En Ashley Shen
  • Ali Sunyaev
  • Tim Wadhwa-Brown
  • Isabel Wagner
  • Gang Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

5 Scopus citations

Abstract

Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, more and more evidence shows that AI is also used for offensive purposes. Prior works have revealed various use cases in which the deployment of AI can lead to violations of security and privacy objectives. No extant work, however, has been able to draw a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the groundwork for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular, we (i) account for AI risks to both humans and systems while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, and laypeople, all of which are valuable sources of information on offensive AI. To enable alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of such criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses of a user study (N=549) entailing individuals with diverse backgrounds and expertise; and the opinions of 12 experts. Our contributions not only reveal concerning ways (some of which are overlooked by prior work) in which AI can be offensively used today, but also represent a foothold for addressing this threat in the years to come.

Original language: English
Title of host publication: Proceedings - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
Publisher: Institute of Electrical and Electronics Engineers
Pages: 247-280
Number of pages: 34
ISBN (Electronic): 9798331517113
DOIs
State: Published - 1 Jan 2025
Externally published: Yes
Event: 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025 - Copenhagen, Denmark
Duration: 9 Apr 2025 - 11 Apr 2025

Publication series

Name: Proceedings - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025

Conference

Conference: 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
Country/Territory: Denmark
City: Copenhagen
Period: 9/04/25 - 11/04/25

Keywords

  • cyber security
  • machine learning
  • society

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Safety, Risk, Reliability and Quality
  • Artificial Intelligence
