TY - GEN
T1 - SoK: On the Offensive Potential of AI
T2 - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
AU - Schroer, Saskia Laura
AU - Apruzzese, Giovanni
AU - Human, Soheil
AU - Laskov, Pavel
AU - Anderson, Hyrum S.
AU - Bernroider, Edward W.N.
AU - Fass, Aurore
AU - Nassi, Ben
AU - Rimmer, Vera
AU - Roli, Fabio
AU - Salam, Samer
AU - Shen, Chi En Ashley
AU - Sunyaev, Ali
AU - Wadhwa-Brown, Tim
AU - Wagner, Isabel
AU - Wang, Gang
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, more and more evidence shows that AI is also used for offensive purposes. Prior works have revealed various examples of use cases in which the deployment of AI can lead to violation of security and privacy objectives. No extant work, however, has been able to draw a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the groundwork for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular we (i) account for AI risks to both humans and systems while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, as well as laypeople - all of which are valuable sources of information on offensive AI. To enable alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of such criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses of a user study (N=549) entailing individuals with diverse backgrounds and expertise; and the opinion of 12 experts. Our contributions not only reveal concerning ways (some of which were overlooked by prior work) in which AI can be offensively used today, but also represent a foothold to address this threat in the years to come.
AB - Our society increasingly benefits from Artificial Intelligence (AI). Unfortunately, more and more evidence shows that AI is also used for offensive purposes. Prior works have revealed various examples of use cases in which the deployment of AI can lead to violation of security and privacy objectives. No extant work, however, has been able to draw a holistic picture of the offensive potential of AI. In this SoK paper we seek to lay the groundwork for a systematic analysis of the heterogeneous capabilities of offensive AI. In particular we (i) account for AI risks to both humans and systems while (ii) consolidating and distilling knowledge from academic literature, expert opinions, industrial venues, as well as laypeople - all of which are valuable sources of information on offensive AI. To enable alignment of such diverse sources of knowledge, we devise a common set of criteria reflecting essential technological factors related to offensive AI. With the help of such criteria, we systematically analyze: 95 research papers; 38 InfoSec briefings (from, e.g., BlackHat); the responses of a user study (N=549) entailing individuals with diverse backgrounds and expertise; and the opinion of 12 experts. Our contributions not only reveal concerning ways (some of which were overlooked by prior work) in which AI can be offensively used today, but also represent a foothold to address this threat in the years to come.
KW - cyber security
KW - machine learning
KW - society
UR - https://www.scopus.com/pages/publications/105007295194
U2 - 10.1109/SaTML64287.2025.00021
DO - 10.1109/SaTML64287.2025.00021
M3 - Conference contribution
AN - SCOPUS:105007295194
T3 - Proceedings - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
SP - 247
EP - 280
BT - Proceedings - 2025 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2025
PB - Institute of Electrical and Electronics Engineers
Y2 - 9 April 2025 through 11 April 2025
ER -