TY - GEN
T1 - Discussion Paper
T2 - 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes, WDC 2024, held in conjunction with ACM AsiaCCS 2024
AU - Gressel, Gilad
AU - Pankajakshan, Rahul
AU - Mirsky, Yisroel
N1 - Publisher Copyright:
© 2024 Owner/Author.
PY - 2024/7/1
Y1 - 2024/7/1
N2 - Large Language Models (LLMs) have enabled powerful new AI capabilities, but their potential misuse for automating scams and fraud poses a serious emerging threat. In this paper, we investigate how LLMs combined with speech synthesis and speech recognition could be leveraged to build automated systems for executing phone scams at scale. Our research reveals that current publicly accessible language models can, through advanced prompt engineering, mimic authorities and seek personal financial information, bypassing existing safeguards. As these models become more widely available, they significantly lower the barriers for executing complex AI-driven scams, including potential future threats like voice cloning for virtual kidnapping. Existing defences, such as passive detection, are not suitable for identifying synthetic voice over compressed channels. Therefore, we urgently call for multi-disciplinary research into user education, media forensics, regulatory measures, and AI safety enhancements to combat this growing risk. Without proactive measures, the rise in AI-enabled fraud could undermine consumer trust in the digital and economic landscape, emphasizing the need for a comprehensive strategy to prevent automated fraud.
AB - Large Language Models (LLMs) have enabled powerful new AI capabilities, but their potential misuse for automating scams and fraud poses a serious emerging threat. In this paper, we investigate how LLMs combined with speech synthesis and speech recognition could be leveraged to build automated systems for executing phone scams at scale. Our research reveals that current publicly accessible language models can, through advanced prompt engineering, mimic authorities and seek personal financial information, bypassing existing safeguards. As these models become more widely available, they significantly lower the barriers for executing complex AI-driven scams, including potential future threats like voice cloning for virtual kidnapping. Existing defences, such as passive detection, are not suitable for identifying synthetic voice over compressed channels. Therefore, we urgently call for multi-disciplinary research into user education, media forensics, regulatory measures, and AI safety enhancements to combat this growing risk. Without proactive measures, the rise in AI-enabled fraud could undermine consumer trust in the digital and economic landscape, emphasizing the need for a comprehensive strategy to prevent automated fraud.
KW - AI Security
KW - Deepfakes
KW - LLM
KW - Vishing
UR - http://www.scopus.com/inward/record.url?scp=85198111179&partnerID=8YFLogxK
U2 - 10.1145/3660354.3660356
DO - 10.1145/3660354.3660356
M3 - Conference contribution
AN - SCOPUS:85198111179
T3 - ACM WDC 2024 - Proceedings of the 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes
SP - 20
EP - 24
BT - ACM WDC 2024 - Proceedings of the 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes
PB - Association for Computing Machinery, Inc
Y2 - 1 July 2024 through 5 July 2024
ER -