Discussion Paper: Exploiting LLMs for Scam Automation: A Looming Threat

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

    11 Scopus citations

    Abstract

    Large Language Models (LLMs) have enabled powerful new AI capabilities, but their potential misuse for automating scams and fraud poses a serious emerging threat. In this paper, we investigate how LLMs combined with speech synthesis and speech recognition could be leveraged to build automated systems for executing phone scams at scale. Our research reveals that current publicly accessible language models can, through advanced prompt engineering, mimic authorities and seek personal financial information, bypassing existing safeguards. As these models become more widely available, they significantly lower the barriers to executing complex AI-driven scams, including potential future threats such as voice cloning for virtual kidnapping. Existing defences, such as passive detection, are not well suited to identifying synthetic voice over compressed channels. Therefore, we urgently call for multi-disciplinary research into user education, media forensics, regulatory measures, and AI safety enhancements to combat this growing risk. Without proactive measures, the rise in AI-enabled fraud could undermine consumer trust in the digital and economic landscape, emphasizing the need for a comprehensive strategy to prevent automated fraud.

    Original language: English
    Title of host publication: ACM WDC 2024 - Proceedings of the 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes
    Publisher: Association for Computing Machinery, Inc
    Pages: 20-24
    Number of pages: 5
    ISBN (Electronic): 9798400704208
    DOIs
    State: Published - 1 Jul 2024
    Event: 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes, WDC 2024, held in conjunction with ACM AsiaCCS 2024 - Singapore, Singapore
    Duration: 1 Jul 2024 - 5 Jul 2024

    Publication series

    Name: ACM WDC 2024 - Proceedings of the 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes

    Conference

    Conference: 3rd ACM Workshop on Security Implications of Deepfakes and Cheapfakes, WDC 2024, held in conjunction with ACM AsiaCCS 2024
    Country/Territory: Singapore
    City: Singapore
    Period: 1/07/24 - 5/07/24

    Keywords

    • AI Security
    • Deepfakes
    • LLM
    • Vishing

    ASJC Scopus subject areas

    • Computer Networks and Communications
    • Computer Science Applications
    • Information Systems
    • Software
