Denial-of-Service Attack on Object Detection Model Using Universal Adversarial Perturbation

Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Research output: Working paper/Preprint


Abstract

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years. However, the attacks proposed to date have aimed solely at compromising a model's integrity (i.e., the trustworthiness of its predictions), while attacks targeting a model's availability, a critical aspect in safety-critical domains such as autonomous driving, have not yet been explored by the machine learning research community. In this paper, we propose NMS-Sponge, a novel approach that increases the decision latency of YOLO, a state-of-the-art object detector, and thereby compromises the model's availability by applying a universal adversarial perturbation (UAP). Our experiments demonstrate that the proposed UAP increases the processing time of individual frames by adding "phantom" objects while preserving the detection of the original objects.
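The latency mechanism the abstract describes can be illustrated with a toy sketch: greedy non-maximum suppression (NMS) performs pairwise IoU comparisons over candidate boxes, so flooding the post-processing stage with extra "phantom" candidates inflates its workload. The code below is a minimal, self-contained illustration of that effect, not the paper's actual attack or YOLO's implementation; the `nms` function and its comparison counter are hypothetical helpers written for this sketch.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS sketch: returns kept indices and the number of
    pairwise IoU comparisons performed (a proxy for latency)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep, suppressed, comparisons = [], set(), 0
    for i in order:
        if i in suppressed:
            continue
        keep.append(i)
        for j in order:
            if j == i or j in suppressed:
                continue
            comparisons += 1
            if iou(boxes[i], boxes[j]) > iou_thresh:
                suppressed.add(j)
    return keep, comparisons

# Non-overlapping "phantom" candidates: none get suppressed, so the
# comparison count grows quadratically with the number of candidates.
phantoms = [(i * 10.0, 0.0, i * 10.0 + 5.0, 5.0) for i in range(50)]
scores = [1.0 - 0.01 * i for i in range(50)]
_, few_comps = nms(phantoms[:10], scores[:10])   # 10 * 9  = 90 comparisons
_, many_comps = nms(phantoms, scores)            # 50 * 49 = 2450 comparisons
```

In this simplified setting, going from 10 to 50 surviving candidates multiplies the NMS work by more than 27x, which is the availability pressure a sponge-style perturbation exploits.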
Original language: English
State: Published - 26 May 2022

Keywords

  • cs.CV
  • cs.CR
  • cs.LG

