The Security of Deep Learning Defences for Medical Imaging.

Moshe Levy, Guy Amit, Yuval Elovici, Yisroel Mirsky

Research output: Working paper/Preprint

Abstract

Deep learning has shown great promise in the domain of medical image analysis, and medical professionals and healthcare providers have been adopting the technology to speed up and enhance their work. These systems use deep neural networks (DNNs), which are vulnerable to adversarial samples: images with imperceptible changes that can alter a model's prediction. Researchers have proposed defences that either make a DNN more robust or detect adversarial samples before they do harm. However, none of these works consider an informed attacker who can adapt to the defence mechanism. We show that an informed attacker can evade five of the current state-of-the-art defences while successfully fooling the victim's deep learning model, rendering these defences useless. We then suggest better alternatives for securing healthcare DNNs against such attacks: (1) harden the system's security and (2) use digital signatures.
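The abstract's core premise is that an imperceptibly small perturbation can flip a DNN's prediction. A minimal sketch of that idea, using an FGSM-style attack on a toy linear classifier rather than a real DNN (all weights, inputs, and the 0.05 score margin are made up for illustration; this is not the paper's code):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def predict(w, x):
    """Toy linear classifier: class 1 if the score is positive."""
    return int(dot(w, x) > 0)

# Hypothetical model weights and an input that scores a small +0.05.
w = [-1.0 + 2.0 * k / 63.0 for k in range(64)]
x = [0.05 * wi / dot(w, w) for wi in w]

eps = 0.01  # per-pixel perturbation budget ("imperceptible")
# FGSM-style step: move each pixel by eps against the sign of the gradient,
# which for a linear score is simply the sign of each weight.
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

print(predict(w, x))      # 1: original prediction
print(predict(w, x_adv))  # 0: flipped, though no pixel moved more than eps
```

The same principle scales to real DNNs, where the gradient is computed through the network; the defences evaluated in the paper aim to resist or detect exactly this kind of perturbation.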
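Suggestion (2), using digital signatures, amounts to verifying image integrity before the image reaches the DNN. A hedged stdlib-only sketch, where a keyed HMAC stands in for a true public-key signature and the key, image bytes, and function names are all hypothetical:

```python
import hashlib
import hmac

KEY = b"hypothetical-shared-key"  # in practice, a properly managed signing key

def sign_image(image_bytes: bytes) -> bytes:
    """Compute a tag at the trusted source (e.g. the scanner)."""
    return hmac.new(KEY, image_bytes, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, tag: bytes) -> bool:
    """Reject any image that was tampered with in transit."""
    return hmac.compare_digest(sign_image(image_bytes), tag)

scan = b"\x00\x01\x02"  # stand-in for raw medical-image pixel data
tag = sign_image(scan)

print(verify_image(scan, tag))            # True: untouched image
print(verify_image(scan + b"\xff", tag))  # False: tampered image
```

An adversarially perturbed image would fail verification just like any other modification, which is why this check sits outside the model rather than inside it.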
Original language: English
Volume: abs/2201.08661
State: Published - 2022

Publication series

Name: CoRR
