TY - GEN
T1 - The Security of Deep Learning Defenses in Medical Imaging
AU - Levy, Moshe
AU - Amit, Guy
AU - Elovici, Yuval
AU - Mirsky, Yisroel
N1 - Publisher Copyright:
© 2023 Owner/Author.
PY - 2024/11/21
Y1 - 2024/11/21
N2 - Deep learning has shown great promise in the medical image analysis domain. Medical professionals and healthcare providers have begun to adopt this technology to accelerate and enhance their work. These systems use deep neural networks (DNNs), which are vulnerable to adversarial samples: images with imperceptible changes that can alter the model's prediction. Prior research has proposed defenses aimed at making DNNs more robust or detecting adversarial samples before they can do any harm. However, none of these studies considered an informed attacker capable of adapting the attack to the defense mechanism. In this qualitative study, we show that an informed attacker can evade five advanced defenses, successfully fooling the victim deep learning model and rendering the defense useless. We also propose two alternative means of securing healthcare DNNs from such attacks: (1) hardening the system's security, and (2) using digital signatures.
KW - adaptive adversarial attacks
KW - deep learning
UR - http://www.scopus.com/inward/record.url?scp=85215091793&partnerID=8YFLogxK
U2 - 10.1145/3689942.3694746
DO - 10.1145/3689942.3694746
M3 - Conference contribution
AN - SCOPUS:85215091793
T3 - HealthSec 2024 - Proceedings of the 2024 Workshop on Cybersecurity in Healthcare, Co-Located with: CCS 2024
SP - 37
EP - 44
BT - HealthSec 2024 - Proceedings of the 2024 Workshop on Cybersecurity in Healthcare, Co-Located with: CCS 2024
PB - Association for Computing Machinery, Inc
T2 - 2024 Workshop on Cybersecurity in Healthcare, HealthSec 2024
Y2 - 14 October 2024 through 18 October 2024
ER -