TY - CONF
T1 - Sequence Squeezing: A Defense Method Against Adversarial Examples for API Call-Based RNN Variants
AU - Rosenberg, Ishai
AU - Shabtai, Asaf
AU - Elovici, Yuval
AU - Rokach, Lior
N1 - DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
PY - 2021
AB - Adversarial examples are known to mislead deep learning models into classifying them incorrectly, even in domains where such models have achieved state-of-the-art performance. Until recently, research on both adversarial attack and defense methods focused on computer vision, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we present a novel defense method, referred to as sequence squeezing, aimed at making RNN variant (e.g., LSTM) classifiers more robust against such attacks. Our method differs from existing defense methods, which were designed only for non-sequence-based models. We also implement three additional defense methods, inspired by recently published CNN defense methods, as baselines for our method. Using sequence squeezing, we decreased the effectiveness of such adversarial attacks from 99.9% to 15%, outperforming all of the baseline defense methods.
DO - 10.1109/IJCNN52387.2021.9534432
M3 - Conference paper
SP - 1
EP - 10
Y2 - 18 July 2021 through 22 July 2021
ER -