Sequence Squeezing: A Defense Method Against Adversarial Examples for API Call-Based RNN Variants.

Research output: Contribution to conference › Paper › peer-review

Abstract

Adversarial examples are known to mislead deep learning models, causing them to classify inputs incorrectly, even in domains where such models have achieved state-of-the-art performance. Until recently, research on both adversarial attack and defense methods focused on computer vision, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we present a novel defense method, referred to as sequence squeezing, aimed at making RNN variant (e.g., LSTM) classifiers more robust against such attacks. Our method differs from existing defense methods, which were designed only for non-sequence-based models. We also implement three additional defense methods, inspired by recently published CNN defense methods, as baselines for comparison. Using sequence squeezing, we were able to decrease the effectiveness of such adversarial attacks from 99.9% to 15%, outperforming all of the baseline defense methods.
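For illustration, the sketch below shows one way a squeezing-style detection wrapper for API call sequences could be structured. The specific squeezing transform (collapsing consecutive duplicate calls), the function names, and the disagreement threshold are assumptions made by analogy to feature squeezing in the image domain; the paper's actual sequence squeezing method may differ.

```python
# Hypothetical sketch of a squeezing-style detection wrapper.
# All names and the squeezing transform are illustrative assumptions,
# not the paper's exact method.
import numpy as np


def squeeze_api_sequence(seq):
    """Example squeezer: collapse consecutive duplicate API calls.
    This particular transform is assumed for illustration only."""
    squeezed = [seq[0]] if seq else []
    for call in seq[1:]:
        if call != squeezed[-1]:
            squeezed.append(call)
    return squeezed


def detect_adversarial(model_predict, seq, threshold=0.5):
    """Flag a sample as adversarial if the classifier's predictions on the
    original and the squeezed sequence diverge by more than `threshold`
    (L1 distance between the two probability vectors)."""
    p_orig = np.asarray(model_predict(seq))
    p_squeezed = np.asarray(model_predict(squeeze_api_sequence(seq)))
    return np.abs(p_orig - p_squeezed).sum() > threshold


if __name__ == "__main__":
    # Stand-in classifier; replace with a real RNN/LSTM malware classifier.
    dummy_model = lambda s: [0.9, 0.1] if "CreateRemoteThread" in s else [0.2, 0.8]
    sample = ["ReadFile", "ReadFile", "CreateRemoteThread", "WriteFile"]
    print(detect_adversarial(dummy_model, sample))
```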
Original language: English (GB)
Pages: 1-10
Number of pages: 10
DOIs
State: Published - 2021
Event: 2021 International Joint Conference on Neural Networks (IJCNN)
Duration: 18 Jul 2021 - 22 Jul 2021

Conference

Conference: 2021 International Joint Conference on Neural Networks (IJCNN)
Period: 18/07/21 - 22/07/21
