TY - JOUR
T1 - Practical Evaluation of Poisoning Attacks on Online Anomaly Detectors in Industrial Control Systems
AU - Kravchik, Moshe
AU - Demetrio, Luca
AU - Biggio, Battista
AU - Shabtai, Asaf
N1 - Publisher Copyright:
© 2022
PY - 2022/11/1
Y1 - 2022/11/1
N2 - Recently, neural networks (NNs) have been proposed for the detection of cyber attacks targeting industrial control systems (ICSs). Such detectors are often retrained, using data collected during system operation, to cope with the evolution of the monitored signals over time. However, by exploiting this mechanism, an attacker can fake the signals provided by corrupted sensors at training time and poison the detector's learning process so that cyber attacks remain undetected at test time. Previous work explored generating adversarial samples that fool anomaly detection models in ICSs at test time, but without compromising their training process. With this research, we are the first to demonstrate such poisoning attacks on online, neural network-based detectors of cyber attacks on ICSs. We propose two distinct attack algorithms, namely, interpolation- and back-gradient-based poisoning, and demonstrate their effectiveness. The evaluation is conducted on diverse data sources: synthetic data, real-world ICS testbed data, and a simulation of the Tennessee Eastman process. This first practical evaluation of poisoning attacks using a simulation tool highlights the challenges of poisoning dynamically controlled systems. The generality of the proposed methods under different NN parameters and architectures is studied. Lastly, we propose and analyze some potential mitigation strategies.
KW - Adversarial machine learning
KW - Adversarial robustness
KW - Anomaly detection
KW - Autoencoders
KW - Industrial control systems
KW - Poisoning attacks
UR - http://www.scopus.com/inward/record.url?scp=85137170144&partnerID=8YFLogxK
U2 - 10.1016/j.cose.2022.102901
DO - 10.1016/j.cose.2022.102901
M3 - Article
AN - SCOPUS:85137170144
SN - 0167-4048
VL - 122
JO - Computers & Security
JF - Computers & Security
M1 - 102901
ER -