TY - JOUR
T1 - Evaluating the Cybersecurity Risk of Real-world, Machine Learning Production Systems
AU - Bitton, Ron
AU - Maman, Nadav
AU - Singh, Inderjeet
AU - Momiyama, Satoru
AU - Elovici, Yuval
AU - Shabtai, Asaf
N1 - Publisher Copyright:
© 2023 Association for Computing Machinery.
PY - 2023/1/13
Y1 - 2023/1/13
N2 - Although cyberattacks on machine learning (ML) production systems can be harmful, today, security practitioners are ill-equipped, lacking the methodologies and tactical tools that would allow them to analyze the security risks of their ML-based systems. In this article, we perform a comprehensive threat analysis of ML production systems. In this analysis, we follow the ontology presented by NIST for evaluating enterprise network security risk and apply it to ML-based production systems. Specifically, we (1) enumerate the assets of a typical ML production system, (2) describe the threat model (i.e., potential adversaries, their capabilities, and their main goals), (3) identify the various threats to ML systems, and (4) review a large number of attacks, demonstrated in previous studies, that can realize these threats. To quantify the risk posed by adversarial machine learning (AML) threats, we introduce a novel scoring system that assigns a severity score to different AML attacks. The proposed scoring system utilizes the analytic hierarchy process (AHP) to rank - with the assistance of security experts - various attributes of the attacks. Finally, we develop an extension to the MulVAL attack graph generation and analysis framework that incorporates cyberattacks on ML production systems. Using this extension, security practitioners can apply attack graph analysis methods in environments that include ML components, thus gaining a methodological and practical tool for both evaluating the impact and quantifying the risk of a cyberattack targeting ML production systems.
AB - Although cyberattacks on machine learning (ML) production systems can be harmful, today, security practitioners are ill-equipped, lacking the methodologies and tactical tools that would allow them to analyze the security risks of their ML-based systems. In this article, we perform a comprehensive threat analysis of ML production systems. In this analysis, we follow the ontology presented by NIST for evaluating enterprise network security risk and apply it to ML-based production systems. Specifically, we (1) enumerate the assets of a typical ML production system, (2) describe the threat model (i.e., potential adversaries, their capabilities, and their main goals), (3) identify the various threats to ML systems, and (4) review a large number of attacks, demonstrated in previous studies, that can realize these threats. To quantify the risk posed by adversarial machine learning (AML) threats, we introduce a novel scoring system that assigns a severity score to different AML attacks. The proposed scoring system utilizes the analytic hierarchy process (AHP) to rank - with the assistance of security experts - various attributes of the attacks. Finally, we develop an extension to the MulVAL attack graph generation and analysis framework that incorporates cyberattacks on ML production systems. Using this extension, security practitioners can apply attack graph analysis methods in environments that include ML components, thus gaining a methodological and practical tool for both evaluating the impact and quantifying the risk of a cyberattack targeting ML production systems.
KW - Adversarial machine learning
KW - attack graphs
KW - risk assessment
KW - threat analysis
UR - http://www.scopus.com/inward/record.url?scp=85147846247&partnerID=8YFLogxK
U2 - 10.1145/3559104
DO - 10.1145/3559104
M3 - Article
AN - SCOPUS:85147846247
SN - 0360-0300
VL - 55
JO - ACM Computing Surveys
JF - ACM Computing Surveys
IS - 9
M1 - 183
ER -