TY - GEN
T1 - Efficient Model Extraction via Boundary Sampling
AU - Biton Dor, Maor
AU - Mirsky, Yisroel
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/11/22
Y1 - 2024/11/22
N2 - This paper introduces a novel data-free model extraction attack that significantly advances the current state-of-the-art in terms of efficiency, accuracy, and effectiveness. Traditional black-box methods rely on using the victim’s model as an oracle to label a vast number of samples within high-confidence areas. This approach not only requires an extensive number of queries but also results in a less accurate and less transferable model. In contrast, our method innovates by focusing on sampling low-confidence areas (along the decision boundaries) and employing an evolutionary algorithm to optimize the sampling process. These novel contributions allow for a dramatic reduction in the number of queries needed by the attacker by a factor of 10x to 600x while simultaneously improving the accuracy of the stolen model. Moreover, our approach improves boundary alignment, resulting in better transferability of adversarial examples from the stolen model to the victim’s model (increasing the attack success rate from 60% to 82% on average). Finally, we accomplish all of this with a strict black-box assumption on the victim, with no knowledge of the target’s architecture or dataset. We demonstrate our attack on three datasets with increasingly larger resolutions and compare our performance to four state-of-the-art model extraction attacks.
AB - This paper introduces a novel data-free model extraction attack that significantly advances the current state-of-the-art in terms of efficiency, accuracy, and effectiveness. Traditional black-box methods rely on using the victim’s model as an oracle to label a vast number of samples within high-confidence areas. This approach not only requires an extensive number of queries but also results in a less accurate and less transferable model. In contrast, our method innovates by focusing on sampling low-confidence areas (along the decision boundaries) and employing an evolutionary algorithm to optimize the sampling process. These novel contributions allow for a dramatic reduction in the number of queries needed by the attacker by a factor of 10x to 600x while simultaneously improving the accuracy of the stolen model. Moreover, our approach improves boundary alignment, resulting in better transferability of adversarial examples from the stolen model to the victim’s model (increasing the attack success rate from 60% to 82% on average). Finally, we accomplish all of this with a strict black-box assumption on the victim, with no knowledge of the target’s architecture or dataset. We demonstrate our attack on three datasets with increasingly larger resolutions and compare our performance to four state-of-the-art model extraction attacks.
KW - Black Box
KW - Data Free
KW - Evolutionary Algorithms
KW - Model Extraction
KW - Substitute Models
KW - Transfer Attacks
UR - http://www.scopus.com/inward/record.url?scp=85216515737&partnerID=8YFLogxK
U2 - 10.1145/3689932.3694756
DO - 10.1145/3689932.3694756
M3 - Conference contribution
AN - SCOPUS:85216515737
T3 - AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024
SP - 1
EP - 11
BT - AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024
PB - Association for Computing Machinery, Inc
T2 - 16th ACM Workshop on Artificial Intelligence and Security, AISec 2024, co-located with CCS 2024
Y2 - 14 October 2024 through 18 October 2024
ER -