Transpose Attack: Stealing Datasets with Bidirectional Training

Guy Amit, Moshe Levy, Yisroel Mirsky

Research output: Contribution to conference › Paper

Abstract

Deep neural networks are normally executed in the forward direction. However, in this work we identify a vulnerability that enables models to be trained in both directions and on different tasks. Adversaries can exploit this capability to hide rogue models within seemingly legitimate models. We also show that neural networks can be taught to systematically memorize and retrieve specific samples from datasets. Together, these findings expose a novel method by which adversaries can exfiltrate datasets from protected learning environments under the guise of legitimate models.
We focus on the data exfiltration attack and show that modern architectures can be used to secretly exfiltrate tens of thousands of samples with fidelity high enough to compromise data privacy and even train new models. Moreover, to mitigate this threat, we propose a novel approach for detecting infected models.
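As a rough illustration of the bidirectional-training idea described in the abstract, the sketch below trains a small PyTorch MLP on two tasks at once: the ordinary forward pass performs classification, while a second pass through the transposed weight matrices maps fixed per-sample index codes back to training samples. This is a minimal sketch under assumed details; the class and method names (BidirectionalMLP, transposed), the index-code scheme, and the specific losses are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- hypothetical names and losses, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalMLP(nn.Module):
    """Two-layer MLP whose weights are shared between a forward model
    (classification) and a reverse-direction model built from the
    transposed weights (retrieving memorized samples from index codes)."""
    def __init__(self, in_dim=784, hidden=512, out_dim=10):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, in_dim) * 0.02)
        self.w2 = nn.Parameter(torch.randn(out_dim, hidden) * 0.02)

    def forward(self, x):
        # Primary (visible) task: ordinary forward execution.
        return F.linear(F.relu(F.linear(x, self.w1)), self.w2)

    def transposed(self, code):
        # Hidden task: run the same weights in reverse, transposed.
        return F.linear(F.relu(F.linear(code, self.w2.t())), self.w1.t())

model = BidirectionalMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 784)         # stand-in image batch
y = torch.randint(0, 10, (32,))  # class labels for the visible task
codes = torch.randn(32, 10)      # fixed per-sample index codes (toy scheme)

for _ in range(100):
    opt.zero_grad()
    task_loss = F.cross_entropy(model(x), y)             # legitimate task
    memo_loss = F.mse_loss(model.transposed(codes), x)   # memorization task
    (task_loss + memo_loss).backward()
    opt.step()
```

Under these assumptions, an adversary who ships only the trained weights could later re-run the transposed pass with the agreed-upon index codes to reconstruct the memorized samples; the paper scales this idea to modern architectures and tens of thousands of samples.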
Original language: English
Number of pages: 18
State: Published - 2024
Event: Network and Distributed System Security (NDSS) Symposium - San Diego, United States
Duration: 26 Feb 2024 - 1 Mar 2024

Conference

Conference: Network and Distributed System Security (NDSS) Symposium
Country/Territory: United States
City: San Diego
Period: 26/02/24 - 01/03/24
