Dodging Attack Using Carefully Crafted Natural Makeup

Nitzan Guetta, Asaf Shabtai, Inderjeet Singh, Satoru Momiyama, Yuval Elovici

Research output: Working paper/Preprint

Abstract

Deep learning face recognition models are used by state-of-the-art surveillance systems to identify individuals passing through public areas (e.g., airports). Previous studies have demonstrated the use of adversarial machine learning (AML) attacks to successfully evade identification by such systems, both in the digital and physical domains. Attacks in the physical domain, however, require significant manipulation of the human participant's face, which can arouse the suspicion of human observers (e.g., airport security officers). In this study, we present a novel black-box AML attack that carefully crafts natural makeup which, when applied to a human participant, prevents the participant from being identified by face recognition models. We evaluated our proposed attack against the ArcFace face recognition model, with 20 participants in a real-world setup that included two cameras, different shooting angles, and different lighting conditions. The evaluation results show that in the digital domain, the face recognition system was unable to identify any of the participants, while in the physical domain, the face recognition system identified the participants in only 1.22% of the frames (compared to 47.57% without makeup and 33.73% with random natural makeup), which is below a reasonable threshold for a realistic operational environment.
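To make the reported per-frame identification rates concrete, the sketch below shows one way such a rate could be computed from face embeddings like those produced by ArcFace: a frame counts as "identified" when the cosine similarity between its embedding and the enrolled gallery embedding exceeds a decision threshold. This is a minimal illustrative sketch; the function names and the threshold value are assumptions, not details taken from the paper.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identification_rate(frame_embeddings, gallery_embedding, threshold=0.36):
    # Fraction of frames in which the probe embedding matches the enrolled
    # identity; a frame is "identified" when similarity exceeds the threshold.
    # The 0.36 default is an illustrative value, not the paper's setting.
    hits = sum(
        cosine_similarity(e, gallery_embedding) > threshold
        for e in frame_embeddings
    )
    return hits / len(frame_embeddings)

Under a metric of this form, the paper's physical-domain results correspond to an identification rate of about 0.0122 for participants wearing the crafted makeup, versus about 0.4757 without makeup and 0.3373 with random natural makeup.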
Original language: English
State: Published - 14 Sep 2021

Keywords

  • cs.CV
  • cs.CR
  • cs.LG
