MaskDGA: An Evasion Attack against DGA Classifiers and Adversarial Defenses

Lior Sidi, Asaf Nadler, Asaf Shabtai

Research output: Contribution to journal › Article › peer-review


Abstract

Domain generation algorithms (DGAs) are commonly used by botnets to generate domain names that bots can use to establish communication channels with their command and control servers. Recent publications presented deep learning classifiers that detect algorithmically generated domain (AGD) names in real time with high accuracy, significantly reducing the effectiveness of DGAs for botnet communication. In this paper, we present MaskDGA, an evasion technique that applies adversarial learning to modify AGD names so that they evade inline DGA classifiers, without requiring the attacker to have any knowledge of the DGA classifier's architecture or parameters. MaskDGA was evaluated on four state-of-the-art DGA classifiers and outperformed the recently proposed CharBot and DeepDGA evasion techniques. We also evaluated MaskDGA on enhanced versions of the same classifiers equipped with common adversarial defenses (distillation and adversarial retraining). While the results show that adversarial retraining has some limited effectiveness against the evasion technique, it is clear that a more resilient detection mechanism is required. We also propose an extension to MaskDGA that allows an attacker to omit a subset of the modified AGD names, based on the classification results of the attacker's trained model, in order to achieve a desired evasion rate.
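The abstract's central idea — modifying AGD names using a locally trained substitute model when the real classifier is a black box — can be illustrated with a toy sketch. Here a hypothetical linear "substitute" (a `weights` matrix holding one row of per-character malicious scores per position) stands in for the attacker's trained model, and the function substitutes the characters whose replacement most lowers that score. This is an illustrative assumption for exposition only, not the paper's actual MaskDGA algorithm, which operates on a trained neural substitute classifier.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def perturb_domain(domain: str, weights: np.ndarray, n_flips: int = 3) -> str:
    """Toy substitute-model evasion sketch (hypothetical, not MaskDGA itself).

    `weights` has shape (len(domain), 26): weights[p, j] is the substitute
    model's per-position 'malicious' score contribution of character j at
    position p. Assumes the domain label uses only lowercase a-z.
    """
    chars = list(domain)
    idx = [ALPHABET.index(c) for c in chars]  # current character indices
    gains = []
    for p, i in enumerate(idx):
        # Best replacement at this position = lowest-scoring character
        best_j = int(np.argmin(weights[p]))
        # Score reduction achieved by substituting it for the current char
        gains.append((weights[p, i] - weights[p, best_j], p, best_j))
    # Apply only the n_flips substitutions with the largest score reduction,
    # keeping the rest of the name intact (mirrors partial-modification attacks)
    for _gain, p, j in sorted(gains, reverse=True)[:n_flips]:
        chars[p] = ALPHABET[j]
    return "".join(chars)
```

With a budget of `n_flips` substitutions, the attacker greedily trades classifier score for minimal change to the name; a real attack would derive the per-character scores from gradients of a trained substitute network rather than a fixed linear model.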

Original language: English
Article number: 9194015
Pages (from-to): 161580-161592
Number of pages: 13
Journal: IEEE Access
Volume: 8
DOIs
State: Published - 1 Jan 2020

Keywords

  • Adversarial learning
  • DGA
  • botnets
  • deep learning

