Adversaries Strike Hard: Adversarial Attacks Against Malware Classifiers Using Dynamic API Calls as Features

2021 
Malware designers have become increasingly sophisticated over time, crafting polymorphic and metamorphic malware that employs obfuscation tricks such as packing and encryption to evade signature-based detection systems. Security professionals therefore harden their defenses with machine learning-based systems built on malware's dynamic behavioral features. However, these systems are susceptible to adversarial inputs, and some malware designers exploit this vulnerability to bypass detection. In this work, we develop two approaches to evade machine learning-based classifiers. First, we create a Generative Adversarial Network (GAN)-based method, which we call 'Malware Evasion using GAN' (MEGAN), and an extended version, 'Malware Evasion using GAN with Reduced Perturbation' (MEGAN-RP). Second, we develop a novel reinforcement learning-based approach called 'Malware Evasion using Reinforcement Agent' (MERA). We generate adversarial malware that simultaneously minimizes the recall of a target classifier and the amount of perturbation applied to the actual malware to evade detection. We evaluate our work against 13 different black-box detection models, all of which use the presence or absence of dynamically observed API calls as features. We observe that our approaches reduce the recall of almost all black-box models to zero. Further, MERA outperforms all the other methods, reducing the True Positive Rate (TPR) to zero against every target model except the Decision Tree (DT), with the minimum perturbation in 6 of the 13 target models. We also present experimental results on adversarial retraining as a defense, and on its evasion, for the GAN-based strategies.
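The feature representation and evasion constraint described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the API vocabulary, detector weights, and greedy strategy are all invented for the example. The key constraint it demonstrates is that, with presence/absence features, functionality-preserving perturbation may only add API calls (flip bits 0 to 1), never remove calls the sample already makes.

```python
# Hypothetical sketch: evading a detector that uses binary
# presence/absence of API calls as features, by only ADDING calls.

# Invented API vocabulary for illustration.
API_VOCAB = ["CreateFile", "RegSetValue", "VirtualAlloc",
             "GetTickCount", "Sleep", "IsDebuggerPresent"]

# Toy linear detector: score >= 0 means "malware". Weights are invented.
WEIGHTS = [1.0, 1.5, 2.0, -1.0, -1.5, -0.5]
BIAS = -0.5

def score(x):
    """Detector score for a binary feature vector x."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

def evade(x, max_adds=3):
    """Greedily flip absent API-call bits (0 -> 1) until the detector
    score drops below 0 or the perturbation budget is exhausted.
    Bits already set are never cleared, preserving malicious behavior."""
    x = list(x)
    adds = 0
    while score(x) >= 0 and adds < max_adds:
        # Candidate additions: absent APIs whose weight lowers the score.
        candidates = [i for i, xi in enumerate(x)
                      if xi == 0 and WEIGHTS[i] < 0]
        if not candidates:
            break
        best = min(candidates, key=lambda i: WEIGHTS[i])
        x[best] = 1
        adds += 1
    return x, adds
```

The paper's GAN and reinforcement-learning agents replace this greedy heuristic with learned perturbation policies and target black-box models, but the add-only bit-flip constraint and the goal of minimizing both recall and perturbation are the same.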