Automatic Generation of Adversarial Examples for Interpreting Malware Classifiers

03/06/2020
by Wei Song, et al.

Recent advances in adversarial machine learning have shown that malware classifiers based on static analysis are vulnerable to adversarial attacks. However, real-world antivirus systems do not rely only on static classifiers, so many of these static evasions are detected by dynamic analysis once the malware runs. The real question is: to what extent are these adversarial attacks actually harmful to real users? In this paper, we propose a systematic framework to create and evaluate realistic adversarial malware that evades real-world systems. We propose new adversarial attacks against real-world antivirus systems based on code randomization and binary manipulation, and use our framework to perform the attacks on 1000 malware samples, testing four commercial antivirus products and two open-source classifiers. We demonstrate that the static detectors of real-world antivirus products can be evaded in 24.3% of the cases, often by changing only one byte. We also find that the adversarial attacks are transferable between different antivirus products in up to 16% of the cases. We also tested the efficacy of the complete (i.e., static + dynamic) classifiers in protecting users. While most of the commercial antivirus products use their dynamic engines to protect the user's device when the static classifiers are evaded, we are the first to demonstrate that, for one commercial antivirus product, static evasions can also evade the offline dynamic detectors and infect users' machines. Our framework can also help explain which features are responsible for evasion and thus can help improve the robustness of malware detectors.
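To make the binary-manipulation idea concrete, the sketch below shows a minimal single-byte evasion loop in Python. It is a hypothetical illustration, not the paper's actual framework: `score_fn` stands in for any static classifier mapping raw bytes to a maliciousness score, and `candidate_offsets` is assumed to point at bytes that can change without breaking the executable (e.g., header padding).

```python
# Hypothetical sketch of a one-byte binary-manipulation attack against a
# static malware classifier. Not the authors' implementation.
from typing import Callable, Iterable, Optional


def one_byte_evasion(
    binary: bytes,
    score_fn: Callable[[bytes], float],   # static classifier: bytes -> score in [0, 1]
    candidate_offsets: Iterable[int],     # offsets assumed safe to modify (e.g. padding)
    threshold: float = 0.5,               # detection threshold of the classifier
) -> Optional[bytes]:
    """Try flipping single bytes until the classifier score drops below threshold."""
    data = bytearray(binary)
    for offset in candidate_offsets:
        saved = data[offset]
        for value in range(256):
            if value == saved:
                continue
            data[offset] = value
            if score_fn(bytes(data)) < threshold:
                return bytes(data)        # evasive one-byte variant found
        data[offset] = saved              # restore and try the next offset
    return None                           # no single-byte evasion found


if __name__ == "__main__":
    # Toy usage: a fake header plus padding, and a contrived detector that
    # keys on a single "signature" byte at offset 10.
    sample = bytes([0x4D, 0x5A]) + bytes(62) + b"\x90" * 16
    dummy_score = lambda b: 0.9 if b[10] == 0x00 else 0.1
    adv = one_byte_evasion(sample, dummy_score, candidate_offsets=[10, 11, 12])
    print("evaded" if adv is not None else "not evaded")
```

In practice, the candidate offsets would have to be chosen so that the modified binary still runs correctly, which is what distinguishes realistic attacks of this kind from unconstrained feature-space perturbations.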

