Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge
Malware continues to be a major cyber threat, despite the tremendous effort that has been made to combat it. The number of malware samples in the wild grows steadily over time, which means defenders must resort to automated techniques. This naturally calls for machine learning based malware detection. However, machine learning is known to be vulnerable to adversarial evasion attacks, which manipulate a small number of features to make a classifier wrongly recognize a malware sample as a benign one. The state of the art offers no effective countermeasures against these attacks. Inspired by the AICS'2019 Challenge, we systematize a set of principles for enhancing the robustness of neural networks against adversarial malware evasion attacks. Some of these principles are scattered across the literature; others are proposed in this paper for the first time. Guided by these principles, we propose a framework and an accompanying training algorithm, which we then apply to the AICS'2019 Challenge. Our experimental results have been submitted to the challenge organizer for evaluation.
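To make the threat model concrete, below is a minimal, hypothetical sketch of the kind of evasion attack the abstract describes: a greedy attacker flips a small budget of binary features (e.g., presence/absence of API calls) to push a malware sample across a classifier's decision boundary. The toy data, the linear classifier, and the `evade` helper are all illustrative assumptions, not the paper's actual attack model or feature set.

```python
# Hypothetical sketch of a feature-flipping evasion attack on a malware
# classifier over binary features. Names and strategy are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 200 samples, 30 binary features; label 1 = malware, 0 = benign.
X = rng.integers(0, 2, size=(200, 30)).astype(float)
w_true = rng.normal(size=30)
y = (X @ w_true > 0).astype(int)

clf = LogisticRegression().fit(X, y)

def evade(x, clf, budget=5):
    """Greedily flip at most `budget` features to push the classifier
    toward the benign class (label 0)."""
    x = x.copy()
    for _ in range(budget):
        if clf.predict([x])[0] == 0:          # already classified benign
            break
        # Try every single-feature flip and keep the one that lowers
        # the predicted malware probability the most.
        best_j, best_p = None, clf.predict_proba([x])[0, 1]
        for j in range(len(x)):
            x[j] = 1 - x[j]
            p = clf.predict_proba([x])[0, 1]
            if p < best_p:
                best_j, best_p = j, p
            x[j] = 1 - x[j]                   # undo the trial flip
        if best_j is None:                    # no single flip helps; stop
            break
        x[best_j] = 1 - x[best_j]
    return x

# Attack one malware sample and compare predictions before and after.
mal = X[y == 1][0]
adv = evade(mal, clf)
print("before:", clf.predict([mal])[0], "after:", clf.predict([adv])[0])
```

Even this naive greedy search often succeeds with only a handful of flipped features, which is why the paper argues that robustness must be built into the training procedure rather than bolted on afterward.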