IADA: Iterative Adversarial Data Augmentation Using Formal Verification and Expert Guidance

08/16/2021
by   Ruixuan Liu, et al.

Neural networks (NNs) are widely used for classification tasks because of their remarkable performance. However, the robustness and accuracy of NNs depend heavily on the training data, and in many applications massive training data is not available. To address this challenge, this paper proposes an iterative adversarial data augmentation (IADA) framework for learning neural network models from an insufficient amount of training data. The method uses formal verification to identify the most "confusing" input samples and leverages human guidance to safely and iteratively augment the training data with these samples. The proposed framework is applied to an artificial 2D dataset, the MNIST dataset, and a human motion dataset. By applying IADA to fully-connected NN classifiers, we show that our training method can improve the robustness and accuracy of the learned model. Compared to regular supervised training, the average perturbation bound improved by 107.4% on the MNIST dataset, and the testing accuracy also improved on the 2D dataset, the MNIST dataset, and the human motion dataset.
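The abstract describes a train/verify/augment loop. The following is a minimal sketch of that loop under simplifying assumptions of my own: a logistic-regression classifier stands in for the paper's fully-connected NN, a gradient-sign perturbation search stands in for formal verification of the most "confusing" samples, and a ground-truth labeling function `oracle` stands in for the human expert. None of these stand-ins are the paper's actual components; they only illustrate the iterative structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(X):
    # Stand-in for the human expert: true label of a 2D point.
    return (X[:, 0] + X[:, 1] > 0).astype(float)

def train_logreg(X, y, epochs=200, lr=0.5):
    # Minimal logistic-regression classifier (stand-in for the
    # fully-connected NN used in the paper).
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def most_confusing(X, w, b, eps=0.3):
    # Stand-in for formal verification: perturb each sample against the
    # weight direction and keep those whose predicted class flips.
    step = eps * np.sign(w)
    flip = np.sign(X @ w + b) != np.sign((X - step) @ w + b)
    return (X - step)[flip]

# IADA-style loop: train, find confusing samples near the decision
# boundary, relabel them via the oracle, augment the data, retrain.
X = rng.normal(size=(200, 2))
y = oracle(X)
for _ in range(3):
    w, b = train_logreg(X, y)
    X_adv = most_confusing(X, w, b)
    if len(X_adv) == 0:
        break
    X = np.vstack([X, X_adv])
    y = np.concatenate([y, oracle(X_adv)])

w, b = train_logreg(X, y)  # final model on the augmented set
acc = (((X @ w + b) > 0).astype(float) == y).mean()
```

In the actual framework, `most_confusing` would be a sound verifier returning samples with the smallest certified perturbation bound, and the oracle query is the point where human guidance keeps the augmented labels safe.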
