Target Training Does Adversarial Training Without Adversarial Samples
Neural network classifiers are vulnerable to misclassification of adversarial samples, for which the current best defense trains classifiers with adversarial samples. However, adversarial samples are not optimal for steering attack convergence, based on the minimization at the core of adversarial attacks. The minimization perturbation term can be driven towards 0 by replacing adversarial samples in training with duplicated original samples, labeled differently only for training. Using only original samples, Target Training eliminates the need to generate adversarial samples for training against all attacks that minimize perturbation. In low-capacity classifiers and without using adversarial samples, Target Training exceeds default CIFAR10 accuracy (84.3%) against the CW-L_2(κ=0) attack, and reaches 86.6% against a second attack that minimizes perturbation. Using adversarial samples against attacks that do not minimize perturbation, Target Training exceeds the current best defense (69.1%) against the CW-L_2(κ=40) attack in CIFAR10.
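The data construction described above can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' code: make_target_training_set is a hypothetical helper, and it assumes only that each original sample is duplicated and the duplicate is relabeled with an extra "target" class (label offset by the number of classes), so that training needs no generated adversarial samples.

    import numpy as np

    def make_target_training_set(x, y, num_classes):
        # Illustrative sketch of Target Training data preparation:
        # duplicate every original sample and give the duplicate a
        # shifted "target" label, so the classifier is trained with
        # 2 * num_classes output classes using only original samples.
        x_dup = np.concatenate([x, x], axis=0)                 # unperturbed duplicates
        y_dup = np.concatenate([y, y + num_classes], axis=0)   # duplicates get target labels
        return x_dup, y_dup

    # Usage with random stand-in data shaped like CIFAR10
    if __name__ == "__main__":
        x = np.random.rand(8, 32, 32, 3).astype("float32")
        y = np.random.randint(0, 10, size=8)
        x_t, y_t = make_target_training_set(x, y, num_classes=10)
        print(x_t.shape, y_t.min(), y_t.max())   # (16, 32, 32, 3), labels in [0, 19]

Under these assumptions, the duplicated originals take the place that adversarial samples occupy in adversarial training, which is why no attack needs to be run during training.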