Training Ensembles to Detect Adversarial Examples

12/11/2017
by Alexander Bagnall, et al.

We propose a new ensemble method for detecting and classifying adversarial examples generated by state-of-the-art attacks, including DeepFool and C&W. Our method works by training the members of an ensemble to have low classification error on random benign examples while simultaneously minimizing agreement on examples outside the training distribution. We evaluate on both MNIST and CIFAR-10, against oblivious, white-box, and black-box adversaries.
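
As a rough illustration of this training objective, the sketch below combines a standard cross-entropy loss on benign examples with a penalty on how much the ensemble members' predictive distributions agree on out-of-distribution inputs. It is a minimal PyTorch-style sketch: the ensemble size, network architecture, pairwise-agreement penalty, and the use of uniform noise as the out-of-distribution inputs are all illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of the training objective described in the abstract.
# Ensemble size, architecture, the agreement penalty, and the OOD sampler
# are assumptions for illustration, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """A small MNIST-style classifier used as one ensemble member."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def ensemble_loss(members, x_benign, y_benign, x_ood, lam=1.0):
    """Cross-entropy on benign data plus a penalty on pairwise agreement
    of the members' softmax outputs on out-of-distribution inputs."""
    # Standard classification loss, averaged over ensemble members.
    ce = sum(F.cross_entropy(m(x_benign), y_benign) for m in members) / len(members)

    # Agreement penalty: mean pairwise inner product of predictive
    # distributions on OOD inputs (higher = more agreement, so adding it
    # to the loss pushes members to disagree off-distribution).
    probs = [F.softmax(m(x_ood), dim=1) for m in members]
    agree, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            agree = agree + (probs[i] * probs[j]).sum(dim=1).mean()
            pairs += 1
    return ce + lam * agree / max(pairs, 1)

# Usage sketch: the OOD batch here is just uniform noise (an assumption).
members = [SmallNet() for _ in range(5)]
opt = torch.optim.Adam([p for m in members for p in m.parameters()], lr=1e-3)
x_benign, y_benign = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
x_ood = torch.rand(64, 1, 28, 28)
loss = ensemble_loss(members, x_benign, y_benign, x_ood, lam=1.0)
opt.zero_grad(); loss.backward(); opt.step()
```

Under this kind of objective, a natural detection rule is to flag inputs on which the trained members disagree strongly; the paper's actual detection and classification procedure is not reproduced here.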
