Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Data Manifolds
Despite a great deal of research, it is still not well understood why trained neural networks are highly vulnerable to adversarial examples. In this work we focus on two-layer neural networks trained on data that lie on a low-dimensional linear subspace. We show that standard gradient methods lead to non-robust neural networks, namely, networks with large gradients in directions orthogonal to the data subspace, which are therefore susceptible to small adversarial L_2-perturbations in these directions. Moreover, we show that decreasing the initialization scale of the training algorithm, or adding L_2 regularization, can make the trained network more robust to adversarial perturbations orthogonal to the data.
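The phenomenon described in the abstract can be illustrated with a small numerical sketch. The setup below is an assumption for illustration, not the paper's construction: synthetic data confined to a random k-dimensional subspace of R^d, a two-layer ReLU network trained by plain gradient descent on a logistic-type loss, and the input gradient split into its component inside the subspace and its component in the orthogonal complement. All dimensions, hyperparameters, and the perturbation step are hypothetical choices.

```python
# Toy illustration (assumed setup, not the paper's construction): data on a
# low-dimensional linear subspace, a two-layer ReLU network trained by
# gradient descent, and input gradients measured orthogonally to the subspace.
import torch

torch.manual_seed(0)
d, k, width, n = 50, 3, 200, 500     # ambient dim, subspace dim, hidden width, samples

# Orthonormal basis B of a random k-dimensional subspace of R^d.
B, _ = torch.linalg.qr(torch.randn(d, k))

# Training data lie exactly on the subspace; labels come from a linear rule there.
Z = torch.randn(n, k)
X = Z @ B.T                          # points in the subspace, shape (n, d)
y = torch.sign(Z @ torch.randn(k))   # +/-1 labels

# Two-layer ReLU network trained with full-batch gradient descent on logistic loss.
model = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(1000):
    opt.zero_grad()
    loss = torch.nn.functional.soft_margin_loss(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

# Input gradient at a training point, split into in-subspace and orthogonal parts.
x = X[0].clone().requires_grad_(True)
model(x).squeeze().backward()
g = x.grad
P = B @ B.T                          # orthogonal projection onto the data subspace
g_par, g_perp = P @ g, g - P @ g
print(f"grad norm in subspace: {g_par.norm():.3f}, orthogonal: {g_perp.norm():.3f}")

# Step purely orthogonally to the subspace, in the direction that shrinks the
# margin; a large orthogonal gradient means a small such step can change the
# network output even though the "data directions" are left untouched.
out = model(X[:1])
x_adv = X[0] - 2.0 * torch.sign(out).item() * g_perp / g_perp.norm()
print("output before:", out.item(), "after:", model(x_adv.unsqueeze(0)).item())
```

In this sketch, a noticeably larger orthogonal gradient norm than in-subspace gradient norm would correspond to the non-robustness described above; rerunning with a smaller initialization scale or with L_2 weight decay is the analogous way to probe the mitigation the abstract mentions.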