Gradient Methods Provably Converge to Non-Robust Networks

02/09/2022
by Gal Vardi et al.

Despite a great deal of research, it is still unclear why neural networks are so susceptible to adversarial examples. In this work, we identify natural settings where depth-2 ReLU networks trained with gradient flow are provably non-robust (susceptible to small adversarial ℓ_2-perturbations), even when robust networks that classify the training dataset correctly exist. Perhaps surprisingly, we show that the well-known implicit bias towards margin maximization induces a bias towards non-robust networks, by proving that every network that satisfies the KKT conditions of the max-margin problem is non-robust.
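To make the setting concrete, here is a minimal numpy sketch of the kind of object the abstract discusses: a depth-2 ReLU network trained by gradient descent (a discretisation of gradient flow) on the logistic loss, followed by a small ℓ_2 input perturbation found by moving against the network's input gradient. The toy dataset, network width, step size, and perturbation budget `eps` are illustrative assumptions, not the paper's construction or its proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2D dataset with labels in {+1, -1}
# (an illustrative assumption, not the paper's setting).
X = np.array([[1.0, 1.0], [2.0, 1.5], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])

# Depth-2 ReLU network: f(x) = v . relu(W x + b)
k = 8
W = rng.normal(scale=0.5, size=(k, 2))
b = np.zeros(k)
v = rng.normal(scale=0.5, size=k)

def forward(x):
    return v @ np.maximum(W @ x + b, 0.0)

# Gradient descent on the logistic loss sum_i log(1 + exp(-y_i f(x_i))),
# a discrete-time stand-in for the gradient flow analysed in the paper.
lr = 0.05
for _ in range(2000):
    gW = np.zeros_like(W); gb = np.zeros_like(b); gv = np.zeros_like(v)
    for xi, yi in zip(X, y):
        z = W @ xi + b
        h = np.maximum(z, 0.0)
        s = -yi / (1.0 + np.exp(yi * (v @ h)))  # d(loss)/d(output)
        act = (z > 0).astype(float)
        gv += s * h
        gW += np.outer(s * v * act, xi)
        gb += s * v * act
    W -= lr * gW; b -= lr * gb; v -= lr * gv

# A test point near the positive cluster, and a small adversarial
# ℓ_2-perturbation: step of norm eps against the input gradient of f.
x0 = np.array([1.5, 1.2])
clean = forward(x0)
act = (W @ x0 + b > 0).astype(float)
g = W.T @ (v * act)                      # ∇_x f at x0 (where f is differentiable)
eps = 0.3
x_adv = x0 - eps * g / (np.linalg.norm(g) + 1e-12)
adv = forward(x_adv)
print("clean output:", clean, " perturbed output:", adv)
```

This only demonstrates the generic mechanism (the input gradient gives a descent direction for the network's output); the paper's contribution is proving that for max-margin KKT points such small perturbations *provably* flip the classification.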
