Local Linearity and Double Descent in Catastrophic Overfitting

11/21/2021
by Varun Sivashankar, et al.

Catastrophic overfitting is a phenomenon observed during Adversarial Training (AT) with the Fast Gradient Sign Method (FGSM), in which test robustness collapses within a single training epoch. Prior work has attributed this loss of robustness to a sharp decrease in the local linearity of the neural network with respect to its input, and has demonstrated that adding a local linearity measure as a regularization term prevents catastrophic overfitting. Using a simple neural network architecture, we experimentally demonstrate that maintaining high local linearity may be sufficient to prevent catastrophic overfitting but is not necessary. Further, inspired by Parseval networks, we introduce a regularization term to FGSM-based AT that encourages the network's weight matrices to be orthogonal, and we study the connection between orthogonality of the network weights and local linearity. Lastly, we identify the double descent phenomenon during the adversarial training process.
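The two ingredients named above, the FGSM attack step and a Parseval-style orthogonality regularizer, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the penalty form ||W Wᵀ − I||²_F is our assumed choice of orthogonality measure, and the function names are hypothetical.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """FGSM: one signed-gradient step of size eps in input space,
    producing the adversarial example used during AT."""
    return x + eps * np.sign(grad_x)

def orthogonality_penalty(W):
    """Parseval-style regularizer (assumed form): ||W W^T - I||_F^2.
    Zero exactly when the rows of W are orthonormal; added to the
    training loss, it pushes weight matrices toward orthogonality."""
    gram = W @ W.T
    return float(np.sum((gram - np.eye(W.shape[0])) ** 2))
```

In training, `fgsm_perturb` would be applied to each batch using the gradient of the loss with respect to the input, and `orthogonality_penalty` would be summed over the network's weight matrices and scaled by a regularization coefficient.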
