Adaptive Weight Decay: On The Fly Weight Decay Tuning for Improving Robustness
We introduce adaptive weight decay, which automatically tunes the weight-decay hyper-parameter at each training iteration. For classification problems, we propose changing the value of the weight-decay hyper-parameter on the fly based on the relative strength of the updates from the classification loss (i.e., the gradient of the cross-entropy) and the regularization loss (i.e., the ℓ_2-norm of the weights). We show that this simple modification can yield large improvements in adversarial robustness – an area that suffers from robust overfitting – without requiring extra data. Specifically, our reformulation results in a 20% relative robustness improvement for CIFAR-100 and a 10% relative robustness improvement for CIFAR-10, compared to the best-tuned hyper-parameters of traditional weight decay. In addition, the method has other desirable properties, such as reduced sensitivity to the learning rate and smaller weight norms; the latter contributes to robustness against overfitting to label noise and to prunability.
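The core idea – scaling the weight-decay coefficient by the strength of the classification-loss update relative to the weight norm – can be sketched as a single SGD step. This is a minimal illustrative sketch, not the paper's implementation: the function name, the `lambda_awd` constant, and the exact form of the ratio are assumptions made for illustration.

```python
import numpy as np

def adaptive_weight_decay_step(w, grad_ce, lr=0.1, lambda_awd=0.01):
    """One SGD step with an adaptively tuned weight-decay coefficient.

    Hypothetical sketch: the per-step coefficient is set from the ratio
    of the cross-entropy gradient norm to the weight norm, so the strength
    of the regularization update tracks the strength of the classification
    update instead of staying fixed throughout training.
    """
    grad_norm = np.linalg.norm(grad_ce)      # strength of the classification update
    weight_norm = np.linalg.norm(w)          # strength of the regularization term
    # Adaptive coefficient: grows with the gradient norm, shrinks with the weight norm.
    lam = lambda_awd * grad_norm / (weight_norm + 1e-12)
    return w - lr * (grad_ce + lam * w)
```

When the cross-entropy gradient vanishes, the adaptive coefficient is zero and the weights are left untouched; when the gradient is large relative to the weights, the decay term is proportionally stronger, which is one way to keep weight norms small without hand-tuning a fixed coefficient per dataset.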