Super-Convergence with an Unstable Learning Rate

02/22/2021
by Samet Oymak, et al.

Conventional wisdom dictates that the learning rate should lie in the stable regime so that gradient-based algorithms do not blow up. This note introduces a simple scenario in which an unstable learning rate scheme leads to super-fast convergence, with a convergence rate that depends only logarithmically on the condition number of the problem. Our scheme uses a cyclical learning rate (CLR): in each cycle we take one large, unstable step and several small, stable steps to compensate for the instability. These findings also help explain the empirical observations of [Smith and Topin, 2019], who report that CLR with a large maximum learning rate leads to "super-convergence". We prove that our scheme excels on problems whose Hessian exhibits a bimodal spectrum, i.e., whose eigenvalues can be grouped into two clusters (small and large). The unstable step is the key to enabling fast convergence over the small eigen-spectrum.
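The intuition above can be sketched on a toy quadratic. The following is a minimal illustration, not the paper's actual experiment: the eigenvalues, step sizes, and cycle length below are hypothetical constants chosen to make the effect visible. A single unstable step (far above 2 divided by the largest eigenvalue) makes rapid progress along the small eigen-directions, and the subsequent stable steps damp the blow-up along the large ones.

```python
import numpy as np

# Toy quadratic: loss(x) = 0.5 * sum(h_i * x_i^2) with a bimodal Hessian
# spectrum -- a cluster of large eigenvalues and a cluster of small ones.
# All constants here are illustrative, not taken from the paper.
eigs = np.array([1.0, 1.0, 0.005, 0.005])
x0 = np.ones(4)

def run(lr_schedule, steps):
    """Gradient descent on the quadratic, cycling through lr_schedule."""
    x = x0.copy()
    for t in range(steps):
        lr = lr_schedule[t % len(lr_schedule)]
        x = x - lr * eigs * x  # gradient of the quadratic is eigs * x
    return x

stable_lr = 0.9      # stable: below 2 / (max eigenvalue) = 2.0
unstable_lr = 100.0  # unstable: far above 2 / (max eigenvalue)

steps = 40
x_const = run([stable_lr], steps)
# One large (unstable) step followed by three small (stable) steps per cycle.
x_cyclic = run([unstable_lr, stable_lr, stable_lr, stable_lr], steps)

print("constant stable lr, final error:", np.linalg.norm(x_const))
print("cyclic unstable lr, final error:", np.linalg.norm(x_cyclic))
```

With the constant stable rate, the error along the small eigenvalues contracts by only a factor of about 0.9955 per step, so progress stalls; the cyclic schedule shrinks those coordinates by roughly half per cycle while the stable steps keep the large coordinates under control.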
