Robust Learning Rate Selection for Stochastic Optimization via Splitting Diagnostic
This paper proposes SplitSGD, a new stochastic optimization algorithm with a dynamic learning rate selection rule. The procedure decreases the learning rate, for better adaptation to the local geometry of the objective function, whenever a stationary phase is detected, that is, whenever the iterates are likely to be bouncing around in the vicinity of a local minimum. The detection is performed by splitting a single SGD thread into two and using the inner products of the gradients from the two threads as a measure of stationarity. This learning rate selection rule is provably valid, robust to the choice of initial parameters, easy to implement, and incurs essentially no additional computational cost. Finally, we illustrate the robust convergence properties of SplitSGD through extensive experiments.
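The splitting diagnostic can be illustrated with a minimal sketch. The code below is not the authors' reference implementation; the toy quadratic objective, window sizes, decay factor, and negative-sign threshold are illustrative assumptions. It runs two SGD threads from a common iterate, compares window-averaged gradients across threads via their inner products, and shrinks the learning rate when most of those inner products are negative, i.e., when the threads appear to be bouncing around a minimum rather than making consistent progress.

```python
# Minimal sketch of a SplitSGD-style splitting diagnostic (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(x):
    # Noisy gradient of the toy objective f(x) = 0.5 * ||x||^2 (assumption for illustration).
    return x + rng.normal(scale=1.0, size=x.shape)

def sgd_thread(x, lr, num_windows, window_len):
    # Run one SGD thread; return the final iterate and the averaged gradient of each window.
    avg_grads = []
    for _ in range(num_windows):
        g_sum = np.zeros_like(x)
        for _ in range(window_len):
            g = stochastic_grad(x)
            x = x - lr * g
            g_sum += g
        avg_grads.append(g_sum / window_len)
    return x, avg_grads

def split_sgd(x0, lr=0.5, rounds=8, num_windows=10, window_len=20,
              decay=0.5, threshold=0.4):
    x = x0
    for r in range(rounds):
        # Split into two threads started from the same iterate.
        x1, g1 = sgd_thread(x.copy(), lr, num_windows, window_len)
        x2, g2 = sgd_thread(x.copy(), lr, num_windows, window_len)
        # Diagnostic: signs of inner products between matched window-averaged gradients.
        negative = [float(np.dot(a, b) < 0) for a, b in zip(g1, g2)]
        if np.mean(negative) > threshold:
            lr *= decay  # stationary phase detected: decrease the learning rate
        x = 0.5 * (x1 + x2)  # merge the two threads and continue
        print(f"round {r}: lr={lr:.4f}, |x|={np.linalg.norm(x):.4f}")
    return x

split_sgd(np.ones(10) * 5.0)
```

In this sketch, far from the minimum the two threads' gradients tend to align (positive inner products), so the learning rate is kept; near the minimum the noise dominates, inner products turn negative, and the step size is decayed.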