Fast Linear Convergence of Randomized BFGS

02/26/2020
by Dmitry Kovalev, et al.

Since the late 1950s, when quasi-Newton methods first appeared, they have been one of the most widely used and efficient algorithmic paradigms for unconstrained optimization. Despite their immense practical success, there is little theory that explains why these methods are so efficient. We provide a semi-local rate of convergence for the randomized BFGS method that can be significantly better than that of gradient descent, finally giving theoretical evidence for the method's superior empirical performance.
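For context, below is a minimal NumPy sketch of the classical BFGS inverse-Hessian update that quasi-Newton methods are built on. This is illustrative only: the randomized BFGS analyzed in the paper replaces this deterministic update with a randomly sketched one, and the unit step and curvature check here are simplifications, not the paper's method.

import numpy as np

def bfgs(grad, x0, tol=1e-8, max_iter=100):
    """Classical BFGS maintaining an inverse-Hessian estimate H.

    Illustrative sketch: real implementations use a line search in
    place of the unit step taken below.
    """
    n = x0.size
    x = x0.copy()
    H = np.eye(n)                # inverse-Hessian estimate
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g               # quasi-Newton search direction
        x_new = x + d            # unit step (simplification)
        g_new = grad(x_new)
        s = x_new - x            # displacement
        y = g_new - g            # change in gradient
        sy = s @ y
        if sy > 1e-12:           # curvature condition; skip update otherwise
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)  # BFGS update of H
        x, g = x_new, g_new
    return x

The update enforces the secant equation H y = s, so H accumulates curvature information from gradient differences alone; the randomized variant studied in the paper applies this kind of update along random sketching directions, which is what the semi-local convergence analysis covers.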

