Frank-Wolfe optimization for deep networks

06/06/2020
by Jakob Stigenberg, et al.

Deep neural networks are today one of the most popular choices for classification, regression and function approximation. However, training such deep networks is far from trivial, as there are often millions of parameters to tune. Typically, one uses an optimization method that hopefully converges towards some minimum; the most popular and successful methods are based on gradient descent. In this paper, another optimization method, Frank-Wolfe optimization, is applied to a small deep network and compared to gradient descent. Although the optimization does converge, it does so slowly, nowhere near the speed of gradient descent. Furthermore, in a stochastic setting, the optimization becomes very unstable and does not appear to converge unless a line-search approach is used.
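For readers unfamiliar with the method, below is a minimal sketch of the Frank-Wolfe (conditional gradient) update the abstract refers to. The constraint set (an L-infinity ball), the objective, and the `grad_f` callable are illustrative assumptions, not the authors' actual experimental setup; the classic diminishing step size shown here is the one a line search would replace in the stochastic setting the abstract describes.

```python
import numpy as np

def frank_wolfe(grad_f, x0, radius=1.0, num_iters=100):
    """Minimize f over the L-infinity ball {x : ||x||_inf <= radius}
    with the Frank-Wolfe (conditional gradient) method.

    grad_f -- callable returning the gradient of f at a point (assumed)
    x0     -- starting point inside the constraint set
    """
    x = x0.copy()
    for t in range(num_iters):
        g = grad_f(x)
        # Linear minimization oracle: argmin_{||s||_inf <= radius} <s, g>.
        # For the L-infinity ball this is simply -radius * sign(g).
        s = -radius * np.sign(g)
        # Classic diminishing step size 2/(t+2); the paper reports that a
        # line search is needed here for stable stochastic convergence.
        gamma = 2.0 / (t + 2.0)
        # Move along the segment towards the oracle vertex; the iterate
        # stays feasible because it is a convex combination of feasible points.
        x = x + gamma * (s - x)
    return x

# Example: minimize 0.5 * ||x - b||^2 inside the unit L-infinity ball.
b = np.array([0.5, -2.0, 0.3])
x_star = frank_wolfe(lambda x: x - b, x0=np.zeros(3))
print(x_star)  # approaches the clipped solution [0.5, -1.0, 0.3]
```

Unlike projected gradient descent, each step only requires solving a linear problem over the constraint set, which is why the method is attractive when projections are expensive.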
