Accelerating Natural Gradient with Higher-Order Invariance

03/04/2018
by Yang Song, et al.

An appealing property of the natural gradient is that it is invariant to arbitrary differentiable reparameterizations of the model. However, this invariance property requires infinitesimal steps and is lost in practical implementations with small but finite step sizes. In this paper, we study invariance properties from a combined perspective of Riemannian geometry and numerical differential equation solving. We define the order of invariance of a numerical method to be its convergence order to an invariant solution. We propose to use higher-order integrators and corrections based on geodesics to obtain more invariant optimization trajectories. We prove the numerical convergence properties of geodesic-corrected updates and show that they can be as computationally efficient as plain natural gradient. Experimentally, we demonstrate that invariance leads to faster training, and our techniques improve on traditional natural gradient in optimizing synthetic objectives as well as deep classifiers and autoencoders.
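To make the idea of "higher-order integrators" concrete, here is a minimal NumPy sketch (not the authors' code) contrasting the plain natural gradient update, which is forward Euler on the natural gradient flow, with a midpoint (second-order Runge-Kutta) step of the kind the abstract alludes to. The callables `loss_grad` and `fisher`, returning the loss gradient and Fisher information matrix at the current parameters, are hypothetical placeholders.

```python
import numpy as np

def natural_gradient(theta, loss_grad, fisher):
    """Vector field of the natural gradient flow: -F(theta)^{-1} grad L(theta)."""
    return -np.linalg.solve(fisher(theta), loss_grad(theta))

def euler_step(theta, h, loss_grad, fisher):
    """Plain natural gradient update (first order; invariance only holds as h -> 0)."""
    return theta + h * natural_gradient(theta, loss_grad, fisher)

def midpoint_step(theta, h, loss_grad, fisher):
    """Midpoint (second-order Runge-Kutta) integrator: evaluate the natural
    gradient at a half step, then take the full step with that direction."""
    v = natural_gradient(theta, loss_grad, fisher)
    theta_mid = theta + 0.5 * h * v
    v_mid = natural_gradient(theta_mid, loss_grad, fisher)
    return theta + h * v_mid
```

The midpoint step costs one extra Fisher solve per update but tracks the continuous (reparameterization-invariant) natural gradient flow to second order in the step size, which is the sense in which such integrators yield "more invariant" trajectories; the geodesic corrections discussed in the paper pursue the same goal at lower cost.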

