Beyond Lazy Training for Over-parameterized Tensor Decomposition

10/22/2020
by   Xiang Wang, et al.

Over-parameterization is an important technique in training neural networks. In both theory and practice, training a larger network allows the optimization algorithm to avoid bad local optima. In this paper we study a closely related tensor decomposition problem: given an l-th order tensor in (R^d)^{⊗l} of rank r (where r ≪ d), can variants of gradient descent find a rank-m decomposition where m > r? We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least m = Ω(d^{l-1}), while a variant of gradient descent can find an approximate tensor when m = O^*(r^{2.5l} log d). Our results show that gradient descent on an over-parameterized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
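To make the problem setup concrete, the sketch below (not the paper's algorithm, and with no convergence guarantees implied) fits a symmetric third-order rank-r tensor with m > r components by plain gradient descent on the squared Frobenius loss. The dimensions, initialization scale, step size, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 10, 2, 8          # ambient dimension d, true rank r, over-parameterized rank m > r (order l = 3)

# Ground-truth symmetric third-order tensor of rank r:  T = sum_j a_j (x) a_j (x) a_j
A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)
T = np.einsum('ji,jk,jl->ikl', A, A, A)

# Over-parameterized components w_1, ..., w_m with small random initialization
W = 0.1 * rng.standard_normal((m, d))

lr, steps = 0.02, 5000      # illustrative, untuned hyperparameters
for t in range(steps):
    T_hat = np.einsum('ji,jk,jl->ikl', W, W, W)        # current rank-m approximation
    R = T_hat - T                                       # residual tensor
    # For L(W) = 0.5 * ||T_hat - T||_F^2 and symmetric R, the gradient is dL/dw_j = 3 * R(w_j, w_j, .)
    grad = 3.0 * np.einsum('ikl,ji,jk->jl', R, W, W)
    W -= lr * grad
    if t % 1000 == 0:
        print(f"step {t:5d}   relative error {np.linalg.norm(R) / np.linalg.norm(T):.4f}")
```

Plain gradient descent on this objective is only a stand-in; the paper analyzes a variant of gradient descent and shows the over-parameterization needed differs sharply between the lazy training regime and beyond it.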
