On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition

02/20/2018
by Marco Mondelli, et al.

We establish connections between the problem of learning a two-layers neural network with good generalization error and tensor decomposition. We consider a model with input x ∈ R^d, r hidden units with weights {w_i}_{1 ≤ i ≤ r}, and output y ∈ R, i.e., y = ∑_{i=1}^r σ(⟨x, w_i⟩), where σ denotes the activation function. First, we show that, if we cannot learn the weights {w_i}_{1 ≤ i ≤ r} accurately, then the neural network does not generalize well. More specifically, the generalization error is close to that of a trivial predictor with access only to the norm of the input. This result holds for any activation function, and it requires that the weights are roughly isotropic and the input distribution is Gaussian, which is a typical assumption in the theoretical literature. Then, we show that the problem of learning the weights {w_i}_{1 ≤ i ≤ r} is at least as hard as the problem of tensor decomposition. This result holds for any input distribution and assumes that the activation function is a polynomial whose degree is related to the order of the tensor to be decomposed. By putting everything together, we prove that learning a two-layers neural network that generalizes well is at least as hard as tensor decomposition. It has been observed that neural network models with more parameters than training samples often generalize well, even if the problem is highly underdetermined. This means that the learning algorithm does not estimate the weights accurately and yet is able to yield a good generalization error. This paper shows that such a phenomenon cannot occur when the input distribution is Gaussian and the weights are roughly isotropic. We also provide numerical evidence supporting our theoretical findings.
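The reduction to tensor decomposition is easiest to see for a polynomial activation. As a hedged illustration (not code from the paper), the following NumPy sketch draws data from the model y = ∑_{i=1}^r σ(⟨x, w_i⟩) with Gaussian inputs, takes σ to be the third Hermite polynomial He_3(u) = u^3 - 3u with unit-norm weights (both choices are assumptions made here for concreteness), and checks that the empirical third-order moment E[y · x ⊗ x ⊗ x] approaches 6 ∑_i w_i^{⊗3}, a rank-r tensor whose decomposition would recover the weights.

import numpy as np

# Sketch, not the paper's code: Monte Carlo check of the moment-to-tensor link
# for y = sum_i sigma(<x, w_i>) with Gaussian input x.
# Assumptions made here: unit-norm weights w_i and activation He_3(u) = u^3 - 3u,
# for which E[y * x (x) x (x) x] = 6 * sum_i w_i^{(x)3} (standard Hermite identity).

rng = np.random.default_rng(0)
d, r, n = 10, 4, 200_000

# Random unit-norm weight vectors (rows of W).
W = rng.standard_normal((r, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def he3(u):
    """Third Hermite polynomial, used here as the polynomial activation."""
    return u**3 - 3.0 * u

# Gaussian inputs and noiseless labels y = sum_i sigma(<x, w_i>).
X = rng.standard_normal((n, d))
y = he3(X @ W.T).sum(axis=1)

# Empirical third-order moment tensor: (1/n) sum_n y_n * x_n (x) x_n (x) x_n.
T_hat = np.einsum('n,ni,nj,nk->ijk', y, X, X, X) / n

# Population value under the Hermite identity: 6 * sum_i w_i (x) w_i (x) w_i.
# Decomposing this rank-r tensor is what would recover the weights.
T_true = 6.0 * np.einsum('ri,rj,rk->ijk', W, W, W)

rel_err = np.linalg.norm(T_hat - T_true) / np.linalg.norm(T_true)
print(f"relative error of the moment estimate: {rel_err:.3f}")

The constant 6 comes from the Hermite moment identity for unit-norm weights; for a general polynomial activation, the abstract's reduction links learning the weights to decomposing a tensor whose order matches the polynomial degree.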
