On the Convergence Rate of Training Recurrent Neural Networks

10/29/2018
by Zeyuan Allen-Zhu et al.

Despite the huge success of deep learning, our understanding of how non-convex neural networks are trained remains rather limited. Most existing theoretical works tackle only neural networks with one hidden layer, and little is known about multi-layer networks. Recurrent neural networks (RNNs) are special multi-layer networks used extensively in natural language processing. They are particularly hard to analyze, compared to feedforward networks, because the weight parameters are reused across the entire time horizon. We provide arguably the first theoretical understanding of the convergence speed of training RNNs. Specifically, when the number of neurons is sufficiently large --- meaning polynomial in the training data size and in the time horizon --- and when the weights are randomly initialized, we show that gradient descent and stochastic gradient descent both minimize the training loss at a linear convergence rate, that is, ε ∝ e^{-Ω(T)}.
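The linear-rate claim ε ∝ e^{-Ω(T)} means the training loss shrinks by a constant factor per step. A minimal sketch of that behavior, using gradient descent on a hypothetical overparameterized least-squares toy problem rather than the paper's RNN setting, looks like:

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's RNN analysis):
# gradient descent on an overparameterized least-squares objective
# exhibits linear convergence, i.e. loss_T <= c^T * loss_0 for some c < 1.
rng = np.random.default_rng(0)
n, m = 10, 200                  # n samples, m >> n parameters (overparameterized)
X = rng.standard_normal((n, m)) / np.sqrt(m)
y = rng.standard_normal(n)

w = np.zeros(m)
eta = 0.5                       # step size chosen small enough for this random X
losses = []
for t in range(200):
    r = X @ w - y                       # residual
    losses.append(0.5 * np.dot(r, r))   # training loss 0.5 * ||Xw - y||^2
    w -= eta * (X.T @ r)                # gradient step

# Geometric decay: each step contracts the loss by a factor bounded below 1.
ratios = [losses[t + 1] / losses[t] for t in range(50)]
print(max(ratios) < 1.0)        # loss shrinks every step
```

In this convex toy case the per-step contraction factor is governed by the eigenvalues of XX^T; the paper's contribution is showing an analogous geometric decrease for the non-convex RNN training objective when the network width is polynomially large.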
