When Recurrent Models Don't Need To Be Recurrent

05/25/2018
by John Miller, et al.

We prove that stable recurrent neural networks are well approximated by feed-forward networks for the purposes of both inference and training by gradient descent. Our result applies to a broad range of non-linear recurrent neural networks under a natural stability condition, which we also observe to be necessary. Complementing our theoretical findings, we verify the conclusions of our theory on both real and synthetic tasks. Furthermore, we demonstrate that recurrent models satisfying the stability assumption can achieve excellent performance on real sequence learning tasks.
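
To make the stability condition and the feed-forward approximation concrete, here is a minimal numerical sketch (not the paper's code; all names are illustrative). For a vanilla tanh RNN h_t = tanh(W h_{t-1} + U x_t), the state-update map is a contraction in h whenever the spectral norm of W is below 1, since tanh is 1-Lipschitz. Under that condition, the hidden state at time T is well approximated by a "truncated" feed-forward model that reads only the last k inputs, with error decaying geometrically in k:

```python
import numpy as np

def spectral_norm(W):
    """Largest singular value of W."""
    return np.linalg.norm(W, ord=2)

def rnn_state(W, U, xs, h0=None):
    """Final state of the tanh RNN h_t = tanh(W h_{t-1} + U x_t)."""
    h = np.zeros(W.shape[0]) if h0 is None else h0
    for x in xs:
        h = np.tanh(W @ h + U @ x)
    return h

def truncated_rnn_state(W, U, xs, k):
    """Feed-forward surrogate: run the same cell on only the last k
    inputs, starting from the zero state. For a stable cell this
    approximates the full recurrent state."""
    return rnn_state(W, U, xs[-k:])

rng = np.random.default_rng(0)
d, n, T = 8, 4, 200
W = rng.normal(size=(d, d))
W *= 0.9 / spectral_norm(W)      # enforce ||W||_2 < 1: stable for tanh
U = rng.normal(size=(d, n))
xs = rng.normal(size=(T, n))

h_full = rnn_state(W, U, xs)
for k in (5, 10, 20, 40):
    err = np.linalg.norm(h_full - truncated_rnn_state(W, U, xs, k))
    print(f"k={k:3d}  ||h_T - h_T^(k)|| = {err:.2e}")  # shrinks geometrically in k
```

Running this prints a truncation error that drops by orders of magnitude as k grows, illustrating why a stable recurrent model can be traded for a feed-forward one with a fixed context window.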
