Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies

02/24/2017
by Robert DiPietro, et al.

Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem by facilitating long-term gradient flow through nearly-additive connections between adjacent states, as originally introduced in long short-term memory (LSTM). In this paper, we investigate a different approach for encouraging gradient flow, based on NARX (nonlinear autoregressive with exogenous inputs) RNNs, which generalize typical RNNs by allowing direct connections from the distant past. Analytically, we 1) generalize previous gradient decompositions for typical RNNs to general NARX RNNs and 2) formally connect gradient flow to edges along paths. We then introduce an example architecture based on these ideas, and we demonstrate that this architecture matches or exceeds LSTM performance on 5 diverse tasks. Finally, we describe many avenues for future work, including the exploration of other NARX RNN architectures, the possible combination of mechanisms from LSTM and NARX RNNs, and the adoption of recent LSTM-based advances to NARX RNN architectures.
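For intuition, the sketch below illustrates the generic NARX recurrence described in the abstract, in which the new hidden state depends directly on several delayed states rather than only on the immediately preceding one. This is a minimal NumPy illustration, not the specific architecture introduced in the paper; the delay count D, the tanh nonlinearity, and all names (narx_rnn_step, W_h_list, and so on) are illustrative assumptions.

```python
import numpy as np

def narx_rnn_step(x_t, past_states, W_x, W_h_list, b):
    """One step of a generic NARX RNN.

    Unlike a typical RNN, the new state depends on several earlier
    states, so gradients have short paths to the distant past.

    x_t         : input at time t, shape (input_dim,)
    past_states : the D most recent hidden states, past_states[d-1] = h_{t-d}
    W_x         : input weights, shape (hidden_dim, input_dim)
    W_h_list    : D recurrent weight matrices, one per delay
    b           : bias, shape (hidden_dim,)
    """
    pre_activation = W_x @ x_t + b
    for W_h, h_delayed in zip(W_h_list, past_states):
        pre_activation += W_h @ h_delayed
    return np.tanh(pre_activation)

# Toy usage: 3 delays, random weights, a short input sequence.
rng = np.random.default_rng(0)
input_dim, hidden_dim, D, T = 4, 8, 3, 10
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h_list = [rng.normal(scale=0.1, size=(hidden_dim, hidden_dim)) for _ in range(D)]
b = np.zeros(hidden_dim)

history = [np.zeros(hidden_dim) for _ in range(D)]  # delayed states, all zero initially
for t in range(T):
    x_t = rng.normal(size=input_dim)
    h_t = narx_rnn_step(x_t, history, W_x, W_h_list, b)
    history = [h_t] + history[:-1]  # shift the delay line
```

With D = 1 this reduces to a standard RNN; larger D adds the direct connections from the distant past that the paper's gradient analysis exploits.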
