Parallelizing Linear Recurrent Neural Nets Over Sequence Length

09/12/2017
by Eric Martin, et al.

Recurrent neural networks (RNNs) are widely used to model sequential data, but their non-linear dependencies between sequence elements prevent parallelizing training over sequence length. We show that the training of RNNs with only linear sequential dependencies can be parallelized over sequence length using the parallel scan algorithm, leading to rapid training on long sequences with small minibatch size. We abstract prior linear sequence models into a new framework of linear surrogate RNNs and develop a linear surrogate long short-term memory (LS-LSTM) powered by a parallel linear recurrence CUDA kernel we implemented. We evaluate the LS-LSTM on a long-sequence noisy autoregressive task and find that the LS-LSTM achieves slightly superior train and test performance to a similarly sized LSTM in 4x less training time. We analyze latency and throughput of the LS-LSTM and find that the LS-LSTM reaches up to 175x the throughput of the LSTM in the small-minibatch, long-sequence regime.
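
The key observation in the abstract is that a linear recurrence h_t = a_t * h_{t-1} + b_t is a composition of affine maps, and affine-map composition is associative, so all T states can be computed with a parallel (associative) scan rather than a sequential loop. The sketch below illustrates that idea in JAX; the paper's implementation is a custom CUDA kernel, and the function and variable names here are illustrative, not taken from the paper.

```python
import jax
import jax.numpy as jnp

def parallel_linear_recurrence(decays, impulses, h0):
    """Compute h_t = decays[t] * h_{t-1} + impulses[t] in parallel over t.

    decays, impulses: arrays of shape (T, d); h0: array of shape (d,).
    Each step is an affine map h -> a*h + b; composing two such maps is
    associative, so prefix compositions can be evaluated with a scan.
    """
    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        # later(earlier(h)) = a_r*(a_l*h + b_l) + b_r = (a_l*a_r)*h + (a_r*b_l + b_r)
        return a_l * a_r, a_r * b_l + b_r

    a_cum, b_cum = jax.lax.associative_scan(combine, (decays, impulses))
    return a_cum * h0 + b_cum  # broadcast h0 over the time axis

# Tiny check against the sequential recurrence.
T, d = 8, 3
a = jax.random.uniform(jax.random.PRNGKey(0), (T, d))
b = jax.random.normal(jax.random.PRNGKey(1), (T, d))
h0 = jnp.zeros(d)

h_parallel = parallel_linear_recurrence(a, b, h0)

h, hs = h0, []
for t in range(T):
    h = a[t] * h + b[t]
    hs.append(h)
assert jnp.allclose(h_parallel, jnp.stack(hs), atol=1e-5)
```

The sequential loop costs O(T) dependent steps, while the associative scan needs only O(log T) dependent steps across the sequence, which is what enables fast training on long sequences even at small minibatch size.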

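The abstract also introduces the LS-LSTM as a linear surrogate of the LSTM, but does not spell out the cell. The sketch below is one plausible instantiation, assuming the gates are computed from the current input alone (hypothetical weights W_i, W_f, W_o, W_u), so that the only sequential dependency, the cell-state update, is linear and can be evaluated with the scan above; the authors' exact surrogate may differ.

```python
import jax
import jax.numpy as jnp

def ls_lstm_layer(params, x, c0):
    """Illustrative linear-surrogate LSTM layer (hypothetical parametrization).

    x: inputs of shape (T, d_in); c0: initial cell state of shape (d,).
    Gates depend only on the current input x_t, so the single sequential
    dependency is the linear cell-state update c_t = f_t * c_{t-1} + i_t * u_t,
    which an associative scan evaluates in parallel over t.
    """
    W_i, W_f, W_o, W_u, b_i, b_f, b_o, b_u = params
    i = jax.nn.sigmoid(x @ W_i + b_i)   # input gate
    f = jax.nn.sigmoid(x @ W_f + b_f)   # forget gate: the per-step decay
    o = jax.nn.sigmoid(x @ W_o + b_o)   # output gate
    u = jnp.tanh(x @ W_u + b_u)         # candidate cell update

    def combine(left, right):
        a_l, b_l = left
        a_r, b_r = right
        return a_l * a_r, a_r * b_l + b_r

    a_cum, b_cum = jax.lax.associative_scan(combine, (f, i * u))
    c = a_cum * c0 + b_cum              # all cell states, computed in parallel
    h = o * jnp.tanh(c)                 # nonlinearity applied pointwise, not recurrently
    return h, c

# Example usage with random parameters (shapes only, for illustration).
T, d_in, d = 16, 5, 4
keys = jax.random.split(jax.random.PRNGKey(0), 5)
params = tuple(jax.random.normal(k, (d_in, d)) * 0.1 for k in keys[:4]) + \
         tuple(jnp.zeros(d) for _ in range(4))
x = jax.random.normal(keys[4], (T, d_in))
h, c = ls_lstm_layer(params, x, jnp.zeros(d))
print(h.shape, c.shape)  # (16, 4) (16, 4)
```
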