TransfoRNN: Capturing the Sequential Information in Self-Attention Representations for Language Modeling

04/04/2021
by   Tze Yuang Chong, et al.

In this paper, we describe the use of recurrent neural networks to capture sequential information from the self-attention representations and thereby improve the Transformer. Although the self-attention mechanism provides a means to exploit long context, the sequential information, i.e. the arrangement of tokens, is not explicitly captured. We propose to cascade recurrent neural networks onto the Transformer, a model we refer to as TransfoRNN, to capture this sequential information. We found that TransfoRNN models consisting of only a shallow Transformer stack suffice to give comparable, if not better, performance than a deeper Transformer model. Evaluated on the Penn Treebank and WikiText-2 corpora, the proposed TransfoRNN model achieved lower perplexities with fewer model parameters: on the Penn Treebank corpus, perplexity was reduced by up to 5.5% with the model size reduced by up to 10.5%, and on the WikiText-2 corpus, perplexity was reduced by up to 2.2%. Additionally, the TransfoRNN model was applied to the LibriSpeech speech recognition task and showed results comparable to the Transformer models.
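The cascaded architecture described above can be illustrated with a minimal sketch: a shallow Transformer encoder whose self-attention representations feed a recurrent network before the output projection. The layer counts, hidden sizes, the choice of an LSTM, and the omission of positional encodings below are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of the TransfoRNN idea: a shallow Transformer stack whose
# self-attention representations are passed through a recurrent network so that
# token order is modeled explicitly. Layer counts, hidden sizes, the choice of
# LSTM, and the omission of positional encodings are illustrative assumptions,
# not the authors' exact configuration.
import torch
import torch.nn as nn


class TransfoRNNLM(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_heads=4,
                 n_transformer_layers=2, n_rnn_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Shallow Transformer stack producing self-attention representations.
        self.transformer = nn.TransformerEncoder(
            encoder_layer, num_layers=n_transformer_layers)
        # Recurrent layers cascaded on top to capture sequential information.
        self.rnn = nn.LSTM(d_model, d_model, num_layers=n_rnn_layers,
                           batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) token ids
        seq_len = tokens.size(1)
        # Causal mask so each position attends only to itself and earlier tokens.
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        x = self.embed(tokens)
        x = self.transformer(x, mask=causal_mask)
        x, _ = self.rnn(x)
        return self.out(x)  # (batch, seq_len, vocab_size) next-token logits


# Toy usage: next-token logits for a batch of two 35-token sequences.
model = TransfoRNNLM(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 35)))
print(logits.shape)  # torch.Size([2, 35, 10000])
```

In this sketch the recurrent layer sits after the Transformer block, so the self-attention output is re-read in order by the LSTM; this is one straightforward way to realize the "cascade" described in the abstract.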
