Pay Attention when Required

09/09/2020
by Swetha Mandava, et al.

Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning. In this paper, we explored trade-offs and the ordering of these blocks to improve upon the current Transformer architecture and proposed the PAR Transformer. It needs 35% lower compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, and retains the perplexity on the WikiText-103 language modelling benchmark. We further validated our results on the text8 and enwiki8 datasets, as well as on the BERT model.
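
The core idea can be illustrated with a short sketch: build the layer stack from a pattern of feed-forward and self-attention blocks, so that attention is used only where required. This is a minimal illustration, not the authors' implementation; all module names, dimensions, and the example patterns below are assumptions chosen for clarity.

```python
# Minimal sketch of a pattern-configurable Transformer stack (illustrative only).
# "s" = self-attention block (more expensive), "f" = feed-forward block (cheaper).
import torch
import torch.nn as nn


class SelfAttentionBlock(nn.Module):
    """Pre-norm multi-head self-attention block with a residual connection."""

    def __init__(self, d_model: int, n_heads: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return x + out


class FeedForwardBlock(nn.Module):
    """Pre-norm position-wise feed-forward block with a residual connection."""

    def __init__(self, d_model: int, d_ff: int, dropout: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(),
            nn.Dropout(dropout), nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(self.norm(x))


class PatternTransformer(nn.Module):
    """Stacks blocks according to a pattern string, e.g. "sfsfsf" (interleaved,
    as in a vanilla Transformer) or an attention-light, PAR-style pattern."""

    def __init__(self, pattern: str, d_model: int = 512, n_heads: int = 8,
                 d_ff: int = 2048):
        super().__init__()
        blocks = []
        for kind in pattern:
            if kind == "s":
                blocks.append(SelfAttentionBlock(d_model, n_heads))
            elif kind == "f":
                blocks.append(FeedForwardBlock(d_model, d_ff))
            else:
                raise ValueError(f"unknown block type: {kind!r}")
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


if __name__ == "__main__":
    x = torch.randn(2, 128, 512)                 # (batch, sequence, d_model)
    baseline = PatternTransformer("sf" * 6)      # interleaved: 6 attention blocks
    par_style = PatternTransformer("ss" + "f" * 10)  # far fewer attention blocks
    print(baseline(x).shape, par_style(x).shape)
```

Replacing most "s" blocks with "f" blocks in the pattern reduces compute, since the feed-forward blocks avoid the quadratic cost of attention over the sequence; the paper's contribution is showing which orderings and proportions preserve quality.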
