Fine-Tuning Pre-trained Transformers into Decaying Fast Weights

10/09/2022
by Huanru Henry Mao, et al.

Autoregressive Transformers are strong language models but incur O(T) complexity during per-token generation due to the self-attention mechanism. Recent work proposes kernel-based methods to approximate causal self-attention by replacing it with recurrent formulations with various update rules and feature maps to achieve O(1) time and memory complexity. We explore these approaches and find that they are unnecessarily complex, and propose a simple alternative - decaying fast weights - that runs fast on GPU, outperforms prior methods, retains 99% of attention's performance, and remains competitive on WikiText-103 against more complex attention substitutes.
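
The abstract does not spell out the update rule, but decaying fast-weight layers are commonly written as S_t = λ ⊙ S_{t-1} + k_t v_t^T with read-out o_t = S_t^T q_t, which keeps the per-token cost independent of the sequence length T. The PyTorch sketch below is a minimal illustration of that recurrence under those assumptions; the function name, tensor shapes, and sigmoid-parameterized decay are illustrative choices, not the authors' implementation.

```python
import torch

def decaying_fast_weight_step(state, q_t, k_t, v_t, decay):
    """One per-token update of a decaying fast-weight memory (illustrative).

    state: (d_k, d_v) fast-weight matrix carried across time steps
    q_t, k_t: (d_k,) query/key feature vectors for the current token
    v_t:      (d_v,) value vector for the current token
    decay:    (d_k,) per-feature decay rates in (0, 1)
    """
    # Decay the old associations, then write the new key-value outer product.
    state = decay.unsqueeze(-1) * state + torch.outer(k_t, v_t)
    # Read out with the current query; cost is O(d_k * d_v), independent of T.
    o_t = state.t() @ q_t
    return o_t, state

# Usage: generate outputs one token at a time with constant memory.
d_k, d_v, T = 64, 64, 128
state = torch.zeros(d_k, d_v)
decay = torch.sigmoid(torch.randn(d_k))  # learned parameters in practice
for q_t, k_t, v_t in zip(torch.randn(T, d_k), torch.randn(T, d_k), torch.randn(T, d_v)):
    o_t, state = decaying_fast_weight_step(state, q_t, k_t, v_t, decay)
```

In contrast, standard causal self-attention must attend over a key-value cache that grows with the sequence, giving the O(T) per-token cost the abstract refers to; the recurrence above replaces that growing cache with a fixed-size state.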
