Armour: Generalizable Compact Self-Attention for Vision Transformers

08/03/2021
by Lingchuan Meng, et al.

Attention-based transformer networks have demonstrated promising potential as their applications extend from natural language processing to vision. However, despite recent improvements such as sub-quadratic attention approximation and various training enhancements, compact vision transformers using regular attention still fall short of their convnet counterparts in accuracy, model size, and throughput. This paper introduces a compact self-attention mechanism that is fundamental and highly generalizable. The proposed method reduces redundancy and improves efficiency on top of existing attention optimizations. We show its drop-in applicability to both the regular attention mechanism and some of the most recent variants in vision transformers. As a result, we produce smaller and faster models with the same or better accuracies.
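For reference, the "regular attention" baseline mentioned above is the standard quadratic multi-head self-attention used in ViT-style models. The sketch below is only an illustration of that baseline in PyTorch (module name, head count, and tensor shapes are illustrative assumptions); it does not reproduce Armour's compact attention itself.

```python
import torch
import torch.nn as nn

class RegularSelfAttention(nn.Module):
    """Standard (quadratic) multi-head self-attention as used in ViT-style models.
    Shown only as the baseline the paper compares against, not the proposed method."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)   # fused Q, K, V projection
        self.proj = nn.Linear(dim, dim)      # output projection

    def forward(self, x):
        B, N, C = x.shape                    # batch, tokens, channels
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale     # O(N^2) attention map
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Example: 196 patch tokens with 384-dim embeddings (DeiT-S-like shapes, assumed here)
tokens = torch.randn(2, 196, 384)
print(RegularSelfAttention(384, num_heads=6)(tokens).shape)  # torch.Size([2, 196, 384])
```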
