AxWin Transformer: A Context-Aware Vision Transformer Backbone with Axial Windows

05/02/2023
by   Fangjian Lin, et al.

Recently, Transformers have shown strong performance on several vision tasks due to their powerful modeling capabilities. To reduce the quadratic complexity of attention, some notable works restrict attention to local regions or extend it with axial interactions. However, these methods often lack interaction between local and global information and fail to balance coarse- and fine-grained information. To address this problem, we propose AxWin Attention, which models context information in both local windows and axial views. Based on AxWin Attention, we develop a context-aware vision transformer backbone, named AxWin Transformer, which outperforms state-of-the-art methods in classification as well as downstream segmentation and detection tasks.
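The abstract does not give implementation details, but the core idea of combining local window attention with axial (row/column) attention can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the class name `AxWinAttention`, the `window_size` parameter, the concatenation-plus-projection fusion, and the use of unprojected single-head attention (no learned Q/K/V) are simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def window_attention(x, window_size):
    """Self-attention within non-overlapping local windows.
    x: (B, H, W, C), with H and W divisible by window_size."""
    B, H, W, C = x.shape
    w = window_size
    # Partition the feature map into (B * num_windows, w*w, C) token groups.
    x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
    x = x.reshape(-1, w * w, C)
    attn = F.softmax(x @ x.transpose(-2, -1) / C ** 0.5, dim=-1)
    x = (attn @ x).reshape(B, H // w, W // w, w, w, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)


def axial_attention(x):
    """Self-attention along rows, then along columns (axial views)."""
    B, H, W, C = x.shape
    rows = x.reshape(B * H, W, C)                       # attend within each row
    attn = F.softmax(rows @ rows.transpose(-2, -1) / C ** 0.5, dim=-1)
    x = (attn @ rows).reshape(B, H, W, C)
    cols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)   # attend within each column
    attn = F.softmax(cols @ cols.transpose(-2, -1) / C ** 0.5, dim=-1)
    return (attn @ cols).reshape(B, W, H, C).permute(0, 2, 1, 3)


class AxWinAttention(nn.Module):
    """Hypothetical fusion of local-window and axial attention outputs."""

    def __init__(self, dim, window_size=7):
        super().__init__()
        self.window_size = window_size
        self.proj = nn.Linear(2 * dim, dim)  # assumed fusion: concat + linear projection

    def forward(self, x):                    # x: (B, H, W, C)
        local_ctx = window_attention(x, self.window_size)
        axial_ctx = axial_attention(x)
        return self.proj(torch.cat([local_ctx, axial_ctx], dim=-1))


if __name__ == "__main__":
    x = torch.randn(1, 14, 14, 64)           # H and W divisible by window_size
    out = AxWinAttention(64, window_size=7)(x)
    print(out.shape)                          # torch.Size([1, 14, 14, 64])
```

The window branch captures fine-grained local context at linear cost in the number of windows, while the axial branch propagates information across an entire row and column, giving each token a coarse global view; how the two branches are actually fused in AxWin Transformer is described in the full paper.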

