Deepening Neural Networks Implicitly and Locally via Recurrent Attention Strategy

10/27/2022
by Shanshan Zhong, et al.

A growing body of empirical and theoretical evidence shows that deepening neural networks can effectively improve their performance under suitable training settings. However, deepening the backbone of a neural network inevitably and significantly increases its computation and parameter count. To mitigate these problems, we propose a simple-yet-effective Recurrent Attention Strategy (RAS), which implicitly increases the depth of neural networks with lightweight attention modules through local parameter sharing. Extensive experiments on three widely used benchmark datasets demonstrate that RAS improves the performance of neural networks with only a slight increase in parameter count and computation, performing favorably against other well-known attention modules.
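
To illustrate the core idea described in the abstract (not the authors' released implementation), the minimal PyTorch sketch below applies a single lightweight attention module recurrently to the same feature map. Reusing one module instance across iterations is the "local parameter sharing" that adds effective depth without adding parameters per step. The SE-style channel attention, the reduction ratio, and the iteration count are assumptions chosen for the example, not details taken from the paper.

```python
import torch
import torch.nn as nn


class RecurrentAttentionBlock(nn.Module):
    """Hypothetical sketch of a recurrent attention strategy:
    a lightweight (SE-style) attention module applied several times
    to the same feature map, with all iterations sharing parameters."""

    def __init__(self, channels: int, reduction: int = 16, num_iterations: int = 3):
        super().__init__()
        self.num_iterations = num_iterations
        # One shared attention module; its parameters are reused every iteration,
        # so extra "depth" costs no extra parameters.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each pass re-weights the features with the same attention module,
        # implicitly deepening the network.
        for _ in range(self.num_iterations):
            x = x * self.attention(x)
        return x


if __name__ == "__main__":
    block = RecurrentAttentionBlock(channels=64, num_iterations=3)
    feat = torch.randn(2, 64, 32, 32)
    out = block(feat)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Such a block would typically be inserted after a backbone stage in place of a single-pass attention module; because the iterations share weights, the parameter overhead is that of one lightweight module regardless of how many recurrent passes are applied.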
