ReZero is All You Need: Fast Convergence at Large Depth

03/10/2020
by Thomas Bachlechner, et al.

Deep networks have enabled significant performance gains across domains, but they often suffer from vanishing/exploding gradients. This is especially true for Transformer architectures, where depth beyond 12 layers is difficult to train without large datasets and computational budgets. In general, we find that inefficient signal propagation impedes learning in deep networks. In Transformers, multi-head self-attention is the main cause of this poor signal propagation. To facilitate deep signal propagation, we propose ReZero, a simple change to the architecture that initializes an arbitrary layer as the identity map, using a single additional learned parameter per layer. We apply this technique to language modeling and find that we can easily train ReZero-Transformer networks over a hundred layers. When applied to 12 layer Transformers, ReZero converges 56% faster on enwiki8. ReZero applies beyond Transformers to other residual networks, enabling 1,500% faster convergence for deep fully connected networks and 32% faster convergence for a ResNet-56 trained on CIFAR 10.
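To make the idea concrete, below is a minimal PyTorch sketch (not the authors' reference implementation) of a Transformer layer with ReZero residual connections. The class name, hyperparameters, and the choice of a single alpha shared between the attention and feed-forward sublayers are assumptions for illustration; the key point is that alpha is initialized to zero, so the layer starts as the identity map.

```python
import torch
import torch.nn as nn


class ReZeroTransformerLayer(nn.Module):
    """Sketch of a Transformer layer with ReZero residual connections.

    Each residual branch is scaled by a single learned scalar ``alpha``
    initialized to zero, so at initialization the layer is the identity map
    and the signal propagates unchanged regardless of depth.
    """

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        # The single additional learned parameter per layer, initialized to 0.
        # (Assumed here to be shared by both sublayers; no LayerNorm is used.)
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x + alpha * F(x): each sublayer contributes nothing at initialization.
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = x + self.alpha * attn_out
        x = x + self.alpha * self.ff(x)
        return x


# Usage sketch: a deep stack is trainable from scratch because every layer
# starts out as the identity.
layers = nn.Sequential(*[ReZeroTransformerLayer() for _ in range(100)])
tokens = torch.randn(2, 16, 512)   # (batch, sequence, d_model)
out = layers(tokens)               # equals `tokens` at initialization
```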
