Explicitly Modeling Adaptive Depths for Transformer

04/27/2020
by Yijin Liu, et al.

The vanilla Transformer performs a fixed number of computations over every word in a sentence, regardless of whether the word is easy or hard to learn. In terms of both computational efficiency and ease of learning, it is preferable to vary the number of computations dynamically according to the hardness of the input words. However, how to estimate such hardness and then explicitly model adaptive computation depths has not yet been investigated. In this paper, we address this issue and propose two effective approaches, namely 1) mutual-information-based estimation and 2) reconstruction-loss-based estimation, to measure how hard a word's representation is to learn and to determine its computational depth accordingly. Results on the classic text classification task (24 datasets of various sizes and domains) show that our approaches achieve superior performance while remaining more computationally efficient than the vanilla Transformer and previous depth-adaptive models. More importantly, our approaches yield more robust depth-adaptive Transformer models with better interpretability of the depth distribution.
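The abstract only names the two hardness estimators (mutual information and reconstruction loss) without giving details. As a rough illustration of the general depth-adaptive idea, and not the authors' actual method, the following minimal PyTorch sketch assumes a hypothetical per-token hardness score in [0, 1] (e.g., a normalized reconstruction error) has already been computed, buckets it into an integer layer depth, and stops updating a token's representation once its assigned depth is reached.

```python
import torch
import torch.nn as nn


class DepthAdaptiveEncoder(nn.Module):
    """Sketch of a depth-adaptive Transformer encoder.

    Assumes per-token hardness scores are supplied externally
    (e.g., from a reconstruction-loss estimator); each token is
    routed through a number of layers proportional to its hardness.
    """

    def __init__(self, d_model=256, nhead=4, max_depth=6):
        super().__init__()
        self.max_depth = max_depth
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(max_depth)]
        )

    def forward(self, x, hardness):
        # hardness: (batch, seq_len) in [0, 1]; higher = harder word.
        # Map hardness to an integer depth in {1, ..., max_depth}.
        depths = (hardness * (self.max_depth - 1)).round().long() + 1

        h = x
        for layer_idx, layer in enumerate(self.layers, start=1):
            updated = layer(h)
            # Keep the new state only for tokens whose assigned depth
            # has not yet been exhausted; otherwise carry the old state.
            active = (depths >= layer_idx).unsqueeze(-1)  # (batch, seq, 1)
            h = torch.where(active, updated, h)
        return h


# Toy usage with a hypothetical reconstruction-error-based hardness score.
if __name__ == "__main__":
    batch, seq_len, d_model = 2, 8, 256
    x = torch.randn(batch, seq_len, d_model)
    recon_err = torch.rand(batch, seq_len)       # stand-in for per-token loss
    hardness = recon_err / recon_err.max()       # normalize to [0, 1]
    enc = DepthAdaptiveEncoder(d_model=d_model)
    out = enc(x, hardness)
    print(out.shape)  # torch.Size([2, 8, 256])
```

Freezing halted tokens with a mask (rather than removing them from the batch) keeps the tensor shapes static, so the sketch trades some wasted computation for simplicity; the paper's actual mechanism for skipping computation may differ.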
