Why is Attention Not So Attentive?

06/10/2020
by Bing Bai, et al.

Attention-based methods play an important role in model interpretation, where the calculated attention weights are expected to highlight the critical parts of the input (e.g., keywords in sentences). However, recent research has pointed out that attention-as-importance interpretations often do not work as well as expected. For example, learned attention weights are frequently uncorrelated with other feature-importance indicators such as gradient-based measures, and a debate over the effectiveness of attention-based interpretations has arisen. In this paper, we reveal that one root cause of this phenomenon is combinatorial shortcuts: models may obtain information not only from the parts highlighted by attention mechanisms but also from the attention weights themselves. We design an intuitive experiment to demonstrate the existence of combinatorial shortcuts and propose two methods to mitigate the issue. Empirical studies on attention-based instance-wise feature-selection interpretation models show that the proposed methods effectively improve the interpretability of attention mechanisms across a variety of datasets.
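To make the notion of a combinatorial shortcut concrete, here is a minimal, hypothetical sketch (not the authors' experiment): a downstream classifier recovers the label purely from which positions an attention-style mask zeroes out, even though the feature values themselves are independent of the label. The mask construction and all names below are illustrative assumptions.

```python
# Illustrative sketch only: a label-dependent "attention" mask leaks the
# label through its selection pattern, even though X is pure noise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(loc=1.0, size=(n, d))   # features independent of the label
y = rng.integers(0, 2, size=n)         # random binary labels

# Hypothetical mask standing in for a learned attention pattern:
# it keeps positions 0-4 when y == 0 and positions 5-9 when y == 1.
mask = np.zeros((n, d))
mask[y == 0, :5] = 1.0
mask[y == 1, 5:] = 1.0

masked_X = mask * X                    # what the downstream model sees

clf = LogisticRegression(max_iter=1000).fit(masked_X[:1000], y[:1000])
print("held-out accuracy:", clf.score(masked_X[1000:], y[1000:]))
# Accuracy lands far above 50% although X carries no label information:
# the zero pattern of the mask, not the highlighted content, predicts y.
```

In this toy setup, a mask independent of y (e.g., random) would send the same classifier back to chance accuracy, which is the behavior a purely content-driven, interpretable attention pattern should exhibit.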
