Visualizing Attention in Transformer-Based Language Models

04/04/2019
by Jesse Vig et al.

We present an open-source tool for visualizing multi-head self-attention in Transformer-based language models. The tool extends earlier work by visualizing attention at three levels of granularity: the attention-head level, the model level, and the neuron level. We describe how each of these views can help to interpret the model, and we demonstrate the tool on the OpenAI GPT-2 pretrained language model. We also present three use cases showing how the tool might provide insights into how to adapt or improve the model.
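The views described above are built from the per-layer, per-head attention weights that the model itself produces. As a rough illustration (not the tool's own interface), the sketch below shows how those attention matrices can be pulled out of the pretrained GPT-2 model, assuming the Hugging Face transformers library and its gpt2 checkpoint; the example sentence and the layer/head indices are arbitrary.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "The cat sat on the mat"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len): the softmax-normalized weight that
# each token (row) assigns to each position it may attend to (column).
attentions = outputs.attentions
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

layer, head = 5, 3  # arbitrary indices chosen for illustration
weights = attentions[layer][0, head]  # (seq_len, seq_len) matrix for one head

# GPT-2 uses causal attention, so each token attends only to itself and to
# earlier tokens; print the strongest attention target per token.
for i, tok in enumerate(tokens):
    top = weights[i].argmax().item()
    print(f"{tok!r:>12} attends most to {tokens[top]!r} ({weights[i, top].item():.2f})")
```

A single matrix like `weights` corresponds to the attention-head level of granularity; iterating over all layers and heads in `attentions` yields the data summarized at the model level.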
