From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions

06/05/2019
by David Mareček et al.

We inspect the multi-head self-attention in Transformer neural machine translation (NMT) encoders for three source languages, looking for patterns that could have a syntactic interpretation. In many of the attention heads, we frequently find sequences of consecutive states attending to the same position, which resemble syntactic phrases. We propose a transparent, deterministic method of quantifying the amount of syntactic information present in the self-attentions, based on automatically building phrase-structure trees from these phrase-like sequences and evaluating them. We compare the resulting trees to existing constituency treebanks, both manually and by computing precision and recall.
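To make the core idea concrete, the sketch below illustrates the phrase-detection step as described in the abstract: scanning one head's attention matrix for maximal runs of consecutive query positions whose attention peaks at the same key position. This is not the authors' implementation; the function name and the `min_len` and `min_weight` parameters are assumptions introduced here for illustration.

```python
import numpy as np

def candidate_phrases(attn: np.ndarray, min_len: int = 2, min_weight: float = 0.0):
    """Return (start, end, target) spans: consecutive tokens start..end
    (inclusive) that all attend most strongly to key position `target`."""
    targets = attn.argmax(axis=1)   # peak key position for each query token
    peaks = attn.max(axis=1)        # the corresponding attention weights
    spans, run_start = [], 0
    for i in range(1, len(targets) + 1):
        # A run ends at the sequence end or when the peak position changes.
        if i == len(targets) or targets[i] != targets[run_start]:
            if i - run_start >= min_len and peaks[run_start:i].min() >= min_weight:
                spans.append((run_start, i - 1, int(targets[run_start])))
            run_start = i
    return spans

# Toy attention matrix for one head: tokens 0-2 all peak at position 0,
# so they form one phrase-like run; token 3 stands alone and is skipped.
attn = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.60, 0.20, 0.10, 0.10],
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.10, 0.10, 0.70],
])
print(candidate_phrases(attn))  # [(0, 2, 0)]
```

In the paper's method, spans of this kind are the building blocks from which phrase-structure trees are assembled and then scored against constituency treebanks; the sketch only covers the span-extraction step for a single head.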
