Multilingual Constituency Parsing with Self-Attention and Pre-Training

12/31/2018
by Nikita Kitaev, et al.

We extend our previous work on constituency parsing (Kitaev and Klein, 2018) by incorporating pre-training for ten additional languages, and compare the benefits of no pre-training, ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018). Pre-training is effective across all languages evaluated, and BERT outperforms ELMo, in large part due to increased model capacity. Our parser obtains new state-of-the-art results for 11 languages, including English (95.8 F1) and Chinese (91.8 F1).
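As a rough illustration of the approach described in the abstract, the sketch below feeds pre-trained multilingual BERT token representations into a simple span scorer of the kind used by chart-based constituency parsers. This is a minimal sketch, not the authors' released code: the HuggingFace `transformers` model name, the fencepost-style span encoding, and the untrained linear label head are illustrative assumptions.

```python
# Minimal sketch: pre-trained BERT features feeding a span-based constituency scorer.
# Assumes the `torch` and `transformers` packages; model name and span head are
# illustrative choices, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

sentence = "The parser reads the whole sentence at once ."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # Contextual wordpiece representations from the pre-trained encoder.
    hidden = encoder(**inputs).last_hidden_state  # shape: (1, num_wordpieces, 768)

hidden = hidden.squeeze(0)
num_tokens, dim = hidden.shape
num_labels = 10  # hypothetical label-set size
span_head = torch.nn.Linear(dim, num_labels)  # untrained, for illustration only

# Score every candidate span (i, j) from the difference of its boundary
# representations, a common encoding in span-based parsers.
span_scores = {}
for i in range(num_tokens):
    for j in range(i + 1, num_tokens):
        span_repr = hidden[j] - hidden[i]
        span_scores[(i, j)] = span_head(span_repr)

print(len(span_scores), "candidate spans scored")
```

In a full parser, these span scores would be combined by a CKY-style chart decoder to produce the highest-scoring tree; swapping ELMo or no pre-training for the encoder above corresponds to the comparison described in the abstract.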
