Polyglot Contextual Representations Improve Crosslingual Transfer

02/26/2019
by Phoebe Mulcaire, et al.

We introduce a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual word representations with those of multilingual representation learning. We produce language models from dissimilar language pairs (English/Arabic and English/Chinese) and use them in dependency parsing, semantic role labeling, and named entity recognition, with comparisons to monolingual and non-contextual variants. Our results provide further support for polyglot learning, in which representations are shared across multiple languages.
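The core idea above — one language model trained on interleaved text from multiple languages, with parameters shared across them — can be sketched minimally. The helper names, the toy sentences, and the character-level shared vocabulary below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of a polyglot training setup: a single LM is
# trained on interleaved text from two languages over one shared
# character vocabulary, so both languages update the same parameters.
from itertools import chain, zip_longest

def build_shared_vocab(corpora):
    """One character vocabulary covering all languages."""
    chars = sorted(set(chain.from_iterable("".join(c) for c in corpora)))
    return {ch: i for i, ch in enumerate(chars)}

def interleave(corpus_a, corpus_b):
    """Alternate sentences from both languages into one training stream."""
    mixed = []
    for a, b in zip_longest(corpus_a, corpus_b):
        if a is not None:
            mixed.append(a)
        if b is not None:
            mixed.append(b)
    return mixed

english = ["the cat sat", "a dog ran"]
arabic = ["جلس القط", "ركض الكلب"]  # toy examples only

vocab = build_shared_vocab([english, arabic])
stream = interleave(english, arabic)
# An actual polyglot LM would now be trained on `stream`, with its
# embeddings indexed by the single shared `vocab`; its hidden states
# then serve as contextual representations for downstream tasks.
```

The key design point is that nothing language-specific separates the two corpora: a single vocabulary and a single parameter set see both, which is what allows representations to transfer across languages.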
