Combining Pretrained High-Resource Embeddings and Subword Representations for Low-Resource Languages

03/09/2020
by Machel Reid, et al.

The contrast between the large amounts of data that current Natural Language Processing (NLP) techniques require and the scarcity of such data is accentuated for African languages, most of which are considered low-resource. To help circumvent this issue, we explore techniques that exploit the qualities of morphologically rich languages (MRLs) while leveraging pretrained word vectors from well-resourced languages. We show that a meta-embedding approach combining both pretrained and morphologically informed word embeddings performs best on the downstream task of Xhosa-English translation.
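
The abstract does not specify how the two embedding views are fused, but common meta-embedding baselines are to concatenate or average a word's pretrained vector with a subword vector composed from character n-grams (in the style of fastText). The sketch below illustrates both operations with toy NumPy data; the function names, the example word, and the random embedding tables are illustrative assumptions, not the paper's implementation.

```python
# A minimal meta-embedding sketch (not the paper's exact method):
# fuse a pretrained word vector with a morphologically informed
# subword vector built from character n-grams. All data here is toy.
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Character n-grams with boundary markers, as in fastText."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def subword_vector(word, ngram_table, dim):
    """Compose a word vector by averaging its n-gram embeddings."""
    grams = [g for g in char_ngrams(word) if g in ngram_table]
    if not grams:
        return np.zeros(dim)
    return np.mean([ngram_table[g] for g in grams], axis=0)

def meta_embed(word, pretrained, ngram_table, dim, mode="concat"):
    """Meta-embedding: combine the pretrained and subword views."""
    p = pretrained.get(word, np.zeros(dim))
    s = subword_vector(word, ngram_table, dim)
    if mode == "concat":           # concatenation keeps both views intact
        return np.concatenate([p, s])
    return (p + s) / 2.0           # averaging assumes a shared dimension

# Toy example with random 4-dim embeddings standing in for real tables.
rng = np.random.default_rng(0)
dim = 4
pretrained = {"umntu": rng.normal(size=dim)}   # e.g. projected English vectors
ngram_table = {g: rng.normal(size=dim) for g in char_ngrams("umntu")}
print(meta_embed("umntu", pretrained, ngram_table, dim).shape)  # (8,)
```

Concatenation doubles the dimensionality but lets a downstream model weight each view; averaging keeps the dimension fixed but requires both embedding spaces to be aligned or of equal size.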
