Morphosyntactic Tagging with Pre-trained Language Models for Arabic and its Dialects

10/13/2021
by Go Inoue, et al.

We present state-of-the-art results on morphosyntactic tagging across different varieties of Arabic using fine-tuned pre-trained transformer language models. Our models consistently outperform existing systems in Modern Standard Arabic and all the Arabic dialects we study, achieving 2.6% absolute improvement over the previous state-of-the-art in Modern Standard Arabic, 2.8% in Gulf, 1.6% in Egyptian, and 8.3% in Levantine. We explore different setups for fine-tuning pre-trained transformer language models, including training data size, the use of external linguistic resources, and the use of annotated data from other dialects in a low-resource scenario. Our results show that strategic fine-tuning using datasets from other high-resource dialects is beneficial for a low-resource dialect. Additionally, we show that high-quality morphological analyzers as external linguistic resources are especially beneficial in low-resource settings.
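The core recipe the abstract describes, fine-tuning a pre-trained transformer with a token-classification head for morphosyntactic tagging, can be sketched roughly as follows with the Hugging Face transformers library. The checkpoint name, toy tag set, and single training example below are illustrative assumptions, not the paper's actual configuration or data.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed checkpoint: a public Arabic BERT model; any BERT-style model works here.
MODEL_NAME = "CAMeL-Lab/bert-base-arabic-camelbert-msa"
TAGS = ["noun", "verb", "prep", "adj", "punc"]  # toy tag set, not the paper's

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(TAGS)
)

# One toy training example: pre-tokenized words with word-level tags.
words = ["الكتاب", "على", "الطاولة"]  # "the book on the table"
labels = [TAGS.index("noun"), TAGS.index("prep"), TAGS.index("noun")]

# Align word-level tags to subword tokens: label each word's first subword,
# and mask special tokens and subword continuations with -100 so the loss
# ignores them.
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned, prev = [], None
for word_id in enc.word_ids(batch_index=0):
    aligned.append(-100 if word_id is None or word_id == prev else labels[word_id])
    prev = word_id

# A single fine-tuning step; real training iterates over a tagged corpus.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**enc, labels=torch.tensor([aligned])).loss
loss.backward()
optimizer.step()
print(f"fine-tuning step complete, loss = {loss.item():.3f}")
```

Labeling only each word's first subword and masking continuations with -100 is the standard alignment choice when tags are defined at the word level, as they are for morphosyntactic tag sets.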
