PhoBERT: Pre-trained language models for Vietnamese

03/02/2020
by Dat Quoc Nguyen, et al.

We present PhoBERT, in two versions, "base" and "large": the first public large-scale monolingual language models pre-trained for Vietnamese. We show that PhoBERT improves the state of the art on multiple Vietnamese-specific NLP tasks, including part-of-speech tagging, named-entity recognition, and natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. PhoBERT is available at: https://github.com/VinAIResearch/PhoBERT
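
As a quick illustration of downstream use, the sketch below loads PhoBERT-base as a feature extractor through the Hugging Face transformers library. The model identifier "vinai/phobert-base" and the word-segmented input convention are assumptions based on the release repository linked above; check that repository for the authors' official instructions.

```python
# Minimal sketch: extract contextual features with PhoBERT-base,
# assuming the checkpoint is published on the Hugging Face hub
# under the (assumed) identifier "vinai/phobert-base".
import torch
from transformers import AutoModel, AutoTokenizer

phobert = AutoModel.from_pretrained("vinai/phobert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base")

# PhoBERT is trained on word-segmented Vietnamese text, so the input
# should already be word-segmented (e.g. with an external segmenter).
sentence = "Chúng_tôi là những nghiên_cứu_viên ."

input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
    outputs = phobert(input_ids)
    # outputs[0]: last hidden states, shape (1, sequence_length, hidden_size)
    features = outputs[0]
```

These features can then feed a task-specific head (for example a tagging or classification layer) for the downstream Vietnamese NLP tasks mentioned in the abstract.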
