CMV-BERT: Contrastive multi-vocab pretraining of BERT

12/29/2020
by Wei Zhu, et al.

In this work, we present CMV-BERT, which improves the pretraining of a language model via two ingredients: (a) contrastive learning, which is well studied in computer vision; and (b) multiple vocabularies, one fine-grained and the other coarse-grained. Both ingredients provide different views of an original sentence, and both are shown to be beneficial. Experiments on downstream tasks demonstrate that the proposed CMV-BERT is effective in improving pretrained language models.
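The abstract does not give implementation details, so the following is only a minimal sketch of the general idea it describes: encode two views of the same sentence (one from a fine-grained tokenization, one from a coarse-grained tokenization) and pull matching views together with an InfoNCE-style contrastive loss. The tokenizers, encoder outputs, and loss form below are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch only; the specific loss and shapes are assumptions,
# not the method as implemented in the CMV-BERT paper.
import torch
import torch.nn.functional as F

def contrastive_loss(fine_emb: torch.Tensor, coarse_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: each sentence's fine-grained view should match its
    own coarse-grained view rather than the other sentences' views in the batch."""
    fine_emb = F.normalize(fine_emb, dim=-1)      # (batch, hidden)
    coarse_emb = F.normalize(coarse_emb, dim=-1)  # (batch, hidden)
    logits = fine_emb @ coarse_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(fine_emb.size(0), device=fine_emb.device)
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors stand in for encoder outputs of the two views.
batch, hidden = 8, 768
fine_view = torch.randn(batch, hidden)    # e.g. subword-level tokenization view
coarse_view = torch.randn(batch, hidden)  # e.g. word/phrase-level tokenization view
print(contrastive_loss(fine_view, coarse_view).item())
```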
