UniCase – Rethinking Casing in Language Models

10/22/2020
by Rafał Powalski, et al.

In this paper, we introduce a new approach to handling case sensitivity in Language Modelling (LM). We propose a simple architectural modification to the RoBERTa language model, accompanied by a new tokenization strategy, which we call Unified Case LM (UniCase). We evaluated our solution on the GLUE benchmark, where it improved performance by 0.42 points. Moreover, we show that the UniCase model performs much better on text data where all tokens are uppercased (+5.88 points).
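The abstract does not spell out how the tokenization strategy works, so the snippet below is only an illustrative sketch of case-unified tokenization in general, not the paper's actual method: each token is normalized to lowercase so that differently cased surface forms map to one vocabulary entry, while the original casing is kept as a separate categorical signal that a model could embed alongside the token. All names here (`UnifiedToken`, `unify_case`, the case-ID constants) are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: the paper's exact tokenization is not described in
# the abstract. This shows the general idea of "unified case" tokenization --
# normalize each token to lowercase so "Paris", "paris" and "PARIS" share one
# vocabulary entry, while recording the original casing as a separate feature.

CASE_LOWER, CASE_TITLE, CASE_UPPER, CASE_OTHER = 0, 1, 2, 3

@dataclass
class UnifiedToken:
    text: str       # case-normalized (lowercased) token text
    case_id: int    # coarse casing category of the original surface form

def case_id(token: str) -> int:
    """Map a surface token to a coarse casing category."""
    if token.islower() or not any(c.isalpha() for c in token):
        return CASE_LOWER
    if token.isupper():
        return CASE_UPPER
    if token.istitle():
        return CASE_TITLE
    return CASE_OTHER

def unify_case(tokens: List[str]) -> List[UnifiedToken]:
    """Lowercase tokens and attach their casing category."""
    return [UnifiedToken(t.lower(), case_id(t)) for t in tokens]

if __name__ == "__main__":
    for ut in unify_case("The NASA Report".split()):
        print(ut)
    # UnifiedToken(text='the', case_id=1)
    # UnifiedToken(text='nasa', case_id=2)
    # UnifiedToken(text='report', case_id=1)
```

Under this kind of scheme, all-uppercase input collapses onto the same lowercase vocabulary entries as normally cased text, which is consistent with the reported gain on fully uppercased data.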
