One Size Does Not Fit All: Finding the Optimal N-gram Sizes for FastText Models across Languages

02/04/2021
by Vít Novotný, et al.

Unsupervised word representation learning from large corpora is badly needed for downstream tasks such as text classification, information retrieval, and machine translation. The representation precision of fastText language models is mostly due to their use of subword information. In previous work, the optimization of fastText's subword sizes has been largely neglected, and non-English fastText language models were trained using subword sizes optimized for English and German. In our work, we train English, German, Czech, and Italian fastText language models on Wikipedia, and we optimize the subword sizes on the English, German, Czech, and Italian word analogy tasks. We show that the optimization of subword sizes results in a 5% improvement on the Czech word analogy task. We also show that computationally expensive hyperparameter optimization can be replaced with cheap n-gram frequency analysis: subword sizes that come closest to covering 3.76% of all unique subwords in a corpus consistently outperform the default fastText hyperparameters on the English, German, Czech, and Italian word analogy tasks.
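The n-gram frequency analysis lends itself to a short illustration. The Python sketch below is not the authors' implementation; it assumes that "coverage" means the share of unique character n-grams (with fastText-style "<" and ">" boundary markers) whose length falls inside a candidate [min_n, max_n] range, that candidate sizes run from 1 to 7 characters, and that the corpus is a plain-text file of whitespace-separated tokens. The 3.76% target is the figure quoted above; the corpus file name is a placeholder.

# Minimal sketch of an n-gram coverage heuristic for choosing fastText subword sizes.
# Assumptions (not taken from the paper's code) are described in the paragraph above.
from itertools import combinations_with_replacement

TARGET_COVERAGE = 0.0376        # "closest to covering 3.76% of all unique subwords"
CANDIDATE_SIZES = range(1, 8)   # hypothetical search space for min_n and max_n

def unique_ngrams_by_size(corpus_path):
    """Count the unique character n-grams of each candidate size in the corpus."""
    unique = {n: set() for n in CANDIDATE_SIZES}
    with open(corpus_path, encoding="utf-8") as corpus:
        for line in corpus:
            for word in line.split():
                padded = f"<{word}>"  # fastText marks word boundaries
                for n in CANDIDATE_SIZES:
                    for i in range(len(padded) - n + 1):
                        unique[n].add(padded[i:i + n])
    return {n: len(grams) for n, grams in unique.items()}

def best_subword_sizes(corpus_path, target=TARGET_COVERAGE):
    """Pick the (min_n, max_n) range whose share of unique n-grams is closest to the target."""
    counts = unique_ngrams_by_size(corpus_path)
    total = sum(counts.values())
    best, best_gap = None, float("inf")
    for min_n, max_n in combinations_with_replacement(CANDIDATE_SIZES, 2):
        covered = sum(counts[n] for n in range(min_n, max_n + 1))
        gap = abs(covered / total - target)
        if gap < best_gap:
            best, best_gap = (min_n, max_n), gap
    return best

if __name__ == "__main__":
    min_n, max_n = best_subword_sizes("cswiki.txt")  # placeholder corpus file
    print(f"Suggested fastText subword sizes: minn={min_n}, maxn={max_n}")

The selected sizes would then be supplied to fastText as its subword hyperparameters, for example minn and maxn in the official Python bindings (fasttext.train_unsupervised("cswiki.txt", model="skipgram", minn=min_n, maxn=max_n)) or min_n and max_n in Gensim's FastText class.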
