Unigram-Normalized Perplexity as a Language Model Performance Measure with Different Vocabulary Sizes

11/26/2020
by   Jihyeon Roh, et al.

Although perplexity is a widely used performance metric for language models, its value depends strongly on the vocabulary size of the corpus, so it is only meaningful for comparing models evaluated on the same corpus. In this paper, we propose a new metric that can be used to evaluate language model performance across different vocabulary sizes. The proposed unigram-normalized perplexity measures the performance improvement of a language model over a simple unigram model, and is robust to vocabulary size. Both theoretical analysis and computational experiments are reported.
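As a minimal sketch of the idea, the snippet below computes standard perplexity from per-token log-probabilities and normalizes it by the perplexity of a unigram model estimated from the same corpus. This assumes the normalized metric is the ratio of model perplexity to unigram perplexity; the toy corpus and the stand-in model log-probabilities are illustrative, not from the paper.

```python
import math
from collections import Counter

def perplexity(log_probs):
    """Perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(log_probs) / len(log_probs))

# Toy corpus (illustrative only)
corpus = "the cat sat on the mat the cat".split()
counts = Counter(corpus)
total = len(corpus)

# Unigram baseline: each token's probability is its relative frequency
unigram_lp = [math.log(counts[t] / total) for t in corpus]
ppl_unigram = perplexity(unigram_lp)

# Stand-in for a trained LM's per-token log-probs (hypothetical values;
# here the model is uniformly 0.2 nats better than the unigram baseline)
model_lp = [lp + 0.2 for lp in unigram_lp]
ppl_model = perplexity(model_lp)

# Unigram-normalized perplexity: < 1 means the model beats the unigram baseline
ppl_u = ppl_model / ppl_unigram
print(ppl_u)
```

Because the stand-in model improves every token's log-probability by a constant 0.2 nats, the ratio reduces to exp(-0.2) regardless of the corpus, which illustrates why normalizing by the unigram perplexity cancels out corpus-level frequency effects.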
