How Much Does Tokenization Affect Neural Machine Translation?

12/20/2018
by Miguel Domingo, et al.

Tokenization, or segmentation, is a broad concept that covers simple processes, such as separating punctuation from words, as well as more sophisticated ones, such as applying morphological knowledge. Neural Machine Translation (NMT) requires a limited-size vocabulary to keep computational costs manageable, and enough examples of each word to estimate word embeddings well. Separating punctuation and splitting tokens into words or subwords has been shown to reduce vocabulary size and increase the number of examples of each word, improving translation quality. Tokenization is more challenging for languages with no separator between words. In order to assess the impact of tokenization on the quality of the final translation in NMT, we experimented with five tokenizers across ten language pairs. We concluded that tokenization significantly affects the final translation quality and that the best tokenizer differs between language pairs.
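The abstract mentions splitting tokens into subwords to shrink the vocabulary; one common way to do this (not necessarily one of the five tokenizers studied in the paper) is byte-pair encoding (BPE; Sennrich et al., 2016). Below is a minimal, hypothetical Python sketch of BPE merge learning on a toy vocabulary; the word frequencies and number of merges are illustrative only.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Merge every occurrence of `pair` into a single symbol."""
    # Only match the pair at whole-symbol boundaries.
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    joined = "".join(pair)
    return {pattern.sub(joined, word): freq for word, freq in vocab.items()}

# Toy corpus: words as space-separated characters with an
# end-of-word marker "</w>" (frequencies are made up).
vocab = {
    "l o w </w>": 5,
    "l o w e r </w>": 2,
    "n e w e s t </w>": 6,
    "w i d e s t </w>": 3,
}

num_merges = 10  # real systems typically learn ~10k-50k merges
for _ in range(num_merges):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print("merged:", best)
```

Each merge turns the most frequent adjacent symbol pair into a single subword unit, so frequent words end up as whole tokens while rare words decompose into shared fragments, which is exactly the vocabulary-reduction effect the abstract describes.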

