Low Resource Text Classification with ULMFit and Backtranslation

03/21/2019
by Sam Shleifer, et al.

In computer vision, virtually every state-of-the-art deep learning system is trained with data augmentation. In text classification, however, data augmentation is less widely practiced because it must be performed before training and risks introducing label noise. We augment the IMDB movie reviews dataset with examples generated by two families of techniques: random token perturbations introduced by Wei and Zou [2019] and backtranslation -- translating to a second language and then back to English. In low-resource settings, backtranslation yields significant improvements on top of the state-of-the-art ULMFit model. A ULMFit model pretrained on wikitext103 and then finetuned on only 50 IMDB examples plus 500 synthetic examples generated by backtranslation achieves 80.6% accuracy, an 8.1% improvement over the augmentation-free baseline, at a cost of only 9 minutes of additional training time. Random token perturbations do not yield any improvement but incur equivalent computational cost. The benefits of training with backtranslated examples decrease as the size of the available training data grows. On the full dataset, neither augmentation technique improves upon ULMFit's state-of-the-art performance. We address this by using backtranslations as a form of test-time augmentation.
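The abstract does not say which translation system produces the backtranslations, so the sketch below is only an illustration of the general technique rather than the paper's pipeline: each review is translated into a pivot language and back into English, and the round-tripped text is added as a synthetic training example with the original label. The Helsinki-NLP MarianMT checkpoints, the choice of French as the pivot language, and the backtranslate helper are assumptions introduced for this example.

```python
# Minimal backtranslation sketch. Assumed setup: Hugging Face transformers with
# MarianMT English<->French checkpoints; the paper may use a different system.
from transformers import MarianMTModel, MarianTokenizer

EN_FR = "Helsinki-NLP/opus-mt-en-fr"
FR_EN = "Helsinki-NLP/opus-mt-fr-en"

en_fr_tok = MarianTokenizer.from_pretrained(EN_FR)
en_fr = MarianMTModel.from_pretrained(EN_FR)
fr_en_tok = MarianTokenizer.from_pretrained(FR_EN)
fr_en = MarianMTModel.from_pretrained(FR_EN)


def translate(texts, tokenizer, model):
    """Translate a batch of strings with a MarianMT model."""
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)


def backtranslate(texts):
    """English -> French -> English round trip; the output is a paraphrase-like copy."""
    return translate(translate(texts, en_fr_tok, en_fr), fr_en_tok, fr_en)


# Augment a small labeled set: each backtranslated review keeps its original label.
reviews = ["This movie was an absolute delight from start to finish."]
labels = [1]
augmented_reviews = reviews + backtranslate(reviews)
augmented_labels = labels + labels
```

The same round trip can also be applied at evaluation time, averaging the classifier's predictions on the original and backtranslated versions of each example, which is one way to realize the test-time use of backtranslation mentioned above.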
