Gradual Fine-Tuning for Low-Resource Domain Adaptation

03/03/2021
by Haoran Xu, et al.

Fine-tuning is known to improve NLP models by adapting an initial model trained on more plentiful but less domain-salient examples to data in a target domain. Such domain adaptation is typically done using one stage of fine-tuning. We demonstrate that gradually fine-tuning in a multi-stage process can yield substantial further gains and can be applied without modifying the model or learning objective.
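The abstract's idea of gradual, multi-stage fine-tuning can be illustrated with a small data-scheduling sketch. The helper below is hypothetical (the paper's exact schedule and function names are assumptions): each stage keeps all in-domain examples while a shrinking slice of out-of-domain data is retained, so training gradually concentrates on the target domain without changing the model or objective.

```python
import random

def gradual_finetune_schedule(out_domain, in_domain, num_stages=3, seed=0):
    """Build per-stage training sets for gradual fine-tuning.

    Hypothetical sketch: each stage keeps all in-domain data plus a
    linearly shrinking random slice of out-of-domain data, so later
    stages focus increasingly on the target domain.
    """
    rng = random.Random(seed)
    shuffled = list(out_domain)
    rng.shuffle(shuffled)
    stages = []
    for stage in range(num_stages):
        # Fraction of out-of-domain data retained, e.g. 1.0, 0.5, 0.0 for 3 stages.
        frac = 1.0 - stage / (num_stages - 1) if num_stages > 1 else 0.0
        keep = shuffled[: int(len(shuffled) * frac)]
        stages.append(keep + list(in_domain))
    return stages
```

A model would then be fine-tuned on `stages[0]`, the resulting checkpoint fine-tuned on `stages[1]`, and so on; the final stage trains on in-domain data only.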
