Auto-Learning: An Adversarial Process of Two Pre-trained Models for Natural Language Generation
Pre-trained models have been used in many fields in recent years, ranging from natural language understanding to computer vision and natural language generation. Nowadays, the performance of natural language generation models depends heavily on model scale and dataset size. While larger language models excel in many respects, they cannot easily acquire up-to-date knowledge and are relatively difficult to retrain. In this paper, we propose Auto-Learning, a new adversarial learning process that can improve the performance of any natural language generation model without the help of additional datasets. Auto-Learning involves two models: G, a text generation model, and D, a model that tests whether the text generated by G is legitimate. First, D is fine-tuned before the process begins, serving as a knowledge base analogous to the brain's prior knowledge. Then, the text generated by G is fed to D, which determines whether the text is legitimate. Finally, G is fine-tuned according to D's output. This adversarial process resembles the brain improving itself through a priori knowledge. When the system needs to learn something new, only D has to be fine-tuned. Our approach applies to all Transformer-based autoregressive language models. Auto-Learning enables 8 models to achieve stable improvements on 10 natural language processing tasks without any structural changes.
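To make the loop concrete, below is a minimal sketch of the adversarial process the abstract describes: G generates text, a frozen fine-tuned D scores its legitimacy, and G is updated according to D's output. The specific model choices (GPT-2 for G, BERT for D), the prompt, and the reward-weighted language-modeling loss are illustrative assumptions, not the authors' exact method.

```python
# A minimal sketch of the Auto-Learning loop, assuming Hugging Face models;
# the exact objective used by the paper is not specified in the abstract.
import torch
from transformers import (
    AutoModelForCausalLM, AutoModelForSequenceClassification, AutoTokenizer
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# G: a pre-trained text generation model (GPT-2 assumed here).
g_tok = AutoTokenizer.from_pretrained("gpt2")
G = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

# D: a pre-trained classifier, fine-tuned beforehand to judge whether
# text is "legitimate" (BERT assumed here; label 1 = legitimate).
d_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
D = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
).to(device)
D.eval()  # D stays frozen during the adversarial process.

optimizer = torch.optim.AdamW(G.parameters(), lr=1e-5)

for step in range(100):
    # 1) G generates candidate text from a prompt (prompt is an assumption).
    prompt = g_tok("The news today:", return_tensors="pt").to(device)
    sample = G.generate(**prompt, max_new_tokens=40, do_sample=True,
                        pad_token_id=g_tok.eos_token_id)
    text = g_tok.decode(sample[0], skip_special_tokens=True)

    # 2) D scores the sample; p(legitimate) serves as a reward signal.
    with torch.no_grad():
        d_in = d_tok(text, return_tensors="pt", truncation=True).to(device)
        reward = D(**d_in).logits.softmax(-1)[0, 1]

    # 3) Fine-tune G: weight its language-modeling loss on its own sample
    #    by D's judgment (a REINFORCE-like surrogate; one plausible reading
    #    of "G is fine-tuned according to D's output").
    out = G(sample, labels=sample)
    loss = (reward - 0.5) * out.loss  # reinforce texts D accepts, discourage the rest
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Under this reading, learning new knowledge only requires fine-tuning D on new data; G then absorbs it through the loop, which matches the abstract's claim that the system can be updated without retraining G from scratch.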