Large Language Models Can Self-Improve

10/20/2022
by Jiaxin Huang, et al.

Large Language Models (LLMs) have achieved excellent performance on various tasks. However, fine-tuning an LLM requires extensive supervision. Humans, on the other hand, can improve their reasoning abilities through self-thinking, without external inputs. In this work, we demonstrate that an LLM is also capable of self-improving with only unlabeled datasets. We use a pre-trained LLM to generate "high-confidence" rationale-augmented answers for unlabeled questions using Chain-of-Thought prompting and self-consistency, and fine-tune the LLM using those self-generated solutions as target outputs. We show that our approach improves the general reasoning ability of a 540B-parameter LLM (74.4%→82.1% on GSM8K, 78.2%→83.0% on DROP, 90.0%→94.4% on OpenBookQA, and 63.4%→67.9% on ANLI-A3) without any ground truth label. We conduct ablation studies and show that fine-tuning on reasoning is critical for self-improvement.
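To make the selection step concrete, the sketch below shows a self-consistency majority vote of the kind the abstract describes: sample several chain-of-thought rationales per unlabeled question, keep the question only when the sampled answers agree strongly, and retain the agreeing rationales as fine-tuning targets. This is a minimal illustration, not the paper's implementation; the function name, the (rationale, answer) input format, and the 0.7 agreement threshold are all assumptions for the example.

```python
from collections import Counter

def self_consistency_filter(samples, threshold=0.7):
    """Majority-vote over sampled (rationale, answer) pairs for one question.

    Keeps the question as pseudo-labeled training data only when the most
    common final answer wins at least `threshold` of the votes, mimicking
    the "high-confidence" selection described in the abstract. The 0.7
    threshold is an illustrative assumption, not a value from the paper.
    """
    answers = [answer for _, answer in samples]
    (majority, count), = Counter(answers).most_common(1)
    confidence = count / len(samples)
    if confidence < threshold:
        return None  # low agreement across samples: discard this question
    # Keep every rationale whose final answer matches the majority vote;
    # these self-generated solutions become fine-tuning target outputs.
    kept = [rationale for rationale, answer in samples if answer == majority]
    return majority, confidence, kept

# Usage with mocked samples (in practice, each pair would come from one
# temperature-sampled Chain-of-Thought generation for the same question):
samples = [
    ("step 1 ... so the answer is 42", "42"),
    ("first ... therefore the answer is 42", "42"),
    ("compute ... giving 17", "17"),
    ("reasoning ... answer: 42", "42"),
]
result = self_consistency_filter(samples)
if result is not None:
    answer, confidence, rationales = result
    print(f"answer={answer} confidence={confidence:.2f} kept={len(rationales)} rationales")
```

The key design point is that agreement among independently sampled reasoning paths serves as a confidence signal in place of ground-truth labels, so only questions the model answers consistently contribute to the fine-tuning set.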
