Meta-in-context learning in large language models

05/22/2023
by Julian Coda-Forno, et al.

Large language models have shown tremendous performance on a variety of tasks. In-context learning – the ability to improve at a task after being provided with a number of demonstrations – is seen as one of the main contributors to their success. In the present paper, we demonstrate that the in-context learning abilities of large language models can be recursively improved via in-context learning itself. We coin this phenomenon meta-in-context learning. Looking at two idealized domains, a one-dimensional regression task and a two-armed bandit task, we show that meta-in-context learning adaptively reshapes a large language model's priors over expected tasks. Furthermore, we find that meta-in-context learning modifies the in-context learning strategies of such models. Finally, we extend our approach to a benchmark of real-world regression problems, where we observe performance competitive with traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environments in which they are applied purely through meta-in-context learning rather than traditional finetuning.
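As a rough illustration of the idea, the sketch below shows one way a meta-in-context prompt could be assembled for the one-dimensional regression setting: several complete demonstration tasks are concatenated ahead of the current, unfinished task, so that the model's in-context learning on the final task is shaped by the earlier ones. The prompt format, task distribution, and helper names here are assumptions for illustration only, not the paper's exact setup.

```python
# Illustrative sketch of a meta-in-context prompt for 1D regression.
# Assumption: tasks are linear functions and observations are formatted
# as "x = ..., y = ..." lines; the paper's actual format may differ.

import random

def make_task(num_points=5):
    """Sample a random linear function and a few (x, y) observations."""
    slope = random.uniform(-2.0, 2.0)
    intercept = random.uniform(-5.0, 5.0)
    xs = [round(random.uniform(0.0, 10.0), 1) for _ in range(num_points)]
    return [(x, round(slope * x + intercept, 2)) for x in xs]

def format_task(points, header):
    """Render one task as a header line followed by its observations."""
    lines = [header]
    lines += [f"x = {x}, y = {y}" for x, y in points]
    return "\n".join(lines)

def build_meta_prompt(num_prior_tasks=3, query_x=4.2):
    """Concatenate several full tasks, then an unfinished query task."""
    blocks = [
        format_task(make_task(), f"Task {i + 1}:")
        for i in range(num_prior_tasks)
    ]
    # Current task: a few observations, then the point to predict.
    current = format_task(make_task(3), f"Task {num_prior_tasks + 1}:")
    blocks.append(current + f"\nx = {query_x}, y =")
    return "\n\n".join(blocks)

if __name__ == "__main__":
    # The resulting string would be sent to an LLM completion endpoint;
    # meta-in-context learning refers to how the prior tasks change the
    # model's learning behavior within the final task.
    print(build_meta_prompt())
```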
