When trying to gain better visibility into a machine learning model in o...
Large language models (LLMs) may not equitably represent diverse global ...
We test the hypothesis that language models trained with reinforcement l...
As AI systems become more capable, we would like to enlist their help to...
Developing safe and useful general-purpose AI systems will require us to...
"Induction heads" are attention heads that implement a simple algorithm ...
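The simple algorithm referred to here is prefix matching and copying: having seen token [A] followed by token [B] earlier in the context, predict [B] the next time [A] appears. A minimal plain-Python sketch of that completion rule (illustrative only, not a transformer implementation):

```python
def induction_predict(tokens):
    """For each position, predict the token that followed the most
    recent earlier occurrence of the current token (None if unseen).
    This mirrors the [A][B] ... [A] -> [B] behavior of induction heads."""
    last_after = {}   # token -> token that most recently followed it
    preds = []
    prev = None
    for t in tokens:
        preds.append(last_after.get(t))  # copy what followed t last time
        if prev is not None:
            last_after[prev] = t         # record that t followed prev
        prev = t
    return preds

# The second "A" is completed with "B", the second "B" with "C":
induction_predict(["A", "B", "C", "A", "B"])
```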
Neural networks often pack many unrelated concepts into a single neuron ...
We describe our early efforts to red team language models in order to si...
We study whether language models can evaluate the validity of their own ...
Recent large language models have been trained on vast datasets, but als...
We apply preference modeling and reinforcement learning from human feedb...
Large-scale pre-training has recently emerged as a technique for creatin...
Given the broad capabilities of large language models, it should be poss...
We introduce Codex, a GPT language model fine-tuned on publicly availabl...
We study empirical scaling laws for transfer learning between distributi...
We identify empirical scaling laws for the cross-entropy loss in four do...
Recent work has demonstrated substantial gains on many NLP tasks and ben...
We study empirical scaling laws for language model performance on the cr...
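The scaling laws in question are typically power laws in quantities such as parameter count. A minimal sketch, assuming the standard L(N) = (N_c / N)^alpha functional form from the scaling-laws literature; the constants below are illustrative placeholders, not fitted values:

```python
# Hypothetical power-law scaling of cross-entropy loss with parameter
# count N: L(N) = (N_C / N) ** ALPHA_N. Constants are illustrative only.
N_C = 8.8e13      # assumed critical parameter count (placeholder)
ALPHA_N = 0.076   # assumed scaling exponent (placeholder)

def loss(n_params: float) -> float:
    """Predicted cross-entropy loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# A defining property of a power law: doubling model size lowers the
# predicted loss by a constant multiplicative factor, 2 ** -ALPHA_N.
ratio = loss(2e9) / loss(1e9)
```

Because the improvement factor per doubling is constant, such fits appear as straight lines on log-log plots of loss versus model size.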
In an increasing number of domains it has been demonstrated that deep le...