We introduce OpenFlamingo, a family of autoregressive vision-language mo...
Large language models are now tuned to align with the goals of their cre...
Confidence calibration is central to providing accurate and interpretabl...
Evaluating the factuality of long-form text generated by large language ...
Large multimodal datasets have been instrumental in recent breakthroughs...
Models trained on one set of domains often suffer performance drops on u...
Despite a sea of interpretability methods that can produce plausible exp...
Distribution shift occurs when the test distribution differs from the tr...
Machine learning systems deployed in the wild are often trained on a sou...
Standard training via empirical risk minimization (ERM) can produce mode...
For machine learning systems to be reliable, we must understand their pe...
Distribution shifts can cause significant degradation in a broad range o...
Selective classification, in which models are allowed to abstain on unce...
We seek to learn models that we can interact with using high-level conce...
We study why overparameterization – increasing model size well beyond th...
Suppose we want to specify the inductive bias that married couples typic...
With the recent wave of progress in artificial intelligence (AI) has com...
Overparameterized neural networks can be highly accurate on average on a...
Learning representations that accurately capture long-range dependencies...
Influence functions estimate the effect of removing particular training ...
Machine learning models trained on data from the outside world can be co...
Modeling how individuals evolve over time is a fundamental problem in th...
How can we explain the predictions of a black-box model? In this paper, ...