Training data attribution (TDA) methods offer to trace a model's predict...
Neural language models (LMs) have been shown to memorize a great deal of...
Deep NLP models have been shown to learn spurious correlations, leaving ...
Experiments with pretrained models such as BERT are often based on a sin...
Pre-trained models have revolutionized natural language understanding. H...
Pretrained Language Models (LMs) have been shown to possess significant ...
We present the Language Interpretability Tool (LIT), an open-source plat...
The success of pretrained contextual encoders, such as ELMo and BERT, ha...
While there has been much recent work studying how linguistic informatio...
We introduce jiant, an open-source toolkit for conducting multitask and ...
Contextualized representation models such as ELMo (Peters et al., 2018a)...
Pre-trained text encoders have rapidly advanced the state of the art on ...
We introduce a set of nine challenge tasks that test for the understandi...
Work on the problem of contextualized word representation -- the develop...
We release a corpus of 43 million atomic edits across 8 languages. These...