While pre-trained language models achieve impressive performance on vari...
Many Natural Language Processing (NLP) systems use annotated corpora for...
The evaluation of recent embedding-based evaluation metrics for text gen...
Activation functions can have a significant impact on reducing the topol...
Anaphoric reference is an aspect of language interpretation covering a v...
State-of-the-art pretrained language models tend to perform below their ...
Neural abstractive summarization models are prone to generate summaries ...
State-of-the-art pretrained NLP models contain a hundred million to tril...
Recent prompt-based approaches allow pretrained language models to achie...
In this paper, we introduce SciGen, a new challenge dataset for the task...
The state-of-the-art on basic, single-antecedent anaphora has greatly im...
The ability to reason about multiple references to a given entity is ess...
Now that the performance of coreference resolvers on the simpler forms o...
Existing NLP datasets contain various biases, and models tend to quickly...
Existing NLP datasets contain various biases that models can easily expl...
NLU models often exploit biases to achieve high dataset-specific perform...
Models for natural language understanding (NLU) tasks often rely on the ...
Supervised training of neural models to duplicate question detection in ...
The task of natural language inference (NLI) is to identify the relation...
The common practice in coreference resolution is to identify and evaluat...
We introduce an efficient algorithm for mining informative combinations ...
Selectional preferences have long been claimed to be essential for coref...
Lexical features are a major source of information in state-of-the-art c...
Only a year ago, all state-of-the-art coreference resolvers were using a...