What do language models (LMs) do with language? Everyone agrees that the...
Artificial neural networks can generalize productively to novel contexts...
Accurate syntactic representations are essential for robust generalizati...
When acquiring syntax, children consistently choose hierarchical rules o...
Human linguistic capacity is often characterized by compositionality and...
Structural probing work has found evidence for latent syntactic informat...
When a language model is trained to predict natural language sequences, ...
Humans exhibit garden path effects: When reading sentences that are temp...
Language models are often trained on text alone, without additional grou...
Understanding longer narratives or participating in conversations requir...
Relations between words are governed by hierarchical structure rather th...
Generic unstructured neural networks have been shown to struggle on out-...
Current language models can generate high-quality text. Are they simply...
Neural network models often generalize poorly to mismatched domains or d...
Temporary syntactic ambiguities arise when the beginning of a sentence i...
Pre-trained language models perform well on a variety of linguistic task...
Understanding language requires grasping not only the overtly stated con...
Experiments with pretrained models such as BERT are often based on a sin...
Targeted syntactic evaluations have demonstrated the ability of language...
When language models process syntactically complex sentences, do they us...
Knowledge-grounded dialogue agents are systems designed to conduct a con...
Many crowdsourced NLP datasets contain systematic gaps and biases that a...
Natural language is characterized by compositionality: the meaning of a...
How do learners acquire languages from the limited data available to the...
This position paper describes and critiques the Pretraining-Agnostic Identically Distributed...
A range of studies have concluded that neural word prediction models can...
Sequence-based neural networks show significant sensitivity to syntactic...
Pretrained neural models such as BERT, when fine-tuned to perform natura...
Modern deep neural networks achieve impressive performance in engineerin...
Learners that are exposed to the same training data might generalize dif...
If the same neural architecture is trained multiple times on the same da...
Neural networks (NNs) are able to perform tasks that rely on composition...
Neural language models (LMs) perform well on tasks that require sensitiv...
Recurrent neural networks can learn to predict upcoming words remarkably...
We introduce a set of nine challenge tasks that test for the understandi...
The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techn...
How do typological properties such as word order and morphological case...
Machine learning systems can often achieve high performance on a test se...
People learn in fast and flexible ways that have not been emulated by ma...
Recurrent neural networks (RNNs) can learn continuous vector representat...
Neural network models have shown great success at natural language infer...
Human reading behavior is sensitive to surprisal: more predictable words...
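For reference, the surprisal measure mentioned in the entry above has a standard information-theoretic definition that the preview does not spell out: the surprisal of a word is the negative log probability assigned to that word given its preceding context,

    \mathrm{surprisal}(w_i) = -\log_2 P(w_i \mid w_1, \ldots, w_{i-1}),

where log base 2 expresses surprisal in bits; less predictable words have higher surprisal and, empirically, longer reading times.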
Joe Pater's target article calls for greater interaction between neural...
It has been argued that humans rapidly adapt their lexical and syntactic...
We present a dataset for evaluating the grammaticality of the prediction...
Determining the correct form of a verb in context requires an understand...
Recurrent neural networks (RNNs) have achieved impressive results in a v...
Syntactic rules in human language usually refer to the hierarchical stru...
Spoken word recognition involves at least two basic computations. First ...
Recent work has explored the syntactic abilities of RNNs using the subject-verb agreement...
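To make the subject-verb agreement evaluation referenced in several of the entries above concrete, here is a minimal sketch of how such a targeted probe can be scored: compare the probability a pretrained language model assigns to a grammatical versus an ungrammatical verb after a prefix containing an attractor noun. The model choice (GPT-2 via the Hugging Face transformers library) and the example sentence are illustrative assumptions, not details drawn from the papers summarized here.

    # Illustrative subject-verb agreement probe (assumed setup: GPT-2 via the
    # Hugging Face transformers library; the sentence is a made-up example).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def continuation_logprob(prefix, continuation):
        """Total log-probability the model assigns to `continuation` given `prefix`."""
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
        input_ids = torch.cat([prefix_ids, cont_ids], dim=1)
        with torch.no_grad():
            log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
        offset = prefix_ids.shape[1]
        total = 0.0
        for i in range(cont_ids.shape[1]):
            # logits at position offset + i - 1 predict the token at position offset + i
            total += log_probs[0, offset + i - 1, cont_ids[0, i]].item()
        return total

    # Plural head noun ("keys") with a singular attractor ("cabinet"):
    prefix = "The keys to the cabinet"
    good = continuation_logprob(prefix, " are")   # grammatical
    bad = continuation_logprob(prefix, " is")     # ungrammatical
    print(f"log P(' are') = {good:.2f}, log P(' is') = {bad:.2f}")
    print("prefers grammatical verb" if good > bad else "prefers ungrammatical verb")

Aggregating this preference over many such minimal pairs yields the accuracy figures that targeted syntactic evaluations of this kind typically report.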