Understanding the speaker's intended meaning often involves drawing comm...
Noun compound interpretation is the task of expressing a noun compound (...
Event coreference models cluster event mentions pertaining to the same r...
There has been a growing interest in solving Visual Question Answering (...
Pre-trained language models learn socially harmful biases from their tra...
Figurative language is ubiquitous in English. Yet, the vast majority of ...
The black-box nature of neural models has motivated a line of research t...
Social norms—the unspoken commonsense rules about acceptable social beha...
Abductive and counterfactual reasoning, core abilities of everyday human...
Human understanding of narrative texts requires making commonsense infer...
We study the potential synergy between two different NLP tasks, both con...
Natural language understanding involves reading between the lines with i...
Pre-trained language models (LMs) may perpetuate biases originating in t...
Phenomenon-specific "adversarial" datasets have been recently designed t...
Building meaningful representations of noun compounds is not trivial sin...
Recognizing coreferring events and entities across multiple texts is cru...
Building meaningful phrase representations is challenging because phrase...
Generative Adversarial Networks (GANs) are a promising approach for text...
Revealing the implicit semantic relation between the constituents of a n...
We create a new NLI test set that shows the deficiency of state-of-the-a...
Supervised distributional methods are applied successfully in lexical en...
Automatic interpretation of the relation between the constituents of a n...
The fundamental role of hypernymy in NLP has motivated the development o...
We present a submission to the CogALex 2016 shared task on the corpus-ba...
Recognizing various semantic relations between terms is beneficial for m...
Detecting hypernymy relations is a key task in NLP, which is addressed i...