The full power of human language-based communication cannot be realized...
Rationalization is fundamental to human reasoning and learning. NLP mode...
Integrating vision and language has gained notable attention following t...
Self-rationalization models that predict task labels and generate free-t...
An attention matrix of a transformer self-attention sublayer can provabl...
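The entry above concerns the attention matrix of a transformer self-attention sublayer. As a reminder of the object in question, here is a minimal NumPy sketch of a single unmasked attention head computing A = softmax(QKᵀ/√d_k); all names and dimensions are illustrative, not taken from the cited work:

```python
import numpy as np

def self_attention_matrix(X, Wq, Wk):
    """Attention matrix A = softmax(Q K^T / sqrt(d_k)) of one
    self-attention head (single head, no masking)."""
    Q = X @ Wq                                    # queries, shape (n, d_k)
    K = X @ Wk                                    # keys,    shape (n, d_k)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n, n) scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))      # 5 tokens, model dimension 8
Wq = rng.normal(size=(8, 4))     # illustrative projection matrices
Wk = rng.normal(size=(8, 4))
A = self_attention_matrix(X, Wq, Wk)
# Each row of A is a probability distribution over the 5 tokens.
```

Each row of the resulting matrix is non-negative and sums to one, which is the structural property analyses of attention matrices typically start from.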
As language models are trained on ever more text, researchers are turnin...
Explainable NLP (ExNLP) has increasingly focused on collecting human-ann...
Generating text from structured inputs, such as meaning representations ...
Humans give contrastive explanations that explain why an observed event...
Interpretable NLP has taken an increasing interest in ensuring that explan...
Natural language rationales could provide intuitive, higher-level explan...
Trust is a central component of the interaction between people and AI, i...
High-quality and large-scale data are key to success for AI systems. How...
Language models pretrained on text from a wide variety of sources form t...
Machine comprehension of texts longer than a single sentence often requi...
For over 12 years, machine learning has been used to extract opinion-holder-ta...
Resolving abstract anaphora is an important but difficult task for text...
Modal sense classification (MSC) is a special WSD task that depends on t...