Generation of plausible yet incorrect factual information, termed halluc...
As large language models improve, there is increasing interest in techni...
We propose Reinforcement Learning from Contrast Distillation (RLCD), a m...
In-context learning (ICL) improves language models' performance on a var...
Recent advances in open-domain text generation models powered by large p...
Pretrained model-based evaluation metrics have demonstrated strong perfo...
Given a prefix (context), open-ended generation aims to decode texts tha...
In this paper, we conduct a thorough investigation into the reasoning ca...
Recent studies on transformer-based language models show that they can a...
In recent years, large pre-trained language models (LLMs) have demonstra...
This survey reviews works in which language models (LMs) are augmented w...
Abstractive dialogue summarization has long been viewed as an important ...
Recent work has shown that fine-tuning large pre-trained language models...
Lack of factual correctness is an issue that still plagues state-of-the-...
Prompting large language models has enabled significant recent progress ...
Current large language models can perform reasonably well on complex tas...
Large language models show improved downstream task performance when pro...
Large language models (LLMs) have exhibited remarkable capabilities in l...
Abstractive summarization models typically generate content unfaithful t...
Machine translation has seen rapid progress with the advent of Transform...
Hate speech detection is complex; it relies on commonsense reasoning, kn...
Recently, there has been a surge of interest in the NLP community on the...
Factual inconsistencies in generated summaries severely limit the practi...
Recent neural models that extend the pretrain-then-finetune paradigm con...
Current efficient fine-tuning methods (e.g., adapters, prefix-tuning, et...
Do language models have beliefs about the world? Dennett (1995) famously...
Current language models can generate high-quality text. Are they simply ...
Recent years have brought about an interest in the challenging task of s...
Abstractive summarization, the task of generating a concise summary of i...
The progress in Query-focused Multi-Document Summarization (QMDS) has be...
Neuro-symbolic representations have proved effective in learning structu...
Text generation models can generate factually inconsistent text containi...
Existing language models excel at writing from scratch, but many real-wo...
The paper surveys evaluation methods of natural language generation (NLG...
While recent state-of-the-art results for adversarial imitation-learning...
Many high-level procedural tasks can be decomposed into sequences of ins...
Fluent communication requires understanding your audience. In the new co...
We propose the task of outline-conditioned story generation: given an ou...
Redundancy-aware extractive summarization systems score the redundancy o...
Transformers have increasingly outperformed gated RNNs in obtaining new ...
Web search engines today return a ranked list of document links in respo...
Core to the vision-and-language navigation (VLN) challenge is building r...
We introduce Cooperative Generator-Discriminator Networks (Co-opNet), a ...
Vector representations of sentences, trained on massive text corpora, ar...
We present the first comprehensive study on automatic knowledge base con...
Large-scale learning of transformer language models has yielded improvem...
Variational autoencoders (VAEs) with an auto-regressive decoder have bee...
Variational autoencoders (VAEs) have received much attention recently as...
Vision-language navigation (VLN) is the task of navigating an embodied a...
We propose a hierarchically structured reinforcement learning approach t...