Reinforcement learning from human feedback (RLHF) has emerged as a relia...
Artificial General Intelligence (AGI) requires comprehensive understandi...
The convergence of text, visual, and audio data is a key step towards human...
We present Composable Diffusion (CoDi), a novel generative model capable...
Modern software systems heavily rely on external libraries developed by ...
Large Language Models (LLMs) have shown impressive performance as genera...
Logical reasoning over text is an important ability that requires understa...
This paper focuses on analyzing and improving the commonsense ability of...
Answering open-domain questions requires world knowledge about in-contex...
A common thread of retrieval-augmented methods in the existing literatur...
Contrastive learning has recently achieved state-of-the-art performance ...
Entities, as important carriers of real-world knowledge, play a key role...
Controllable text generation systems often leverage control codes to dir...
Knowledge-intensive tasks, such as open-domain question answering (QA), ...
This paper revisits visual representation in knowledge-based visual ques...
Despite the successes of neural attention models for natural language generation...
The goal of this work is to build flexible video-language models that ca...
Semi-supervised learning has shown promise in allowing NLP models to generalize...
Human intelligence is multimodal; we integrate visual, linguistic, and acoustic...
Recent developments in large-scale pre-trained language models (PLMs) have...
Pre-trained language models are still far from human performance in task...
Generative commonsense reasoning (GCR) in natural language is the task of reasoning ...
Automatic machine learning, or AutoML, holds the promise of truly democratizing...
Vision-language (V+L) pretraining models have achieved great success in ...
We initiate the first empirical study on the use of MLP architectures fo...
Most of today's AI systems focus on using self-attention mechanisms and ...
Vision-and-language (VL) pre-training has proven to be highly effective ...
In this paper we explore the use of symbolic knowledge and machine teach...
Commonsense reasoning (CSR) requires the model to be equipped with gener...
Pre-trained language models (PLMs) aim to learn universal language representations...
The current Open-Domain Question Answering (ODQA) model paradigm often conta...
It is often observed in knowledge-centric tasks (e.g., commonsense ques...
Generating paragraphs of diverse content is important in many applicati...
For task-oriented dialog systems to be maximally useful, they must be able...
Commonsense reasoning requires a model to make presumptions about world ...
Cross-lingual Summarization (CLS) aims at producing a summary in the tar...
The goal of text generation is to make machines express themselves in human language...
Spoken language understanding (SLU) requires a model to analyze input acoustic...
Knowledge graphs (KGs) contain rich information about world knowledge, e...
Recent successes in deep generative modeling have led to significant adv...
Neural models have become successful at producing abstractive summaries ...
Due to widespread interest in machine translation and transfer learning,...
Dialog policy determines the next-step actions for agents and hence is c...
The natural language generation (NLG) module in a task-oriented dialogue...
The training of spoken language understanding (SLU) models often faces t...
With the abundance of automatic meeting transcripts, meeting summarizati...
A commonly observed problem with abstractive summarization is the distor...
As a crucial component in task-oriented dialog systems, the Natural Lang...
Text summarization aims to extract essential information from a piece of...