Pretrained multilingual large language models have typically used heuris...
Current image generation models struggle to reliably produce well-formed...
We present FRMT, a new dataset and evaluation benchmark for Few-shot Reg...
Large language models such as GPT-3 (Brown et al., 2020) can perform arb...
Parameter-efficient methods are able to use a single frozen pre-trained ...
In this paper, we explore the challenging problem of performing a genera...
As pre-trained language models have gotten larger, there has been growin...
We provide the first exploration of text-to-text transformers (T5) sente...
In this work, we take the first steps towards building a universal rewri...
Recently, mT5 - a massively multilingual version of T5 - leveraged a uni...
Most widely-used pre-trained language models operate on sequences of tok...
In this work, we explore "prompt tuning", a simple yet effective mechani...
Machine learning has brought striking advances in multilingual natural l...
We propose a straightforward vocabulary adaptation scheme to extend the ...
We propose a simple method to generate large amounts of multilingual que...
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified ...
We present a novel approach to the problem of text style transfer. Unlik...
Retrieval question answering (ReQA) is the task of retrieving a sentence...
We present LAReQA, a challenging new benchmark for language-agnostic ans...
Purely character-based language models (LMs) have been lagging in qualit...
Popular QA benchmarks like SQuAD have driven progress on the task of ide...
We introduce two pre-trained retrieval focused multilingual sentence enc...
LSTMs and other RNN variants have shown strong performance on character-...
We present a novel approach to learn representations for sentence-level ...
We present models for encoding sentences into embedding vectors that spe...