Autoregressive language models (LMs) map token sequences to probabilitie...
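For context, the chain-rule factorization behind that statement can be made concrete with a toy model; this is only a minimal sketch, and the vocabulary and conditional table below are illustrative, not taken from the paper.

```python
# Minimal illustration: an autoregressive LM assigns a probability to a token
# sequence via the chain rule, p(x_1..x_T) = prod_t p(x_t | x_1..x_{t-1}).
# Here the conditional is a toy lookup keyed on the previous token only.
import math

# Hypothetical toy conditional distributions p(next | previous).
COND = {
    "<bos>": {"the": 0.6, "a": 0.4},
    "the":   {"cat": 0.5, "dog": 0.5},
    "a":     {"cat": 0.7, "dog": 0.3},
    "cat":   {"<eos>": 1.0},
    "dog":   {"<eos>": 1.0},
}

def sequence_log_prob(tokens):
    """Sum of log p(x_t | x_{t-1}) for a sequence bracketed by <bos>/<eos>."""
    log_p, prev = 0.0, "<bos>"
    for tok in tokens + ["<eos>"]:
        log_p += math.log(COND[prev][tok])
        prev = tok
    return log_p

print(math.exp(sequence_log_prob(["the", "cat"])))  # 0.6 * 0.5 * 1.0 = 0.3
```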
Pre-trained language models and other generative models have revolutioni...
Aligning language models with preferences can be posed as approximating ...
The availability of large pre-trained models is changing the landscape o...
Energy-Based Models (EBMs) allow for extremely flexible specifications o...
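As a rough illustration of that flexibility (a sketch with made-up features, not the paper's construction): an EBM only needs an unnormalized score, so any global property of a sequence can enter the energy; normalization is done exactly here only because the toy space is tiny.

```python
# Illustrative EBM sketch: arbitrary global features of a sequence combine
# into an energy, defining an unnormalized score exp(-E(x)).
import math
from itertools import product

VOCAB = ["a", "b"]

def energy(seq):
    """Toy energy mixing two global features of the whole sequence."""
    length_penalty = 0.5 * len(seq)                                   # prefer shorter sequences
    repeat_bonus = -1.0 * sum(x == y for x, y in zip(seq, seq[1:]))   # favor repeated tokens
    return length_penalty + repeat_bonus

def normalized_distribution(max_len=3):
    """Exact normalization by enumeration, feasible only on a tiny space."""
    space = [seq for n in range(1, max_len + 1) for seq in product(VOCAB, repeat=n)]
    weights = {seq: math.exp(-energy(seq)) for seq in space}
    Z = sum(weights.values())
    return {seq: w / Z for seq, w in weights.items()}

dist = normalized_distribution()
print(max(dist, key=dist.get), max(dist.values()))  # most probable toy sequence
```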
Machine learning is shifting towards general-purpose pretrained generati...
Neural language models can be successfully trained on source code, leadi...
We propose a Distributional Approach to address Controlled Text Generati...
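The abstract is cut off above, so the following is only a generic sketch of one way "distributional" control is often phrased, namely as moment constraints fixing the expected value of feature functions under the model; the feature functions and target values are hypothetical.

```python
# Illustrative sketch (not a claim about the paper's exact formulation):
# a distributional constraint asks that the expectation of a feature match a
# target value, while a pointwise constraint corresponds to a target of 1.0.

def feature_female(sample):           # hypothetical binary feature
    return 1.0 if "she" in sample else 0.0

def feature_sports(sample):           # hypothetical binary feature
    return 1.0 if "game" in sample else 0.0

TARGETS = {feature_female: 0.5, feature_sports: 1.0}   # 1.0 = pointwise constraint

def moment_gaps(samples):
    """Empirical expectation of each feature minus its target value."""
    return {
        f.__name__: sum(f(s) for s in samples) / len(samples) - target
        for f, target in TARGETS.items()
    }

samples = ["she watched the game", "he played the game", "she read a book"]
print(moment_gaps(samples))
```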
Global Autoregressive Models (GAMs) are a recent proposal [Parshakova et...
Character-based translation has several appealing advantages, but its pe...
We share a French-English parallel corpus of Foursquare restaurant revie...
Standard autoregressive seq2seq models are easily trained by max-likelih...
In previous works, neural sequence models have been shown to improve sig...
This paper describes our submission to the E2E NLG Challenge. Recently, ...
Seq2seq models based on Recurrent Neural Networks (RNNs) have recently r...
We introduce LL-RNNs (Log-Linear RNNs), an extension of Recurrent Neural...
We introduce an LSTM-based method for dynamically integrating several wo...
We propose an approach for helping agents compose email replies to custo...
Most current sampling algorithms for high-dimensional distributions are ...