Most research about natural language generation (NLG) relies on evaluati...
Training models with varying capacities can be advantageous for deployin...
Large language models (LLMs) demonstrate impressive multilingual capabil...
Large language models (LLMs) have shown surprisingly good performance in...
Context-aware neural machine translation aims to use the document-level...
Pre-trained models have achieved remarkable success in natural language...
Despite the current success of multilingual pre-training, most prior wor...
Multilingual machine translation has been proven an effective strategy t...
Transformer structure, stacked by a sequence of encoder and decoder netw...
Language guided image inpainting aims to fill in the defective regions o...
Existing document-level neural machine translation (NMT) models have suf...
This report describes Microsoft's machine translation systems for the WM...
Multilingual machine translation enables a single model to translate bet...
This paper presents a Multitask Multilingual Multimodal Pre-trained mode...
While many BERT-based cross-modal pre-trained models produce excellent r...
We propose UniViLM: a Unified Video and Language pre-training Model for ...
We present Unicoder, a universal language encoder that is insensitive to...