Pre-trained models for Czech Natural Language Processing are often evalu...
Previous work has shown that the representations output by contextual language ...
We study how multilingual sentence representations capture European coun...
In most Vision-Language (VL) models, the understanding of the image stru...
We present Charles University submissions to the WMT22 General Translati...
Pre-trained multilingual language models (PMLMs) are commonly used when ...
Massively multilingual sentence representations are trained on large cor...
Static and contextual multilingual embeddings have complementary strengt...
We address two problems of domain adaptation in neural machine translati...
We present a literature and empirical survey that critically assesses th...
We propose the neural string edit distance model for string-pair classification ...
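For background, the classical (non-neural) string edit distance that such a model builds on is the Levenshtein distance, computed by dynamic programming over insertions, deletions, and substitutions. The sketch below shows only this classical baseline, not the proposed neural model.

```python
# Background sketch only: classical Levenshtein edit distance via dynamic
# programming. The neural string edit distance model presumably learns soft
# versions of these operations; this snippet is just the hard-coded baseline.
def edit_distance(a: str, b: str) -> int:
    # dp[i][j] = minimum number of edits turning a[:i] into b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i                      # delete all of a[:i]
    for j in range(len(b) + 1):
        dp[0][j] = j                      # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            substitution = dp[i - 1][j - 1] + (a[i - 1] != b[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(a)][len(b)]

print(edit_distance("kitten", "sitting"))  # 3
```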
Applying the Transformer architecture on the character level usually req...
Multilingual contextual embeddings, such as multilingual BERT (mBERT) an...
Non-autoregressive (nAR) models for machine translation (MT) manifest su...
Multilingual BERT (mBERT) provides sentence representations for 104 languages ...
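As an aside, a common way to turn mBERT's token-level outputs into a single sentence vector is mean pooling over the final hidden layer. The sketch below assumes the `bert-base-multilingual-cased` checkpoint and the HuggingFace `transformers` API; it is an illustration of the general technique, not necessarily the pooling used in the paper above.

```python
# Illustrative only: obtaining sentence representations from multilingual BERT
# by mean-pooling the final-layer token embeddings over non-padding positions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def sentence_embeddings(sentences):
    # Tokenize a batch of sentences with padding so they share one tensor.
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (batch, tokens, 768)
    # Mean-pool over real tokens only, ignoring padding positions.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, 768)

emb = sentence_embeddings(["Ein Beispielsatz.", "A sample sentence."])
print(emb.shape)  # torch.Size([2, 768])
```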
Recent literature shows that large-scale language modeling provides exce...
Filters of convolutional networks used in computer vision are often visualized ...
We present our submission to the WMT19 Robustness Task. Our baseline sys...
In this paper, we study abstractive summarization for open-domain videos...
Autoregressive decoding is the only part of sequence-to-sequence models ...
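For illustration, a greedy autoregressive decoding loop looks roughly as follows: every step conditions on the tokens emitted so far, which is exactly what blocks parallel generation of the target sequence. The `decoder_step` callback is a hypothetical stand-in for any trained sequence-to-sequence decoder, not an interface from the paper.

```python
# Minimal sketch of greedy autoregressive decoding: each step feeds the tokens
# generated so far back into the decoder, so the steps cannot be parallelized.
from typing import Callable, List

def greedy_decode(decoder_step: Callable[[List[int]], List[float]],
                  bos_id: int, eos_id: int, max_len: int = 50) -> List[int]:
    output = [bos_id]
    for _ in range(max_len):
        logits = decoder_step(output)                               # next-token scores
        next_id = max(range(len(logits)), key=logits.__getitem__)   # argmax token id
        output.append(next_id)
        if next_id == eos_id:                                       # stop at end-of-sequence
            break
    return output
```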
In multi-source sequence-to-sequence tasks, the attention mechanism can ...
We present our submission to the WMT18 Multimodal Translation Task. The ...
In this paper, we describe our submissions to the WMT17 Multimodal Translation Task ...
Modeling attention in neural multi-source sequence-to-sequence learning ...
Neural sequence-to-sequence learning recently became a very promising paradigm ...