Recent pre-trained language models (PLMs) achieve promising results in e...
ChatGPT, a large-scale language model based on the advanced GPT-3.5 arch...
Transfer learning is a simple and powerful method that can be used to bo...
In this report, we present our submission to the WMT 2022 Metrics Shared...
The attention mechanism has become the dominant module in natural language p...
In this paper, we present our submission to the Shared Metrics Task: RoBLEUR...
Translation quality evaluation plays a crucial role in machine translati...
We release 70 small and discriminative test sets for machine translation...
Pre-training (PT) and back-translation (BT) are two simple and powerful ...
The high-quality translation results produced by machine translation (MT...
Previous studies have shown that initializing neural machine translation...
Meta-learning has been extensively validated as beneficial for low-r...
Encoder layer fusion (EncoderFusion) is a technique to fuse all the enco...
Recent studies have proven that the training of neural machine translati...
A neural machine translation (NMT) system is expensive to train, especia...
As a special machine translation task, dialect translation has two main ...
Word embedding is central to neural machine translation (NMT), which has...
Transformer is the state-of-the-art model in recent machine translation ...
Self-attention networks (SAN) have attracted considerable interest due to t...
Self-attention networks (SANs) have drawn increasing interest due to the...
Self-attention models have shown their flexibility in parallel computation ...
Self-attention network (SAN) has recently attracted increasing interest ...
Self-attention networks have proven to be of profound value for their stre...
This paper proposes a hierarchical attentional neural translation model ...