Previous studies have proved that cross-lingual knowledge distillation c...
In this work, we present a fully self-supervised framework for semantic
...
In this paper, we focus on an issue that is usually overlooked...
With the increasing amount of image data and the lack of correspond...
Encoder pre-training is promising in end-to-end Speech Translation (ST),...
Large amounts of data have made neural machine translation (NMT) a big su...
This paper proposes a new pre-training method, called Code-Switching
Pre...
Pre-trained language models like BERT have proven to be highly performan...
Chinese keyword spotting is a challenging task as there is no visual bla...
Pre-trained language representation models, such as BERT, capture a gene...
Existing works, including ELMo and BERT, have revealed the importance of...
In this paper, we propose a novel conditional generative adversarial net...