The rapid advancements in large language models (LLMs) have presented ch...
There are two types of approaches to solving cross-lingual transfer: mul...
Existing research has shown that a multilingual pre-trained language mod...
Large Language Models (LLMs) have shown remarkable performance in variou...
Effectively utilizing LLMs for complex tasks is challenging, often invol...
Evaluating the general abilities of foundation models to tackle human-le...
Artificial Intelligence (AI) has made incredible progress recently. On t...
Recently, multilingual pre-trained language models (PLMs) such as mBERT a...
Recent research demonstrates the effectiveness of using pretrained langu...
Multilingual pre-trained language models, such as mBERT and XLM-R, have ...
Dense retrieval has achieved impressive advances in first-stage retrieva...
Cross-lingual pre-training has achieved great successes using monolingua...
In most machine learning tasks, we evaluate a model M on a given data po...
Question-Aware Open Information Extraction (Question-aware Open IE) take...
Natural Questions is a new challenging machine reading comprehension ben...
Multilingual pre-trained models could leverage the training data from a ...
In this paper, we introduce XGLUE, a new benchmark dataset to train larg...
We present Unicoder, a universal language encoder that is insensitive to...