Recent neural news recommenders (NNR) extend content-based recommendatio...
Modular vision-language models (Vision-LLMs) align pretrained image enco...
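The alignment mentioned above is commonly realized as a small trainable projection sitting between a frozen image encoder and a frozen LLM. The PyTorch sketch below illustrates that general recipe only; the module name, dimensions, and two-layer MLP design are illustrative assumptions, not the configuration used in the work.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Maps frozen image-encoder patch features into an LLM's token embedding space.
    A generic modular Vision-LLM pattern; all dimensions are illustrative."""
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # Only this projection is trained; the image encoder and the LLM stay frozen.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from a frozen ViT.
        # Returns "visual tokens" (batch, num_patches, llm_dim) that can be
        # prepended to the text embeddings before running the frozen LLM.
        return self.proj(patch_features)

# Toy usage with random features standing in for a frozen image encoder's output
features = torch.randn(2, 196, 1024)
print(VisualProjector()(features).shape)  # torch.Size([2, 196, 4096])
```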
Vision-and-language (VL) models with separate encoders for each modality...
Massively multilingual language models have displayed strong performance...
Event detection is a crucial information extraction task in many domains...
Massively multilingual pretrained transformers (MMTs) have tremendously ...
Multilingual language models have pushed state-of-the-art in cross-lingu...
The advent of personalized news recommendation has given rise to increas...
Demographic factors (e.g., gender or age) shape our language. Previous w...
Few-shot Intent Classification (FSIC) is one of the key challenges in mo...
Sociodemographic factors (e.g., gender or age) shape our language. Previ...
While pretrained language models (PLMs) primarily serve as general purpo...
This paper introduces our proposed system for the MIA Shared Task on Cro...
Research on (multi-domain) task-oriented dialog (TOD) has predominantly ...
State-of-the-art neural (re)rankers are notoriously data hungry which - ...
Geographic linguistic features are commonly used to improve the performa...
In this work we present a systematic empirical study focused on the suit...
Recent work has shown that self-supervised dialog-specific pretraining o...
Open Information Extraction (OIE) is the task of extracting facts from s...
Intrinsic evaluations of OIE systems are carried out either manually – w...
Unfair stereotypical biases (e.g., gender, racial, or religious biases) ...
We analyze bias in historical corpora as encoded in diachronic distribut...
Despite extensive research in the past years, the computational modeling...
Text representation models are prone to exhibit a range of societal bias...
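Such biases in representation spaces are often quantified with association tests over word vectors. The NumPy sketch below shows a WEAT-style effect size (Caliskan et al.) as one standard example of such a measure; the word sets and random vectors are toy placeholders, and this is not necessarily the measure used in the work above.

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus mean similarity to B
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Differential association of target sets X, Y with attribute sets A, B
    # (larger absolute value = stronger bias signal).
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy example with random 50-d vectors standing in for real word embeddings
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(5, 50)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))
```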
Despite the fact that natural language conversations with machines repre...
Recent research efforts in NLP have demonstrated that distributional wor...
Pretrained multilingual text encoders based on neural Transformer archit...
In parallel to their overwhelming success across NLP tasks, language abi...
Adapter modules, additional trainable parameters that enable efficient f...
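For context, a typical bottleneck adapter (in the style of Houlsby et al.) is a down-projection, nonlinearity, up-projection, and residual connection inserted into each transformer layer while the pretrained weights stay frozen. The PyTorch sketch below uses illustrative dimensions and is not the specific adapter configuration of the paper.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Lightweight adapter: only these small projection matrices are trained,
    while the pretrained transformer weights stay frozen."""
    def __init__(self, hidden_dim=768, bottleneck_dim=48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the layer close to identity at initialization
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Toy usage: adapt a batch of transformer hidden states
h = torch.randn(2, 16, 768)
print(BottleneckAdapter()(h).shape)  # torch.Size([2, 16, 768])
```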
Recent work has shown that distributional word vector spaces often encod...
The success of large pretrained language models (LMs) such as BERT and R...
Traditional NLP has long held (supervised) syntactic parsing necessary f...
Following the major success of neural language models (LMs) such as BERT...
Massively multilingual transformers pretrained with language modeling ob...
In order to simulate human language capacity, natural language processin...
Current methods of cross-lingual parser transfer focus on predicting the...
Neural summarization models suffer from the fixed-size input limitation:...
Breaking down the structure of long texts into semantically coherent seg...
Distributional word vectors have recently been shown to encode many of t...
Unsupervised pretraining models have been shown to facilitate a wide ran...
Recent efforts in cross-lingual word embedding (CLWE) learning have pred...
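Much of this line of work is projection-based: an orthogonal map between two monolingual embedding spaces is learned from a seed translation dictionary. The NumPy sketch below shows the standard orthogonal Procrustes solution as one common instance of that family, on toy data; it is not the specific method proposed in the paper.

```python
import numpy as np

def procrustes_map(X_src, Y_tgt):
    """Orthogonal map W minimizing ||X_src @ W - Y_tgt||_F, where the rows of
    X_src and Y_tgt are embeddings of seed-dictionary translation pairs."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

# Toy seed dictionary: 500 aligned pairs of 100-d source/target embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))
W_true, _ = np.linalg.qr(rng.normal(size=(100, 100)))  # hidden ground-truth rotation
Y = X @ W_true
W = procrustes_map(X, Y)
print(np.allclose(X @ W, Y, atol=1e-6))  # True: the rotation is recovered
```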
Word embeddings have recently been shown to reflect many of the pronounc...
During the last fifteen years, text scaling approaches have become a cen...
Cross-lingual word embeddings (CLEs) enable multilingual modeling of mea...
Semantic specialization is the process of fine-tuning pre-trained distri...
Word vector specialisation (also known as retrofitting) is a portable, l...
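To make the specialization/retrofitting idea concrete, below is a small NumPy sketch of the classic retrofitting update (Faruqui et al., 2015), which pulls each vector toward its lexicon neighbours while keeping it close to its original distributional vector. The lexicon and vectors are toy placeholders, not the constraints or procedure used in these particular papers.

```python
import numpy as np

def retrofit(vectors, lexicon, n_iters=10, alpha=1.0, beta=1.0):
    """vectors: {word: np.array}; lexicon: {word: [related words]}.
    Returns specialised vectors that stay close to the originals (alpha)
    while moving toward their lexicon neighbours (beta)."""
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(n_iters):
        for word, neighbours in lexicon.items():
            neighbours = [n for n in neighbours if n in new_vecs]
            if word not in new_vecs or not neighbours:
                continue
            # Weighted average of the original vector and current neighbour vectors
            num = alpha * vectors[word] + beta * sum(new_vecs[n] for n in neighbours)
            new_vecs[word] = num / (alpha + beta * len(neighbours))
    return new_vecs

# Toy example: pull "cheap" toward "inexpensive" in a 3-d space
vecs = {"cheap": np.array([1.0, 0.0, 0.0]), "inexpensive": np.array([0.0, 1.0, 0.0])}
print(retrofit(vecs, {"cheap": ["inexpensive"]})["cheap"])
```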
We propose a fully unsupervised framework for ad-hoc cross-lingual infor...
Recognizing semantically similar sentences or paragraphs across language...