Weakly supervised learning is a popular approach for training machine le...
Few-shot fine-tuning and in-context learning are two alternative strateg...
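As a rough illustration of the contrast between the two strategies (not drawn from the truncated abstract itself): in-context learning adapts a frozen model purely through demonstrations placed in the prompt, whereas few-shot fine-tuning would update the model's weights on those same examples. The sketch below builds a few-shot prompt in plain Python; the sentiment task, labels, and format are hypothetical.

```python
# Illustrative sketch: in-context learning conditions a frozen model on
# labeled demonstrations in the prompt; few-shot fine-tuning would instead
# run gradient updates on these same examples.
demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours.", "negative"),
]

def build_fewshot_prompt(demos, query):
    """Concatenate labeled demonstrations and the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_fewshot_prompt(demonstrations, "A quiet, moving film."))
```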
Although masked language models are highly performant and widely adopted...
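For readers unfamiliar with the objective behind masked language models, here is a minimal sketch, assuming a toy whitespace tokenizer: a fraction of input tokens (15% in BERT) is replaced with a [MASK] placeholder, and the model is trained to recover the originals. The example sentence and masking loop are illustrative only.

```python
import random

random.seed(0)
tokens = "the cat sat on the mat".split()

# Sketch of the MLM objective: hide a fraction of tokens with [MASK]
# and train the model to predict them; 15% is the rate used by BERT.
def mask_tokens(tokens, mask_prob=0.15):
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = tok          # the model must recover this token
            masked.append("[MASK]")
        else:
            masked.append(tok)
    return masked, targets

print(mask_tokens(tokens))
```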
Large amounts of training data are one of the major reasons for the high...
Analyzing ethnic or religious bias is important for improving fairness, ...
Learning semantically meaningful sentence embeddings is an open problem ...
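One common baseline for sentence embeddings, sketched below under toy assumptions, is to mean-pool the token vectors produced by an encoder and compare sentences by cosine similarity. The random 8-dimensional vectors here stand in for real encoder outputs.

```python
import numpy as np

# Toy token embeddings for two sentences (rows = tokens, cols = dimensions);
# in practice these would come from a pre-trained encoder.
rng = np.random.default_rng(0)
sent_a = rng.normal(size=(5, 8))   # 5 tokens, 8-dim vectors
sent_b = rng.normal(size=(7, 8))   # 7 tokens, 8-dim vectors

def mean_pool(token_embeddings):
    """Average token vectors into one fixed-size sentence embedding."""
    return token_embeddings.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

emb_a, emb_b = mean_pool(sent_a), mean_pool(sent_b)
print(f"cosine similarity: {cosine(emb_a, emb_b):.3f}")
```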
Multilingual pre-trained language models (PLMs) have demonstrated impres...
Recently, neural network-based approaches to knowledge-intensive NLP task...
Many NLP models gain performance by having access to a knowledge base. A...
Several variants of deep neural networks have been successfully employed...
Transformer-based language models achieve high performance on various ta...
Visual captioning aims to generate textual descriptions given images. Tr...
Fine-tuning pre-trained contextualized embedding models has become an in...
Generating longer textual sequences when conditioned on the visual infor...
Fine-tuning pre-trained transformer-based language models such as BERT h...
The increase in computational power and available data has fueled a wide...
Recently, several logit regularization methods have been proposed in [Ka...
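The citation above is cut off, so the specific methods are unknown; the following is only a generic example of logit-level regularization: label smoothing, which replaces the one-hot target with a softened distribution before computing cross-entropy. The smoothing formulation and all values are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

def smoothed_cross_entropy(logits, target, num_classes, eps=0.1):
    """Cross-entropy against a label-smoothed target: one common
    formulation gives the true class 1 - eps and spreads eps
    uniformly over the remaining classes."""
    q = np.full(num_classes, eps / (num_classes - 1))
    q[target] = 1.0 - eps
    log_p = np.log(softmax(logits))
    return float(-(q * log_p).sum())

logits = np.array([2.0, 0.5, -1.0])
print(f"smoothed CE: {smoothed_cross_entropy(logits, target=0, num_classes=3):.3f}")
```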