Large-scale language models (LLMs), such as ChatGPT, are capable of gene...
BEIR is a benchmark dataset for zero-shot evaluation of information retr...
The recent LLMs like GPT-4 and PaLM-2 have made tremendous progress in s...
Supervised ranking methods based on bi-encoder or cross-encoder architec...
Question answering over knowledge bases is considered a difficult proble...
Anserini is a Lucene-based toolkit for reproducible information retrieva...
This paper introduces a method called Sparsified Late Interaction for Mu...
While dense retrieval has been shown effective and efficient across task...
Recently, there has been significant progress in teaching language model...
Current pre-trained language model approaches to information retrieval c...
Dense retrieval models using a transformer-based bi-encoder design have ...
Recent rapid advancements in deep pre-trained language models and the in...
Sparse lexical representation learning has demonstrated much progress in...
Pseudo-Relevance Feedback (PRF) utilises the relevance signals from the ...
In this paper, we present an approach for predicting trust links between...
We present Mr. TyDi, a multi-lingual benchmark dataset for mono-lingual ...
Recent developments in representational learning for information retriev...
Text retrieval using learned dense representations has recently emerged ...
Pyserini is an easy-to-use Python toolkit that supports replicable IR re...
This work describes the adaptation of a pretrained sequence-to-sequence ...
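The Pyserini entry above describes a Python toolkit for replicable retrieval experiments. As a minimal, hedged sketch of how such a toolkit is typically used for BM25 search (the prebuilt index name 'msmarco-v1-passage' and the example query are illustrative assumptions, not taken from the snippets above):

    from pyserini.search.lucene import LuceneSearcher

    # Open a prebuilt Lucene index; the index name here is an assumption for
    # illustration -- consult the Pyserini documentation for available indexes.
    searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')

    # Retrieve the top-10 passages for an example query with BM25.
    hits = searcher.search('what is dense retrieval', k=10)

    for rank, hit in enumerate(hits, start=1):
        print(f'{rank:2d} {hit.docid:>12} {hit.score:.4f}')

LuceneSearcher wraps Anserini's Lucene-backed BM25 implementation, which is what ties the reproducibility claims of the Anserini and Pyserini entries above together.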