In-context learning (ICL), teaching a large language model (LLM) to perf...
Multi-vector retrieval models such as ColBERT [Khattab and Zaharia, 2020...
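ColBERT-style models score a document by "late interaction": each query token embedding is matched against its best-matching document token embedding, and the per-token maxima are summed. A minimal sketch of that MaxSim scoring (the function name `maxsim_score` is mine, and the vectors are assumed to be L2-normalized so dot products are cosine similarities):

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Late-interaction relevance score.

    query_vecs: (n_query_tokens, dim), doc_vecs: (n_doc_tokens, dim),
    both assumed L2-normalized token embeddings.
    """
    sims = query_vecs @ doc_vecs.T          # (n_query_tokens, n_doc_tokens) cosines
    return float(sims.max(axis=1).sum())    # best doc match per query token, summed
```

For example, a query token identical to some document token contributes 1.0, while a partially matching token contributes its best cosine similarity.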
Multi-vector retrieval models improve over single-vector dual encoders o...
Language models (LMs) now excel at many tasks such as few-shot learning,...
Much recent research on information retrieval has focused on how to tran...
Many important questions (e.g. "How to eat healthier?") require conversa...
It has been shown that dual encoders trained on one domain often fail to...
Classical information retrieval systems such as BM25 rely on exact lexic...
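To make the exact-lexical-matching baseline concrete, here is a minimal single-document BM25 scorer over tokenized text (a textbook sketch with the common `k1=1.5`, `b=0.75` defaults, not any particular system's implementation; `bm25_score` is a name I introduce):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """BM25 score of one tokenized document for a tokenized query.

    corpus: list of tokenized documents, used for document frequencies
    and the average document length.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)   # document frequency of term t
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        f = tf[t]                               # term frequency in this document
        norm = f + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * f * (k1 + 1) / norm
    return score
```

Only exact token overlap contributes: a document that never contains a query term scores zero for it, which is precisely the vocabulary-mismatch limitation the line above alludes to.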
Pre-trained deep language models (LMs) have advanced the state-of-the-art...
Most research on pseudo relevance feedback (PRF) has been done in vector...
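The classic vector-space instance of PRF is Rocchio expansion: treat the top-k retrieved documents as pseudo-relevant and move the query vector toward their centroid. A minimal sketch (the function name and the `alpha`/`beta` defaults are illustrative, not taken from any specific paper):

```python
import numpy as np

def rocchio_expand(query_vec, feedback_docs, alpha=1.0, beta=0.75):
    """Rocchio pseudo relevance feedback without a non-relevant term.

    feedback_docs: (k, dim) array of top-k retrieved document vectors,
    all treated as relevant.
    """
    centroid = feedback_docs.mean(axis=0)        # mean of pseudo-relevant docs
    return alpha * query_vec + beta * centroid   # shifted, expanded query
```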
Neural rankers based on deep pretrained language models (LMs) have been ...
Deep language models such as BERT pre-trained on large corpora have given...
Tabular data provide answers to a significant portion of search queries....
Information retrieval traditionally has relied on lexical matching signa...
Recent innovations in Transformer-based ranking models have advanced the...
Term frequency is a common method for identifying the importance of a te...
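As a toy illustration of term frequency as an importance signal, the sketch below computes length-normalized term frequencies for a tokenized text (`term_frequencies` is a name I introduce for illustration):

```python
from collections import Counter

def term_frequencies(tokens):
    """Map each term to its count divided by the document length."""
    counts = Counter(tokens)
    total = len(tokens)
    return {term: count / total for term, count in counts.items()}
```

A term repeated twice in a three-token text gets weight 2/3, reflecting the intuition that more frequent terms are more important to the text.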
Neural networks provide new possibilities to automatically learn complex...
In this work we propose a method that geolocates videos within a delimit...
This paper studies the consistency of the kernel-based neural ranking mo...
This paper proposes K-NRM, a kernel based neural model for document rank...
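The core of K-NRM is kernel pooling: given the matrix of cosine similarities between query and document term embeddings, each Gaussian kernel soft-counts similarities near its mean, and a log-sum over query terms yields one ranking feature per kernel. A rough sketch of that pooling step (a simplified reading with a shared kernel width, not the paper's full trained model):

```python
import numpy as np

def kernel_pooling(sim_matrix, mus, sigma=0.1):
    """K-NRM-style Gaussian kernel pooling.

    sim_matrix: (n_query_terms, n_doc_terms) cosine similarities.
    mus: kernel means; returns one pooled feature per kernel.
    """
    feats = []
    for mu in mus:
        # soft count of doc-term similarities near mu, per query term
        k = np.exp(-(sim_matrix - mu) ** 2 / (2 * sigma ** 2)).sum(axis=1)
        feats.append(np.log(np.clip(k, 1e-10, None)).sum())  # log-sum over query terms
    return np.array(feats)
```

In the full model these pooled features feed a small learning-to-rank layer that produces the final score.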