Individuals with complex communication needs (CCN) often rely on augment...
One common trend in recent studies of language models (LMs) is the use o...
Recent work has explored Large Language Models (LLMs) to overcome the la...
Developing a universal model that can efficiently and effectively respon...
As the capabilities of language models continue to advance, it is concei...
The advent of multilingual language models has generated a resurgence of...
The present study aims to explore the capabilities of Language Models (L...
This paper reports on a study of cross-lingual information retrieval (CL...
Recent work has shown that inducing a large language model (LLM) to gene...
Recently, InPars introduced a method to efficiently use large language m...
This paper proposes a question-answering system that can answer question...
Bi-encoders and cross-encoders are widely used in many state-of-the-art ...
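Since this entry contrasts bi-encoders and cross-encoders, a minimal sketch of the two scoring patterns may be useful; it assumes the sentence-transformers library and publicly available checkpoints (the model names below are illustrative assumptions, not the models used in the paper):

```python
# Sketch: bi-encoder vs. cross-encoder relevance scoring (assumed checkpoints).
from sentence_transformers import SentenceTransformer, CrossEncoder, util

query = "how do bi-encoders differ from cross-encoders?"
passages = [
    "A bi-encoder embeds query and passage independently and compares the vectors.",
    "A cross-encoder reads the concatenated query-passage pair and outputs a score.",
]

# Bi-encoder: encode query and passages separately; score with cosine similarity.
# Passage vectors can be precomputed and indexed, so retrieval is fast.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = bi_encoder.encode(query, convert_to_tensor=True)
p_emb = bi_encoder.encode(passages, convert_to_tensor=True)
bi_scores = util.cos_sim(q_emb, p_emb)[0]

# Cross-encoder: score each (query, passage) pair jointly; slower but usually
# more accurate, so it is typically used to rerank a candidate list.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
ce_scores = cross_encoder.predict([(query, p) for p in passages])

for p, b, c in zip(passages, bi_scores.tolist(), ce_scores.tolist()):
    print(f"bi={b:.3f}  cross={c:.3f}  {p[:60]}")
```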
The widespread availability of search APIs (both free and commercial) b...
Robust 2004 is an information retrieval benchmark whose large number of ...
The zero-shot cross-lingual ability of models pretrained on multilingual...
The ability to extrapolate, i.e., to make predictions on sequences that ...
In this work we describe our submission to the product ranking task of t...
Recent work has shown that small distilled language models are strong co...
Recent work has shown that language models scaled to billions of paramet...
The information retrieval community has recently witnessed a revolution ...
There has been mounting evidence that pretrained language models fine-tu...
A typical information extraction pipeline consists of token- or span-lev...
Pretrained multilingual models have become a de facto default approach f...
The MS MARCO ranking dataset has been widely used for training deep lear...
An effective method for cross-lingual transfer is to fine-tune a bilingu...
We describe our single submission to task 1 of COLIEE 2021. Our vanilla ...
The ability to perform arithmetic tasks is a remarkable trait of human i...
Pyserini is an easy-to-use Python toolkit that supports replicable IR re...
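The Pyserini entry above lends itself to a short usage sketch; assuming a recent Pyserini release (where the BM25 searcher is exposed as `LuceneSearcher`) and the prebuilt `msmarco-v1-passage` index name, a replicable BM25 run looks roughly like:

```python
# Sketch: BM25 retrieval with Pyserini over a prebuilt MS MARCO passage index.
# Assumes `pip install pyserini` (which requires Java for Lucene); the prebuilt
# index named below is downloaded automatically on first use.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
hits = searcher.search("what is a lobster roll?", k=10)

for rank, hit in enumerate(hits, start=1):
    print(f"{rank:2} {hit.docid:10} {hit.score:.4f}")
```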
We propose a design pattern for tackling text ranking problems, dubbed "...
This work describes the adaptation of a pretrained sequence-to-sequence ...
The goal of text ranking is to generate an ordered list of texts retriev...
What are the latent questions on some textual data? In this work, we inv...
In natural language processing (NLP), there is a need for more resources...
Despite the widespread adoption of deep learning for machine translation...
We present Covidex, a search engine that exploits the latest neural rank...
Passage retrieval in a conversational context is essential for many down...
We present CovidQA, the beginnings of a question answering dataset speci...
We present the Neural Covidex, a search engine that exploits the latest ...
This paper presents an empirical study of conversational question reform...
We applied the T5 sequence-to-sequence model to tackle the AI2 WinoGrand...
This work proposes a novel adaptation of a pretrained sequence-to-sequen...
In this work we propose a novel self-attention mechanism model to addres...
Citation recommendation systems for the scientific literature, to help a...
We investigate a framework for machine reading, inspired by real world i...
The advent of deep neural networks pre-trained via language modeling tas...
Recent advances in language representation using neural networks have ma...
A goal shared by artificial intelligence and information retrieval is to...
One technique to improve the retrieval effectiveness of a search engine ...
Recently, neural models pretrained on a language modeling task, such as ...
We propose a method to efficiently learn diverse strategies in reinforce...