LEPUS: Prompt-based Unsupervised Multi-hop Reranking for Open-domain QA
We study unsupervised multi-hop reranking for open-domain multi-hop question answering (MQA). Since MQA requires piecing together information from multiple documents, the main challenge lies in retrieving and reranking chains of passages that support the reasoning process. Our approach, LargE models with Prompt-Utilizing reranking Strategy (LEPUS), constructs an instruction-like prompt from a candidate document path and computes the path's relevance score as the probability of generating the given question under a pre-trained language model. Although unsupervised, LEPUS yields reranking performance competitive with state-of-the-art methods trained on thousands of examples. With a small number of samples (e.g., 2), we demonstrate further gains through in-context learning. Finally, we show that when integrated with a reader module, LEPUS achieves competitive multi-hop QA performance, e.g., outperforming fully-supervised QA systems. Code will be released at https://github.com/mukhal/LEPUS
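To make the scoring idea concrete, below is a minimal sketch of prompt-based path reranking with an off-the-shelf causal language model from HuggingFace Transformers. The prompt template, model choice (gpt2), and the helper name score_path are illustrative assumptions, not LEPUS's exact implementation; the intent is only to show scoring a passage chain by the likelihood of generating the question.

```python
# Sketch: rerank candidate passage chains by log P(question | instruction-like prompt).
# Assumptions: a HuggingFace causal LM ("gpt2") and a hypothetical prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def score_path(passages, question):
    """Relevance of a passage chain = average log-likelihood of the question tokens."""
    context = " ".join(passages)
    prompt = f"Read the following passages and write a question.\nPassages: {context}\nQuestion:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    question_ids = tokenizer(" " + question, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, question_ids], dim=1)
    # Mask prompt tokens so only question tokens contribute to the loss.
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100
    with torch.no_grad():
        out = model(input_ids, labels=labels)
    # Negative loss = mean log-likelihood per question token; higher means more relevant.
    return -out.loss.item()

# Usage: rank candidate document paths by descending score.
candidates = {
    "path_a": ["passage 1 text ...", "passage 2 text ..."],
    "path_b": ["passage 3 text ...", "passage 4 text ..."],
}
question = "Who directed the film adapted from the novel written by the author of ...?"
ranked = sorted(candidates, key=lambda p: score_path(candidates[p], question), reverse=True)
print(ranked)
```

In this sketch the score is length-normalized because the model's loss averages over the unmasked question tokens, which keeps chains comparable regardless of prompt length; whether LEPUS normalizes this way is not stated in the abstract.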