In this work, we develop and release Llama 2, a collection of pretrained...
Large language models are trained in two stages: (1) unsupervised pretra...
Prompt tuning is one of the successful approaches for parameter-efficien...
Masked Language Modeling (MLM) has been one of the most prominent approa...
We introduce Progressive Prompts - a simple and efficient approach for c...
Large multilingual language models typically rely on a single vocabulary...
Scientific extreme summarization (TLDR) aims to form ultra-short summari...
Conventional fine-tuning of pre-trained language models tunes all model ...
Recently, pre-trained language models (PLMs) have dominated conditional ...
Current open-domain question answering (QA) systems often follow a Retri...
Summaries generated by abstractive summarization are supposed to only co...
While neural sequence learning methods have made significant progress in...
Conventional sparse retrieval methods such as TF-IDF and BM25 are simple...
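To make the BM25 method named in the previous entry concrete, here is a minimal sketch of its scoring function; the parameter values (k1=1.5, b=0.75), the toy corpus, and the whitespace tokenization are illustrative assumptions, not details from any of the papers listed here.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avgdl, k1=1.5, b=0.75):
    """Score one document against a query with the standard BM25 formula.

    doc_freq maps each term to the number of documents containing it;
    n_docs is the corpus size and avgdl the average document length.
    """
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        # Smoothed inverse document frequency of the term.
        idf = math.log(1 + (n_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        # Term-frequency saturation (k1) with document-length normalization (b).
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_terms) / avgdl))
        score += idf * norm
    return score

# Toy corpus; splitting on whitespace stands in for real tokenization.
docs = [d.split() for d in ["sparse retrieval with bm25",
                            "dense retrieval models",
                            "bm25 is simple"]]
df = Counter(term for d in docs for term in set(d))
avgdl = sum(len(d) for d in docs) / len(docs)
query = "bm25 retrieval".split()
ranked = sorted(range(len(docs)),
                key=lambda i: -bm25_score(query, docs[i], df, len(docs), avgdl))
print(ranked)  # document indices, best match first
```

Note that this sketch recomputes term statistics on the fly; production sparse retrievers precompute them in an inverted index, which is what makes TF-IDF and BM25 so efficient at scale.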
Can one build a knowledge graph (KG) for all products in the world? Know...
Taxonomies have found wide applications in various domains, especially o...
Walk-based models have shown their unique advantages in knowledge graph ...
Millions of news articles are published online every day, which can be overwhelm...
While existing hierarchical text classification (HTC) methods attempt to...
Commonly adopted metrics for extractive text summarization like ROUGE fo...
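As a reference point for the ROUGE family named in the previous entry, here is a minimal sketch of ROUGE-N recall (clipped n-gram overlap between a candidate summary and a reference); the whitespace tokenization and the example strings are illustrative assumptions.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: clipped n-gram overlap divided by reference n-gram count."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    # Clip each overlapping n-gram's count at its candidate frequency.
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / max(sum(ref.values()), 1)

print(rouge_n_recall("the cat sat on the mat",
                     "the cat lay on the mat", n=2))  # 0.6
```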
We present a novel end-to-end reinforcement learning approach to automat...