Recent language models have made tremendous progress in the structured d...
Unavailability of parallel corpora for training text style transfer (TST...
As neural-network-based QA models become deeper and more complex, there ...
Multi-headed attention is a mainstay in transformer-based models....
BERT and its variants have achieved state-of-the-art performance in vari...
The self-attention module is a key component of Transformer-based models...
Recent studies on interpretability of attention distributions have led t...
In this work, we focus on the task of Automatic Question Generation (AQG...
When humans learn to perform a difficult task (say, reading comprehensio...
The task of Reading Comprehension with Multiple Choice Questions requir...
There has always been criticism of using n-gram-based similarity metric...
Structured data summarization involves generation of natural language su...
In this work, we focus on the task of generating natural language descri...
Abstractive summarization aims to generate a shorter version of the docu...