The standard paradigm of neural language generation adopts maximum likelihood...
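Since this snippet names maximum-likelihood training as the standard paradigm, a minimal sketch may help make it concrete. The following is an illustrative teacher-forcing step assuming PyTorch; the tiny GRU model, vocabulary size, and random data are placeholders, not taken from any paper listed here.

```python
# A minimal sketch of maximum-likelihood (teacher-forcing) training for a
# neural language model. Model and data sizes are toy values for illustration.
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq)
        h, _ = self.rnn(self.embed(tokens))     # (batch, seq, d_model)
        return self.out(h)                      # logits over next tokens

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, vocab_size, (4, 16))   # toy token ids
inputs, targets = batch[:, :-1], batch[:, 1:]   # predict the next token

logits = model(inputs)
# MLE objective: negative log-likelihood of the gold next token at each step
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
opt.step()
```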
Despite the success of text-to-text pre-trained models in various natural...
Although Transformers with fully connected self-attention are powerful ...
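"Fully connected" here means every position attends to every other position in the sequence. Below is a minimal scaled dot-product self-attention sketch in NumPy; the weight matrices and shapes are illustrative assumptions, not the configuration of any specific model above.

```python
# Illustrative scaled dot-product self-attention over a full sequence:
# every position attends to every other position (fully connected).
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq, d_model) -> (seq, d_model) attention output."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (seq, seq) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(5, d))
out = self_attention(x, rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                     rng.normal(size=(d, d)))
print(out.shape)  # (5, 8)
```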
Despite the recent advances in applying pre-trained language models to g...
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation...
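KG-to-text systems built on text-to-text pre-trained models commonly linearize the input graph's triples into a flat token sequence before encoding. The sketch below shows one simple linearization scheme as an assumption; the <H>/<R>/<T> separator tokens and the example graph are hypothetical, not the format used by the specific models this snippet describes.

```python
# Illustrative sketch: flattening knowledge-graph triples into a string that a
# text-to-text model can consume. Separator tokens <H>/<R>/<T> are hypothetical.
def linearize_triples(triples):
    """triples: list of (head, relation, tail) tuples -> single input string."""
    parts = [f"<H> {head} <R> {rel} <T> {tail}" for head, rel, tail in triples]
    return " ".join(parts)

graph = [("Alan Turing", "field", "computer science"),
         ("Alan Turing", "born in", "London")]
print(linearize_triples(graph))
# <H> Alan Turing <R> field <T> computer science <H> Alan Turing <R> born in <T> London
```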
Pre-trained Language Models (PLMs) have proven to be beneficial for various...
Commonsense explanation generation aims to empower the machine's sense-making...
Despite the success of generative pre-trained language models on a series...
Most of the existing pre-trained language representation models neglect ...