A common training technique for language models is teacher forcing (TF)....
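Since the abstract is cut off here, a minimal sketch of teacher forcing may help: during training, the model is conditioned on the ground-truth prefix at every step rather than on its own previous predictions. Everything below (the tiny LSTM model, sizes, names) is illustrative and assumed, not the paper's setup.

```python
# Minimal teacher-forcing training step for a next-token language model.
# All model details here are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLM(nn.Module):
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)

model = TinyLM()
opt = torch.optim.Adam(model.parameters())
tokens = torch.randint(0, 100, (8, 16))  # dummy batch of token ids

# Teacher forcing: inputs are the GOLD tokens 0..T-1, targets are the
# gold tokens 1..T; the model never sees its own predictions in training.
logits = model(tokens[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, 100), tokens[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
```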
Current approaches for controlling dialogue response generation are prim...
We introduce Wav2Seq, the first self-supervised approach to pre-train bo...
Professional summaries are written with document-level information, such...
Pre-trained transformer-based sequence-to-sequence models have become th...
We propose encoder-centric stepwise models for extractive summarization ...
In this paper, we report the results of our participation in the TREC-CO...
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. C...
It is well known that the standard likelihood training and approximate d...
Deep neural scoring models have recently been shown to improve ranking q...
Recent trends in natural language processing using pretraining have shif...
Network Embedding (NE) methods, which map network nodes to low-dimension...
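As one illustration of the mapping this entry describes (nodes to low-dimensional vectors), here is a minimal factorization-style sketch using a truncated SVD of the adjacency matrix. This is a generic NE baseline assumed for illustration; the paper's specific methods are not shown here.

```python
# Toy network embedding: factorize the adjacency matrix so that each
# node gets a low-dimensional vector. Illustrative only.
import numpy as np

A = np.array([[0, 1, 1, 0],   # small undirected graph, 4 nodes
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

d = 2                            # embedding dimension
U, S, Vt = np.linalg.svd(A)      # SVD of the adjacency matrix
emb = U[:, :d] * np.sqrt(S[:d])  # one d-dimensional vector per node

# Nodes with similar connectivity get similar vectors; downstream tasks
# (node classification, link prediction) then operate on these vectors.
print(emb)
```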
We present AUEB's submissions to the BioASQ 6 document and snippet retri...
We explore several new models for document relevance ranking, building u...
The rise of neural networks, and particularly recurrent neural networks,...
We show that small and shallow feed-forward neural networks can achieve ...
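For scale, a "small and shallow" feed-forward model of the general kind this entry alludes to can be a single hidden layer over averaged token embeddings. The sizes and names below are assumptions for illustration, not the paper's configuration.

```python
# A tiny one-hidden-layer feed-forward classifier over averaged embeddings.
# Illustrative assumption of the architecture family, not the paper's model.
import torch
import torch.nn as nn

class ShallowTagger(nn.Module):
    def __init__(self, vocab_size=5000, dim=64, hidden=128, num_tags=20):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean of token embeddings
        self.hidden = nn.Linear(dim, hidden)
        self.out = nn.Linear(hidden, num_tags)

    def forward(self, token_ids, offsets):
        x = torch.relu(self.hidden(self.embed(token_ids, offsets)))
        return self.out(x)

model = ShallowTagger()
token_ids = torch.tensor([3, 17, 42, 5, 9])  # two inputs: [3,17,42] and [5,9]
offsets = torch.tensor([0, 3])
print(model(token_ids, offsets).shape)                # (2, num_tags)
print(sum(p.numel() for p in model.parameters()))     # small parameter count
```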
We study the use of greedy feature selection methods for morphosyntactic...
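Greedy forward feature selection, the generic procedure behind methods like those this entry names, repeatedly adds whichever single feature most improves held-out performance. Below is a hedged sketch on synthetic data with a generic classifier; the paper's actual task, features, and model are not reproduced here.

```python
# Greedy forward feature selection: add the one feature that most improves
# cross-validated accuracy; stop when nothing helps. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # toy data, 10 candidate features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only features 0 and 3 matter

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    scores = {f: cross_val_score(LogisticRegression(),
                                 X[:, selected + [f]], y, cv=3).mean()
              for f in remaining}
    f, s = max(scores.items(), key=lambda kv: kv[1])
    if s <= best_score:                   # no remaining feature helps
        break
    selected.append(f); remaining.remove(f); best_score = s

print(selected, round(best_score, 3))     # should recover features 0 and 3
```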
Morpho-syntactic lexicons provide information about the morphological an...