Recent studies have revealed that the widely-used Pre-trained Language M...
The reusability of state-of-the-art Pre-trained Language Models (PLMs) i...
Recent research has focused on enhancing the capability of smaller model...
Artificial intelligence (AI) researchers have been developing and refini...
Large-scale Pre-Trained Language Models (PTLMs) capture knowledge from m...
Data-driven predictive solutions predominant in commercial applications ...
Large pre-trained models decay over long-term deployment as input distri...
Machine translation has seen rapid progress with the advent of Transform...
Joint visual and language modeling on large-scale datasets has recently ...
We have seen great progress in video action recognition in recent year...
Toxic language detection systems often falsely flag text that contains m...
Abstractive summarization, the task of generating a concise summary of i...
A longstanding question in cognitive science concerns the learning mecha...
Neuro-symbolic representations have proved effective in learning structu...
Visual reasoning tasks such as visual question answering (VQA) require a...
We introduce HUBERT, which combines the structured-representational power...
Generating formal language represented by relational tuples, such as Lis...
This paper presents a unified Vision-Language Pre-training (VLP) model. ...
Grounding language to visual relations is critical to various language-a...
We introduce an architecture, the Tensor Product Recurrent Network (TPRN...
This paper develops a model that addresses sentence embedding, a hot top...