Recent advancements in diffusion models have showcased their impressive
...
State-of-the-art deep neural networks are trained with large amounts
(mi...
LLMs have demonstrated remarkable abilities in interacting with humans
t...
Diffusion models have proven to be highly effective in generating
high-q...
Data pruning aims to obtain lossless performance as training on the ori...
Dataset distillation reduces the network training cost by synthesizing s...
Despite the recent visually pleasing results achieved, the massive
compu...
The power of Deep Neural Networks (DNNs) depends heavily on the training...
We present an efficient text-to-video generation framework based on late...
Have you ever imagined what a corgi-like coffee machine or a tiger-alik...
Existing fine-tuning methods either tune all parameters of the pre-train...
Modern deep neural networks (DNNs) have achieved state-of-the-art
perfor...
Recent studies show that Vision Transformers (ViTs) exhibit strong robust...
In this paper, we propose M^2BEV, a unified framework that jointly perfo...
Recent Vision Transformer (ViT) models have demonstrated encouraging res...
This paper provides a strong baseline for vision transformers on the Ima...
Vision transformers (ViTs) have been successfully applied in image
class...
Current neural architecture search (NAS) algorithms still require expert...
Recent studies on mobile network design have demonstrated the remarkable...
Pre-trained language models like BERT and its variants have recently ach...
Spiking neural networks (SNNs) have shown clear advantages over traditio...
Despite the great progress made by deep CNNs in image semantic segmentat...
The recent WSNet [1] is a new model compression method based on sampling
...