Cross-scene generalizable NeRF models, which can directly synthesize nov...
Sparsely-gated Mixture-of-Experts (MoE), an emerging deep model architect...
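For context on the sparsely-gated MoE architecture this excerpt names, below is a minimal sketch of the standard top-k gating idea (in the spirit of Shazeer et al.'s sparsely-gated MoE, not any specific paper's implementation); `TopKGate`, `MoELayer`, and all hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGate(nn.Module):
    """Sparse routing: each token activates only k of the E experts."""
    def __init__(self, d_model: int, num_experts: int, k: int = 2):
        super().__init__()
        self.w_gate = nn.Linear(d_model, num_experts, bias=False)
        self.k = k

    def forward(self, x):                        # x: (tokens, d_model)
        logits = self.w_gate(x)                  # (tokens, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)   # renormalize over the k chosen experts
        return weights, topk_idx

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int, k: int = 2):
        super().__init__()
        self.gate = TopKGate(d_model, num_experts, k)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        weights, idx = self.gate(x)
        out = torch.zeros_like(x)
        # Dense loop for readability; real MoE layers dispatch tokens
        # to experts with batched scatter/gather kernels.
        for slot in range(idx.shape[-1]):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out
```

Only the k selected experts run per token, which is what keeps the parameter count large but the per-token compute roughly constant.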
Despite the fact that adversarial training has become the de facto metho...
Large Language Models (LLMs), despite their recent impressive accomplish...
Graphs are omnipresent and GNNs are a powerful family of neural networks...
Large pre-trained transformers have been receiving explosive attention i...
Large pre-trained transformers are show-stealers in modern-day deep learn...
Sparse Neural Networks (SNNs) have received voluminous attention predomi...
Despite their remarkable achievement, gigantic transformers encounter si...
Learning to Optimize (L2O) has drawn increasing attention as it often re...
Learning to optimize (L2O) has gained increasing popularity, which autom...
Deep neural networks (DNNs) have rapidly become a de facto choice for me...
Recent works have impressively demonstrated that there exists a subnetwo...
Multi-task learning (MTL) encapsulates multiple learned tasks in a singl...
Despite the enormous success of Graph Convolutional Networks (GCNs) in m...
Large-scale graph training is a notoriously challenging problem for grap...
The deployment constraints in practical applications necessitate the pru...
This paper aims at improving the generalizability of hypergraph neura...
Vision Transformers (ViTs) have proven to be effective in solving 2D im...
We present Generalizable NeRF Transformer (GNT), a pure, unified transfo...
Representing visual signals by coordinate-based deep fully-connected net...
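To illustrate the coordinate-based fully-connected networks (implicit neural representations) mentioned above, here is a minimal, hedged sketch of the generic idea — a small MLP with Fourier-feature positional encoding mapping (x, y) coordinates to RGB; names like `CoordMLP` and `fourier_features` are illustrative assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

def fourier_features(coords, num_freqs: int = 6):
    """Map (N, 2) coordinates to sin/cos features at octave frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi   # (F,)
    angles = coords[..., None] * freqs                  # (N, 2, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)  # (N, 4F)

class CoordMLP(nn.Module):
    """Implicit image representation: (x, y) -> (r, g, b)."""
    def __init__(self, num_freqs: int = 6, width: int = 256):
        super().__init__()
        self.num_freqs = num_freqs
        self.net = nn.Sequential(
            nn.Linear(4 * num_freqs, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 3), nn.Sigmoid(),
        )

    def forward(self, coords):               # coords in [-1, 1], shape (N, 2)
        return self.net(fourier_features(coords, self.num_freqs))
```

Fitting such a network to one image by regressing pixel colors at their coordinates is the basic setup these "visual signals as coordinate networks" papers build on.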
Transformers have quickly shone in the computer vision world since the ...
Neural Radiance Field (NeRF) regresses a neural parameterized scene by d...
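The truncated NeRF line above refers to differentiable volume rendering; for reference, the standard rendering quadrature from the original NeRF formulation (a well-known formula, not this paper's contribution) is:

```latex
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad
T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right),
```

where σ_i and c_i are the density and color the network predicts at the i-th sample along ray r, δ_i is the distance between adjacent samples, and T_i is the accumulated transmittance.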
Class-incremental learning (CIL) suffers from the notorious dilemma betw...
Certifiable robustness is a highly desirable property for adopting deep ...
With the rapid development of deep learning, the sizes of neural network...
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to...
With the latest advances in deep learning, there has been a lot of focus...
Recent studies on Learning to Optimize (L2O) suggest a promising path to...
Selecting an appropriate optimizer for a given problem is of major inter...
Vision transformers (ViTs) have gained increasing popularity as they are...
Vision Transformer (ViT) has recently demonstrated promise in computer v...
Generative adversarial networks (GANs) have received an upsurge of intere...
The lottery ticket hypothesis (LTH) has shown that dense models contain ...
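As a companion to the lottery-ticket line above, here is a minimal sketch of one round of iterative magnitude pruning with weight rewinding — the generic LTH recipe, hedged: `train_fn` is a placeholder training loop, and none of this is any specific paper's code:

```python
import copy
import torch

@torch.no_grad()
def magnitude_mask(model, sparsity: float):
    """Global magnitude pruning: keep the largest-|w| fraction of weights."""
    scores = torch.cat([p.abs().flatten() for p in model.parameters()])
    k = int(sparsity * scores.numel())
    threshold = scores.kthvalue(k).values if k > 0 else scores.min() - 1
    return [(p.abs() > threshold).float() for p in model.parameters()]

@torch.no_grad()
def apply_mask(model, masks):
    for p, m in zip(model.parameters(), masks):
        p.mul_(m)

def find_ticket(model, train_fn, sparsity: float = 0.8):
    """One IMP round: train, prune by magnitude, rewind survivors to init."""
    init_state = copy.deepcopy(model.state_dict())   # save theta_0
    train_fn(model)                                  # train to convergence
    masks = magnitude_mask(model, sparsity)          # drop the smallest weights
    model.load_state_dict(init_state)                # rewind to initialization
    apply_mask(model, masks)                         # candidate "winning ticket"
    return model, masks
```

Retraining the rewound sparse network and checking it matches the dense model's accuracy is the matching-subnetwork test that the LTH papers in this list repeatedly probe.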
Random pruning is arguably the most naive way to attain sparsity in neur...
Self-supervision has recently been surging at the new frontier of graph learni...
Contrastive learning approaches have achieved great success in learning ...
Despite tremendous success in many application scenarios, the training a...
Gigantic pre-trained models have become central to natural language proc...
Despite the recent advances of graph neural networks (GNNs) in modeling ...
This work presents a lifelong learning approach to train a multilingual ...
Foundational work on the Lottery Ticket Hypothesis has suggested an exci...
Training deep graph neural networks (GNNs) is notoriously hard. Besides ...
Semantic segmentation for scene understanding is nowadays widely demande...
There have been long-standing controversies and inconsistencies over the...
Recent works on sparse neural networks have demonstrated that it is poss...
In this paper, we present a perception-action-communication loop design ...
Works on lottery ticket hypothesis (LTH) and single-shot network pruning...
Sparse adversarial attacks can fool deep neural networks (DNNs) by only ...
Vision transformers (ViTs) have recently received explosive popularity, ...