Learning to Optimize (L2O), a technique that utilizes machine learning t...
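As a concrete illustration of the L2O idea, the minimal PyTorch sketch below meta-trains a tiny network to propose parameter updates on random quadratic tasks. The `LearnedOptimizer` module, the unroll length, and the quadratic task distribution are all illustrative assumptions, not any particular paper's setup.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Coordinate-wise update rule: maps a gradient entry to an update entry."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, grad):
        return self.net(grad.unsqueeze(-1)).squeeze(-1)

opt_net = LearnedOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

for step in range(200):                       # meta-training over random tasks
    A, b = torch.randn(8, 8), torch.randn(8)  # one quadratic optimizee
    x = torch.zeros(8, requires_grad=True)
    meta_loss = 0.0
    for t in range(10):                       # unroll the inner optimization
        loss = 0.5 * ((A @ x - b) ** 2).sum()
        grad, = torch.autograd.grad(loss, x, create_graph=True)
        x = x - 0.01 * opt_net(grad)          # learned update replaces a hand-designed rule
        meta_loss = meta_loss + loss
    meta_opt.zero_grad()
    meta_loss.backward()                      # backprop through the whole unroll
    meta_opt.step()
```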
Transformers have quickly shone in the computer vision world since the ...
Random pruning is arguably the most naive way to attain sparsity in neur...
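In its simplest form, random pruning amounts to sampling a uniform mask per layer at a target sparsity and zeroing the corresponding weights, as in this sketch (the layerwise-uniform sparsity and the `random_prune` helper are assumptions for illustration):

```python
import torch
import torch.nn as nn

def random_prune(model: nn.Module, sparsity: float = 0.9):
    """Zero out a random `sparsity` fraction of each weight matrix."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:              # skip biases and norm parameters
            continue
        mask = (torch.rand_like(p) > sparsity).float()
        p.data.mul_(mask)            # apply the mask in place
        masks[name] = mask           # keep the mask to re-apply after each update
    return masks

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
masks = random_prune(model, sparsity=0.9)
```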
Federated learning (FL) enables distribution of machine learning workloa...
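A single communication round of the canonical FedAvg algorithm (McMahan et al.) illustrates the FL workflow sketched here; the helper names and the unweighted averaging across clients are simplifying assumptions.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=0.1):
    """Client step: copy the global model and train on local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fedavg_round(global_model, client_loaders):
    """Server step: average the clients' updated weights."""
    states = [local_update(global_model, dl) for dl in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```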
Learned Iterative Shrinkage-Thresholding Algorithm (LISTA) introduces th...
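The core of LISTA is a fixed number of soft-thresholding iterations whose matrices and thresholds become trainable parameters. A hedged sketch for measurements y = Ax with A of shape m x n follows; the random initialization and untied per-layer thresholds are illustrative choices (Gregor & LeCun's original LISTA initializes W1 and W2 from the dictionary A).

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

class LISTA(nn.Module):
    def __init__(self, m, n, K=16):
        super().__init__()
        self.K = K
        self.W1 = nn.Parameter(torch.randn(n, m) * 0.01)  # maps the measurement y
        self.W2 = nn.Parameter(torch.eye(n))              # maps the previous code
        self.theta = nn.Parameter(torch.full((K,), 0.1))  # per-layer threshold

    def forward(self, y):                                 # y: (batch, m)
        x = torch.zeros(y.shape[0], self.W1.shape[0], device=y.device)
        for k in range(self.K):
            x = soft_threshold(y @ self.W1.T + x @ self.W2.T, self.theta[k])
        return x
```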
There have been long-standing controversies and inconsistencies over the...
Recent works on sparse neural networks have demonstrated that it is poss...
Works on lottery ticket hypothesis (LTH) and single-shot network pruning...
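On the single-shot side, a SNIP-style sketch scores each weight by |w * dL/dw| on one batch at initialization and keeps the globally top-scoring fraction; the `snip_masks` helper and the global threshold are illustrative choices consistent with SNIP's connection-sensitivity criterion.

```python
import torch
import torch.nn as nn

def snip_masks(model, x, y, keep=0.1):
    """Score weights by |w * grad| on one batch; keep the top `keep` fraction."""
    weights = [p for p in model.parameters() if p.dim() >= 2]
    loss = nn.CrossEntropyLoss()(model(x), y)
    grads = torch.autograd.grad(loss, weights)
    scores = [(w * g).abs() for w, g in zip(weights, grads)]
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep * flat.numel()))
    thresh = flat.topk(k).values.min()        # one global cutoff across layers
    return [(s >= thresh).float() for s in scores]

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
masks = snip_masks(model, x, y, keep=0.1)
```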
In recent years, great success has been witnessed in building problem-sp...
The Lottery Ticket Hypothesis has drawn keen attention to identifying sparse tr...
Learning to optimize (L2O) is an emerging approach that leverages machin...
The record-breaking performance of deep neural networks (DNNs) comes wit...
Deep, heavily overparameterized language models such as BERT, XLNet and ...
Multiplication (e.g., convolution) is arguably a cornerstone of modern d...
We present SmartExchange, an algorithm-hardware co-design framework to t...
Many applications require repeatedly solving a certain type of optimizat...
Convolutional neural networks (CNNs) have been increasingly deployed to ...
(Frankle & Carbin, 2019) shows that there exist winning tickets (small b...
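One round of the iterative magnitude pruning (IMP) recipe behind winning tickets can be sketched as: train, prune the smallest surviving magnitudes per layer, rewind the survivors to their original initialization, and retrain. The `train` call is assumed user-supplied; pruning 20% per round is a common default rather than a requirement.

```python
import copy
import torch
import torch.nn as nn

def magnitude_mask(model, fraction=0.2, old_masks=None):
    """Prune the smallest-magnitude `fraction` of surviving weights per layer."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:
            continue
        prev = old_masks[name] if old_masks else torch.ones_like(p)
        alive = p[prev.bool()].abs()
        thresh = alive.kthvalue(max(1, int(fraction * alive.numel()))).values
        masks[name] = prev * (p.abs() > thresh).float()
    return masks

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
# train(model); then:
masks = magnitude_mask(model, fraction=0.2)
model.load_state_dict(init_state)               # rewind survivors to init
with torch.no_grad():
    for name, p in model.named_parameters():
        if name in masks:
            p.mul_(masks[name])
# retraining the masked subnetwork yields a candidate "winning ticket"
```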
Plug-and-play (PnP) is a non-convex framework that integrates modern den...
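The PnP recipe alternates a data-fidelity update with an off-the-shelf denoiser standing in for the proximal operator of the prior. The sketch below uses a PnP-ISTA-style loop with a toy shrinkage "denoiser"; real PnP would plug in BM3D or a learned CNN denoiser, and the step size and iteration count are illustrative.

```python
import torch

def pnp_ista(y, A, denoiser, step=0.1, iters=50):
    x = A.T @ y                               # crude initialization
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - y))    # gradient step on 0.5 * ||Ax - y||^2
        x = denoiser(x)                       # proximal step replaced by a denoiser
    return x

# toy usage with a soft-shrinkage "denoiser"
A = torch.randn(30, 100)
x_true = torch.zeros(100); x_true[:5] = 1.0
y = A @ x_true
x_hat = pnp_ista(
    y, A,
    denoiser=lambda z: torch.sign(z) * torch.clamp(z.abs() - 0.05, min=0.0))
```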
This paper seeks to answer the question: as the (near-) orthogonality of...
In recent years, unfolding iterative algorithms as neural networks has b...
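Beyond the LISTA sketch above, the same unrolling template applies to almost any fixed-point iteration. A minimal example unrolls gradient descent on ||Ax - y||^2 with one learnable step size per layer; the per-layer step parameterization is an assumption for illustration.

```python
import torch
import torch.nn as nn

class UnrolledGD(nn.Module):
    """K gradient-descent steps on ||Ax - y||^2 with learnable step sizes."""
    def __init__(self, A, K=10):
        super().__init__()
        self.register_buffer("A", A)
        self.steps = nn.Parameter(torch.full((K,), 0.01))  # one step per layer

    def forward(self, y):                     # y: (batch, m)
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for alpha in self.steps:
            x = x - alpha * (x @ self.A.T - y) @ self.A    # unrolled iteration
        return x

net = UnrolledGD(torch.randn(30, 100))
x_hat = net(torch.randn(4, 30))
```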