Under mild assumptions, we investigate the structure of the loss landscape o...
We propose an optimistic estimate to evaluate the best possible fitting ...
In this paper, a class of smoothing modulus-based iterative methods was p...
Dropout is a widely utilized regularization technique in the training of...
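For readers skimming this entry, a minimal NumPy sketch of the dropout mechanism it refers to; the keep probability and array shapes here are illustrative, not taken from the paper.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=np.random.default_rng(0)):
    """Inverted dropout: zero each unit with probability p during training,
    scaling survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p      # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

h = np.ones(8)
print(dropout(h, p=0.5))  # roughly half the entries zeroed, the rest scaled to 2.0
```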
In this work, we systematically investigate linear multi-step methods fo...
Brain-inspired spiking neural networks (SNNs) replace the multiply-accum...
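As a rough illustration of the accumulate-only arithmetic this entry alludes to, a leaky integrate-and-fire (LIF) neuron sketch; the decay and threshold values are assumptions for the demo, not parameters from the paper.

```python
import numpy as np

def lif_step(v, spikes_in, weights, decay=0.9, v_th=1.0):
    """One leaky integrate-and-fire step. Because inputs are binary spikes,
    the weighted sum reduces to accumulating the weights of active inputs
    (no multiplications), which is the energy advantage SNNs exploit."""
    v = decay * v + weights[spikes_in.astype(bool)].sum()
    fired = v >= v_th
    v = 0.0 if fired else v   # reset the membrane potential after a spike
    return v, fired

v = 0.0
w = np.array([0.4, 0.3, 0.5])
for t, s in enumerate([np.array([1, 0, 1]), np.array([1, 1, 0]), np.array([0, 1, 1])]):
    v, fired = lif_step(v, s, w)
    print(t, round(v, 2), fired)
```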
Despite their simplicity, stochastic gradient descent (SGD)-like algorithm...
The phenomenon of distinct behaviors exhibited by neural networks under ...
In this paper, a semantic communication framework for image transmission...
Models with nonlinear architectures/parameterizations such as deep neura...
Spiking neural networks (SNNs) are a viable alternative to conventional a...
Soft errors in large VLSI circuits have a dramatic influence on computing-...
In this paper, the problem of semantic-based efficient image transmissio...
In this paper, a semantic communication framework is proposed for textua...
Spiking neural networks (SNNs) have recently gained momentum due to their low...
Unraveling the general structure underlying the loss landscapes of deep ...
Gradient descent or its variants are popular in training neural networks...
Substantial work indicates that the dynamics of neural networks (NNs) is...
Winograd convolution was originally proposed to reduce the computing over...
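A minimal 1D Winograd F(2,3) sketch of the overhead reduction this entry mentions: two outputs of a 3-tap filter with four multiplications instead of six. The transform matrices are the standard F(2,3) ones; the data and filter values are illustrative.

```python
import numpy as np

# Winograd F(2,3): 2 outputs of a 3-tap 1D filter with 4 elementwise
# multiplies instead of 6. Bt, G, At are the standard F(2,3) transforms.
Bt = np.array([[1, 0, -1, 0],
               [0, 1,  1, 0],
               [0, -1, 1, 0],
               [0, 1,  0, -1]], dtype=float)
G  = np.array([[1.0, 0.0, 0.0],
               [0.5, 0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0, 0.0, 1.0]])
At = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    return At @ ((G @ g) * (Bt @ d))   # the 4 multiplies happen elementwise here

d = np.array([1., 2., 3., 4.])
g = np.array([1., 1., 1.])
print(winograd_f23(d, g))                # [6. 9.]
print(np.correlate(d, g, mode='valid'))  # matches the direct result
```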
In recent years, understanding the implicit regularization of neural net...
Understanding deep learning is increasingly urgent as it penetrates mo...
In-memory deep learning computes neural network models where they are st...
We prove a general Embedding Principle of the loss landscape of deep neural ...
Compiler frameworks are crucial for the widespread use of FPGA-based dee...
In this paper, we propose a model-operator-data network (MOD-Net) for so...
Machine learning (ML) models trained on personal data have been shown to...
Understanding the structure of the loss landscape of deep neural networks (D...
Deep neural networks (DNNs) have achieved remarkable success in computer ...
It is important to study what implicit regularization is imposed on the ...
A deep neural network (DNN) usually learns the target function from low to...
Spiking neural networks (SNNs) have advantages in latency and energy eff...
In an attempt to better understand structural benefits and generalizatio...
Neural network training on edge terminals is essential for edge AI comp...
An increasingly popular method for solving a constrained combinatorial o...
In this paper, we prove the convergence from the atomistic model to the ...
Why heavily parameterized neural networks (NNs) do not overfit the data ...
In this paper, the problem of enhancing the quality of virtual reality (...
The Internet of Vehicles (IoV) is an application of the Internet of thin...
In-situ learning is a growing trend for Edge AI. Training deep neural netwo...
A supervised learning problem is to find a function in a hypothesis func...
Recent works show an intriguing phenomenon of the Frequency Principle (F-Pri...
Matrix completion has been an important approach to perform ...
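As a generic illustration of matrix completion (the specific application in this entry is truncated), a small alternating-least-squares sketch that fits a low-rank factorization to the observed entries; the rank, regularization, and iteration count are illustrative assumptions.

```python
import numpy as np

def als_complete(M, mask, rank=2, reg=0.1, iters=50, rng=np.random.default_rng(0)):
    """Fill in missing entries of M (mask==1 where observed) with a low-rank
    factorization M ~ U @ V.T fitted by alternating least squares."""
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                      # update each row of U
            c = mask[i].astype(bool)
            U[i] = np.linalg.solve(V[c].T @ V[c] + I, V[c].T @ M[i, c])
        for j in range(n):                      # update each row of V
            r = mask[:, j].astype(bool)
            V[j] = np.linalg.solve(U[r].T @ U[r] + I, U[r].T @ M[r, j])
    return U @ V.T

rng = np.random.default_rng(1)
U0, V0 = rng.standard_normal((6, 2)), rng.standard_normal((5, 2))
M = U0 @ V0.T                                   # ground-truth rank-2 matrix
mask = (rng.random(M.shape) < 0.6).astype(int)  # observe ~60% of entries
M_hat = als_complete(M * mask, mask)
print(np.abs(M_hat - M)[mask == 0].mean())      # error on the missing entries
```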
How a neural network behaves during training over different choices of...
Gradient descent yields zero training loss in polynomial time for deep n...
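A toy demonstration of the kind of claim in this entry, under assumed settings (small dataset, wide two-layer tanh network, plain gradient descent): the training loss typically decays toward zero. The width, learning rate, and step count are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10, 5, 200                 # 10 samples, width 200: heavily over-parameterized
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

W = rng.standard_normal((d, m)) / np.sqrt(d)   # two-layer tanh network
a = rng.standard_normal(m) / np.sqrt(m)

lr = 0.2
for step in range(3001):
    h = np.tanh(X @ W)                          # hidden activations, shape (n, m)
    err = h @ a - y
    if step % 1000 == 0:
        print(step, 0.5 * np.mean(err ** 2))    # loss typically decays toward 0
    grad_a = h.T @ err / n                      # gradient of 0.5*mean(err^2) w.r.t. a
    grad_W = X.T @ (np.outer(err, a) * (1 - h ** 2)) / n
    a -= lr * grad_a
    W -= lr * grad_W
```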
Deep learning has significantly revolutionized the design of numerical a...
Edge devices demand low energy consumption, low cost, and a small form factor. ...
In this paper, the problem of optimizing the deployment of unmanned aeri...
Along with fruitful applications of Deep Neural Networks (DNNs) to reali...
It remains a puzzle why deep neural networks (DNNs), with more para...
How different initializations and loss functions affect the learning of ...