We study the asymptotic overfitting behavior of interpolation with minim...
Memorization of training data is an active research area, yet our unders...
We study the cost of overfitting in noisy kernel ridge regression (KRR),...
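As general background for this entry (standard definitions, not the paper's specific setup): given training data $(x_1, y_1), \ldots, (x_n, y_n)$, a kernel $k$, the kernel matrix $K \in \mathbb{R}^{n \times n}$ with $K_{ij} = k(x_i, x_j)$, and a ridge parameter $\lambda \ge 0$, the KRR predictor is
$$\hat{f}_\lambda(x) = \big(k(x, x_1), \ldots, k(x, x_n)\big)\,(K + \lambda I)^{-1} y,$$
and interpolation of the (noisy) labels corresponds to the ridgeless limit $\lambda \to 0$, which yields the minimum-norm interpolant when $K$ is invertible.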
We present a PTAS for learning random constant-depth networks. We show t...
Reconstructing samples from the training set of trained neural networks ...
Linear classifiers and leaky ReLU networks trained by gradient flow on t...
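For reference (standard definition): a leaky ReLU network uses the activation $\sigma(z) = \max\{z, \alpha z\}$ for a fixed slope parameter $\alpha \in (0, 1)$, which, unlike the ReLU, has a nonzero gradient on negative inputs.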
In this work, we study the implications of the implicit bias of gradient...
Despite a great deal of research, it is still not well-understood why tr...
Understanding when neural networks can be learned efficiently is a funda...
Gradient-based deep-learning algorithms exhibit remarkable performance i...
Understanding to what extent neural networks memorize training data is a...
We study the dynamics and implicit bias of gradient flow (GF) on univari...
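For reference (standard definition, not specific to this entry's setting): gradient flow is the continuous-time limit of gradient descent, in which the parameters $\theta(t)$ follow the ODE
$$\dot{\theta}(t) = -\nabla L(\theta(t)), \qquad \theta(0) = \theta_0,$$
so that gradient descent with step size $\eta$ is its Euler discretization $\theta_{k+1} = \theta_k - \eta \nabla L(\theta_k)$.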
We study norm-based uniform convergence bounds for neural networks, aimi...
Despite a great deal of research, it is still unclear why neural network...
We solve an open question from Lu et al. (2017), by showing that any tar...
We study the conjectured relationship between the implicit regularizatio...
We study the memorization power of feedforward ReLU neural networks. We ...
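As background (standard usage of the term): a network $f$ is said to memorize a sample $(x_1, y_1), \ldots, (x_N, y_N)$ if $f(x_i) = y_i$ for every $i \in [N]$; memorization power asks how small a ReLU network (in parameters, width, or depth) can be while still memorizing every sample from a given class.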
The implicit bias of neural networks has been extensively studied in rec...
We theoretically study the fundamental problem of learning a single neur...
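For context (the standard model; the details here are generic rather than taken from the abstract): a single neuron computes $x \mapsto \sigma(\langle w, x \rangle)$ for a weight vector $w \in \mathbb{R}^d$ and activation $\sigma$, e.g. the ReLU $\sigma(z) = \max\{z, 0\}$, and learning it means recovering $w$ from samples, typically by minimizing the population squared loss $\mathbb{E}_{(x, y)}\big[(\sigma(\langle w, x \rangle) - y)^2\big]$.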
When studying the expressive power of neural networks, a main challenge ...
We prove hardness-of-learning results under a well-studied assumption on...
Understanding the implicit regularization (or implicit bias) of gradient...
Neural networks are nowadays highly successful despite strong hardness r...
In studying the expressiveness of neural networks, an important question...
Flow networks have attracted a lot of research in computer science. Inde...
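As background (standard definitions): a flow network is a directed graph $G = (V, E)$ with edge capacities $c : E \to \mathbb{R}_{\geq 0}$ and distinguished source $s$ and sink $t$; a feasible flow $f$ satisfies $0 \leq f(e) \leq c(e)$ on every edge and the conservation constraint $\sum_{e \in \mathrm{in}(v)} f(e) = \sum_{e \in \mathrm{out}(v)} f(e)$ at every vertex $v \notin \{s, t\}$, and the classical max-flow problem asks for a feasible flow maximizing the net flow out of $s$.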