Modern deep networks are trained with stochastic gradient descent (SGD) ...
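For reference, the SGD update rule mentioned here can be sketched in a few lines. This is a generic illustration on a hypothetical noiseless linear-regression problem, not the setup of the work above; the learning rate and data are arbitrary choices.

```python
# Minimal SGD sketch: one randomly sampled example per update step
# (the sampling is the "stochastic" part of SGD).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))         # synthetic inputs
w_true = rng.normal(size=5)
y = X @ w_true                        # noiseless linear targets

w = np.zeros(5)                       # parameters to learn
lr = 0.1                              # learning rate (arbitrary)
for step in range(2000):
    i = rng.integers(len(X))          # draw one example at random
    grad = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5 * (x.w - y)^2
    w -= lr * grad                    # SGD update
```

On this realizable problem the iterates converge to `w_true`; with label noise or a finite batch, the residual gradient noise is exactly what the lines above discuss.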
Learning generic high-dimensional tasks is notably hard, as it requires ...
Understanding when the noise in stochastic gradient descent (SGD) affect...
A central question of machine learning is how deep nets manage to learn ...
Despite their success, understanding how convolutional neural networks (...
It is widely believed that the success of deep networks lies in their ab...
Recently, several theories including the replica method made predictions...
Reinforcement learning is made much more complex when the agent's observ...
Convolutional neural networks perform a local and translationally-invari...
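The two properties named here, locality and translation invariance (more precisely, equivariance of the convolution itself), can be checked directly. A minimal 1-D sketch, using plain NumPy rather than any framework from the work above:

```python
# Each output of a convolution depends only on a local input window
# (locality), and shifting the input shifts the output (equivariance).
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation with a local filter."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

x = np.array([0., 0., 1., 2., 3., 0., 0., 0.])
w = np.array([1., -1.])               # local filter of width 2

y = conv1d(x, w)
y_shift = conv1d(np.roll(x, 1), w)    # translate the input by one step

# Translation equivariance: the output is translated by the same step.
print(np.allclose(y_shift[1:], y[:-1]))
```

Pooling the equivariant outputs (e.g. summing them) is what turns this equivariance into the translation invariance of the final prediction.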
Understanding why deep nets can classify data in large dimensions remain...
Deep learning algorithms are responsible for a technological revolution ...
We study how neural networks compress uninformative input space in model...
We investigate how the training curve of isotropic kernel methods depend...
Two distinct limits for deep learning as the net width h→∞ have been pro...
How many training data are needed to learn a supervised task? It is ofte...
We provide a description for the evolution of the generalization perform...
We argue that in fully-connected networks a phase transition delimits th...
Deep learning has been immensely successful at a variety of tasks, rangi...