Strong overall error analysis for the training of artificial neural networks via random initializations

12/15/2020
by   Arnulf Jentzen, et al.

Although deep-learning-based approximation algorithms have been applied very successfully to numerous problems, the reasons for their performance are at the moment not entirely understood from a mathematical point of view. Recently, estimates for the convergence of the overall error have been obtained in the setting of deep supervised learning, but with an extremely slow rate of convergence. In this note we partially improve on these estimates. More specifically, we show that the depth of the neural networks needs to grow only much more slowly in order to obtain the same rate of approximation. The results hold for an arbitrary stochastic optimization algorithm with i.i.d. random initializations.
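
To make the setting concrete, here is a minimal sketch of the kind of training scheme the abstract refers to: a small network is trained several times by a stochastic optimization method (plain minibatch SGD here) from i.i.d. random initializations, and the run with the smallest empirical risk is kept. All names, architecture choices, and hyperparameters below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(d_in, width, rng):
    # i.i.d. random initialization of all weights (illustrative scaling)
    return {
        "W1": rng.normal(0.0, 1.0 / np.sqrt(d_in), (width, d_in)),
        "b1": np.zeros(width),
        "W2": rng.normal(0.0, 1.0 / np.sqrt(width), (1, width)),
        "b2": np.zeros(1),
    }

def forward(p, X):
    H = np.maximum(p["W1"] @ X.T + p["b1"][:, None], 0.0)  # ReLU layer
    return (p["W2"] @ H + p["b2"][:, None]).ravel()

def empirical_risk(p, X, y):
    # mean squared error on the training sample
    return np.mean((forward(p, X) - y) ** 2)

def sgd_run(X, y, width=32, steps=2000, lr=1e-2, batch=16, rng=rng):
    # one training run of plain minibatch SGD from a fresh random initialization
    p = init_params(X.shape[1], width, rng)
    for _ in range(steps):
        idx = rng.integers(0, len(X), batch)
        Xb, yb = X[idx], y[idx]
        # forward pass with cached intermediates for manual backprop
        Z1 = p["W1"] @ Xb.T + p["b1"][:, None]
        H = np.maximum(Z1, 0.0)
        pred = (p["W2"] @ H + p["b2"][:, None]).ravel()
        g = 2.0 * (pred - yb) / batch          # gradient of MSE w.r.t. pred
        gW2 = g[None, :] @ H.T
        gb2 = np.array([g.sum()])
        gH = p["W2"].T @ g[None, :]
        gZ1 = gH * (Z1 > 0.0)                  # ReLU derivative
        gW1 = gZ1 @ Xb
        gb1 = gZ1.sum(axis=1)
        for k, gk in zip(("W1", "b1", "W2", "b2"), (gW1, gb1, gW2, gb2)):
            p[k] -= lr * gk
    return p

# Toy regression target f(x) = sin(pi x) on [0, 1], sample of size 256
X = rng.uniform(0.0, 1.0, (256, 1))
y = np.sin(np.pi * X[:, 0])

# Several i.i.d. initializations; keep the run with the smallest empirical risk
runs = [sgd_run(X, y) for _ in range(5)]
best = min(runs, key=lambda p: empirical_risk(p, X, y))
print("best empirical risk:", empirical_risk(best, X, y))
```

The point of the selection step is that an overall error analysis of this scheme must control three contributions at once: the approximation error of the network class, the generalization error of the empirical risk, and the optimization error of the best run over the random initializations.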
