On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime

05/27/2020
by   Stephan Wojtowytsch, et al.

We describe a necessary and sufficient condition for convergence to minimum Bayes risk (MBR) when training two-layer ReLU networks by gradient descent in the mean field regime with an omni-directional initial parameter distribution. This article extends recent results of Chizat and Bach to ReLU-activated networks and to the situation in which no parameter configuration exactly achieves MBR. The condition does not depend on the initialization of the parameters and concerns only the weak convergence of the realization of the neural network, not of its parameter distribution.
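The setting studied here can be sketched numerically. The following toy example (an illustration of the general framework, not code from the paper) trains a two-layer ReLU network in the mean field parameterization f(x) = (1/m) Σᵢ aᵢ σ(wᵢ·x), with the output weights spread over both signs and the inner-weight directions spread over the sphere, loosely mimicking an omni-directional initialization; the target function, width, and learning rate are all arbitrary choices for illustration.

```python
import numpy as np

# Illustrative sketch of mean-field training of a two-layer ReLU network:
#   f(x) = (1/m) * sum_i a_i * relu(w_i . x)
# trained by plain gradient descent on a toy regression loss.
rng = np.random.default_rng(0)
m, d, n = 200, 2, 64                      # width, input dim, sample count
X = rng.normal(size=(n, d))
y = np.maximum(X[:, 0], 0.0)              # toy target: a single ReLU feature

# Roughly "omni-directional" initialization: unit directions on the sphere,
# output weights of both signs.
W = rng.normal(size=(m, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
a = rng.choice([-1.0, 1.0], size=m)

def predict(a, W, X):
    # mean-field 1/m output scaling
    return (np.maximum(X @ W.T, 0.0) @ a) / m

loss0 = 0.5 * np.mean((predict(a, W, X) - y) ** 2)

eta = 0.1   # per-particle step size (gradients are rescaled by m below)
for _ in range(500):
    pre = X @ W.T                          # (n, m) pre-activations
    h = np.maximum(pre, 0.0)
    err = (h @ a) / m - y                  # residual, shape (n,)
    grad_a = h.T @ err / (m * n)           # dL/da_i
    grad_W = ((err[:, None] * (pre > 0) * a).T @ X) / (m * n)
    # In the mean-field regime each particle moves at an O(1) rate,
    # which corresponds to scaling the raw gradients by m.
    a -= eta * m * grad_a
    W -= eta * m * grad_W

loss = 0.5 * np.mean((predict(a, W, X) - y) ** 2)
```

Under this scaling, the network's realization evolves as the width grows even though each individual parameter's contribution to the output is O(1/m), which is the regime in which the convergence condition of the paper is formulated.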
