Memory capacity of neural networks with threshold and ReLU activations

01/20/2020
by Roman Vershynin, et al.

Overwhelming theoretical and empirical evidence shows that mildly overparametrized neural networks – those with more connections than the size of the training data – are often able to memorize the training data with 100% accuracy. This was rigorously proved for networks with sigmoid activation functions and, very recently, for ReLU activations. Addressing a 1988 open question of Baum, we prove that this phenomenon holds for general multilayered perceptrons, i.e. neural networks with threshold activation functions, or with any mix of threshold and ReLU activations. Our construction is probabilistic and exploits sparsity.
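As a purely illustrative sketch (not the paper's probabilistic construction), the memorization phenomenon for ReLU networks is easy to observe empirically: a small two-layer network with more connections than training points, trained by gradient descent, typically fits random labels exactly. The dataset size, width, and training schedule below are arbitrary choices; threshold-activation networks, the paper's main focus, are not trainable this way and are handled by an explicit construction in the paper.

```python
# Illustrative only: a mildly overparametrized two-layer ReLU network
# memorizing random binary labels. Sizes are arbitrary assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d = 200, 20          # n training points in d dimensions
width = 64              # hidden width; ~ d*width + width connections >> n

X = torch.randn(n, d)
y = torch.randint(0, 2, (n,)).float()   # random binary labels

model = nn.Sequential(
    nn.Linear(d, width),
    nn.ReLU(),
    nn.Linear(width, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()

preds = (model(X).squeeze(1) > 0).float()
acc = (preds == y).float().mean().item()
print(f"training accuracy: {acc:.3f}")  # typically 1.000 at these sizes
```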
