Generalization Ability of Wide Neural Networks on ℝ

02/12/2023
by Jianfa Lai, et al.

We study the generalization ability of the wide two-layer ReLU neural network on ℝ. We first establish some spectral properties of the neural tangent kernel (NTK): a) K_d, the NTK defined on ℝ^d, is positive definite; b) λ_i(K_1), the i-th largest eigenvalue of K_1, is proportional to i^-2. We then show that: i) as the width m→∞, the neural network kernel (NNK) converges uniformly to the NTK; ii) the minimax rate of regression over the RKHS associated with K_1 is n^-2/3; iii) if one adopts the early stopping strategy when training a wide neural network, the resulting network achieves the minimax rate; iv) if one trains the neural network until it overfits the data, the resulting network cannot generalize well. Finally, we provide an explanation reconciling our theory with the widely observed “benign overfitting” phenomenon.
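Claims (i) and (b) can be sanity-checked numerically. Below is a minimal sketch, assuming a standard two-layer parameterization f(x) = (1/√m) Σ_r a_r ReLU(w_r x + b_r) with i.i.d. standard Gaussian parameters and the bias absorbed as a constant input coordinate; the paper's exact parameterization, input distribution, and constants may differ, and the infinite-width limit used here is the usual arc-cosine-kernel expression for this setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_nnk(x, m):
    """Neural network kernel (NNK) at initialization for a width-m two-layer ReLU
    net f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r * x + b_r): the Gram matrix of
    parameter gradients.  (Illustrative parameterization, not necessarily the paper's.)"""
    w, b, a = rng.standard_normal((3, m))
    z = np.outer(x, w) + b                     # (n, m) pre-activations
    g_a = np.maximum(z, 0.0) / np.sqrt(m)      # df/da_r
    g_b = (z > 0) * a / np.sqrt(m)             # df/db_r, also the shared factor of df/dw_r
    g_w = g_b * x[:, None]                     # df/dw_r
    return g_a @ g_a.T + g_w @ g_w.T + g_b @ g_b.T

def limit_ntk(x):
    """Infinite-width NTK for the same parameterization, via arc-cosine kernels,
    with the bias treated as an extra constant input coordinate."""
    u = np.stack([x, np.ones_like(x)], axis=1)
    norms = np.linalg.norm(u, axis=1)
    gram = u @ u.T
    theta = np.arccos(np.clip(gram / np.outer(norms, norms), -1.0, 1.0))
    k0 = (np.pi - theta) / (2 * np.pi)                       # order-0 arc-cosine kernel
    k1 = np.outer(norms, norms) * (np.sin(theta)
        + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)     # order-1 arc-cosine kernel
    return k1 + gram * k0

n = 100
x = np.linspace(-1.0, 1.0, n)
K_inf = limit_ntk(x)

# Claim (i): as the width m grows, the NNK converges uniformly to the NTK.
for m in (100, 10_000, 100_000):
    err = np.abs(empirical_nnk(x, m) - K_inf).max()
    print(f"m = {m:>7}:  sup|NNK - NTK| = {err:.4f}")

# Claim (b): lambda_i(K_1) is proportional to i^-2.  If the Gram spectrum follows
# that decay, i^2 * lambda_i should be roughly constant over the leading eigenvalues.
eigs = np.sort(np.linalg.eigvalsh(K_inf / n))[::-1]
i = np.arange(1, 21)
print(np.round(i**2 * eigs[:20], 3))
```

The spectral check only inspects the eigenvalues of a finite Gram matrix on a grid, so it is a rough sanity check of the decay rate rather than a verification of the paper's operator-level result.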
