A Revision of Neural Tangent Kernel-based Approaches for Neural Networks

07/02/2020
by Kyung-Su Kim, et al.

Recent theoretical works based on the neural tangent kernel (NTK) have shed light on the optimization and generalization of over-parameterized networks, partially bridging the gap between their practical success and classical learning theory. In particular, the NTK-based approach has yielded three representative results: (1) a training error bound showing that networks can fit any finite training sample perfectly, with a tighter characterization of training speed that depends on the data complexity; (2) a generalization error bound, independent of network size, derived from a data-dependent complexity measure (CMD), from which it follows that networks can generalize arbitrary smooth functions; and (3) a simple, analytic kernel function shown to be equivalent to a fully trained network, which outperforms both its corresponding network and the existing gold standard, Random Forests, in few-shot learning. For all of these results to hold, the network scaling factor κ must decrease with the sample size n. In this regime of decreasing κ, however, we prove that the aforementioned results are surprisingly erroneous: the output of the trained network decreases to zero as κ decreases with n. To resolve this problem, we tighten the key bounds by essentially removing the κ-affected terms. Our tighter analysis resolves the scaling problem and validates the original NTK-based results.
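The scaling issue described above can be illustrated numerically. The sketch below (not the authors' code) uses a two-layer ReLU network under the standard NTK parameterization, f(x) = (κ/√m) · aᵀ ReLU(Wx), and a hypothetical choice κ = 1/√n to show that the magnitude of the network output shrinks as κ decreases with the sample size n.

```python
# Minimal sketch (assumptions: two-layer ReLU network, NTK parameterization,
# and the hypothetical schedule kappa = 1/sqrt(n)); not the paper's code.
import numpy as np


def init_params(m, d, rng):
    """Standard NTK-style initialization: w_r ~ N(0, I), a_r ~ Unif{-1, +1}."""
    W = rng.standard_normal((m, d))
    a = rng.choice([-1.0, 1.0], size=m)
    return W, a


def network_output(x, W, a, kappa):
    """f(x) = (kappa / sqrt(m)) * a . ReLU(W x)."""
    m = W.shape[0]
    return kappa / np.sqrt(m) * a @ np.maximum(W @ x, 0.0)


rng = np.random.default_rng(0)
d, m = 10, 4096
x = rng.standard_normal(d)
x /= np.linalg.norm(x)            # unit-norm input, as is common in NTK analyses
W, a = init_params(m, d, rng)

for n in [10, 100, 1000, 10000]:
    kappa = 1.0 / np.sqrt(n)      # hypothetical: kappa decreasing with sample size n
    out = abs(network_output(x, W, a, kappa))
    print(f"n={n:6d}  kappa={kappa:.4f}  |f(x)|={out:.4f}")
```

Under this schedule the printed output magnitude falls with n, which is the behavior the paper identifies as the source of the erroneous bounds when κ decreases with the sample size.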
