Correlation between entropy and generalizability in a neural network

07/05/2022
by Ge Zhang, et al.

Although neural networks can solve very complex machine-learning problems, the theoretical reason for their generalizability is still not fully understood. Here we use the Wang-Landau Monte Carlo algorithm to calculate the entropy (the logarithm of the volume of a region of parameter space) at a given test accuracy and a given training loss value or training accuracy. Our results show that entropic forces help generalizability. Although our study concerns a very simple application of neural networks (a spiral dataset and a small, fully connected network), our approach should be useful in explaining the generalizability of more complicated neural networks in future work.
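For context, the Wang-Landau algorithm estimates a density of states by biasing a random walk against already-visited energy levels. Below is a minimal sketch of the standard algorithm on a toy system of binary spins, where the exact answer (binomial coefficients) is known and can be checked. The paper instead walks over neural-network parameters with loss or accuracy playing the role of energy; the system, bin structure, sweep length, and flatness threshold here are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from math import comb, log

rng = np.random.default_rng(0)

# Toy system: N binary spins; "energy" E = number of up spins (0..N).
# The exact density of states is g(E) = C(N, E), so the result is checkable.
N = 20
n_bins = N + 1

ln_g = np.zeros(n_bins)   # running estimate of ln g(E), up to a constant
hist = np.zeros(n_bins)   # visit histogram for the flatness check
ln_f = 1.0                # modification factor, halved after each stage

state = rng.integers(0, 2, size=N)
E = int(state.sum())

while ln_f > 1e-6:
    for _ in range(10_000):
        # Propose flipping one randomly chosen spin.
        i = rng.integers(N)
        E_new = E + (1 - 2 * state[i])  # flip changes the count by +/- 1
        # Accept with probability min(1, g(E)/g(E_new)), so that
        # under-counted energy levels are visited more often.
        if np.log(rng.random()) < ln_g[E] - ln_g[E_new]:
            state[i] = 1 - state[i]
            E = E_new
        ln_g[E] += ln_f
        hist[E] += 1
    # Flatness check: every bin visited at least 80% of the mean count.
    if hist.min() > 0.8 * hist.mean():
        hist[:] = 0
        ln_f *= 0.5

# Compare against exact ln C(N, E), fixing the additive constant at E = 0.
exact = np.array([log(comb(N, k)) for k in range(n_bins)])
print(np.round(ln_g - ln_g[0], 2))
print(np.round(exact, 2))
```

The key property is that, once converged, ln_g approximates the log-volume of each energy (here, accuracy) level, which is exactly the entropy quantity the abstract describes.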
