Neural Network Memorization Dissection

11/21/2019
by Jindong Gu, et al.

Deep neural networks (DNNs) can easily fit a random labeling of the training data with zero training error. What is the difference between DNNs trained with random labels and those trained with true labels? Our paper answers this question with two contributions. First, we study the memorization properties of DNNs; our empirical experiments shed light on how DNNs prioritize the learning of simple input patterns. Second, we propose a method to measure the similarity between what different DNNs have learned and memorized. With the proposed approach, we analyze and compare DNNs trained on data with true labels and with random labels. The analysis shows that DNNs have "one way to learn and N ways to memorize." We also use gradient information to explain the analysis results.
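To make the opening observation concrete, below is a minimal, hypothetical PyTorch sketch (not taken from the paper) that trains a small MLP on synthetic inputs with uniformly random labels; the network, data sizes, and hyperparameters are arbitrary assumptions chosen only to illustrate that training accuracy can still reach 100%.

```python
# Illustrative sketch: a small MLP fitting random labels on synthetic data.
# All sizes and hyperparameters are assumptions for demonstration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic inputs with labels drawn uniformly at random from 10 classes,
# so the labels carry no information about the inputs.
X = torch.randn(512, 32 * 32)
y_random = torch.randint(0, 10, (512,))

model = nn.Sequential(
    nn.Linear(32 * 32, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train until the network memorizes the random labeling.
for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y_random)
    loss.backward()
    opt.step()

train_acc = (model(X).argmax(dim=1) == y_random).float().mean().item()
print(f"training accuracy on random labels: {train_acc:.3f}")  # typically 1.0
```

Despite the labels being pure noise, an over-parameterized network of this kind typically drives the training error to zero, which is exactly the memorization behavior the paper sets out to dissect.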
