Ranking Deep Learning Generalization using Label Variation in Latent Geometry Graphs

11/25/2020
by Carlos Lassance, et al.

Measuring the generalization performance of a Deep Neural Network (DNN) without relying on a validation set is a difficult task. In this work, we propose exploiting Latent Geometry Graphs (LGGs) to represent the latent spaces of trained DNN architectures. Such graphs are obtained by connecting samples that yield similar latent representations at a given layer of the considered DNN. We then obtain a generalization score by measuring how strongly connected samples of distinct classes are in the LGGs. This score allowed us to rank 3rd in the NeurIPS 2020 Predicting Generalization in Deep Learning (PGDL) competition.
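To make the idea concrete, here is a minimal sketch of a cross-class connectivity score on a k-nearest-neighbour latent geometry graph. It is not the authors' competition code: the neighbour count `k`, the distance-to-similarity weighting, and the helper name `label_variation_score` are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def label_variation_score(latent_features, labels, k=10):
    """Score cross-class connectivity in a k-NN latent geometry graph.

    latent_features: (n_samples, d) array of activations from one DNN layer.
    labels:          (n_samples,) integer class labels.
    Returns the fraction of total edge weight joining samples of distinct
    classes; higher values indicate classes that are less well separated
    in the latent space.
    """
    # Connect each sample to its k nearest neighbours in latent space.
    adj = kneighbors_graph(latent_features, n_neighbors=k,
                           mode="distance", include_self=False).tocoo()
    # Convert distances into similarity-style edge weights (assumed choice).
    weights = 1.0 / (1.0 + adj.data)

    # Edges whose endpoints carry different class labels.
    cross_class = labels[adj.row] != labels[adj.col]
    return weights[cross_class].sum() / weights.sum()

# Example with random stand-ins for real layer activations:
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))
labels = rng.integers(0, 10, size=500)
print(label_variation_score(feats, labels))
```

Under this reading, a lower score (little edge weight between classes) would be taken as evidence of better generalization.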
