Nearest Class-Center Simplification through Intermediate Layers

01/21/2022
by Ido Ben-Shaul, et al.

Recent advances in theoretical Deep Learning have introduced geometric properties that emerge during training, past the Interpolation Threshold – where the training error reaches zero. We inquire into the phenomenon coined Neural Collapse in the intermediate layers of the network, and emphasize the inner workings of Nearest Class-Center Mismatch inside the deep net. We further show that these processes occur in both vision and language model architectures. Lastly, we propose a Stochastic Variability-Simplification Loss (SVSL) that encourages better geometric features in intermediate layers and improves both training metrics and generalization.
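For readers who want to probe this behavior themselves, below is a minimal sketch (not the authors' released code) of the two quantities the abstract alludes to: nearest class-center (NCC) agreement at an intermediate layer, and a within-class variability penalty of the kind an SVSL-style regularizer might use. The function names, the choice of Euclidean distance, and the use of the per-class mean as the "class center" are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only; the paper's exact definitions may differ.
import torch

def ncc_accuracy(features: torch.Tensor, labels: torch.Tensor) -> float:
    """Classify each sample by its nearest class mean in feature space
    and return the fraction that matches the true label.

    features: (N, D) activations taken from one intermediate layer
    labels:   (N,)   integer class labels
    """
    classes = labels.unique()
    # Per-class means of the intermediate features ("class centers").
    centers = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    # Euclidean distance of every sample to every class center: (N, C).
    dists = torch.cdist(features, centers)
    preds = classes[dists.argmin(dim=1)]  # nearest-center prediction
    return (preds == labels).float().mean().item()

def variability_penalty(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of each sample to its class center -- one
    plausible within-class 'variability' term for an SVSL-style loss."""
    classes = labels.unique()
    penalty = 0.0
    for c in classes:
        feats_c = features[labels == c]
        center = feats_c.mean(dim=0)
        penalty = penalty + ((feats_c - center) ** 2).sum(dim=1).mean()
    return penalty / len(classes)
```

In practice one would collect the intermediate activations with forward hooks and track ncc_accuracy per layer over training; one plausible reading of the "stochastic" in SVSL is to sample a single intermediate layer each step and add a weighted variability_penalty to the task loss, though the precise formulation is given in the full text.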
