Minimalistic Unsupervised Learning with the Sparse Manifold Transform

09/30/2022
by Yubei Chen, et al.

We describe a minimalistic and interpretable method for unsupervised learning, without resorting to data augmentation, hyperparameter tuning, or other engineering designs, that achieves performance close to the SOTA SSL methods. Our approach leverages the sparse manifold transform, which unifies sparse coding, manifold learning, and slow feature analysis. With a one-layer deterministic sparse manifold transform, one can achieve 99.3% accuracy on MNIST and 81.1% on CIFAR-10. With a simple gray-scale augmentation, the model reaches 83.2% on CIFAR-10 and 57% on CIFAR-100, substantially closing the gap between simplistic "white-box" methods and the SOTA methods. Additionally, we provide visualizations to explain how an unsupervised representation transform is formed. The proposed method is closely connected to latent-embedding self-supervised methods and can be treated as the simplest form of VICReg. Though a small performance gap remains between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning.
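
For intuition, the sketch below walks through a sparse-manifold-transform-style pipeline in Python (NumPy, SciPy, scikit-learn): sparse coding lifts the data into high-dimensional sparse codes, and a slowness-style spectral step projects those codes so that neighboring samples land close together in the embedding. The synthetic data, dictionary size, embedding dimension, and the choice of MiniBatchDictionaryLearning are illustrative assumptions, not the paper's actual patch-based setup or hyperparameters.

    # Illustrative sketch only: a toy sparse-manifold-transform-style pipeline on
    # synthetic data, not the authors' implementation or their hyperparameters.
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 64))        # toy "signal"; adjacent rows stand in
                                               # for temporally adjacent samples

    # 1) Sparse coding: lift each sample into a high-dimensional sparse code alpha(x).
    dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                       transform_algorithm="lasso_lars",
                                       random_state=0)
    A = dico.fit_transform(X)                  # sparse codes, shape (N, 256)

    # 2) Slowness / manifold step: find a linear projection P that keeps codes of
    #    adjacent samples close, under a whitening constraint P C P^T = I.
    #    This reduces to a generalized eigenvalue problem on (S, C).
    D = A[1:] - A[:-1]                         # differences between adjacent codes
    S = D.T @ D / len(D)                       # "slowness" second-moment matrix
    C = A.T @ A / len(A) + 1e-6 * np.eye(A.shape[1])  # code covariance (regularized)

    w, V = eigh(S, C)                          # eigenvalues ascending; smallest = slowest
    P = V[:, :32].T                            # keep the 32 slowest directions

    # 3) Embedding used for downstream (e.g., KNN) evaluation.
    Z = A @ P.T
    print(Z.shape)                             # (2000, 32)

The generalized eigensolve is what makes the transform "one-layer and deterministic": once the dictionary and projection are computed, the representation is a fixed sparse-coding step followed by a fixed linear map, with no iterative training of a deep network.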
