There and Back Again: A General Approach to Learning Sparse Models

06/25/2017
by Vatsal Sharan, et al.

We propose a simple and efficient approach to learning sparse models. Our approach consists of (1) projecting the data into a lower-dimensional space, (2) learning a dense model in that lower-dimensional space, and (3) recovering the sparse model in the original space via compressive sensing. We apply this approach to non-negative matrix factorization (NMF), tensor decomposition, and linear classification, showing that it obtains 10× compression with negligible loss in accuracy on real data and speedups of up to 5×. Our main theoretical contribution is the following result for NMF: if the original factors are sparse, then their projections are the sparsest solutions to the projected NMF problem. This explains why our method works for NMF and reveals an interesting new property of random projections: they can preserve the solutions of non-convex optimization problems such as NMF.
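The three-step pipeline in the abstract can be sketched in NumPy for the linear-classification/regression case. Everything below is an illustrative assumption, not the paper's implementation: the dimensions, the Gaussian projection, the use of least squares as the "dense model," and the orthogonal matching pursuit (OMP) routine used for the compressive-sensing recovery step are all choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper):
d, m, k, n = 200, 80, 3, 300  # ambient dim, projected dim, sparsity, samples

# Ground-truth k-sparse linear model with well-separated nonzero entries.
w = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
w[support] = (1.0 + rng.random(k)) * rng.choice([-1.0, 1.0], size=k)

# Data with orthonormal columns (so the projected least-squares
# solution has a clean closed form for this demonstration).
X, _ = np.linalg.qr(rng.normal(size=(n, d)))
y = X @ w

# (1) Project the data into a lower-dimensional space.
R = rng.normal(size=(m, d)) / np.sqrt(m)
Xp = X @ R.T                                   # n x m

# (2) Learn a dense model in the projected space (plain least squares).
wp, *_ = np.linalg.lstsq(Xp, y, rcond=None)    # m-dimensional dense weights

# (3) Recover the sparse model in the original space via compressive
#     sensing: since X has orthonormal columns, R @ R.T @ wp = R @ w,
#     so we solve b = R @ w for a k-sparse w using greedy OMP.
def omp(A, b, k):
    """Orthogonal matching pursuit: find k-sparse x with A @ x ~= b."""
    idx, residual, coef = [], b.copy(), np.zeros(0)
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[idx] = 0.0                        # never reselect an index
        idx.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
        residual = b - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

b = R @ (R.T @ wp)                             # equals R @ w here
w_hat = omp(R, b, k)
```

With `m` comfortably above the usual `O(k log d)` compressive-sensing threshold, OMP recovers both the support and the coefficients of `w` from the dense projected model, which is the "there and back again" round trip the title refers to.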
