Nonconvex Sparse Learning via Stochastic Optimization with Progressive Variance Reduction

05/09/2016
by Xingguo Li, et al.

We propose a stochastic variance-reduced optimization algorithm for solving sparse learning problems with cardinality constraints. We provide sufficient conditions under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend the proposed algorithm to an asynchronous parallel variant with near-linear speedup. Numerical experiments demonstrate the efficiency of our algorithm in terms of both parameter estimation and computational performance.
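To make the abstract concrete, here is a minimal sketch of the kind of method it describes: stochastic variance-reduced gradient (SVRG-style) steps combined with hard thresholding to enforce the cardinality constraint, illustrated on sparsity-constrained least squares. The function names, step size, and problem instance below are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v and zero out the rest,
    projecting v onto the set {w : ||w||_0 <= s}."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def svrg_ht(X, y, s, eta=0.005, n_epochs=50, seed=0):
    """Illustrative variance-reduced stochastic method with hard thresholding
    for  min_w (1/2n) ||X w - y||^2  subject to  ||w||_0 <= s.
    (A sketch in the spirit of the paper, not its exact procedure.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        w_ref = w.copy()
        # Full gradient at the snapshot point, recomputed once per epoch.
        full_grad = X.T @ (X @ w_ref - y) / n
        for _ in range(n):
            i = rng.integers(n)
            xi = X[i]
            # Variance-reduced stochastic gradient: the stochastic gradient
            # at w, corrected by the snapshot's stochastic and full gradients.
            g = xi * (xi @ w - y[i]) - xi * (xi @ w_ref - y[i]) + full_grad
            # Gradient step followed by projection onto the sparsity constraint.
            w = hard_threshold(w - eta * g, s)
    return w
```

The variance-reduced gradient `g` is unbiased and its variance shrinks as the iterates approach the snapshot, which is what enables linear convergence despite using only one sample per inner step; the hard-thresholding projection keeps every iterate feasible for the cardinality constraint.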
