Support recovery without incoherence: A case for nonconvex regularization

12/17/2014
by Po-Ling Loh, et al.

We demonstrate that the primal-dual witness proof method may be used to establish variable selection consistency and ℓ_∞-bounds for sparse regression problems, even when the loss function and/or regularizer are nonconvex. Using this method, we derive two theorems concerning support recovery and ℓ_∞-guarantees for the regression estimator in a general setting. Our results provide rigorous theoretical justification for the use of nonconvex regularization: for certain nonconvex regularizers with vanishing derivative away from the origin, support recovery consistency may be guaranteed without requiring the incoherence conditions typically imposed on ℓ_1-based methods. We then derive several corollaries that illustrate the wide applicability of our method to analyzing composite objective functions involving losses such as least squares, nonconvex modified least squares for errors-in-variables linear regression, the negative log-likelihood for generalized linear models, and the graphical Lasso. We conclude with empirical studies that corroborate our theoretical predictions.
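As a concrete illustration of the kind of regularizer the abstract describes, the sketch below (ours, not the authors' code) fits sparse linear regression with the MCP penalty, whose derivative is exactly zero for |t| > γλ, i.e. it vanishes away from the origin. The optimization follows a standard composite (proximal) gradient scheme; the choice of MCP, the parameter values (lam, gamma, the step size), and the synthetic data are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def mcp_grad_smooth(b, lam, gamma):
    """Derivative of q(t) = rho_MCP(t) - lam*|t|, the smooth concave part
    of MCP: q'(t) = -t/gamma for |t| <= gamma*lam, else -lam*sign(t).
    Adding lam*sign(t) back gives rho'(t) = 0 beyond gamma*lam, i.e. the
    'vanishing derivative away from the origin' the abstract refers to."""
    inside = np.abs(b) <= gamma * lam
    return np.where(inside, -b / gamma, -lam * np.sign(b))

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def mcp_regression(X, y, lam=0.1, gamma=3.0, iters=1000):
    """Composite gradient descent on
    (1/2n)||y - X b||^2 + sum_j rho_MCP(b_j),
    splitting the penalty as lam*|b| (handled by the prox) plus a smooth
    concave remainder (handled by its gradient)."""
    n, p = X.shape
    # conservative step: 1/L with L bounding the smooth part's curvature
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + 1.0 / gamma)
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n + mcp_grad_smooth(b, lam, gamma)
        b = soft_threshold(b - step * grad, step * lam)
    return b

if __name__ == "__main__":
    # tiny synthetic check: 5-sparse signal, Gaussian design
    rng = np.random.default_rng(0)
    n, p, k = 200, 50, 5
    X = rng.standard_normal((n, p))
    b_true = np.zeros(p)
    b_true[:k] = 1.0
    y = X @ b_true + 0.1 * rng.standard_normal(n)
    b_hat = mcp_regression(X, y, lam=0.05)
    print("recovered support:", np.flatnonzero(np.abs(b_hat) > 1e-3))
```

Because the MCP derivative is zero on coefficients larger than γλ, large true coefficients incur no shrinkage at the optimum, which is the mechanism behind support recovery without incoherence-type conditions on the design.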
