Sparse estimation via ℓ_q optimization method in high-dimensional linear regression

11/12/2019
by Xin Li, et al.

In this paper, we discuss the statistical properties of the ℓ_q optimization methods (0 < q ≤ 1), including the ℓ_q minimization method and the ℓ_q regularization method, for estimating a sparse parameter from noisy observations in high-dimensional linear regression with either a deterministic or random design. For this purpose, we introduce a general q-restricted eigenvalue condition (q-REC) and provide sufficient conditions for it in terms of several widely used regularity conditions, such as the sparse eigenvalue condition, the restricted isometry property, and the mutual incoherence property. By virtue of the q-REC, we establish the stable recovery property of the ℓ_q optimization methods for both deterministic and random designs, showing that the ℓ_2 recovery bound O(ϵ^2) for the ℓ_q minimization method, and the oracle inequality and ℓ_2 recovery bound O(λ^{2/(2−q)} s) for the ℓ_q regularization method, each hold with high probability. The results in this paper are nonasymptotic and assume only the weak q-REC. Preliminary numerical results verify the established statistical properties and demonstrate the advantages of the ℓ_q regularization method over some existing sparse optimization methods.
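For concreteness, the two estimators referred to above are usually formulated as follows. These formulations are not spelled out in the abstract itself; they are reproduced here as a sketch of the standard definitions from the sparse linear regression literature, with design matrix X, response y, noise level ϵ, regularization parameter λ, and 0 < q ≤ 1:

\hat{\beta}_{\min} \in \arg\min_{\beta} \ \|\beta\|_q^q \quad \text{subject to} \quad \|y - X\beta\|_2 \le \epsilon \qquad (\ell_q \text{ minimization})

\hat{\beta}_{\mathrm{reg}} \in \arg\min_{\beta} \ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_q^q \qquad (\ell_q \text{ regularization})

Here ‖β‖_q^q = Σ_j |β_j|^q, which is nonconvex for q < 1; the recovery bounds quoted above refer to the ℓ_2 distance between these estimators and the true sparse parameter with sparsity level s.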
