A Revisit to De-biased Lasso for Generalized Linear Models

06/23/2020
by Lu Xia, et al.

De-biased lasso has emerged as a popular tool to draw statistical inference for high-dimensional regression models. However, simulations indicate that for generalized linear models (GLMs), de-biased lasso inadequately removes biases and yields unreliable confidence intervals. This motivates us to scrutinize the application of de-biased lasso in high-dimensional GLMs. When p > n, we detect that a key sparsity condition on the inverse information matrix generally does not hold in a GLM setting, which likely explains the subpar performance of de-biased lasso. Even in a less challenging "large n, diverging p" scenario, we find that de-biased lasso and the maximum likelihood method often yield confidence intervals with unsatisfactory coverage probabilities. In this scenario, we examine an alternative approach for further bias correction by directly inverting the Hessian matrix without imposing the matrix sparsity assumption. We establish the asymptotic distributions of any linear combinations of the resulting estimates, which lay the theoretical groundwork for drawing inference. Simulations show that this refined de-biased estimator performs well in removing biases and yields honest confidence interval coverage. We illustrate the method by analyzing data from the Boston Lung Cancer Study, a large-scale prospective hospital-based epidemiology cohort investigating the joint effects of genetic variants on lung cancer risk.
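Below is a minimal sketch, not the authors' code, of the refined de-biasing idea for a logistic GLM in the "large n, diverging p" regime (n > p), where the Hessian X'WX is invertible so the full inverse can replace a node-wise-lasso (sparse) approximation of the inverse information matrix. All function and variable names (e.g. debiased_lasso_logistic, lam) are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

def debiased_lasso_logistic(X, y, lam=0.05, alpha=0.05):
    """Sketch of a de-biased lasso estimator for logistic regression,
    correcting the initial l1-penalized fit with the full inverse Hessian."""
    n, p = X.shape

    # Step 1: initial l1-penalized (lasso) logistic fit.
    # sklearn's C corresponds to 1/(n*lam) for the averaged log-likelihood.
    lasso = LogisticRegression(penalty="l1", C=1.0 / (n * lam),
                               solver="liblinear", fit_intercept=False)
    lasso.fit(X, y)
    beta_hat = lasso.coef_.ravel()

    # Step 2: score and Hessian of the average negative log-likelihood
    # evaluated at the lasso estimate.
    mu = 1.0 / (1.0 + np.exp(-X @ beta_hat))      # fitted probabilities
    W = mu * (1.0 - mu)                           # GLM variance weights
    hessian = (X * W[:, None]).T @ X / n          # (1/n) X' W X
    score = -X.T @ (y - mu) / n                   # -(1/n) X'(y - mu)

    # Step 3: bias correction using the directly inverted Hessian,
    # with no matrix-sparsity assumption on the inverse information.
    theta_hat = np.linalg.inv(hessian)
    b_debiased = beta_hat - theta_hat @ score

    # Step 4: Wald-type confidence intervals from the asymptotic
    # variance diag(theta_hat)/n of the de-biased estimates.
    se = np.sqrt(np.diag(theta_hat) / n)
    z = stats.norm.ppf(1 - alpha / 2)
    ci = np.column_stack([b_debiased - z * se, b_debiased + z * se])
    return b_debiased, se, ci
```

The same correction applies to any linear combination c'b of the coefficients, with standard error sqrt(c' theta_hat c / n), which is the form of inference the asymptotic results in the paper cover.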
