Efficient Penalized Generalized Linear Mixed Models for Variable Selection and Genetic Risk Prediction in High-Dimensional Data

06/24/2022
by   Julien St-Pierre, et al.

Sparse regularized regression methods are now widely used in genome-wide association studies (GWAS) to address the multiple testing burden that limits discovery of potentially important predictors. Linear mixed models (LMMs) have become an attractive alternative to principal component (PC) adjustment for accounting for population structure and relatedness in high-dimensional penalized models. However, their use in binary trait GWAS relies on the invalid assumption that the residual variance does not depend on the estimated regression coefficients. Moreover, LMMs rely on a single spectral decomposition of the covariance matrix of the responses, which is no longer possible in generalized linear mixed models (GLMMs). We introduce a new method called pglmm, a penalized GLMM that simultaneously selects genetic markers and estimates their effects while accounting for between-individual correlations and the binary nature of the trait. We develop a computationally efficient algorithm based on penalized quasi-likelihood (PQL) estimation that makes it possible to scale regularized mixed models to high-dimensional binary trait GWAS (∼300,000 SNPs). We show through simulations that, compared with pglmm, penalized LMMs and logistic regression with PC adjustment fail to correctly select important predictors and/or suffer reduced prediction accuracy for a binary response when the dimensionality of the relatedness matrix is high. Further, we demonstrate through the analysis of two polygenic binary traits in the UK Biobank data that our method achieves higher predictive performance while selecting fewer predictors than a sparse regularized logistic lasso with PC adjustment. Our method is available as the Julia package PenalizedGLMM.jl.
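The baseline that pglmm is compared against, a sparse (lasso-penalized) logistic regression with PC adjustment for population structure, can be sketched as follows. This is an illustrative toy example on simulated genotypes, not the authors' implementation or the UK Biobank analysis; all data sizes, effect sizes, and the penalty strength are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy genotype matrix: n individuals x p SNPs coded 0/1/2.
n, p = 200, 500
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)
Gs = (G - G.mean(axis=0)) / G.std(axis=0)  # column-standardize

# Simulate a binary trait driven by the first 5 SNPs (arbitrary choice).
logits = Gs[:, :5] @ np.ones(5)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Top PCs of the standardized genotypes enter as fixed-effect
# covariates, the usual adjustment for population structure.
pcs = PCA(n_components=10).fit_transform(Gs)

# L1-penalized (lasso) logistic regression on [SNPs, PCs];
# C is the inverse penalty strength, so small C means sparser fits.
X = np.hstack([Gs, pcs])
fit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

n_selected = int(np.sum(fit.coef_[0, :p] != 0))
print("SNPs selected:", n_selected)
```

Unlike pglmm, this baseline captures relatedness only through the fixed PC covariates rather than through a random effect with an estimated covariance structure, which is the gap the paper's simulations probe.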
