Bayesian iterative screening in ultra-high dimensional settings
Variable selection in ultra-high dimensional linear regression is often preceded by a screening step to substantially reduce the dimension. Here a Bayesian iterative screening method (BITS) is developed. BITS can incorporate prior knowledge, when available, on effect sizes and on the number of true variables. BITS iteratively includes potential variables with the highest posterior probability given the variables already selected. It is implemented by a fast Cholesky update algorithm and is shown to have the screening consistency property. Although BITS is built on a model with Gaussian errors, screening consistency is proved to hold under more general tail conditions. The notion of posterior screening consistency allows the resulting model to provide a good starting point for further Bayesian variable selection methods. A new screening-consistent stopping rule based on posterior probability is developed. Simulation studies and real data examples demonstrate the scalability and fine screening performance of BITS.
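As a rough illustration of the iterative-inclusion and Cholesky-update ideas mentioned above, the sketch below performs greedy forward screening: at each step it scores every remaining candidate, adds the best one, and extends the Cholesky factor of the selected Gram matrix by a single column rather than refactorizing. The scoring rule here (absolute correlation with the current residual) is only a placeholder for the posterior-probability criterion BITS actually uses, and the function name, `max_size` budget, and overall structure are assumptions for illustration, not the paper's implementation; the paper's stopping rule is likewise not reproduced.

```python
import numpy as np
from scipy.linalg import solve_triangular


def iterative_screen(X, y, max_size):
    """Greedy forward screening sketch with incremental Cholesky updates.

    At each step, add the candidate column with the largest placeholder
    score (|x_j' residual|), then extend the upper-triangular Cholesky
    factor R of X_S' X_S by one column instead of refactorizing.
    """
    n, p = X.shape
    selected = []
    R = None                      # Cholesky factor of the selected Gram matrix
    residual = y.copy()
    for _ in range(max_size):
        # Placeholder score; BITS uses a posterior probability instead.
        scores = np.abs(X.T @ residual)
        scores[selected] = -np.inf
        j = int(np.argmax(scores))
        x_j = X[:, j]
        if selected:
            # Append column j to R: solve R' r = X_S' x_j, then compute
            # the new diagonal entry rho.
            Xs = X[:, selected]
            r = solve_triangular(R, Xs.T @ x_j, trans='T')
            rho = np.sqrt(max(x_j @ x_j - r @ r, 1e-12))
            R = np.block([[R, r[:, None]],
                          [np.zeros((1, len(selected))), np.array([[rho]])]])
        else:
            R = np.array([[np.sqrt(x_j @ x_j)]])
        selected.append(j)
        # Refit on the selected set via two triangular solves, update residual.
        Xs = X[:, selected]
        beta = solve_triangular(R, solve_triangular(R, Xs.T @ y, trans='T'))
        residual = y - Xs @ beta
    return selected
```

Because each iteration adds only one column to the triangular factor, the per-step cost is dominated by the candidate scoring, which is what makes this style of screening feasible when p is very large.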