Adjusting inverse regression for predictors with clustered distribution

08/29/2023
by Wei Luo, et al.

A major family of sufficient dimension reduction (SDR) methods, called inverse regression, commonly requires the distribution of the predictor X to satisfy a linearity condition on E(X|β^𝖳X) and a degeneracy (constant-variance) condition on var(X|β^𝖳X) for the desired reduced predictor β^𝖳X. In this paper, we adjust the first- and second-order inverse regression methods by modeling E(X|β^𝖳X) and var(X|β^𝖳X) under a mixture model assumption on X, which allows these terms to convey more complex patterns and is most suitable when X has a clustered sample distribution. The proposed SDR methods build a natural path between inverse regression and localized SDR methods, and in particular inherit the advantages of both: they are √(n)-consistent, efficiently implementable, directly adjustable under high-dimensional settings, and able to fully recover the desired reduced predictor. These findings are illustrated by simulation studies and a real data example at the end, which also suggest the effectiveness of the proposed methods for nonclustered data.
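To make the general idea concrete, the following is a minimal sketch, not the authors' estimator: a mixture-adjusted variant of sliced inverse regression (SIR) in which a Gaussian mixture is fitted to X, the predictors are standardized within each fitted component, and ordinary SIR is then applied to the pooled, component-wise standardized predictors. The function name `mixture_sir` and the choices of `n_components` and `n_slices` are illustrative assumptions; the returned directions live in the component-standardized scale, and mapping them back to the original scale (which requires the component-wise transforms) is omitted for brevity.

```python
# Illustrative sketch only: mixture-adjusted SIR, not the paper's exact method.
import numpy as np
from sklearn.mixture import GaussianMixture


def mixture_sir(X, y, d=1, n_components=3, n_slices=10, random_state=0):
    """Estimate a d-dimensional SDR subspace (in the adjusted scale)."""
    n, p = X.shape

    # Step 1: fit a Gaussian mixture to X and whiten X within each component,
    # i.e. Z = Sigma_k^{-1/2} (X - mu_k) for points assigned to component k.
    gmm = GaussianMixture(n_components=n_components,
                          random_state=random_state).fit(X)
    labels = gmm.predict(X)
    Z = np.empty_like(X, dtype=float)
    for k in range(n_components):
        idx = labels == k
        vals, vecs = np.linalg.eigh(gmm.covariances_[k])
        inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
        Z[idx] = (X[idx] - gmm.means_[k]) @ inv_sqrt

    # Step 2: ordinary SIR on the adjusted predictors -- slice on y and
    # form the weighted covariance of the slice means of Z.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for s in slices:
        m = Z[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)

    # Step 3: the leading eigenvectors of the candidate matrix span the estimate.
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argsort(eigvals)[::-1][:d]]
```

As a usage example, `mixture_sir(X, y, d=2)` would return a p-by-2 basis estimated from the mixture-adjusted predictors; with `n_components=1` the procedure reduces to standard SIR after global standardization.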

