Estimation of prediction error with known covariate shift

05/04/2022
by Hui Xu, et al.

In supervised learning, estimating the prediction error on unlabeled test data is an important task. Existing methods are usually built on the assumption that the training and test data are sampled from the same distribution, which is often violated in practice. As a result, traditional estimators such as cross-validation (CV) become biased, which may lead to poor model selection. In this paper, we assume that we have a test dataset in which the feature values are available but not the outcome labels, and we focus on a particular form of distributional shift called "covariate shift". We propose an alternative method based on a parametric bootstrap of the target conditional error. Empirically, our method outperforms CV on both simulated and real data examples across different modeling tasks.
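To make the setting concrete, here is a minimal sketch (not the paper's estimator) of a parametric-bootstrap style estimate of test-set prediction error under covariate shift, assuming a Gaussian linear working model and using scikit-learn's LinearRegression; all variable names and the simulated data are illustrative only.

    # Hypothetical sketch: parametric bootstrap estimate of prediction error
    # at shifted test covariates, under an assumed Gaussian linear model.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Labeled training data and unlabeled (covariate-shifted) test features
    n_train, n_test, p = 200, 100, 5
    X_train = rng.normal(size=(n_train, p))
    beta = rng.normal(size=p)
    y_train = X_train @ beta + rng.normal(size=n_train)
    X_test = rng.normal(loc=0.5, size=(n_test, p))  # shifted covariate distribution

    # Fit the working model and a plug-in noise scale on the training data
    model = LinearRegression().fit(X_train, y_train)
    resid = y_train - model.predict(X_train)
    sigma_hat = resid.std(ddof=p)

    # Parametric bootstrap: simulate outcomes from the fitted model, refit,
    # and average the squared error of predictions at the test covariates
    B = 200
    errors = []
    for _ in range(B):
        mu_train = model.predict(X_train)
        y_boot = mu_train + rng.normal(scale=sigma_hat, size=n_train)
        m_boot = LinearRegression().fit(X_train, y_boot)
        mu_test = model.predict(X_test)
        y_test_sim = mu_test + rng.normal(scale=sigma_hat, size=n_test)
        errors.append(np.mean((y_test_sim - m_boot.predict(X_test)) ** 2))

    print("bootstrap estimate of test MSE:", np.mean(errors))

Because the squared error is evaluated at the given test covariates rather than at resampled training points, this kind of estimate targets the error under the shifted covariate distribution, which is where plain CV can be biased.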

