Bayesian leave-one-out cross-validation for large data

04/24/2019
by Måns Magnusson, et al.

Model inference, such as model comparison, model checking, and model selection, is an important part of model development. Leave-one-out cross-validation (LOO) is a general approach for assessing the generalizability of a model, but unfortunately, LOO does not scale well to large datasets. We propose a combination of approximate inference techniques and probability-proportional-to-size sampling (PPS) for fast LOO model evaluation on large datasets. We provide both theoretical and empirical results showing that the approach has good properties for large data.
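To make the idea concrete, below is a minimal sketch of how PPS subsampling can estimate the total LOO expected log predictive density (elpd_LOO) from cheap per-observation approximations, computing the expensive per-observation LOO quantity only for a subsample. This is an illustrative assumption-laden sketch, not the authors' released code or exact estimator: the cheap approximation `elpd_approx` (e.g. log predictive densities under a Laplace or variational posterior), the stand-in `exact_loo_elpd` function, and the simple Hansen–Hurwitz PPS estimator are all placeholders chosen for illustration.

```python
# Sketch: PPS-subsampled estimate of the total elpd_LOO over N observations.
# Assumes a cheap per-observation approximation is available for all points,
# and an expensive "exact" LOO evaluation is affordable only for a subsample.
import numpy as np

rng = np.random.default_rng(0)

# --- toy data and placeholder per-observation quantities -------------------
N = 100_000  # full data size
# Cheap approximation of each observation's elpd contribution
# (e.g. log predictive density under an approximate posterior).
elpd_approx = rng.normal(loc=-1.0, scale=0.3, size=N)

def exact_loo_elpd(i):
    """Stand-in for the expensive per-observation LOO computation.
    Here it simply perturbs the cheap value for demonstration."""
    return elpd_approx[i] + rng.normal(scale=0.05)

# --- PPS subsampling estimate of the total elpd_LOO ------------------------
m = 500                                           # subsample size
size = np.abs(elpd_approx)                        # "size" measure for PPS
p = size / size.sum()                             # selection probabilities
idx = rng.choice(N, size=m, replace=True, p=p)    # PPS draw with replacement

exact = np.array([exact_loo_elpd(i) for i in idx])

# Hansen-Hurwitz estimator of the total, with its subsampling standard error.
ratios = exact / p[idx]
elpd_hat = ratios.mean()
se_hat = ratios.std(ddof=1) / np.sqrt(m)

print(f"estimated total elpd_LOO: {elpd_hat:,.1f} (subsampling SE {se_hat:,.1f})")
print(f"sum of cheap approximations: {elpd_approx.sum():,.1f}")
```

The point of sampling proportional to the approximate contributions is that observations expected to dominate the sum are evaluated more often, which can sharply reduce the variance of the estimate compared with uniform subsampling; only m expensive LOO evaluations are needed instead of N.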
