A scalable estimate of the extra-sample prediction error via approximate leave-one-out

01/30/2018
by Kamiar Rahnama Rad, et al.

We propose a scalable closed-form formula (ALO_λ) to estimate the extra-sample prediction error of regularized estimators. Our approach employs existing heuristic arguments to approximate the leave-one-out perturbations. We theoretically prove the accuracy of ALO_λ in the high-dimensional setting where the number of predictors is proportional to the number of observations. We show how this approach can be applied to popular non-differentiable regularizers, such as LASSO, and compare its results with other popular risk estimation techniques, such as Stein's unbiased risk estimate (SURE). Our theoretical findings are illustrated using simulations and real recordings from spatially sensitive neurons (grid cells) in the medial entorhinal cortex of a rat.
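To make the leave-one-out idea behind ALO_λ concrete, here is a minimal sketch for the smooth ridge-regression case, where the leave-one-out correction is available in closed form via leverage scores and no refitting is needed. This is only an illustration of the general principle, not the paper's formula for non-differentiable regularizers such as LASSO; the function name ridge_alo and the simulated data are hypothetical.

```python
import numpy as np

def ridge_alo(X, y, lam):
    """Leave-one-out predictions for ridge regression via leverage scores,
    illustrating the ALO idea of correcting the full-data fit instead of
    refitting the model n times."""
    n, p = X.shape
    # Hat matrix H = X (X^T X + lam * I)^{-1} X^T
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    y_hat = H @ y                      # full-data fitted values
    h = np.diag(H)                     # leverage scores H_ii
    # Rank-one leave-one-out correction (exact for ridge)
    y_loo = (y_hat - h * y) / (1.0 - h)
    # Risk estimate: mean squared leave-one-out prediction error
    return y_loo, np.mean((y - y_loo) ** 2)

# Toy usage with simulated data (for illustration only)
rng = np.random.default_rng(0)
n, p, lam = 200, 50, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
_, risk = ridge_alo(X, y, lam)
print(f"leave-one-out risk estimate: {risk:.4f}")
```

The paper's contribution is extending this kind of closed-form, refit-free estimate to regularized estimators in high dimensions, with theoretical guarantees when the number of predictors grows in proportion to the number of observations.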
