Asymptotic accuracy of the saddlepoint approximation for maximum likelihood estimation
The saddlepoint approximation gives an approximation to the density of a random variable in terms of its moment generating function. When the underlying random variable is itself the sum of n unobserved i.i.d. terms, the classical result is that the relative error in the density is of order 1/n. If the approximation is instead interpreted as a likelihood and maximized as a function of model parameters, the result is an approximation to the maximum likelihood estimator (MLE) that is often much faster to compute than the true MLE. This paper proves the analogous result for the approximation error between the saddlepoint MLE and the true MLE: it is of order 1/n^2. The proof is based on a factorization of the saddlepoint likelihood into an exact term and an approximate term, together with an analysis of the approximation error in the gradient of the log-likelihood. This factorization also gives insight into alternatives to the saddlepoint approximation, including a new and simpler saddlepoint approximation for which we derive analogous error bounds. In addition, we prove central limit theorem results for the sampling distribution of the saddlepoint MLE and for the Bayesian posterior distribution based on the saddlepoint likelihood. Notably, in the asymptotic regime we consider, the difference between the true and approximate MLEs is negligible compared with the asymptotic size of the confidence region for the MLE. In particular, the true MLE and the saddlepoint MLE have the same asymptotic coverage properties, and the saddlepoint MLE can serve as a readily computed substitute when the true MLE is difficult to compute.
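For reference, a sketch of the classical first-order saddlepoint approximation invoked above; the notation (K for the cumulant generating function of a single term, \hat{s} for the saddlepoint) is the standard convention and is not taken from this abstract:

\[
  \hat{f}_n(\bar{x})
    = \sqrt{\frac{n}{2\pi K''(\hat{s})}}
      \exp\!\bigl\{ n \bigl( K(\hat{s}) - \hat{s}\,\bar{x} \bigr) \bigr\},
  \qquad \text{where } K'(\hat{s}) = \bar{x},
\]

so that the true density of the sample mean $\bar{X}_n$ of $n$ i.i.d. terms satisfies $f_n(\bar{x}) = \hat{f}_n(\bar{x})\bigl(1 + O(1/n)\bigr)$, the relative error rate stated above. When $K = K_\theta$ depends on model parameters $\theta$, maximizing $\hat{f}_n$ over $\theta$ yields the saddlepoint MLE discussed in the abstract.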