An MCMC Approach to Empirical Bayes Inference and Bayesian Sensitivity Analysis via Empirical Processes

07/05/2018
by Hani Doss, et al.

Consider a Bayesian situation in which we observe Y ∼ p_θ, where θ ∈ Θ, and we have a family {ν_h, h ∈ H} of potential prior distributions on Θ. Let g be a real-valued function of θ, and let I_g(h) be the posterior expectation of g(θ) when the prior is ν_h. We are interested in two problems: (i) selecting a particular value of h, and (ii) estimating the family of posterior expectations {I_g(h), h ∈ H}. Let m_y(h) be the marginal likelihood of the hyperparameter h: m_y(h) = ∫ p_θ(y) ν_h(dθ). The empirical Bayes estimate of h is, by definition, the value of h that maximizes m_y(h). It turns out that it is typically possible to use Markov chain Monte Carlo to form point estimates for m_y(h) and I_g(h) for each individual h in a continuum, and also confidence intervals for m_y(h) and I_g(h) that are valid pointwise. However, we are interested in forming estimates, with confidence statements, of the entire families of integrals {m_y(h), h ∈ H} and {I_g(h), h ∈ H}: we need estimates of the first family in order to carry out empirical Bayes inference, and we need estimates of the second family in order to do Bayesian sensitivity analysis. We establish strong consistency and functional central limit theorems for estimates of these families by using tools from empirical process theory. We give two applications, one to Latent Dirichlet Allocation, which is used in topic modelling, and the other to a model for Bayesian variable selection in linear regression.
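To make the two families of integrals concrete, here is a minimal Monte Carlo sketch in a toy conjugate model (a normal-normal setup chosen for illustration, not one of the paper's applications, and using a single reweighted sample rather than the paper's MCMC machinery). Samples drawn once from an "anchor" prior ν_{h0} are reweighted to estimate {m_y(h)} and {I_g(h)} over a whole grid of h, and the empirical Bayes ĥ is read off as the maximizer of the estimated m_y(·). The anchor value h0, the grid, and g(θ) = θ are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (an illustrative assumption, not the paper's examples):
#   Y | theta ~ N(theta, 1),   prior nu_h = N(0, h),   h > 0.
# Closed forms for checking: m_y(h) = N(y; 0, 1 + h), and for g(theta) = theta
# the posterior expectation is I_g(h) = h * y / (1 + h).

y = 2.0
h_grid = np.linspace(0.5, 10.0, 200)

def norm_pdf(x, mean, var):
    """Normal density with the given mean and variance."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Draw one sample from a single anchor prior nu_{h0}; reweighting that one
# sample yields estimates of the entire families {m_y(h)} and {I_g(h)},
# mirroring the theme of estimating a continuum of integrals from one run.
h0 = 5.0
theta = rng.normal(0.0, np.sqrt(h0), size=200_000)
lik = norm_pdf(y, theta, 1.0)  # p_theta(y) at each sampled theta

m_hat, I_hat = [], []
for h in h_grid:
    w = norm_pdf(theta, 0.0, h) / norm_pdf(theta, 0.0, h0)   # prior ratio
    m_hat.append(np.mean(lik * w))                           # estimate of m_y(h)
    I_hat.append(np.sum(theta * lik * w) / np.sum(lik * w))  # estimate of I_g(h)
m_hat, I_hat = np.array(m_hat), np.array(I_hat)

h_eb = h_grid[np.argmax(m_hat)]  # empirical Bayes choice of h
```

Because every h reuses the same θ draws, the estimated curves are smooth in h; in this model the true maximizer of m_y(h) is h = y² − 1 = 3, which ĥ recovers up to grid and Monte Carlo error. The paper's contribution is the uniform (function-level) consistency and central limit theory for exactly such curve estimates.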
