Adaptation of the Tuning Parameter in General Bayesian Inference with Robust Divergence
We introduce a methodology for robust Bayesian estimation with robust divergences (e.g., the density power divergence or the γ-divergence) indexed by a single tuning parameter. It is well known that the posterior induced by a robust divergence yields estimators that are highly robust against outliers, provided the tuning parameter is chosen appropriately. In a Bayesian framework, a natural way to find the optimal tuning parameter would be to use the evidence (marginal likelihood). However, we numerically illustrate that the evidence induced by the density power divergence fails to select the optimal tuning parameter, since a robust divergence does not define a statistical model. To overcome this problem, we treat the exponential of a robust divergence as an unnormalized statistical model and estimate the tuning parameter by minimizing the Hyvärinen score. We also provide adaptive computational methods based on sequential Monte Carlo (SMC) samplers, which enable us to obtain the optimal tuning parameter and samples from the posterior distribution simultaneously. We illustrate the empirical performance of the proposed method through simulations and an application to real data.
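For reference, here is a minimal sketch of the two key quantities in generic notation (f(·;θ) denotes the model density, π(θ) the prior, and γ the tuning parameter; the density power divergence posterior below is the standard form from the robust-Bayes literature and is an assumption of this sketch, not quoted from the paper):

% General Bayesian posterior under the density power divergence loss,
% with tuning parameter \gamma > 0 (standard form, assumed):
\pi_\gamma(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,
  \exp\Biggl\{ \sum_{i=1}^{n} \Bigl( \tfrac{1}{\gamma}\, f(x_i;\theta)^{\gamma}
  \;-\; \tfrac{1}{1+\gamma} \int f(y;\theta)^{1+\gamma}\, dy \Bigr) \Biggr\}

% Hyvarinen score of an unnormalized density p_\gamma: it involves p_\gamma
% only through \nabla_x \log p_\gamma, so any normalizing constant drops out:
H(x;\, p_\gamma) \;=\; 2\,\Delta_x \log p_\gamma(x)
  \;+\; \bigl\lVert \nabla_x \log p_\gamma(x) \bigr\rVert^{2}

Because the Hyvärinen score depends only on the gradient and Laplacian of the log-density, it remains computable for the unnormalized model, which is exactly what the evidence-based approach lacks when the normalizing constant is intractable.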