Robust Bayesian inference in complex models with possibility theory
We propose a general solution to the problem of robust Bayesian inference in complex settings where outliers may be present. In practice, automating robust Bayesian analyses is important in many applications involving large and complex datasets. The proposed solution relies on a reformulation of Bayesian inference based on possibility theory, and leverages the observation that, in this context, the marginal likelihood of the data assesses the consistency between prior and likelihood rather than model fit. Our approach does not require additional parameters in its simplest form and has limited impact on computational complexity compared to non-robust solutions. The generality of our solution is demonstrated via applications to simulated and real data, including matrix estimation and change-point detection.
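To illustrate the role played by the possibilistic marginal likelihood as a measure of prior/likelihood consistency, the following is a minimal sketch, not the paper's algorithm. It assumes a grid approximation of the parameter space, a Gaussian-shaped prior possibility function, sup-normalised possibilistic conditioning, and a simple (hypothetical) thresholding rule that down-weights observations whose sup-based normalising constant signals conflict with the current posterior.

```python
import numpy as np
from scipy.stats import norm

# Grid of parameter values and a Gaussian-shaped prior possibility function,
# normalised so that its supremum equals 1 (a requirement of possibility theory).
theta = np.linspace(-10.0, 10.0, 2001)
prior_poss = np.exp(-0.5 * (theta / 2.0) ** 2)  # sup = 1 at theta = 0


def possibilistic_update(poss, y, sigma=1.0):
    """One possibilistic update step: multiply the current possibility
    function by the (sup-rescaled) likelihood and renormalise by the
    supremum. The returned normalising constant plays the role of the
    marginal likelihood and here measures prior/likelihood agreement
    rather than model fit."""
    lik = norm.pdf(y, loc=theta, scale=sigma)
    lik = lik / lik.max()            # likelihood rescaled to supremum 1
    unnorm = poss * lik
    marginal = unnorm.max()          # consistency score in (0, 1]
    return unnorm / marginal, marginal


# Hypothetical data stream with one gross outlier at y = 25.
observations = [0.3, -0.5, 0.8, 25.0, 0.1]

poss = prior_poss.copy()
for y in observations:
    candidate, consistency = possibilistic_update(poss, y)
    if consistency < 1e-3:
        # Low consistency flags a conflict between the current posterior
        # and the new observation; treat it as an outlier and skip it.
        print(f"y = {y}: consistency {consistency:.2e} -> down-weighted")
        continue
    poss = candidate

print("Posterior mode:", theta[np.argmax(poss)])
```

In this sketch the clean observations leave the consistency score close to 1, while the outlier drives it many orders of magnitude lower, so a crude threshold suffices to flag it; the paper's actual mechanism for exploiting this quantity should be taken from the full text.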