Guaranteed Accuracy of Semi-Modular Posteriors

01/26/2023
by David T. Frazier, et al.

Bayesian inference has widely acknowledged advantages in many problems, but it can be unreliable when the model is misspecified. Bayesian modular inference is concerned with complex models specified through a collection of coupled submodels, often called modules in the literature, and is useful when some of those submodels are misspecified. Cutting feedback is a widely used Bayesian modular inference method which ensures that information from suspect model components is not used in making inferences about parameters in correctly specified modules. However, it may be hard to decide in what circumstances this “cut posterior” is preferred to the exact posterior. When misspecification is not severe, cutting feedback may greatly increase the uncertainty in Bayesian posterior inference without substantially reducing estimation bias. This motivates semi-modular inference methods, which avoid the binary cut of cutting feedback approaches. In this work, we precisely formalize the bias-variance trade-off involved in semi-modular inference for the first time in the literature, using a framework of local model misspecification. We then implement a mixture-based semi-modular inference approach, demonstrating theoretically that it delivers inferences that are more accurate, in terms of a user-defined loss function, than either the cut or full posterior on its own. The new method is demonstrated in a number of applications.
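The mixture idea described above can be sketched in a toy conjugate setting. The example below is purely illustrative and is not from the paper: a reliable module observes z_i ~ N(phi, 1), while a suspect module observes w_j ~ N(phi + b, 1) with an unmodelled bias b. The cut posterior for phi uses only the reliable module; the full posterior pools both and inherits the bias; a mixture-based semi-modular posterior draws from the full posterior with some probability gamma and from the cut posterior otherwise. All names (gamma, smi_mixture_sample, the bias value) are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-module setup (not the paper's example):
# reliable module: z_i ~ N(phi, 1); suspect module: w_j ~ N(phi + b, 1),
# where the unmodelled bias b makes the second module misspecified.
phi_true, bias = 0.0, 0.5
z = rng.normal(phi_true, 1.0, size=50)
w = rng.normal(phi_true + bias, 1.0, size=200)

# Under a flat prior on phi, each Gaussian posterior is available in
# closed form: mean = sample mean, variance = 1 / n.
n_z, n_w = len(z), len(w)

# Cut posterior: phi is informed by the reliable module only.
m_cut, v_cut = z.mean(), 1.0 / n_z

# Full posterior: pools both modules; lower variance, but biased
# towards phi + b because the suspect data dominate.
m_full = (z.sum() + w.sum()) / (n_z + n_w)
v_full = 1.0 / (n_z + n_w)

def smi_mixture_sample(gamma, size=10_000):
    """Mixture-based semi-modular posterior: draw from the full posterior
    with probability gamma and from the cut posterior otherwise."""
    use_full = rng.random(size) < gamma
    return np.where(
        use_full,
        rng.normal(m_full, np.sqrt(v_full), size),
        rng.normal(m_cut, np.sqrt(v_cut), size),
    )

# gamma interpolates between pure cut (0) and pure full (1) inference,
# trading bias against variance.
for gamma in (0.0, 0.5, 1.0):
    d = smi_mixture_sample(gamma)
    print(f"gamma={gamma}: posterior mean={d.mean():.3f}, sd={d.std():.3f}")
```

In this toy setting the bias-variance trade-off is visible directly: the full posterior has the smallest variance but a mean pulled towards the bias, while intermediate gamma values trade the two off, which is the trade-off the paper formalizes under local misspecification.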
