Refining Amortized Posterior Approximations using Gradient-Based Summary Statistics

05/15/2023
by Rafael Orozco et al.

We present an iterative framework for improving amortized approximations of posterior distributions in Bayesian inverse problems, inspired by loop-unrolled gradient descent methods and theoretically grounded in maximally informative summary statistics. Amortized variational inference is limited by the expressive power of the chosen variational distribution and by the availability of training data in the form of joint samples of parameters and data, limitations that often lead to approximation errors such as the amortization gap. To address this issue, we propose an iterative framework that refines the current amortized posterior approximation at each step. Our approach alternates between two steps: (1) constructing a training dataset of pairs of summarized data residuals and parameters, where each summarized data residual is generated with a gradient-based summary statistic, and (2) training a conditional generative model – a normalizing flow in our examples – on this dataset to obtain a probabilistic update of the unknown parameter. This procedure iteratively refines the amortized posterior approximation without requiring extra training data. We validate our method in a controlled setting by applying it to a stylized problem, where we observe improved posterior approximations with each iteration. We further demonstrate that the method scales to realistically sized problems by applying it to transcranial ultrasound, a high-dimensional, nonlinear inverse problem governed by wave physics, where we observe improved posterior quality in the form of better image reconstructions with the posterior mean.
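
To make the alternating procedure concrete, here is a minimal sketch in PyTorch, assuming a stylized linear forward operator in place of the wave physics and a small conditional coupling flow as the generative model. Every name below (`A`, `summary`, `Coupling`, `RefinementFlow`, the layer sizes, and the training schedule) is a hypothetical stand-in chosen for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_params, n_data, n_train = 8, 12, 512

# Stylized linear inverse problem: y = A x + noise, with x ~ N(0, I).
A = torch.randn(n_data, n_params) / n_data ** 0.5
x_true = torch.randn(n_train, n_params)
y_obs = x_true @ A.T + 0.05 * torch.randn(n_train, n_data)

def summary(x_est, y):
    # Gradient-based summary statistic: gradient of 0.5*||A x - y||^2 at x_est.
    return (x_est @ A.T - y) @ A

class Coupling(nn.Module):
    # Affine coupling layer whose scale/shift are conditioned on the summary.
    def __init__(self, dim, ctx, flip):
        super().__init__()
        self.flip, self.d = flip, dim // 2
        self.net = nn.Sequential(nn.Linear(self.d + ctx, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.d)))
    def forward(self, x, c):  # parameters -> latent; also returns log|det J|
        if self.flip:
            x = x.flip(-1)
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(torch.cat([x1, c], -1)).chunk(2, -1)
        s = torch.tanh(s)
        z = torch.cat([x1, (x2 - t) * torch.exp(-s)], -1)
        return (z.flip(-1) if self.flip else z), -s.sum(-1)
    def inverse(self, z, c):  # latent -> parameters (used for sampling)
        if self.flip:
            z = z.flip(-1)
        z1, z2 = z[:, :self.d], z[:, self.d:]
        s, t = self.net(torch.cat([z1, c], -1)).chunk(2, -1)
        s = torch.tanh(s)
        x = torch.cat([z1, z2 * torch.exp(s) + t], -1)
        return x.flip(-1) if self.flip else x

class RefinementFlow(nn.Module):
    # Conditional normalizing flow over parameter updates, given the summary.
    def __init__(self, dim, ctx, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [Coupling(dim, ctx, flip=i % 2 == 1) for i in range(depth)])
    def nll(self, x, c):  # negative log-likelihood under a standard-normal base
        logdet = 0.0
        for layer in self.layers:
            x, ld = layer(x, c)
            logdet = logdet + ld
        return (0.5 * (x ** 2).sum(-1) - logdet).mean()
    def sample(self, c):
        z = torch.randn(c.shape[0], n_params)
        for layer in reversed(self.layers):
            z = layer.inverse(z, c)
        return z

# Iterative refinement: alternate between (1) building pairs of summarized
# residuals and parameter residuals at the current estimate, and (2) fitting
# a conditional flow to the probabilistic update.
x_est = torch.zeros_like(x_true)  # start from the prior mean
for k in range(3):
    context = summary(x_est, y_obs)
    flow = RefinementFlow(n_params, n_params)
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    for _ in range(500):
        opt.zero_grad()
        flow.nll(x_true - x_est, context).backward()
        opt.step()
    with torch.no_grad():  # probabilistic update: one posterior draw per sample
        x_est = x_est + flow.sample(context)
    err = ((x_est - x_true) ** 2).mean().item()
    print(f"iteration {k}: mean-squared error {err:.4f}")
```

Note that each refinement iteration reuses the same simulated pairs; only the summarized residuals change as the estimate improves, which reflects the abstract's point that the refinement needs no extra training data.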
