Optimal Scaling for the Proximal Langevin Algorithm in High Dimensions

04/21/2022
by Natesh S. Pillai, et al.

The Metropolis-adjusted Langevin algorithm (MALA) is a sampling algorithm that incorporates the gradient of the logarithm of the target density into its proposal distribution. In an earlier joint work <cit.>, the author extended the seminal work of <cit.> and showed that, in stationarity, MALA applied to an N-dimensional approximation of the target takes O(N^1/3) steps to explore its target measure. It was also shown in <cit.> that, as a consequence of the diffusion limit, the MALA algorithm is optimized at an average acceptance probability of 0.574. In <cit.>, Pereyra introduced the proximal MALA algorithm, in which the gradient of the log target density is replaced by a proximal function (mainly aimed at implementing MALA for non-differentiable target densities). In this paper, we show that for a wide class of twice differentiable target densities, proximal MALA enjoys the same optimal scaling as MALA in high dimensions and likewise has an optimal average acceptance probability of 0.574. The results of this paper thus give the following practically useful guideline: for smooth target densities where the gradient is expensive to compute when implementing MALA, users may replace the gradient with the corresponding proximal function (which can often be computed relatively cheaply via convex optimization) without losing any efficiency. This confirms some of the empirical observations made in <cit.>.
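To make the construction concrete, the following is a minimal sketch of a proximal MALA step in the style of Pereyra's algorithm: the proposal is Gaussian, centered at the proximal map of g = -log pi rather than at a gradient step, followed by a standard Metropolis accept/reject correction. The function names (`pmala`, `prox_g`) and the choice of a standard Gaussian target, for which the proximal map has a closed form, are illustrative assumptions, not part of the paper.

```python
import numpy as np

def pmala(g, prox_g, x0, step, n_iters, rng=None):
    """Sketch of proximal MALA: proposal y ~ N(prox_g(x, step/2), step * I),
    where g(x) = -log pi(x) up to an additive constant."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    d = len(x)
    lam = step / 2.0
    samples = np.empty((n_iters, d))
    n_acc = 0

    def log_q(y, x):
        # Log density of the proposal N(prox_g(x, lam), step * I), up to a constant.
        m = prox_g(x, lam)
        return -np.sum((y - m) ** 2) / (2.0 * step)

    for i in range(n_iters):
        y = prox_g(x, lam) + np.sqrt(step) * rng.standard_normal(d)
        # Metropolis-Hastings log acceptance ratio.
        log_alpha = (g(x) - g(y)) + log_q(x, y) - log_q(y, x)
        if np.log(rng.random()) < log_alpha:
            x = y
            n_acc += 1
        samples[i] = x
    return samples, n_acc / n_iters

# Illustrative target: standard Gaussian, g(x) = ||x||^2 / 2.
# Its proximal map, argmin_u { g(u) + ||u - x||^2 / (2*lam) }, is x / (1 + lam).
g = lambda x: 0.5 * np.sum(x ** 2)
prox_g = lambda x, lam: x / (1.0 + lam)

samples, acc = pmala(g, prox_g, np.zeros(10), step=0.5, n_iters=5000)
```

For a smooth target such as this one, the proximal map plays the role of an implicit gradient step, which is why replacing the gradient with the proximal function preserves the O(N^1/3) scaling and the 0.574 optimal acceptance rate discussed above.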
