A Discrete Bouncy Particle Sampler
Markov Chain Monte Carlo (MCMC) algorithms are statistical methods designed to sample from a given probability density π. Most MCMC methods rely on discrete-time Metropolis-Hastings Markov chains that are reversible with respect to the probability density π. Nevertheless, it is now understood that the use of non-reversible Markov chains can be beneficial in many contexts. In particular, the recently-proposed Bouncy Particle Sampler (BPS) leverages a continuous-time, non-reversible Markov process to compute expectations with respect to π. Although the BPS empirically shows state-of-the-art performance when used to explore certain probability densities, in many situations it is not straightforward to use; indeed, implementing the BPS typically requires one to be able to compute local upper bounds on the target density. This, for example, rules out the use of the BPS for the wide class of problems in which only evaluations of the log-density and its gradient are available. In this article, we propose the Discrete Bouncy Particle Sampler (DBPS), a general algorithm based upon a guided random walk and the delayed-rejection approach. In particular, we show that the BPS can be understood as a scaling limit of a special case of the DBPS. In contrast to the BPS, implementing the DBPS only requires point-wise evaluation of the target density and its gradient. Importantly, we also propose extensions of the basic DBPS for situations in which exact gradients of the target density are not available.
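To illustrate the delayed-rejection mechanism the abstract builds on, the following is a minimal sketch of a generic two-stage delayed-rejection Metropolis-Hastings step (in the style of Tierney and Mira) on a one-dimensional standard normal target. This is not the authors' DBPS: the target, step sizes, and Gaussian proposal shapes are assumptions chosen only to show how a rejected first-stage ("bold") proposal triggers a second-stage ("timid") proposal with a corrected acceptance probability.

```python
import numpy as np

def log_pi(x):
    # Log-density of the illustrative target (standard normal, constants dropped).
    return -0.5 * x * x

def log_q(center, point, scale):
    # Log-density of a Gaussian random-walk proposal (constants dropped where shared).
    return -0.5 * ((point - center) / scale) ** 2 - np.log(scale)

def dr_step(x, rng, s1=2.5, s2=0.5):
    """One two-stage delayed-rejection step: bold first proposal, timid fallback."""
    y1 = x + s1 * rng.standard_normal()
    a1 = min(1.0, np.exp(log_pi(y1) - log_pi(x)))
    if rng.random() < a1:
        return y1
    # First stage rejected: try a smaller second-stage move from x.
    # The symmetric second-stage proposal densities cancel; the first-stage
    # densities and rejection probabilities do not, and appear below.
    y2 = x + s2 * rng.standard_normal()
    a1_rev = min(1.0, np.exp(log_pi(y1) - log_pi(y2)))  # alpha1(y2 -> y1)
    if a1_rev >= 1.0:
        return x  # numerator vanishes, so the second stage rejects outright
    log_num = log_pi(y2) + log_q(y2, y1, s1) + np.log(1.0 - a1_rev)
    log_den = log_pi(x) + log_q(x, y1, s1) + np.log(1.0 - a1)
    if np.log(rng.random()) < log_num - log_den:
        return y2
    return x

rng = np.random.default_rng(0)
x, chain = 0.0, []
for _ in range(20000):
    x = dr_step(x, rng)
    chain.append(x)
chain = np.array(chain)
print(np.mean(chain), np.var(chain))  # should settle near 0 and 1
```

The design point of delayed rejection is visible here: a rejection of the large step is not wasted, since the sampler immediately attempts a smaller step whose acceptance probability is adjusted so the overall chain still targets π. The DBPS replaces this generic fallback with a gradient-guided "bounce" proposal, which is what removes the need for the local upper bounds the BPS requires.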