An Explicit Expansion of the Kullback-Leibler Divergence along its Fisher-Rao Gradient Flow
Let V_* : ℝ^d → ℝ be some (possibly non-convex) potential function, and consider the probability measure π ∝ e^{-V_*}. When π exhibits multiple modes, it is known that sampling techniques based on Wasserstein gradient flows of the Kullback-Leibler (KL) divergence (e.g. Langevin Monte Carlo) suffer from poor rates of convergence, as the dynamics are unable to easily traverse between modes. In stark contrast, the work of Lu et al. (2019; 2022) has shown that the gradient flow of the KL with respect to the Fisher-Rao (FR) geometry exhibits a convergence rate to π that is independent of the potential function. In this short note, we complement these existing results in the literature by providing an explicit expansion of KL(ρ_t^FR ‖ π) in terms of e^{-t}, where (ρ_t^FR)_{t≥0} is the FR gradient flow of the KL divergence. In turn, we are able to provide a clean asymptotic convergence rate, where the burn-in time is guaranteed to be finite. Our proof is based on observing a similarity between FR gradient flows and simulated annealing with linear scaling, together with facts about cumulant generating functions. We conclude with simple synthetic experiments that demonstrate that our theoretical findings are indeed tight. Based on our numerics, we conjecture that the asymptotic rates of convergence for Wasserstein-Fisher-Rao gradient flows may be related to this expansion in some cases.
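As a quick illustration of the expansion mentioned above (not a reproduction of the paper's experiments): the FR gradient flow of the KL admits the closed-form geometric-mixture solution ρ_t^FR ∝ ρ_0^{e^{-t}} π^{1-e^{-t}}, i.e. a simulated-annealing path whose exponent is scaled linearly in e^{-t}. Writing s = e^{-t} and letting Λ(s) = log E_π[e^{s·log(ρ_0/π)}] be the cumulant generating function of log(ρ_0/π) under π, one gets KL(ρ_t^FR ‖ π) = sΛ'(s) − Λ(s), whose leading term is (κ_2/2)e^{-2t} with κ_2 = Var_π(log(ρ_0/π)). The minimal sketch below checks this numerically on a finite state space; the target π, the initialization ρ_0, and the time grid are illustrative choices of ours.

```python
import numpy as np

# Minimal sketch on a finite state space. The FR gradient flow of
# KL(rho || pi) has the closed-form geometric-mixture solution
#   rho_t  ∝  rho_0^{exp(-t)} * pi^{1 - exp(-t)}.
# The target pi, the initialization rho_0, and the time grid are
# illustrative choices, not taken from the paper.

rng = np.random.default_rng(0)
n = 50
pi = rng.random(n);   pi /= pi.sum()       # arbitrary discrete target
rho0 = rng.random(n); rho0 /= rho0.sum()   # arbitrary initialization

def kl(p, q):
    """KL divergence between two discrete distributions on the same support."""
    return float(np.sum(p * np.log(p / q)))

h = np.log(rho0 / pi)               # log-density ratio at t = 0
kappa2 = pi @ h**2 - (pi @ h) ** 2  # second cumulant: Var_pi(h)

for t in [1.0, 3.0, 5.0, 7.0]:
    s = np.exp(-t)
    rho_t = rho0**s * pi**(1 - s)
    rho_t /= rho_t.sum()            # normalize the geometric mixture
    # Leading term of the expansion: KL(rho_t || pi) ≈ (kappa2 / 2) e^{-2t}
    print(f"t={t:.1f}  KL={kl(rho_t, pi):.3e}  leading={0.5 * kappa2 * s**2:.3e}")
```

For moderate and large t the printed KL values should track the (κ_2/2)e^{-2t} leading term, which is consistent with the clean asymptotic rate and finite burn-in claimed above.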