QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

06/01/2021
by Maxime Vono, et al.

Federated learning aims to conduct inference when data are decentralised and stored locally on several clients, under two main constraints: data ownership and communication overhead. In this paper, we address these issues under the Bayesian paradigm. To this end, we propose a novel Markov chain Monte Carlo algorithm, coined QLSD, built upon quantised versions of stochastic gradient Langevin dynamics. To improve performance in the big data regime, we introduce variance-reduced alternatives of our methodology, referred to as QLSD⋆ and QLSD++. We provide both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms and illustrate their benefits on several federated learning benchmarks.
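To make the idea concrete, the following is a minimal sketch of quantised Langevin dynamics on a toy Gaussian model: each client sends an unbiased, stochastically quantised minibatch gradient, and a server aggregates them and takes a Langevin step. The quantiser, step size, batch size, and model are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantise(v, rng):
    """QSGD-style stochastic 1-bit quantiser with a single scale.

    Each coordinate becomes sign(v_i) * ||v|| with probability |v_i| / ||v||,
    else 0, so the output is unbiased: E[quantise(v)] = v.
    """
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    mask = rng.random(v.shape) < np.abs(v) / norm
    return norm * np.sign(v) * mask

def client_grad(theta, data, batch=8, rng=rng):
    """Unbiased minibatch gradient of this client's share of -log posterior.

    Toy model: Gaussian likelihood N(theta, I) with a flat prior, so the
    per-sample gradient is (theta - x_i); rescale by n/batch for unbiasedness.
    """
    idx = rng.integers(len(data), size=batch)
    return (len(data) / batch) * np.sum(theta - data[idx], axis=0)

def qlsd_sketch(clients_data, n_iter=5000, step=1e-3, rng=rng):
    """Server loop: aggregate quantised client gradients, take a Langevin step."""
    d = clients_data[0].shape[1]
    theta = np.zeros(d)
    samples = np.empty((n_iter, d))
    for k in range(n_iter):
        # Each client uploads a quantised stochastic gradient (cheap to send).
        g = sum(quantise(client_grad(theta, data, rng=rng), rng)
                for data in clients_data)
        # Langevin update: gradient drift plus injected Gaussian noise.
        theta = theta - step * g + np.sqrt(2.0 * step) * rng.standard_normal(d)
        samples[k] = theta
    return samples

# Toy setup: 100 two-dimensional observations split across 5 clients.
data = rng.standard_normal((100, 2)) + np.array([1.0, -1.0])
clients = np.array_split(data, 5)
samples = qlsd_sketch(clients)
print(samples[1000:].mean(axis=0))  # close to the empirical mean of `data`
```

With a flat prior and unit-variance Gaussian likelihood the posterior mean is the data mean, so the post-burn-in sample average should land near it; the variance-reduced variants in the paper (QLSD⋆, QLSD++) would replace the raw minibatch gradients with control-variate corrected ones to tame the quantisation and subsampling noise.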
