Convergence Analysis of Distributed Inference with Vector-Valued Gaussian Belief Propagation
This paper considers inference over distributed linear Gaussian models using factor graphs and Gaussian belief propagation (BP). The distributed inference algorithm involves only local computation of the information matrix and the mean vector, together with message passing between neighbors. Under broad conditions, it is shown that the message information matrix converges doubly exponentially fast to a unique positive definite limit matrix for arbitrary positive semidefinite initialization. A necessary and sufficient condition is provided for the belief mean vector to converge to the optimal centralized estimator, assuming positive semidefinite initialization of the message information matrix. In particular, it is shown that Gaussian BP always converges when the underlying factor graph is the union of a forest and a single loop. The proposed convergence condition for distributed linear Gaussian models is shown to be strictly weaker than several existing convergence conditions and requirements, including the walk-summability condition based on Gaussian Markov random fields, and is applicable in large classes of scenarios.
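To make the message-passing structure concrete, the following is a minimal, hypothetical sketch (not the paper's exact vector-valued algorithm) of scalar Gaussian BP in information form on a pairwise model proportional to exp(-x'Jx/2 + h'x). Each message carries a scalar precision and potential; messages are initialized to zero, a positive semidefinite initialization as in the abstract. The matrix `J`, vector `h`, and function name `gaussian_bp` are illustrative choices, not from the paper.

```python
import numpy as np

def gaussian_bp(J, h, iters=50):
    """Scalar Gaussian BP on precision matrix J and potential h.

    Messages (i -> j) hold a precision mJ and potential mh,
    initialized to zero (positive semidefinite initialization).
    """
    n = len(h)
    nbrs = [[j for j in range(n) if j != i and J[i, j] != 0.0]
            for i in range(n)]
    edges = [(i, j) for i in range(n) for j in nbrs[i]]
    mJ = {e: 0.0 for e in edges}
    mh = {e: 0.0 for e in edges}
    for _ in range(iters):
        new_mJ, new_mh = {}, {}
        for (i, j) in edges:
            # Combine node i's local term with messages from all
            # neighbors of i except the recipient j.
            Jp = J[i, i] + sum(mJ[(k, i)] for k in nbrs[i] if k != j)
            hp = h[i] + sum(mh[(k, i)] for k in nbrs[i] if k != j)
            new_mJ[(i, j)] = -J[i, j] ** 2 / Jp
            new_mh[(i, j)] = -J[i, j] * hp / Jp
        mJ, mh = new_mJ, new_mh
    # Beliefs: local term plus all incoming messages; mean = bh / bJ.
    bJ = np.array([J[i, i] + sum(mJ[(k, i)] for k in nbrs[i])
                   for i in range(n)])
    bh = np.array([h[i] + sum(mh[(k, i)] for k in nbrs[i])
                   for i in range(n)])
    return bh / bJ

# On a tree (a 3-node chain), BP recovers the exact marginal means,
# i.e. the centralized estimator J^{-1} h.
J = np.array([[3., -1., 0.], [-1., 3., -1.], [0., -1., 3.]])
h = np.array([1., 0., 1.])
print(gaussian_bp(J, h))        # ≈ [0.4286, 0.2857, 0.4286]
print(np.linalg.solve(J, h))    # exact centralized estimate
```

Because the example graph is a tree, convergence to the exact means is guaranteed; on loopy graphs, convergence is exactly the kind of question the paper's conditions address.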