Relating counting complexity to non-uniform probability measures

11/24/2017
by Eleni Bakali, et al.

A standard method for designing randomized algorithms that approximately count the number of solutions of a problem in #P is to construct a rapidly mixing Markov chain converging to the uniform distribution over the set of solutions. This construction is not always an easy task, and it is conjectured that it is not always possible. We investigate other ways of using Markov chains in relation to counting, and ask whether algorithmic counting can be related to other, non-uniform, probability distributions over the set we want to count. In this paper we present a family of probability distributions over the set of solutions of a problem in TotP and show how they relate to counting: counting is equivalent to computing their normalizing factors. We analyse the complexity of sampling from these distributions, of computing their normalizing factors, and of computing the size of their support; the latter is also equivalent to counting. We also show how these tasks relate to each other, and to other problems in complexity theory. In short, we prove that sampling and approximating the normalizing factor are easy. We do this by constructing a family of rapidly mixing Markov chains for which these distributions are stationary. At the same time, we show that exactly computing the normalizing factor is TotP-hard. However, the reduction proving the latter is not approximation preserving, which is consistent with the fact that TotP-hard problems are inapproximable if NP ≠ RP. The problem we consider is Size-of-Subtree, a TotP-complete problem under parsimonious reductions; therefore the results presented here extend to any problem in TotP. TotP is the Karp-closure of the class of self-reducible problems in #P whose decision version is in P.
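To make the setting concrete, the following is a minimal sketch of the general idea of sampling from a non-uniform distribution over the nodes of a tree (as in Size-of-Subtree) with a Metropolis-style Markov chain. The weight function w(v) = 2^(-depth(v)) and the small hard-coded tree are illustrative assumptions chosen for this sketch, not the construction used in the paper; the point is only that the chain's stationary distribution is pi(v) = w(v)/Z, where the normalizing factor Z plays the role discussed in the abstract.

```python
# Illustrative sketch (assumed example, not the paper's construction):
# a Metropolis chain over the nodes of a small explicit tree whose
# stationary distribution is pi(v) = w(v) / Z with w(v) = 2**(-depth(v)).
import random
from collections import defaultdict

# node -> list of children; node 0 is the root
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: [5], 5: []}
parent = {c: p for p, cs in children.items() for c in cs}

# depths computed by a simple DFS from the root
depth = {}
def compute_depths(v=0, d=0):
    depth[v] = d
    for c in children[v]:
        compute_depths(c, d + 1)
compute_depths()

def weight(v):
    # unnormalized mass; the target distribution is pi(v) = weight(v) / Z
    return 2.0 ** (-depth[v])

def neighbors(v):
    nbrs = list(children[v])
    if v in parent:
        nbrs.append(parent[v])
    return nbrs

def metropolis_step(v):
    # propose a uniformly random neighbor, accept with the
    # Metropolis-Hastings ratio (corrects for unequal neighbor counts)
    u = random.choice(neighbors(v))
    ratio = (weight(u) / len(neighbors(u))) / (weight(v) / len(neighbors(v)))
    return u if random.random() < ratio else v

def sample(steps=200, start=0):
    v = start
    for _ in range(steps):
        v = metropolis_step(v)
    return v

# compare empirical frequencies against the exact pi(v) = weight(v) / Z
Z_exact = sum(weight(v) for v in children)
counts = defaultdict(int)
runs = 2000
for _ in range(runs):
    counts[sample()] += 1
for v in sorted(children):
    print(v, round(counts[v] / runs, 3), round(weight(v) / Z_exact, 3))
```

On this toy instance Z can be computed directly; the abstract's point is that for the actual TotP-complete problem, sampling and approximating Z are tractable via rapid mixing, while computing Z exactly is TotP-hard.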
