Multi-Agent Fully Decentralized Value Function Learning with Linear Convergence Rates

10/17/2018
by Lucas Cassano, et al.

This work develops a fully decentralized multi-agent algorithm for policy evaluation. The proposed scheme applies to two distinct scenarios. In the first, a collection of agents hold distinct datasets that were gathered in separate instances of the same environment by following different behavior policies (none of which is required to explore the full state space), and the agents collaborate to evaluate a common target policy. The network approach enables efficient exploration of the state space and allows all agents to converge to the optimal solution even when no individual agent could converge on its own without cooperation. The second scenario is that of multi-agent games, in which the state is global and the rewards are local; here, agents collaborate to estimate the value function of a target team policy. The proposed algorithm combines off-policy learning, eligibility traces, and linear function approximation. It is of the variance-reduced kind and achieves linear convergence with O(1) memory requirements. We provide a theorem that guarantees the linear convergence of the algorithm and present simulations that illustrate the effectiveness of the method.
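To make the named ingredients concrete, here is a minimal sketch of decentralized policy evaluation that combines off-policy TD(lambda) updates (linear function approximation, per-decision importance ratios, eligibility traces) with a diffusion-style combination step over a ring network. This is an illustrative simplification, not the paper's variance-reduced algorithm: the network size, feature dimension, combination matrix, and the synthetic transition stream are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic setup (illustrative assumptions, not from the paper) ---
N = 5                          # number of agents in the network
d = 8                          # dimension of the linear features
gamma, lam, alpha = 0.95, 0.7, 0.05

# Doubly stochastic combination matrix for a ring topology; any connected
# network with a doubly stochastic combination matrix would serve.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25

w = np.zeros((N, d))           # per-agent weight vectors
e = np.zeros((N, d))           # per-agent eligibility traces

def sample_transition():
    """Stand-in for one agent's locally observed off-policy transition:
    features phi(s), reward r, next features phi(s'), and the importance
    ratio rho = pi(a|s) / b_k(a|s) under that agent's behavior policy."""
    phi = rng.normal(size=d) / np.sqrt(d)
    phi_next = rng.normal(size=d) / np.sqrt(d)
    r = phi.sum()                       # arbitrary synthetic reward
    rho = rng.uniform(0.5, 1.5)         # arbitrary synthetic ratio
    return phi, r, phi_next, rho

for t in range(2000):
    # 1) Adaptation: each agent runs an off-policy TD(lambda) step
    #    on its own data stream.
    psi = np.copy(w)
    for k in range(N):
        phi, r, phi_next, rho = sample_transition()
        e[k] = rho * (gamma * lam * e[k] + phi)        # eligibility trace
        delta = r + gamma * phi_next @ w[k] - phi @ w[k]
        psi[k] = w[k] + alpha * delta * e[k]
    # 2) Combination: each agent averages with its neighbors; this step is
    #    what lets agents that explore different parts of the state space
    #    agree on a single value-function estimate.
    w = A @ psi

print("disagreement across agents:", np.linalg.norm(w - w.mean(axis=0)))
```

The combination step drives the per-agent weight vectors toward consensus, which is how cooperation compensates for behavior policies that individually fail to cover the state space; the variance-reduction machinery that gives the paper its linear rate is omitted here for brevity.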
