Robustness and sample complexity of model-based MARL for general-sum Markov games

10/05/2021 · by Jayakumar Subramanian, et al.
Multi-agent reinforcement learning (MARL) is often modeled using the framework of Markov games (also called stochastic games or dynamic games). Most of the existing literature on MARL concentrates on zero-sum Markov games, and those results do not extend to general-sum Markov games. It is known that the best-response dynamics in general-sum Markov games are not a contraction. Therefore, different equilibria in general-sum Markov games can have different values. Moreover, the Q-function is not sufficient to completely characterize the equilibrium. Given these challenges, model-based learning is an attractive approach for MARL in general-sum Markov games. In this paper, we investigate the fundamental question of sample complexity for model-based MARL algorithms in general-sum Markov games and show that Õ(|𝒮| |𝒜| (1-γ)^{-2} α^{-2}) samples are sufficient to obtain an α-approximate Markov perfect equilibrium with high probability, where 𝒮 is the state space, 𝒜 is the joint action space of all players, γ is the discount factor, and the Õ(·) notation hides logarithmic terms. To obtain these results, we study the robustness of Markov perfect equilibria to model approximations. We show that a Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game, and we provide explicit bounds on the approximation error. We illustrate the results via a numerical example.
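The model-based (plug-in) approach underlying such sample-complexity bounds can be sketched as follows: draw samples from a generative model for each state/joint-action pair, form the empirical transition kernel, and then compute a Markov perfect equilibrium of the estimated game; by the robustness result, that equilibrium is an approximate equilibrium of the true game. The sketch below covers only the model-estimation step; `estimate_model`, `sampler`, and the toy two-state dynamics are illustrative assumptions, not artifacts of the paper.

```python
import numpy as np

def estimate_model(sampler, n_states, n_joint_actions, n_samples, rng):
    """Plug-in estimator: for every (state, joint-action) pair, draw
    n_samples next states from a generative model and use their
    empirical frequencies as P_hat(. | s, a)."""
    P_hat = np.zeros((n_states, n_joint_actions, n_states))
    for s in range(n_states):
        for a in range(n_joint_actions):
            for _ in range(n_samples):
                s_next = sampler(s, a, rng)
                P_hat[s, a, s_next] += 1.0
    return P_hat / n_samples

# Toy 2-state, 2-joint-action transition kernel (illustrative only).
P_true = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions out of state 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions out of state 1
])

def sampler(s, a, rng):
    # Stands in for one call to the generative model of the game.
    return rng.choice(2, p=P_true[s, a])

rng = np.random.default_rng(0)
P_hat = estimate_model(sampler, 2, 2, n_samples=5000, rng=rng)
max_err = np.abs(P_hat - P_true).max()  # shrinks as n_samples grows
```

An equilibrium solver would then be run on `P_hat` (with the known rewards) in place of the unknown true model; the paper's bounds quantify how many samples per pair suffice for the resulting equilibrium to be α-approximate in the original game.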
