Towards Global Optimality in Cooperative MARL with Sequential Transformation

07/12/2022
by Jianing Ye et al.

Policy learning in multi-agent reinforcement learning (MARL) is challenging due to the exponential growth of the joint state-action space with the number of agents. To achieve higher scalability, the paradigm of centralized training with decentralized execution (CTDE), combined with factorized policy structures, is broadly adopted in MARL. However, we observe that existing CTDE algorithms in cooperative MARL fail to achieve optimality even in simple matrix games. To understand this phenomenon, we introduce the framework of Generalized Multi-Agent Actor-Critic with Policy Factorization (GPF-MAC), which characterizes the learning of factorized joint policies, i.e., policies in which each agent's action depends only on its own observation-action history. We show that most popular CTDE MARL algorithms are special instances of GPF-MAC and may become stuck in a suboptimal joint policy. To address this issue, we present a novel transformation framework that reformulates a multi-agent MDP as a special "single-agent" MDP with a sequential structure, allowing off-the-shelf single-agent reinforcement learning (SARL) algorithms to be applied directly to the corresponding multi-agent tasks. This transformation carries the optimality guarantees of SARL algorithms over to cooperative MARL. To instantiate this framework, we propose Transformed PPO (T-PPO), which provably performs optimal policy learning in finite multi-agent MDPs and significantly outperforms existing methods on a large set of cooperative multi-agent tasks.
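To illustrate the idea of the sequential transformation, the sketch below wraps a cooperative multi-agent environment so that agents choose actions one at a time, with each agent conditioning on the actions already chosen in the current step; the joint action is executed only after all agents have acted. This is a minimal illustration, not the authors' implementation: the wrapper name, the gym-like reset/step interface, and the n_agents attribute are all assumptions made for the example.

```python
# Minimal sketch (assumed interface, not the paper's code): turning a
# cooperative multi-agent environment into a "single-agent" environment
# with a sequential structure, so an off-the-shelf SARL algorithm such as
# PPO can be trained on it.

class SequentialWrapper:
    """Exposes one decision point per agent per environment step."""

    def __init__(self, multi_agent_env):
        self.env = multi_agent_env
        self.n_agents = multi_agent_env.n_agents  # assumed attribute

    def reset(self):
        self.obs = self.env.reset()      # assumed: list of per-agent observations
        self.turn = 0                    # index of the agent acting next
        self.pending_actions = []        # actions chosen so far this step
        return self._augmented_obs()

    def step(self, action):
        self.pending_actions.append(action)
        if len(self.pending_actions) < self.n_agents:
            # Intermediate decision point: no environment transition yet,
            # so no reward is emitted and the episode does not terminate.
            self.turn += 1
            return self._augmented_obs(), 0.0, False, {}
        # All agents have acted: apply the joint action to the real environment.
        joint_action = list(self.pending_actions)
        self.obs, reward, done, info = self.env.step(joint_action)
        self.turn = 0
        self.pending_actions = []
        return self._augmented_obs(), reward, done, info

    def _augmented_obs(self):
        # The "single-agent" observation: the current agent's observation,
        # its index, and the actions of the agents that already acted.
        return {
            "agent_id": self.turn,
            "obs": self.obs[self.turn],
            "prev_actions": list(self.pending_actions),
        }
```

Because later agents observe the actions of earlier agents within the same step, the wrapped problem avoids the coordination failure of fully factorized policies, which is the property the transformation framework exploits to retain single-agent optimality guarantees.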
