Self-Play PSRO: Toward Optimal Populations in Two-Player Zero-Sum Games

07/13/2022
by Stephen McAleer, et al.

In competitive two-agent environments, deep reinforcement learning (RL) methods based on the Double Oracle (DO) algorithm, such as Policy Space Response Oracles (PSRO) and Anytime PSRO (APSRO), iteratively add RL best response policies to a population. Eventually, an optimal mixture of these population policies approximates a Nash equilibrium. However, these methods may need to add every deterministic policy before converging. In this work, we introduce Self-Play PSRO (SP-PSRO), a method that adds an approximately optimal stochastic policy to the population in each iteration. Instead of adding only deterministic best responses to the opponent's least-exploitable population mixture, SP-PSRO also learns an approximately optimal stochastic policy and adds it to the population. As a result, SP-PSRO empirically tends to converge much faster than APSRO, and in many games it converges in just a few iterations.
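To make the loop concrete, below is a minimal, self-contained Python sketch of the SP-PSRO population loop on a two-player zero-sum matrix game. Everything here is an illustrative assumption rather than the authors' implementation: fictitious play stands in both for the restricted-game solver and for the RL procedure that learns the "approximately optimal stochastic policy", and all function names are hypothetical.

```python
import numpy as np

def solve_zero_sum(M, iters=2000):
    """Approximate a Nash equilibrium of the zero-sum matrix game M
    (row player maximizes x^T M y) via fictitious play."""
    n, m = M.shape
    row_counts, col_counts = np.zeros(n), np.zeros(m)
    row_counts[0] = col_counts[0] = 1.0
    for _ in range(iters):
        x = row_counts / row_counts.sum()
        y = col_counts / col_counts.sum()
        row_counts[np.argmax(M @ y)] += 1.0   # row best response to y
        col_counts[np.argmin(x @ M)] += 1.0   # column best response to x
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

def sp_psro(A, iterations=3):
    """Sketch of the SP-PSRO population loop on a matrix game A."""
    n, m = A.shape
    row_pop = [np.eye(n)[0]]   # populations of (possibly mixed) strategies
    col_pop = [np.eye(m)[0]]
    for _ in range(iterations):
        # Restricted game: payoffs between all current population members.
        M = np.array([[r @ A @ c for c in col_pop] for r in row_pop])
        mx, my = solve_zero_sum(M)
        # Opponents' least-exploitable mixtures over underlying actions.
        y_mix = sum(w * c for w, c in zip(my, col_pop))
        x_mix = sum(w * r for w, r in zip(mx, row_pop))
        # Standard DO/PSRO/APSRO step: deterministic best responses
        # to the opponent's mixture.
        row_br = np.eye(n)[np.argmax(A @ y_mix)]
        col_br = np.eye(m)[np.argmin(x_mix @ A)]
        # SP-PSRO addition: also learn and add an approximately optimal
        # stochastic policy. A short fictitious-play run on the full game
        # stands in here for the paper's RL-based procedure.
        x_sp, y_sp = solve_zero_sum(A, iters=500)
        row_pop += [row_br, x_sp]
        col_pop += [col_br, y_sp]
    return row_pop, col_pop

# Example: rock-paper-scissors. The stochastic addition is already close
# to the uniform Nash mixture, so the population is near-optimal after
# one iteration.
RPS = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
row_pop, col_pop = sp_psro(RPS, iterations=2)
```

Rock-paper-scissors illustrates the intuition behind the speedup: a purely deterministic oracle must add all three actions before any mixture of the population can reach the uniform equilibrium, whereas the stochastic addition can supply a near-uniform policy in the very first iteration.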
