Learning Stationary Nash Equilibrium Policies in n-Player Stochastic Games with Independent Chains via Dual Mirror Descent

01/28/2022
by S. Rasoul Etesami, et al.

We consider a subclass of n-player stochastic games in which each player has its own internal state/action space, and players are coupled only through their payoff functions. The players' internal chains are assumed to evolve according to independent transition probabilities. Moreover, players observe only realizations of their payoffs, not the payoff functions themselves, and cannot observe each other's states or actions. Under some assumptions on the structure of the payoff functions, we develop efficient learning algorithms based on dual averaging and dual mirror descent that provably converge, almost surely or in expectation, to the set of ϵ-Nash equilibrium policies. In particular, we derive upper bounds on the number of iterations, polynomial in the game parameters, needed to reach an ϵ-Nash equilibrium policy. Alongside Markov potential games and linear-quadratic stochastic games, this work provides another subclass of n-player stochastic games that provably admits polynomial-time learning algorithms for finding ϵ-Nash equilibrium policies.
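As a rough illustration of the learning dynamics described above, the sketch below shows a single player's dual-averaging loop with an entropic mirror map over finitely many internal states and actions. This is a minimal sketch under stated assumptions, not the paper's algorithm: the names `payoff_oracle`, `eta`, and `num_iters` are hypothetical, and the oracle stands in for the noisy payoff realizations each player receives.

```python
import numpy as np

def softmax(y):
    """Entropic mirror map: sends a dual vector y to a point on the simplex."""
    z = np.exp(y - y.max())
    return z / z.sum()

def dual_averaging_policy(num_states, num_actions, payoff_oracle,
                          num_iters=10_000, eta=0.05, seed=None):
    """Hypothetical single-player dual-averaging loop (illustration only).

    payoff_oracle(state, action_dist) is assumed to return a noisy estimate
    of the payoff gradient with respect to the action distribution used at
    `state` -- a stand-in for the realized-payoff feedback in the paper.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros((num_states, num_actions))   # accumulated dual (gradient) sums
    for _ in range(num_iters):
        # Mirror step: stationary policy induced by the current duals.
        policy = np.apply_along_axis(softmax, 1, eta * y)
        s = rng.integers(num_states)           # state visited by the player's chain
        g_hat = payoff_oracle(s, policy[s])    # bandit-style feedback at that state
        y[s] += g_hat                          # dual averaging: sum the gradients
    return np.apply_along_axis(softmax, 1, eta * y)
```

In the setting of the paper, every player would run such a loop simultaneously and independently, and the step size and iteration count would be chosen according to the derived polynomial bounds so that the joint stationary policies approach an ϵ-Nash equilibrium.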
