Stability of Gradient Learning Dynamics in Continuous Games: Vector Action Spaces

11/07/2020
by Benjamin J. Chasnov et al.

Towards characterizing the optimization landscape of games, this paper analyzes the stability of gradient-based dynamics near fixed points of two-player continuous games. We introduce the quadratic numerical range as a method to characterize the spectrum of game dynamics and prove the robustness of equilibria to variations in learning rates. By decomposing the game Jacobian into symmetric and skew-symmetric components, we assess the contributions of the game vector field's potential and rotational components to the stability of differential Nash equilibria. Our results show that in zero-sum games, all Nash equilibria are stable and robust; in potential games, all stable fixed points are Nash equilibria. For general-sum games, we provide a sufficient condition for instability. We conclude with a numerical example in which learning with timescale separation results in faster convergence.
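
As a rough illustration of the objects named in the abstract (not the authors' code), the sketch below sets up a hypothetical two-player quadratic game, forms the simultaneous-gradient vector field and its game Jacobian, splits the Jacobian into symmetric (potential) and skew-symmetric (rotational) parts, and checks a spectral stability condition under unequal learning rates (timescale separation). The cost matrices, learning rates, and function names are illustrative assumptions.

```python
import jax.numpy as jnp
from jax import grad, jacfwd

# Hypothetical quadratic costs for players 1 and 2 over vector actions x, y.
A = jnp.array([[2.0, 0.5], [0.5, 1.5]])
B = jnp.array([[1.0, -0.3], [0.2, 0.8]])
C = jnp.array([[1.8, 0.1], [0.1, 2.2]])

def f1(x, y):
    return 0.5 * x @ A @ x + x @ B @ y

def f2(x, y):
    # General-sum example; f2 = -f1 would give a zero-sum game instead.
    return 0.5 * y @ C @ y - x @ B @ y

def omega(z):
    # Simultaneous-gradient vector field: each player's gradient of its own cost.
    x, y = z[:2], z[2:]
    return jnp.concatenate([grad(f1, argnums=0)(x, y),
                            grad(f2, argnums=1)(x, y)])

z0 = jnp.zeros(4)        # candidate fixed point (omega(z0) = 0 for these costs)
J = jacfwd(omega)(z0)    # game Jacobian D(omega) at z0
S = 0.5 * (J + J.T)      # symmetric part: the "potential" component
K = 0.5 * (J - J.T)      # skew-symmetric part: the "rotational" component

# Gradient learning with rates (gamma1, gamma2); gamma2 << gamma1 gives timescale
# separation. For the flow z' = -Lambda * omega(z), the fixed point is locally
# exponentially stable when every eigenvalue of Lambda @ J has positive real part.
gamma1, gamma2 = 0.1, 0.01
Lam = jnp.diag(jnp.array([gamma1, gamma1, gamma2, gamma2]))
eigs = jnp.linalg.eigvals(Lam @ J)
print("spectrum of Lambda J:", eigs)
print("locally stable:", bool(jnp.all(eigs.real > 0)))
```

In this toy example the symmetric part of the Jacobian is positive definite, so the fixed point remains stable for any positive choice of the two learning rates, which is the kind of robustness to learning-rate variation the abstract describes for zero-sum games.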
