On Last-Iterate Convergence Beyond Zero-Sum Games

03/22/2022
by Ioannis Anagnostides, et al.

Most existing results on last-iterate convergence of learning dynamics are limited to two-player zero-sum games, and only apply under rigid assumptions about the dynamics the players follow. In this paper we provide new results and techniques that apply to broader families of games and learning dynamics. First, we use a regret-based analysis to show that in a class of games that includes constant-sum polymatrix and strategically zero-sum games, dynamics such as optimistic mirror descent (OMD) have bounded second-order path lengths, a property that holds even when players employ different algorithms and prediction mechanisms. This enables us to obtain O(1/√T) rates of convergence to Nash equilibria and optimal O(1) regret bounds. Our analysis also reveals a surprising property: OMD either reaches arbitrarily close to a Nash equilibrium, or it outperforms the robust price of anarchy in efficiency. Moreover, for potential games we establish convergence to an ϵ-equilibrium after O(1/ϵ^2) iterations for mirror descent under a broad class of regularizers, as well as optimal O(1) regret bounds for OMD variants. Our framework also extends to near-potential games, and unifies known analyses for distributed learning in Fisher's market model. Finally, we analyze the convergence, efficiency, and robustness of optimistic gradient descent (OGD) in general-sum continuous games.
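To make the optimistic dynamics above concrete, here is a minimal sketch of optimistic gradient descent on an unconstrained bilinear zero-sum game, the simplest setting where the last iterate of plain gradient descent/ascent cycles but the optimistic variant converges. This is an illustration, not the paper's exact setup: the payoff matrix A, the step size eta, the starting points, and the iteration count are all illustrative assumptions. The key idea is the one-step gradient prediction 2·g_t − g_{t−1} used in place of the current gradient.

```python
import numpy as np

# Illustrative sketch: optimistic gradient descent (OGD) on the unconstrained
# bilinear zero-sum game min_x max_y x^T A y. The matrix A, step size eta,
# starting points, and iteration count are assumptions for this demo.

A = np.array([[1.0, 2.0],
              [-2.0, 1.0]])

eta = 0.05
x = np.array([1.0, 1.0])    # minimizing player
y = np.array([-1.0, 1.0])   # maximizing player
gx_prev = A @ y             # gradients at the starting point
gy_prev = A.T @ x

for t in range(2000):
    gx = A @ y              # grad_x of x^T A y at the current iterate
    gy = A.T @ x            # grad_y of x^T A y at the current iterate
    # Optimistic step: use the prediction 2*g_t - g_{t-1} in place of g_t.
    x = x - eta * (2 * gx - gx_prev)
    y = y + eta * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy

# The unique equilibrium of this bilinear game is (0, 0); the last iterate
# should approach it, whereas plain gradient descent/ascent does not.
print(np.linalg.norm(x), np.linalg.norm(y))
```

The same prediction idea underlies optimistic mirror descent: the Euclidean gradient step is replaced by a proximal step under a regularizer, with the previous gradient serving as the default prediction of the next one.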
