Learning Parameterized Families of Games

02/25/2023
by Madelyn Gatchel, et al.

Nearly all simulation-based games have environment parameters that affect incentives in the interaction but are not explicitly incorporated into the game model. To understand the impact of these parameters on strategic incentives, typical game-theoretic analysis involves selecting a small set of representative values, and constructing and analyzing separate game models for each value. We introduce a novel technique to learn a single model representing a family of closely related games that differ in the number of symmetric players or other ordinal environment parameters. Prior work trains a multi-headed neural network to output mixed-strategy deviation payoffs, which can be used to compute symmetric ε-Nash equilibria. We extend this work by making environment parameters into input dimensions of the regressor, enabling a single model to learn patterns which generalize across the parameter space. For continuous and discrete parameters, our results show that these generalized models outperform existing approaches, achieving better accuracy with far less data. This technique makes thorough analysis of the parameter space more tractable, and promotes analyses that capture relationships between parameters and incentives.
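To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a parameterized deviation-payoff regressor as described in the abstract: a multi-headed network that takes a symmetric mixed strategy together with environment parameters as input and outputs one deviation payoff per strategy. The class name, layer sizes, and example values are illustrative assumptions, and PyTorch is assumed as the framework.

```python
import torch
import torch.nn as nn

class ParameterizedDeviationPayoffNet(nn.Module):
    """Sketch of a regressor over a family of symmetric games.

    Input:  a symmetric mixed strategy plus environment parameters
            (e.g., number of players or another ordinal parameter).
    Output: one head per strategy, giving the expected payoff for
            deviating to that strategy while others play the mixture.
    """

    def __init__(self, num_strategies: int, num_env_params: int, hidden: int = 64):
        super().__init__()
        # Shared trunk over the concatenated [mixture ; environment parameters].
        self.trunk = nn.Sequential(
            nn.Linear(num_strategies + num_env_params, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        # Multi-headed output: one deviation payoff per strategy.
        self.heads = nn.Linear(hidden, num_strategies)

    def forward(self, mixture: torch.Tensor, env_params: torch.Tensor) -> torch.Tensor:
        x = torch.cat([mixture, env_params], dim=-1)
        return self.heads(self.trunk(x))

# Hypothetical usage: 4 strategies, one environment parameter (e.g., player count).
model = ParameterizedDeviationPayoffNet(num_strategies=4, num_env_params=1)
mixture = torch.full((1, 4), 0.25)        # uniform symmetric mixture
env = torch.tensor([[8.0]])               # e.g., an 8-player instance of the family
deviation_payoffs = model(mixture, env)   # shape (1, 4)

# Regret of the mixture = best deviation payoff minus expected payoff of the mixture;
# a mixture with regret below epsilon is a symmetric epsilon-Nash equilibrium candidate.
regret = deviation_payoffs.max() - (mixture * deviation_payoffs).sum()
print(regret.item())
```

Because the environment parameters enter as ordinary input dimensions, a single trained model of this form can be queried at any parameter value in the family, rather than fitting a separate game model per value.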
