Probably Approximately Correct Nash Equilibrium Learning
We consider a multi-agent noncooperative game in which the agents' objective functions are affected by uncertainty. Following a data-driven paradigm, we represent uncertainty by means of scenarios and seek a robust Nash equilibrium solution. We first show how to overcome the differentiability issues that arise due to the introduction of scenarios and compute a Nash equilibrium solution in a decentralized manner. We then treat the Nash equilibrium computation problem within the realm of probably approximately correct (PAC) learning. Building upon recent developments in scenario-based optimization, we accompany the computed Nash equilibrium with a priori and a posteriori probabilistic robustness certificates, providing confidence that the computed equilibrium remains unaffected (in probabilistic terms) when a new uncertainty realization is encountered. For a wide class of games, we also show that the so-called compression set - which is at the core of scenario approach theory - can be obtained directly as a byproduct of the proposed solution methodology. We demonstrate the efficacy of the proposed approach on an electric vehicle charging control problem.
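To illustrate the flavor of such a posteriori certificates, the sketch below evaluates a standard bound from scenario optimization theory that maps the cardinality of a compression set to a violation level. The specific formula (splitting the confidence parameter beta evenly across cardinalities) is a common choice in the scenario approach literature and is shown here only as an assumed example; the exact certificate used in the paper may differ.

```python
from math import comb

def a_posteriori_epsilon(N: int, k: int, beta: float) -> float:
    """Violation level eps(k) for a compression set of cardinality k
    out of N scenarios, holding with confidence at least 1 - beta.

    Assumed illustrative choice from scenario theory:
        eps(k) = 1 - (beta / (N * C(N, k)))**(1 / (N - k)),  k < N,
    with eps(N) = 1. Not necessarily the paper's exact certificate.
    """
    if k >= N:
        return 1.0
    return 1.0 - (beta / (N * comb(N, k))) ** (1.0 / (N - k))

# Hypothetical example: 1000 uncertainty scenarios, a compression set of
# 20 samples, confidence 1 - 1e-6. The computed equilibrium then remains
# robust to a new uncertainty realization with probability at least 1 - eps.
print(a_posteriori_epsilon(N=1000, k=20, beta=1e-6))  # roughly 0.11
```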