Networked Communication for Decentralised Agents in Mean-Field Games

06/05/2023
by Patrick Benjamin, et al.

We introduce networked communication to the mean-field game framework. In particular, we look at oracle-free settings where N decentralised agents learn along a single, non-episodic evolution path of the empirical system, as we may encounter in a wide range of real-world many-agent cooperation problems. We provide theoretical evidence that, by spreading improved policies through the network in a decentralised fashion, our sample guarantees are upper-bounded by those of the purely independent-learning case. Moreover, we show empirically that our networked method can give faster convergence in practice, while removing the reliance on a centralised controller. We also demonstrate that our decentralised communication architecture brings significant benefits over both the centralised and independent alternatives in terms of robustness and flexibility to unexpected learning failures and changes in population size. For comparison with our new architecture, we modify recent algorithms for the centralised and independent cases to make their practical convergence feasible. In doing so, we contribute the first empirical demonstrations of these algorithms in our setting of N agents learning along a single system evolution with only local state observability, and we additionally show the empirical benefits of our new, networked approach.
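The abstract does not spell out the communication protocol, but the core idea (improved policies diffusing through a network of decentralised agents) can be illustrated with a minimal sketch. The following Python example is an assumption-laden toy, not the paper's algorithm: it supposes each agent holds a policy and a locally estimated return, and in each communication round adopts the best-performing policy among itself and its graph neighbours. All names (diffusion_round, neighbours, the ring topology) are illustrative.

```python
import random

def diffusion_round(policies, returns, neighbours):
    """One decentralised policy-exchange round (illustrative sketch).

    policies:   list of per-agent policies (any payload)
    returns:    list of per-agent locally estimated returns
    neighbours: adjacency list; neighbours[i] are the agents i can hear from
    """
    new_policies = list(policies)
    new_returns = list(returns)
    for i, nbrs in enumerate(neighbours):
        # Candidate pool: the agent itself plus its graph neighbours.
        candidates = [i] + list(nbrs)
        # Adopt the policy with the highest locally estimated return,
        # so better policies spread hop by hop without a central controller.
        best = max(candidates, key=lambda j: returns[j])
        new_policies[i] = policies[best]
        new_returns[i] = returns[best]
    return new_policies, new_returns

if __name__ == "__main__":
    random.seed(0)
    n = 6
    # Ring communication graph: agent i talks to agents i-1 and i+1.
    neighbours = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    policies = [f"pi_{i}" for i in range(n)]
    returns = [random.random() for _ in range(n)]
    for step in range(3):
        policies, returns = diffusion_round(policies, returns, neighbours)
        print(step, policies)
```

Under this synchronous update, the best policy in any connected component reaches every agent in at most diameter-many rounds, which is consistent with the abstract's claim that networked spreading can only tighten (upper-bound) the sample guarantees of purely independent learning.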

