Maximum Entropy Inverse Reinforcement Learning for Mean Field Games

04/29/2021
by Yang Chen, et al.

Mean field games (MFG) make reinforcement learning (RL) tractable in large-scale multi-agent systems (MAS) by reducing the interactions among agents to those between a representative individual agent and the mass of the population. However, RL agents are notoriously prone to unexpected behaviours due to reward mis-specification, and this problem is exacerbated as the scale of the MAS grows. Inverse reinforcement learning (IRL) provides a framework for automatically acquiring proper reward functions from expert demonstrations. Extending IRL to MFG, however, is challenging due to the complex notion of mean-field-type equilibria and the coupling between agent-level and population-level dynamics. To this end, we propose mean field inverse reinforcement learning (MFIRL), a novel model-free IRL framework for MFG. We derive the algorithm from a new equilibrium concept that incorporates entropy regularization, combined with the maximum entropy IRL framework. Experimental results on simulated environments demonstrate that MFIRL is sample efficient and recovers the ground-truth reward functions more accurately than the state-of-the-art method.
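To make the maximum entropy machinery referenced in the abstract concrete, below is a minimal, hypothetical sketch of a maximum entropy IRL gradient step for a representative agent whose reward features may depend on a fixed mean field. All names here (soft_value_iteration, phi, mu, the tabular shapes) are illustrative assumptions for exposition, not the paper's actual MFIRL implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_value_iteration(reward, P, gamma=0.95, iters=200):
    """Entropy-regularized (soft) value iteration for one representative agent.

    reward: (S, A) reward table, computed under a fixed mean field
    P:      (S, A, S) transition probabilities
    Returns the soft-optimal stochastic policy of shape (S, A).
    """
    S, A = reward.shape
    V = np.zeros(S)
    Q = np.zeros((S, A))
    for _ in range(iters):
        Q = reward + gamma * P @ V            # (S, A) soft Q-values
        V = np.log(np.exp(Q).sum(axis=1))     # log-sum-exp (soft max) backup
    return softmax(Q, axis=1)

def maxent_irl_step(theta, phi, P, mu, expert_features, lr=0.1):
    """One maximum entropy IRL gradient step on linear reward weights theta.

    phi(mu): callable returning (S, A, d) features that may depend on the
             population distribution (mean field) mu of shape (S,)
    Gradient = expert feature expectations - learner feature expectations.
    """
    feats = phi(mu)                           # (S, A, d)
    reward = feats @ theta                    # (S, A) linear reward
    policy = soft_value_iteration(reward, P)
    # Learner feature expectations; here crudely approximated by weighting
    # with the current mean field mu instead of a full occupancy measure.
    learner_features = np.einsum('s,sa,sad->d', mu, policy, feats)
    return theta + lr * (expert_features - learner_features)
```

A usage loop would alternate this reward update with re-solving the entropy-regularized forward problem and, in the mean-field setting, updating mu to remain consistent with the induced policy; that fixed-point coupling is the part the paper's equilibrium concept addresses.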
