On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC)

09/09/2021
by Washim Uddin Mondal, et al.

Mean field control (MFC) is an effective way to mitigate the curse of dimensionality in cooperative multi-agent reinforcement learning (MARL) problems. This work considers a collection of N_pop heterogeneous agents that can be segregated into K classes such that the k-th class contains N_k homogeneous agents. We aim to prove approximation guarantees for the MARL problem of this heterogeneous system via its corresponding MFC problem. We consider three scenarios where the rewards and transition dynamics of all agents are taken to be functions of (1) the joint state and action distributions across all classes, (2) the individual distribution of each class, and (3) the marginal distribution of the entire population, respectively. We show that, in these cases, the K-class MARL problem can be approximated by MFC with errors e_1 = 𝒪([√(|𝒳|) + √(|𝒰|)]/N_pop ∑_k √(N_k)), e_2 = 𝒪([√(|𝒳|) + √(|𝒰|)] ∑_k 1/√(N_k)), and e_3 = 𝒪([√(|𝒳|) + √(|𝒰|)][(A/N_pop) ∑_k √(N_k) + B/√(N_pop)]), respectively, where A and B are constants and |𝒳| and |𝒰| are the sizes of the state and action spaces of each agent. Finally, we design a Natural Policy Gradient (NPG) based algorithm that, in each of the three cases above, converges to the optimal MARL policy within an 𝒪(e_j) error with a sample complexity of 𝒪(e_j^{-3}), j ∈ {1, 2, 3}.
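As a rough illustration of how the three bounds scale with the class sizes, the following minimal Python sketch evaluates e_1, e_2, and e_3 for given state/action space sizes and a list of class sizes N_k. The hidden 𝒪(·) factors and the constants A and B from the abstract are unspecified in the source, so they are set to 1.0 here purely as placeholder assumptions.

```python
import math

def error_bounds(X_size, U_size, class_sizes, A=1.0, B=1.0):
    """Evaluate the three MFC approximation error bounds up to the
    hidden O(.) factor. A and B are the unspecified constants from the
    third bound; they default to 1.0 purely for illustration."""
    N_pop = sum(class_sizes)
    space_term = math.sqrt(X_size) + math.sqrt(U_size)

    # e_1: rewards/dynamics depend on the joint distribution across classes.
    e1 = space_term / N_pop * sum(math.sqrt(Nk) for Nk in class_sizes)

    # e_2: rewards/dynamics depend on each class's individual distribution.
    e2 = space_term * sum(1.0 / math.sqrt(Nk) for Nk in class_sizes)

    # e_3: rewards/dynamics depend on the marginal distribution of the
    # entire population.
    e3 = space_term * (A / N_pop * sum(math.sqrt(Nk) for Nk in class_sizes)
                       + B / math.sqrt(N_pop))
    return e1, e2, e3

# Example: K = 3 classes of sizes 100, 400, 900 with |X| = 10, |U| = 5.
print(error_bounds(10, 5, [100, 400, 900]))
```

Note how e_1 and e_3 shrink as the population grows while the class count stays fixed, whereas e_2 is dominated by the smallest class, consistent with the 1/√(N_k) terms in the second bound.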
