Multi-agent Reinforcement Learning Improvement in a Dynamic Environment Using Knowledge Transfer

Cooperative multi-agent systems are widely used in a variety of areas. Interaction between agents brings several benefits, including reduced operating costs, high scalability, and easier parallel processing. These systems pave the way for handling large-scale, unknown, and dynamic environments. However, learning in such environments remains a prominent challenge across applications: the size of the search space inflates learning time, cooperation among agents can be inadequate, and agents' decisions may lack proper coordination. Moreover, reinforcement learning algorithms may suffer from slow convergence in these problems. In this paper, a communication framework based on knowledge transfer is introduced to address these challenges in the herding problem with a large state space. To handle the convergence issues, knowledge transfer is utilized, which can significantly increase the efficiency of reinforcement learning algorithms. Coordination among the agents is carried out through a head agent within each group of agents and an overall coordinator agent. The results demonstrate that this framework indeed enhances the speed of learning and reduces convergence time.
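
The abstract describes using knowledge transfer to speed up convergence of reinforcement learning in a large state space. The sketch below illustrates one common form of that general idea, transferring tabular Q-values learned on a small source task to initialize learning on a larger target task; it is only an assumption-laden illustration (the grid-world tasks, the state-scaling mapping, and the single-agent setting are hypothetical) and not the multi-agent herding framework proposed in the paper.

```python
import random
from collections import defaultdict

# Illustrative sketch: Q-value transfer between two grid-world tasks.
# Not the paper's framework; tasks, mapping, and parameters are assumptions.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def q_learning(grid_size, goal, episodes, q=None, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning on a grid; `q` may be pre-filled by transfer."""
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(500):  # cap episode length
            if s == goal:
                break
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(s, i)])
            dx, dy = ACTIONS[a]
            s2 = (min(max(s[0] + dx, 0), grid_size - 1),
                  min(max(s[1] + dy, 0), grid_size - 1))
            r = 1.0 if s2 == goal else -0.01
            best_next = max(q[(s2, i)] for i in range(len(ACTIONS)))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def transfer(q_source, src_size, tgt_size):
    """Initialise the target Q-table by scaling source states onto the target grid."""
    q_target = defaultdict(float)
    scale = (tgt_size - 1) / (src_size - 1)
    for ((x, y), a), v in q_source.items():
        q_target[((round(x * scale), round(y * scale)), a)] = v
    return q_target

# Learn on a small source task, then reuse its knowledge on a larger target task.
q_src = q_learning(grid_size=4, goal=(3, 3), episodes=200)
q_init = transfer(q_src, src_size=4, tgt_size=8)
q_tgt = q_learning(grid_size=8, goal=(7, 7), episodes=200, q=q_init)
print("Transferred Q-table entries:", len(q_init))
```

Starting the target task from the transferred Q-table gives the learner an informed value estimate from the first episode, which is the mechanism by which knowledge transfer shortens convergence time relative to learning from scratch.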
