To be a fast adaptive learner: using game history to defeat opponents

05/17/2021
by   Guangzhao Cheng, et al.

In many real-world games, such as traders repeatedly bargaining with customers, it is very hard for a single AI trader to make good deals with various customers within a few turns, since customers may adopt different strategies even if the strategies they choose are quite simple. In this paper, we model this problem as fast adaptive learning in finitely repeated games. We believe that past game history plays a vital role in such a learning procedure, and we therefore propose a novel framework (named F3) that fuses past and current game history with an Opponent Action Estimator (OAE) module, which uses past game history to estimate the opponent's future behavior. The experiments show that an agent trained with F3 can quickly defeat opponents who adopt unknown new strategies. The F3-trained agent obtains more reward in a fixed number of turns than agents trained by deep reinforcement learning. Further studies show that the OAE module in F3 contains meta-knowledge that can even be transferred across different games.
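The abstract does not specify how the OAE module is implemented, but the idea of estimating an opponent's next action from their past game history can be illustrated with a minimal frequency-count baseline. The class name, action labels, and prediction rule below are all illustrative assumptions, not the paper's learned estimator:

```python
from collections import Counter

class OpponentActionEstimator:
    """Toy opponent-action estimator: predicts the opponent's next move
    as their most frequent past action. A hand-coded frequency baseline,
    not the learned OAE module described in the paper."""

    def __init__(self, actions):
        self.actions = list(actions)
        self.history = []            # opponent's past actions, in order

    def observe(self, opponent_action):
        # Record one observed opponent action from the game history.
        self.history.append(opponent_action)

    def predict(self):
        # With no history yet, fall back to the first known action.
        if not self.history:
            return self.actions[0]
        # Otherwise predict the most frequently observed action.
        return Counter(self.history).most_common(1)[0][0]

# Usage: after seeing two defections and one cooperation,
# the estimator predicts another defection.
oae = OpponentActionEstimator(["cooperate", "defect"])
for move in ["defect", "defect", "cooperate"]:
    oae.observe(move)
print(oae.predict())  # -> defect
```

In the paper's setting such an estimate would then be fused with the current game state to choose the agent's response; here the estimator is kept standalone for clarity.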
