Assessing the Potential of Classical Q-learning in General Game Playing

10/14/2018
by Hui Wang et al.

After the recent groundbreaking results of AlphaGo and AlphaZero, we have seen strong interest in deep reinforcement learning and artificial general intelligence (AGI) in game playing. However, deep learning is resource-intensive and its theory is not yet well developed. For small games, simple classical table-based Q-learning may still be the algorithm of choice. General Game Playing (GGP) provides a good testbed for reinforcement learning research towards AGI. Q-learning is one of the canonical reinforcement learning methods and was applied to GGP by Banerjee & Stone (IJCAI 2007). In this paper we implement Q-learning in GGP for three small-board games (Tic-Tac-Toe, Connect Four, Hex) [source code: https://github.com/wh1992v/ggp-rl], to allow comparison with Banerjee et al. We find that Q-learning converges to a high win rate in GGP. For the ϵ-greedy strategy, we propose a first enhancement, the dynamic ϵ algorithm. In addition, inspired by Gelly & Silver (ICML 2007), we combine online search (Monte Carlo Search) with offline learning and propose QM-learning for GGP. Both enhancements improve the performance of classical Q-learning. In this work, GGP allows us to show that, when augmented with appropriate enhancements, classical table-based Q-learning can perform well in small games.
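To make the idea of table-based Q-learning with a dynamic ϵ concrete, here is a minimal sketch. It is not the paper's GGP implementation: the toy chain environment and the linear decay schedule for ϵ are assumptions for illustration only, and the paper's QM-learning would additionally replace random exploration moves with a small Monte Carlo search.

```python
import random
from collections import defaultdict

# Minimal sketch: tabular Q-learning with a dynamic (decaying) epsilon.
# The toy "chain" environment below is a stand-in for a small game; it is
# NOT the GGP setup from the paper, and the linear decay rule is only an
# assumed example of a dynamic-epsilon schedule.

N_STATES = 6          # states 0..5; reaching state 5 ends the episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA = 0.1           # learning rate
GAMMA = 0.9           # discount factor

def step(state, action):
    """Apply an action; reward 1 only when the goal state is reached."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

def epsilon(episode, total_episodes):
    """Dynamic epsilon: explore heavily early on, exploit more later."""
    return max(0.05, 1.0 - episode / total_episodes)

def train(total_episodes=2000):
    q = defaultdict(float)                      # Q-table keyed by (state, action)
    for episode in range(total_episodes):
        state, done = 0, False
        eps = epsilon(episode, total_episodes)
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.choice(ACTIONS)   # QM-learning would use a Monte Carlo search here
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # classical Q-learning update
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = next_state
    return q

if __name__ == "__main__":
    q_table = train()
    greedy = [max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)]
    print("Greedy action per state:", greedy)
```

In this sketch the exploration rate starts near 1 and decays linearly with the episode count, so early episodes gather broad experience while later episodes refine the greedy policy; the actual schedule used in the paper may differ.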
