Hashing Over Predicted Future Frames for Informed Exploration of Deep Reinforcement Learning

07/03/2017
by   Haiyan Yin, et al.

In reinforcement learning (RL) tasks, an efficient exploration mechanism should encourage the agent to take actions that lead to less frequently visited states that may yield higher cumulative future return. However, both reasoning about the future and estimating the visitation frequency of states are non-trivial tasks, especially in deep RL domains, where a state is represented by high-dimensional image frames. In this paper, we propose a novel informed exploration framework for deep RL tasks, in which an RL agent can predict future transitions and estimate the visitation frequency of the predicted future frames in a meaningful manner. To this end, we train a deep prediction model to generate future frames given a state-action pair, and a convolutional autoencoder to produce deep features for hashing the frames seen so far. In addition, to use the counts derived from seen frames to estimate the frequency of predicted frames, we tackle the challenge of matching the hash codes of predicted future frames to those of their corresponding seen frames. In this way, we derive a reliable metric for evaluating the novelty of the future direction pointed to by each action, and thus inform the agent to explore the least frequent one. We use Atari 2600 games as the testing environment and demonstrate that the proposed framework achieves significant performance gains over a state-of-the-art informed exploration approach in most of the domains.
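To make the mechanism concrete, below is a minimal sketch of hash-based counting combined with informed action selection. It uses SimHash-style sign hashing over feature vectors, which is one common way to discretize learned features for counting; the paper's own matching scheme between predicted and seen frames may differ. `predict_next_frame` and `encode` are hypothetical stand-ins for the paper's trained prediction model and convolutional autoencoder encoder.

```python
import numpy as np
from collections import defaultdict


class HashCounter:
    """SimHash-style counter over feature vectors (a sketch; the paper
    counts hash codes of autoencoder features of seen frames)."""

    def __init__(self, feature_dim, code_bits=32, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random projection matrix for locality-sensitive sign hashing:
        # nearby features tend to share the same binary code.
        self.A = rng.standard_normal((code_bits, feature_dim))
        self.counts = defaultdict(int)

    def code(self, feature):
        # Binary code: the sign of each random projection, packed as bytes
        # so it can serve as a dictionary key.
        bits = (self.A @ feature) > 0
        return bits.tobytes()

    def update(self, feature):
        # Called on every frame the agent actually observes.
        self.counts[self.code(feature)] += 1

    def count(self, feature):
        return self.counts[self.code(feature)]


def informed_action(state, actions, predict_next_frame, encode, counter):
    """Pick the action whose predicted next frame falls in the least
    frequent hash bucket (ties broken uniformly at random).

    `predict_next_frame(state, a)` and `encode(frame)` are hypothetical
    stand-ins for the learned prediction model and autoencoder encoder.
    """
    counts = np.array(
        [counter.count(encode(predict_next_frame(state, a))) for a in actions]
    )
    least = np.flatnonzero(counts == counts.min())
    return actions[np.random.choice(least)]
```

In use, the agent would call `counter.update(encode(frame))` on each observed frame, and invoke `informed_action` whenever the exploration policy is triggered, steering it toward the action whose predicted outcome has been seen least often.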
