Speeding up reinforcement learning by combining attention and agency features

12/29/2019
by Berkay Demirel, et al.

When playing video games, we immediately detect which entity we control and center our attention on it, focusing the learning and reducing its dimensionality. Reinforcement Learning (RL) can handle large state spaces, including states derived from pixel images in Atari games, but learning is slow and relies on a brute-force mapping from the global state to the action values (Q-function). Its performance is therefore severely affected by the dimensionality of the state, and what is learned cannot be transferred to other games or to other parts of the same game. We propose different transformations of the input state that combine attention and agency-detection mechanisms, which, to our knowledge, have been addressed separately in RL but not together. We propose and benchmark several architectures, including both global and local agency-centered versions of the state, as well as summaries of the surroundings. Results suggest that even a redundant global-local state network can learn faster than the global state alone. Summarized versions of the state look promising for achieving learning that is independent of the input size.
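As a rough illustration of what an agency-centered local state and a summary of the surroundings could look like, the sketch below crops a fixed-size window around the controlled entity and block-averages the full frame into a small, input-size-independent grid. The function name, window sizes, and agent-position input are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def agent_centered_views(frame, agent_xy, crop_size=21, summary_size=7):
    """Build a local agent-centered crop plus a coarse global summary.
    All names and sizes here are hypothetical choices for illustration."""
    h, w = frame.shape[:2]
    half = crop_size // 2
    # Pad with edge values so the crop stays valid near the borders.
    padded = np.pad(frame, ((half, half), (half, half)), mode="edge")
    y, x = agent_xy
    # In the padded frame the agent sits at (y + half, x + half),
    # so this slice is a crop_size x crop_size window centered on it.
    local = padded[y:y + crop_size, x:x + crop_size]
    # Coarse "summary of the surroundings": block-average the whole
    # frame down to a fixed grid, independent of the input resolution.
    ys = np.linspace(0, h, summary_size + 1, dtype=int)
    xs = np.linspace(0, w, summary_size + 1, dtype=int)
    summary = np.array([[frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                         for j in range(summary_size)]
                        for i in range(summary_size)])
    return local, summary

# Example: an 84x84 grayscale frame with the controlled entity at (40, 30).
frame = np.random.rand(84, 84).astype(np.float32)
local, summary = agent_centered_views(frame, (40, 30))
print(local.shape, summary.shape)  # (21, 21) (7, 7)
```

Both views (and, in the redundant variants, the original global frame as well) could then be fed to a Q-network, with the local crop carrying the fine detail and the summary carrying the context.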
