The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning

12/12/2012
by Sarah Finney et al.

Most reinforcement learning methods operate on propositional representations of the world state. Such representations are often intractably large and generalize poorly. Deictic representations are widely believed to be a viable alternative: they promise generalization while remaining compatible with existing reinforcement-learning methods. Yet few experiments on learning with deictic representations have been reported in the literature. In this paper we explore the effectiveness of two forms of deictic representation and a naïve propositional representation in a simple blocks-world domain. We find, empirically, that the deictic representations actually worsen learning performance relative to the propositional baseline. We conclude with a discussion of possible causes of these results and strategies for more effective learning in domains with objects.
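
To make the contrast concrete, here is a minimal sketch of how the two encodings differ in a toy three-block world. The code is illustrative only and not taken from the paper; the world model (the "on" dictionary), the feature names, and the notion of a single movable "focus" marker are all assumptions made for the example.

    from itertools import permutations

    blocks = ["a", "b", "c"]
    on = {"a": "b", "b": "table", "c": "table"}  # each block -> what it rests on

    def propositional_state():
        # One boolean feature on(x, y) per ordered pair of blocks, so the
        # feature vector grows as n*(n-1) with the number of blocks n.
        return {f"on({x},{y})": on[x] == y for x, y in permutations(blocks, 2)}

    def deictic_state(focus):
        # Features are defined relative to a movable "focus" marker, so the
        # observation is constant-size no matter how many blocks exist.
        below = on[focus]  # "table" or the block directly underneath
        return {
            "on-table": below == "table",
            "clear": focus not in on.values(),  # nothing rests on the focus
            "below-on-table": below != "table" and on[below] == "table",
        }

    print(len(propositional_state()))  # 6 features for 3 blocks; 90 for 10
    print(deictic_state("a"))          # always 3 features, whatever n is

The sketch shows the trade-off the abstract alludes to: the propositional vector grows quadratically with the number of blocks, while the deictic view stays constant-size. The price is that distinct world states can map to the same deictic observation, and this kind of perceptual aliasing is one plausible source of the degraded learning performance reported above.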
