Learning Real-World Robot Policies by Dreaming

05/20/2018
by AJ Piergiovanni, et al.

Learning to control robots directly from images is a primary challenge in robotics. However, many existing reinforcement learning approaches require iteratively obtaining millions of samples to learn a policy, which can take significant time. In this paper, we focus on the problem of learning real-world robot action policies solely from a few random off-policy samples. We learn a realistic dreaming model that can emulate samples equivalent to a sequence of images from the actual environment, and make the agent learn action policies by interacting with the dreaming model rather than the real world. We experimentally confirm that our dreaming model can learn realistic policies that transfer to the real world.
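To make the "dream then act" idea concrete, the sketch below shows one minimal way such a pipeline could be organized: a learned dynamics model is fit to a small set of random off-policy transitions, and a policy is then optimized entirely against imagined rollouts of that model. This is a simplified illustration under assumed design choices; the class names, network sizes, and the assumed differentiable reward function are illustrative and not the architecture described in the paper.

```python
# Sketch of model-based "dreaming": fit a dynamics model on off-policy data,
# then train a policy purely inside the learned model (no real-robot rollouts).
import torch
import torch.nn as nn

LATENT, ACTION = 32, 4  # assumed latent-state and action dimensions

class DreamingModel(nn.Module):
    """Encodes observations to a latent state and predicts latent dynamics."""
    def __init__(self, obs_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, LATENT))
        self.dynamics = nn.Sequential(nn.Linear(LATENT + ACTION, 128), nn.ReLU(),
                                      nn.Linear(128, LATENT))
        self.decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

    def forward(self, obs, act):
        z = self.encoder(obs)
        z_next = self.dynamics(torch.cat([z, act], dim=-1))
        return self.decoder(z_next), z_next

def train_dream_model(model, transitions, epochs=100, lr=1e-3):
    """Fit the model to (obs, action, next_obs) tuples collected off-policy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for obs, act, next_obs in transitions:
            pred_next, _ = model(obs, act)
            loss = nn.functional.mse_loss(pred_next, next_obs)
            opt.zero_grad()
            loss.backward()
            opt.step()

def train_policy_in_dream(model, policy, reward_fn, start_obs,
                          horizon=15, steps=1000, lr=1e-3):
    """Optimize the policy on imagined rollouts; the real robot is never queried."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(steps):
        z = model.encoder(start_obs)
        total_reward = torch.zeros(())
        for _ in range(horizon):
            act = policy(z)
            z = model.dynamics(torch.cat([z, act], dim=-1))
            total_reward = total_reward + reward_fn(z)  # reward_fn is an assumed differentiable stand-in
        loss = -total_reward
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this sketch the policy gradient flows backward through the learned dynamics, so all policy improvement happens "in the dream"; only the initial off-policy transitions come from the physical robot.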
