Transferring Agent Behaviors from Videos via Motion GANs

11/21/2017
by Ashley D. Edwards, et al.

A major bottleneck in developing general reinforcement learning agents is specifying rewards that yield desirable behaviors across varied circumstances. We introduce a general mechanism for automatically specifying meaningful behaviors from raw pixels. In particular, we train a generative adversarial network to produce short sub-goals represented as motion templates. We demonstrate that this approach generates visually meaningful behaviors in unknown environments with novel agents, and we describe how these motions can be used to train reinforcement learning agents.
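The motion templates mentioned in the abstract are commonly built as motion-history images, in which recently moving pixels are bright and older motion fades toward zero. A minimal sketch of that idea, assuming binary frame differences as the motion signal (the function name, decay rule, and parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def update_motion_template(history, frame_diff, tau=8.0, threshold=0.1):
    """Update a motion-history image: pixels where motion is detected are
    set to tau; all other pixels decay by 1 toward zero."""
    moving = frame_diff > threshold
    return np.where(moving, tau, np.maximum(history - 1.0, 0.0))

# Build a template from a short synthetic clip: a bright dot moving right.
frames = np.zeros((4, 8, 8))
for t in range(4):
    frames[t, 4, t + 2] = 1.0

template = np.zeros((8, 8))
for prev, curr in zip(frames[:-1], frames[1:]):
    template = update_motion_template(template, np.abs(curr - prev))
```

After the loop, the dot's most recent positions hold values near `tau` while earlier positions have decayed, so a single image encodes the direction and recency of motion, which is what makes such templates usable as short visual sub-goals.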

