Learning Deep Parameterized Skills from Demonstration for Re-targetable Visuomotor Control

10/23/2019
by Jonathan Chang, et al.

Robots need to learn skills that not only generalize across similar problems but can also be directed to a specific goal. Previous methods either train a new skill for every different goal or fail to infer the specific target from visual data when multiple goals are present. We introduce an end-to-end method that represents targetable visuomotor skills as a goal-parameterized neural network policy. By training on an informative subset of available goals with the associated target parameters, we learn a policy that can zero-shot generalize to previously unseen goals. We evaluate our method in a representative 2D simulation of a button-grid and on both button-pressing and peg-insertion tasks on two different physical arms. We demonstrate that our model, trained on 33% of the possible goals, generalizes to more than 90% of the targets in the scene in both simulation and robot experiments. We also successfully learn a mapping from target pixel coordinates to a robot policy that completes a specified goal.
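
The abstract does not give implementation details, but the core idea of a goal-parameterized policy can be illustrated with a minimal sketch: an image encoder produces visual features, the goal parameters (e.g. target pixel coordinates) are concatenated with those features, and a shared head outputs the action. The code below is an assumption-laden PyTorch sketch, not the authors' architecture; the class name GoalParameterizedPolicy, layer sizes, and a behavioral-cloning loss on demonstrations are all illustrative choices.

```python
import torch
import torch.nn as nn

class GoalParameterizedPolicy(nn.Module):
    """Sketch of a policy conditioned on both an image and goal parameters."""

    def __init__(self, goal_dim: int = 2, action_dim: int = 7):
        super().__init__()
        # Image encoder: 64x64 RGB observation -> flat feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * 6 * 6  # spatial size after the strided convs on 64x64 input
        # Policy head conditioned on image features and goal parameters together.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, image: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        features = self.encoder(image)
        return self.head(torch.cat([features, goal], dim=-1))

# One behavioral-cloning style update on demonstrations for a subset of goals
# (random tensors stand in for camera images, target pixel coordinates, and
# demonstrated actions).
policy = GoalParameterizedPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
images = torch.randn(8, 3, 64, 64)
goals = torch.rand(8, 2)
expert_actions = torch.randn(8, 7)
loss = nn.functional.mse_loss(policy(images, goals), expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the goal enters only as an input to the policy, retargeting the skill to a new goal amounts to changing the goal vector at test time rather than retraining, which is what enables zero-shot generalization to goals outside the training subset.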
