Learning Multi-Step Robotic Tasks from Observation

06/29/2018
by Wonjoon Goo, et al.

Due to burdensome data requirements, learning from demonstration often falls short of its promise to allow users to quickly and naturally program robots. Demonstrations are inherently ambiguous and incomplete, making correct generalization to unseen situations difficult without a large number of demonstrations collected under varying conditions. By contrast, humans are often able to learn complex tasks from a single demonstration (typically observations without action labels) by leveraging context learned over a lifetime. Inspired by this capability, we aim to enable robots to perform one-shot learning of multi-step tasks from observation by leveraging auxiliary video data as context. Our primary contribution is a novel action localization algorithm that identifies clips of activities in auxiliary videos matching the activities in a user-segmented demonstration, providing additional examples of each. While this auxiliary video data could be used in multiple ways for learning, we focus on an inverse reinforcement learning setting. We empirically show that, across several tasks, robots learn multi-step tasks more effectively from videos with localized actions than from unsegmented videos.
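The abstract does not spell out the localization algorithm itself, but the core idea of pairing each demonstrated activity with a matching clip from auxiliary video can be illustrated as a similarity search over video windows. The sketch below is not the authors' method: the `embed_frames` feature extractor, the cosine-similarity scoring, and the fixed window length are all assumptions introduced purely for illustration.

```python
# Illustrative sketch only (not the paper's algorithm): a naive sliding-window
# matcher that pairs each user-segmented demonstration activity with its most
# similar clip in an auxiliary video.
import numpy as np


def embed_frames(frames):
    """Placeholder frame embedding: flatten and L2-normalize each frame.
    A real system would use a learned visual feature extractor."""
    flat = np.asarray(frames, dtype=np.float64).reshape(len(frames), -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    return flat / norms


def clip_descriptor(embeddings, start, end):
    """Summarize a clip by the mean of its frame embeddings."""
    return embeddings[start:end].mean(axis=0)


def localize_segments(demo_frames, demo_boundaries, aux_frames, window, stride=1):
    """For each demonstration segment (start, end), find the auxiliary-video
    window whose descriptor has the highest cosine similarity to it."""
    demo_emb = embed_frames(demo_frames)
    aux_emb = embed_frames(aux_frames)
    matches = []
    for (s, e) in demo_boundaries:
        target = clip_descriptor(demo_emb, s, e)
        best_score, best_span = -np.inf, None
        for start in range(0, len(aux_emb) - window + 1, stride):
            cand = clip_descriptor(aux_emb, start, start + window)
            score = float(np.dot(target, cand) /
                          (np.linalg.norm(target) * np.linalg.norm(cand) + 1e-8))
            if score > best_score:
                best_score, best_span = score, (start, start + window)
        matches.append((best_span, best_score))
    return matches
```

In the setting the abstract describes, the matched clips would then serve as additional examples of each demonstrated step, for instance as extra trajectories when learning a reward function via inverse reinforcement learning.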
