Imitating Human Search Strategies for Assembly

09/13/2018
by Dennis Ehlers, et al.

We present a Learning from Demonstration method for teaching robots search strategies imitated from humans, for scenarios where, for example, assembly tasks fail due to position uncertainty. The method uses human demonstrations to learn both a state-invariant dynamics model and an exploration distribution that captures the search area covered by the demonstrator. We present two alternative algorithms for computing a search trajectory from the exploration distribution: one based on sampling and another based on deterministic ergodic control. We augment the search trajectory with forces learnt through the dynamics model, enabling search in both the force and position domains. An impedance controller with superposed forces reproduces the learnt strategy. We experimentally evaluate the method on a KUKA LWR4+ performing a 2D peg-in-hole task and a 3D electricity-socket task. Results show that the proposed method can learn to complete the search task from only a few human demonstrations.
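To make the sampling-based variant concrete, the sketch below fits a Gaussian exploration distribution to demonstrated end-effector positions and then draws and orders waypoints from it to form a search trajectory. This is a minimal illustration under assumed inputs (2D position samples, a single Gaussian, greedy nearest-neighbor ordering), not the paper's exact algorithm; all function names and data formats here are hypothetical.

```python
import numpy as np

def fit_exploration_distribution(demos):
    """Fit a Gaussian exploration distribution to demonstrated search points.

    demos: (N, 2) array of end-effector positions visited during human
    demonstrations (hypothetical input format).
    """
    mu = demos.mean(axis=0)
    cov = np.cov(demos, rowvar=False)
    return mu, cov

def sample_search_trajectory(mu, cov, n_waypoints=20, seed=0):
    """Sampling-based variant: draw waypoints from the exploration
    distribution, then order them greedily (nearest neighbor) so the
    result is a plausible continuous search path."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mu, cov, size=n_waypoints)
    order = [0]
    remaining = list(range(1, n_waypoints))
    while remaining:
        last = pts[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(pts[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return pts[order]

# Toy demonstration data: noisy search points around an assumed hole location.
rng = np.random.default_rng(42)
demos = rng.normal([0.0, 0.0], [0.01, 0.02], size=(200, 2))
mu, cov = fit_exploration_distribution(demos)
traj = sample_search_trajectory(mu, cov)
print(traj.shape)  # (20, 2)
```

In the full method, such position waypoints would additionally be augmented with forces from the learnt dynamics model and tracked by an impedance controller with superposed forces.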
