From RGB images to Dynamic Movement Primitives for planar tasks
Dynamic Movement Primitives (DMP) have been extensively applied to various robotic tasks thanks to their generalization and robustness properties. However, successful execution of a given task may require different motion patterns that account not only for the initial and target positions but also for features of the overall structure and layout of the scene. To make DMP applicable to a wider range of tasks and to further automate their use, we design in this work a framework combining deep residual networks with DMP that can encapsulate different motion patterns of a planar task, provided through human demonstrations on the RGB image plane. From new raw RGB visual input we can then automatically infer the appropriate DMP parameters, i.e. the weights that determine the motion pattern and the initial/target positions. We experimentally validate our method on the task of unveiling the stem of a grape bunch from occluding leaves on a mock-up vine setup, and compare it to another state-of-the-art (SoA) method for inferring DMP from images.
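To make concrete what the inferred DMP parameters control, the following is a minimal sketch (not the paper's implementation) of a standard one-dimensional discrete DMP rollout in NumPy: the network's outputs would play the role of the forcing-term weights `w` and the initial/target positions `y0`/`g`, while the gains `alpha_z`, `beta_z`, `alpha_x` are conventional assumed values.

```python
import numpy as np

def dmp_rollout(w, y0, g, T=1.0, dt=0.005,
                alpha_z=25.0, beta_z=6.25, alpha_x=4.0):
    """Integrate a 1-D discrete DMP with explicit Euler and return the trajectory.

    w     : (N,) forcing-term weights encoding the learned motion pattern
    y0, g : initial and goal positions (one coordinate of the planar motion)
    """
    N = len(w)
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, N))   # Gaussian centers in phase space
    h = 1.0 / (np.diff(c) ** 2)                       # widths from center spacing
    h = np.append(h, h[-1])
    x, y, z = 1.0, y0, 0.0                            # phase, position, scaled velocity
    traj = [y]
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (x - c) ** 2)               # basis activations
        # Forcing term: weighted basis mix, gated by the phase and goal-scaled
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)
        z += dt / T * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / T * z
        x += dt / T * (-alpha_x * x)                  # canonical system decays the phase
        traj.append(y)
    return np.array(traj)
```

With `w = 0` the system reduces to a critically damped spring from `y0` to `g`; nonzero weights shape the transient, which is how different demonstrated motion patterns are encoded while the goal-attractor guarantees are preserved.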