Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

04/14/2018
by Douglas Morrison, et al.

This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques, specifically by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller than current state-of-the-art techniques while detecting stable grasps with equivalent performance. The lightweight, single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50 Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In our real-world tests, we achieve an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that are moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter.
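The core idea of a generative, per-pixel approach can be illustrated with a minimal sketch: given dense output maps over the depth image (a grasp-quality score per pixel, plus an associated angle and gripper width), the grasp to execute is simply read off at the highest-quality pixel, with no candidate sampling stage. The function name, map shapes, and values below are illustrative, not the paper's implementation.

```python
# Illustrative sketch: selecting a grasp from per-pixel prediction maps.
# A per-pixel generative network outputs, for every pixel of the depth
# image, a grasp quality score plus auxiliary maps (here: angle, width).
# The executed grasp is taken at the maximum-quality pixel.

def select_best_grasp(quality, angle, width):
    """Return (row, col, angle, width) at the maximum-quality pixel."""
    best = None
    for r, row_vals in enumerate(quality):
        for c, q in enumerate(row_vals):
            if best is None or q > best[0]:
                best = (q, r, c)
    _, r, c = best
    return r, c, angle[r][c], width[r][c]

# Toy 2x3 maps standing in for full-resolution network outputs.
quality = [[0.1, 0.9, 0.3],
           [0.2, 0.4, 0.8]]
angle   = [[0.0, 1.2, 0.5],
           [0.3, 0.7, 0.9]]
width   = [[10,  25,  15],
           [12,  18,  30]]

print(select_best_grasp(quality, angle, width))  # -> (0, 1, 1.2, 25)
```

Because the maps are produced in a single forward pass and selection is a cheap argmax, this step can be re-run on every new depth frame, which is what makes closed-loop control at camera rate feasible.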
