RPRG: Toward Real-time Robotic Perception, Reasoning and Grasping with One Multi-task Convolutional Neural Network

09/19/2018
by   Hanbo Zhang, et al.

Autonomous robotic grasping plays an important role in intelligent robotics. However, it is challenging for two reasons: (1) robotic grasping is a comprehensive task involving perception, planning and control; (2) autonomous robotic grasping in complex scenarios requires reasoning ability. In this paper, we propose a multi-task convolutional neural network for Robotic Perception, Reasoning and Grasping (RPRG), which can help a robot find the target, plan the grasping sequence and finally grasp the target step by step in object stacking scenes. We integrate vision-based robotic grasp detection and visual manipulation relationship reasoning in one single deep network and build an autonomous robotic grasping system. The proposed network achieves state-of-the-art performance on both tasks. Experiments demonstrate that with our model, a Baxter robot can autonomously grasp the target with a success rate of 94.2% in object cluttered scenes, familiar stacking scenes and complex stacking scenes respectively, at a speed of 6.5 FPS for each detection.
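The abstract describes a single deep network that shares features between grasp detection and manipulation-relationship reasoning. The paper's actual architecture is not given here, so the following is only a minimal PyTorch sketch of that multi-task idea: a shared convolutional backbone feeding a grasp head (rectangle parameters plus an orientation class) and a relation head that scores a pairwise relation (e.g. parent/child/none) between two object crops. All layer sizes, head designs and class counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskGraspNet(nn.Module):
    """Illustrative multi-task net: shared backbone, two task heads.

    NOT the RPRG architecture from the paper; a hedged sketch only.
    """

    def __init__(self, num_grasp_angles=18, num_relations=3):
        super().__init__()
        # Shared feature extractor (stand-in for the paper's backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)),
        )
        feat_dim = 64 * 7 * 7
        # Grasp head: a grasp rectangle (x, y, w, h) plus an
        # orientation class (assumed discretization).
        self.grasp_head = nn.Linear(feat_dim, 4 + num_grasp_angles)
        # Relation head: scores a manipulation relation between two
        # object crops from their concatenated features.
        self.relation_head = nn.Linear(2 * feat_dim, num_relations)

    def forward(self, crop_a, crop_b):
        fa = self.backbone(crop_a).flatten(1)
        fb = self.backbone(crop_b).flatten(1)
        grasp = self.grasp_head(fa)
        relation = self.relation_head(torch.cat([fa, fb], dim=1))
        return grasp, relation

net = MultiTaskGraspNet()
a = torch.randn(2, 3, 224, 224)  # batch of object crops
b = torch.randn(2, 3, 224, 224)
grasp, relation = net(a, b)
print(tuple(grasp.shape), tuple(relation.shape))  # (2, 22) (2, 3)
```

Sharing one backbone across both heads is what lets a system like this run at interactive rates: the expensive convolutional features are computed once and reused by every task.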
