Predicting Target Feature Configuration of Non-stationary Objects for Grasping with Image-Based Visual Servoing
In this paper we consider the final approach stage of closed-loop grasping, where RGB-D cameras are no longer able to provide valid depth information. This stage is essential for grasping non-stationary objects, a situation in which current robotic grasping controllers fail. We predict the image-plane coordinates of observed image features at the final grasp pose and use image-based visual servoing to guide the robot to that pose. Image-based visual servoing is a well-established control technique that moves a camera in 3D space so as to drive the image-plane feature configuration to some goal state. In previous work the goal feature configuration is assumed to be known, but for some applications this may not be feasible, for example when the motion is performed for the first time with respect to a novel scene. Our proposed method provides robustness with respect to scene motion during the final phase of grasping, as well as to errors in the robot's kinematic control. We provide experimental results in the context of dynamic closed-loop grasping.
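For background, the sketch below shows the classical point-feature IBVS control law from the visual-servoing literature, v = -lambda * L^+ (s - s*), not the paper's specific controller. The feature depths Z would have to be rough estimates here, since the paper's setting is precisely one where valid depth measurements are unavailable at close range; using a constant approximate depth in the interaction matrix is a common workaround.

```python
import numpy as np

def interaction_matrix(points_xy, depths):
    """Stack the classical 2x6 interaction (image Jacobian) matrices
    for normalized image-plane point features (x, y) at depth Z."""
    rows = []
    for (x, y), Z in zip(points_xy, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x])
    return np.asarray(rows)

def ibvs_velocity(features, goal_features, depths, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(L) @ (s - s*).
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    `depths` are assumed rough estimates, e.g. a constant guess."""
    error = (np.asarray(features) - np.asarray(goal_features)).ravel()
    L = interaction_matrix(features, depths)
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical usage: drive four tracked features toward a predicted
# goal configuration at the final grasp pose.
s = [(0.1, 0.2), (-0.1, 0.2), (0.1, -0.2), (-0.1, -0.2)]
s_star = [(0.15, 0.25), (-0.05, 0.25), (0.15, -0.15), (-0.05, -0.15)]
v = ibvs_velocity(s, s_star, depths=[0.3] * 4)
```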