Tactile Grasp Refinement using Deep Reinforcement Learning and Analytic Grasp Stability Metrics

09/23/2021
by Alexander Koenig, et al.

Reward functions are at the heart of every reinforcement learning (RL) algorithm. In robotic grasping, rewards are often complex and manually engineered functions that do not rely on well-justified physical models from grasp analysis. This work demonstrates that analytic grasp stability metrics constitute powerful optimization objectives for RL algorithms that refine grasps on a three-fingered hand using only tactile and joint position information. We outperform a binary-reward baseline by 42.9% and find that a combination of geometric and force-agnostic grasp stability metrics yields the highest average success rates of 95.4% for cuboids, 93.1% for cylinders, and 62.3% for spheres across wrist position errors between 0 and 7 centimeters and rotational errors between 0 and 14 degrees. In a second experiment, we show that grasp refinement algorithms trained with contact feedback (contact positions, normals, and forces) perform up to 6.6% better than a baseline that receives no tactile information.
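To make the idea of analytic stability metrics as RL rewards concrete, the sketch below combines a simple geometric metric (area of the triangle spanned by the three fingertip contacts) with a force-agnostic metric (how well the contact normals oppose one another). This is a hypothetical illustration of the general approach, not the paper's exact metrics or weighting; the function name, weights, and squashing choices are assumptions.

```python
import numpy as np

def grasp_reward(positions, normals, w_geom=0.5, w_opp=0.5):
    """Illustrative grasp-stability reward for a three-fingered hand.

    positions: (3, 3) array of fingertip contact positions.
    normals:   (3, 3) array of unit contact normals.
    Returns a scalar reward combining a geometric and a
    force-agnostic term (hypothetical sketch).
    """
    p = np.asarray(positions, dtype=float)
    n = np.asarray(normals, dtype=float)
    # Geometric term: area of the contact triangle, squashed to [0, 1).
    # Larger contact polygons tend to resist disturbance wrenches better.
    area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
    geom = np.tanh(area)
    # Force-agnostic term: if the contact normals roughly cancel,
    # the fingers oppose each other; their sum has small norm.
    opposition = 1.0 - np.linalg.norm(n.sum(axis=0)) / 3.0
    return w_geom * geom + w_opp * opposition
```

A symmetric grasp with inward-pointing normals spaced 120 degrees apart scores near the maximum, while three collinear contacts with parallel normals score zero, which is the kind of shaped signal that a binary success/failure reward cannot provide.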
