G2L-Net: Global to Local Network for Real-time 6D Pose Estimation with Embedding Vector Features

03/24/2020
by   Wei Chen, et al.

In this paper, we propose a novel real-time 6D object pose estimation framework, named G2L-Net. Our network operates on point clouds from RGB-D detection in a divide-and-conquer fashion. Specifically, it consists of three steps. First, we extract a coarse object point cloud from the RGB-D image via 2D detection. Second, we feed the coarse object point cloud to a translation localization network that performs 3D segmentation and predicts the object translation. Third, using the predicted segmentation and translation, we transfer the fine object point cloud into a local canonical coordinate system, in which we train a rotation localization network to estimate the initial object rotation. In this third step, we define point-wise embedding vector features to capture viewpoint-aware information. To obtain a more accurate rotation, we adopt a rotation residual estimator that predicts the residual between the initial rotation and the ground truth, which boosts the initial pose estimate. Our proposed G2L-Net runs in real time despite the fact that multiple steps are stacked. Extensive experiments on two benchmark datasets show that the proposed method achieves state-of-the-art performance in terms of both accuracy and speed.
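The three-step data flow described above can be sketched as follows. This is a minimal illustration of the pipeline structure only, not the paper's implementation: the function names are hypothetical, and each learned network is replaced by a simple geometric placeholder (the translation stage returns the point-cloud centroid, the rotation stage returns the identity) so the detect → translate/segment → canonicalize → rotate sequence can be traced end to end.

```python
import numpy as np

# Hypothetical stand-ins for the paper's learned sub-networks. Each stage
# here is a geometric placeholder, used only to show how data flows
# through the divide-and-conquer pipeline.

def detect_coarse_points(rgbd_points):
    """Step 1: 2D detection crops a coarse object point cloud.
    Placeholder: pass the input through unchanged."""
    return rgbd_points

def localize_translation(coarse_points):
    """Step 2: 3D segmentation and translation prediction.
    Placeholder: keep all points; translation = centroid."""
    mask = np.ones(len(coarse_points), dtype=bool)
    translation = coarse_points[mask].mean(axis=0)
    return mask, translation

def localize_rotation(canonical_points):
    """Step 3: initial rotation from point-wise features, refined by a
    rotation residual estimator. Placeholder: identity for both."""
    initial_rotation = np.eye(3)
    residual_rotation = np.eye(3)  # residual correction on top of the initial estimate
    return residual_rotation @ initial_rotation

def g2l_pose(rgbd_points):
    """Chain the three stages and return the estimated (rotation, translation)."""
    coarse = detect_coarse_points(rgbd_points)
    mask, translation = localize_translation(coarse)
    # Move the fine (segmented) point cloud into the local canonical frame.
    canonical = coarse[mask] - translation
    rotation = localize_rotation(canonical)
    return rotation, translation

# Example: a synthetic point cloud offset from the origin.
points = np.random.rand(100, 3) + np.array([1.0, 2.0, 3.0])
rotation, translation = g2l_pose(points)
print(rotation.shape, translation.shape)  # (3, 3) (3,)
```

Note how estimating and removing the translation first lets the rotation stage work in a normalized local frame, which is the core of the global-to-local idea.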
