SVT-Net: A Super Light-Weight Network for Large Scale Place Recognition using Sparse Voxel Transformers

05/01/2021
by   Zhaoxin Fan, et al.

Point cloud-based large-scale place recognition is fundamental to many applications such as Simultaneous Localization and Mapping (SLAM). Although previous methods have achieved good performance by learning short-range local features, long-range contextual properties have long been neglected, and model size has become a bottleneck for wider adoption. In this paper, we propose SVT-Net, a super light-weight network for large-scale place recognition. Building on top of the high-efficiency 3D Sparse Convolution (SP-Conv), we propose an Atom-based Sparse Voxel Transformer (ASVT) and a Cluster-based Sparse Voxel Transformer (CSVT) to learn both short-range local features and long-range contextual features. Composed of ASVT and CSVT, SVT-Net achieves state-of-the-art performance in terms of both accuracy and speed with a super-light model size (0.9M). Two simplified versions of SVT-Net, named ASVT-Net and CSVT-Net, are also introduced; they likewise achieve state-of-the-art performance while further reducing the model size to 0.8M and 0.4M, respectively.
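To make the idea of attention over sparse voxel features concrete, the sketch below shows a minimal, hypothetical atom-level self-attention module applied to the per-voxel features that a sparse convolution backbone would produce. It is not the authors' implementation; the module name, the single-head design, and the bottleneck width are assumptions for illustration only.

```python
# Hypothetical sketch of atom-level self-attention over sparse voxel features
# (N occupied voxels, C channels). Illustrates the kind of long-range context
# a voxel transformer can add on top of local sparse-convolution features.
# Not the authors' code; all design choices here are assumptions.
import torch
import torch.nn as nn


class AtomSelfAttention(nn.Module):
    """Single-head self-attention over per-voxel ("atom") features."""

    def __init__(self, channels: int, bottleneck: int = 32):
        super().__init__()
        # Bottleneck projections keep the parameter count small,
        # in the spirit of a super light-weight network.
        self.to_q = nn.Linear(channels, bottleneck, bias=False)
        self.to_k = nn.Linear(channels, bottleneck, bias=False)
        self.to_v = nn.Linear(channels, channels, bias=False)
        self.scale = bottleneck ** -0.5

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C) features of the N non-empty voxels.
        q, k, v = self.to_q(feats), self.to_k(feats), self.to_v(feats)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)  # (N, N) weights
        # Residual connection: attention refines, rather than replaces,
        # the short-range features from the sparse convolution backbone.
        return feats + attn @ v


if __name__ == "__main__":
    voxel_feats = torch.randn(1024, 64)   # 1024 occupied voxels, 64 channels
    out = AtomSelfAttention(64)(voxel_feats)
    print(out.shape)                      # torch.Size([1024, 64])
```

A cluster-based variant would instead attend over a small set of learned cluster centers rather than all voxels, trading some resolution for lower cost; that detail is omitted here.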
