Abstract
LiDAR-based localization is a fundamental module for large-scale navigation tasks, such as last-mile delivery and autonomous driving, and localization robustness relies heavily on viewpoints and 3D feature extraction. Our previous work provides a viewpoint-invariant descriptor to deal with viewpoint differences; however, the global descriptor suffers from a low signal-to-noise ratio in unsupervised clustering, reducing its ability to extract distinguishable features. In this work, we develop SphereVLAD++, an attention-enhanced, viewpoint-invariant place recognition method. SphereVLAD++ projects the point cloud onto a spherical perspective for each unique area and captures the contextual connections between local features and their dependencies on the global 3D geometry distribution. In return, clustered elements within the global descriptor are conditioned on local and global geometries and preserve the original viewpoint-invariant property of SphereVLAD. In the experiments, we evaluated the localization performance of SphereVLAD++ on both the public KITTI360 dataset and self-generated datasets from the city of Pittsburgh. The experimental results show that SphereVLAD++ outperforms all related state-of-the-art 3D place recognition methods under small or even totally reversed viewpoint differences, with successful retrieval rates 7.06% and 28.15% higher than the second-best method. Low computation requirements and high time efficiency also support its application on low-cost robots. © 2022 IEEE.
| Original language | English |
|---|---|
| Pages (from-to) | 256-263 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 8 |
| Issue number | 1 |
| Online published | 21 Nov 2022 |
| DOIs | |
| Publication status | Published - Jan 2023 |
| Externally published | Yes |
Research Keywords
- 3D Place Recognition
- Attention
- Viewpoint-invariant Localization