Abstract
Semantic scene understanding in bird's-eye view (BEV) plays a crucial role in autonomous driving. A common approach to generating BEV maps from LiDAR point-cloud data constructs a pillar-level representation by projecting the 3D point cloud onto a 2D plane. This process partially discards spatial geometric information and produces sparse semantic maps. However, downstream tasks (e.g., trajectory planning and prediction) typically require dense grid-like semantic BEV maps rather than sparse segmentation outputs. To bridge this gap, we propose PointDenseBEV, an end-to-end, distribution-aware feature fusion framework that takes sparse LiDAR point clouds as input and directly generates dense semantic BEV maps. Spatial geometric information and temporal context are embedded as auxiliary semantic cues within the BEV grid representation to enhance semantic density. Extensive experiments on the SemanticKITTI dataset demonstrate that our method achieves competitive performance compared to existing approaches. © 2025 IEEE.
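To illustrate the pillar-level projection the abstract describes (and why it yields sparse BEV maps), the following is a minimal sketch, not the paper's actual pipeline: points are binned into 2D grid cells and reduced to a few per-cell statistics, so cells that receive no points stay empty. The function name, grid ranges, cell size, and channel choices are illustrative assumptions.

```python
import numpy as np

def pillarize(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6), cell=0.2):
    """Project an (N, 4) LiDAR cloud (x, y, z, intensity) onto a 2D BEV grid.

    Illustrative sketch only: each cell ("pillar") keeps simple statistics
    (max height, mean intensity, point count). Cells with no points remain
    zero, which is the sparsity the abstract refers to.
    """
    H = int((x_range[1] - x_range[0]) / cell)
    W = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((H, W, 3), dtype=np.float32)

    # Keep only points that fall inside the grid extent.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    xi = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    for x, y, p in zip(xi, yi, pts):
        # Max height (assumes non-negative z for brevity).
        bev[x, y, 0] = max(bev[x, y, 0], p[2])
        # Running mean of intensity.
        n = bev[x, y, 2]
        bev[x, y, 1] = (bev[x, y, 1] * n + p[3]) / (n + 1)
        # Point count.
        bev[x, y, 2] = n + 1
    return bev
```

Because only a fraction of cells are ever hit by LiDAR returns, a segmentation network applied to such a grid produces sparse labels; densifying them is the gap the paper targets.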
| Original language | English |
|---|---|
| Title of host publication | 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
| Publisher | IEEE |
| Pages | 4123-4129 |
| Number of pages | 7 |
| ISBN (Electronic) | 979-8-3315-4393-8 |
| ISBN (Print) | 979-8-3315-4394-5 |
| DOIs | |
| Publication status | Published - 2025 |
| Event | 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025) - Hangzhou, China. Duration: 19 Oct 2025 → 25 Oct 2025. https://www.iros25.org/ |
Publication series
| Name | IEEE International Conference on Intelligent Robots and Systems |
|---|---|
| ISSN (Print) | 2153-0858 |
| ISSN (Electronic) | 2153-0866 |
Conference
| Conference | 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2025) |
|---|---|
| Country | China |
| City | Hangzhou |
| Period | 19/10/25 → 25/10/25 |
| Internet address | https://www.iros25.org/ |
Funding
This work was supported in part by the Hong Kong Research Grants Council under Grant 15222523, and in part by City University of Hong Kong under Grant 9610675.
RGC Funding Information
- RGC-funded
Fingerprint
Dive into the research topics of 'Dense Semantic Bird-Eye-View Map Generation from Sparse LiDAR Point Clouds via Distribution-aware Feature Fusion'. Together they form a unique fingerprint.
Projects
- 1 Active
GRF: Cross-modal Global Localization with a LiDAR and Geo-referenced Aerial Images for Autonomous Vehicles in GNSS-degraded Environments
SUN, Y. (Principal Investigator / Project Coordinator) & HUANG, S. (Co-Investigator)
1/09/23 → …
Project: Research