Timely Fusion of Surround Radar/Lidar for Object Detection in Autonomous Driving Systems

Wenjing Xie, Tao Hu, Neiwen Ling, Guoliang Xing, Chun Jason Xue, Nan Guan

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

5 Citations (Scopus)

Abstract

Fusing Radar and Lidar sensor data can fully exploit their complementary advantages and provide a more accurate reconstruction of the surroundings for autonomous driving systems. Surround Radar and Lidar can provide 360° view sampling at minimal cost, making them promising sensing hardware solutions for autonomous driving systems. However, due to intrinsic physical constraints, the rotation speed of surround Radar, and thus the frequency at which it generates data frames, is much lower than that of surround Lidar. Existing Radar/Lidar fusion methods have to work at the low frequency of surround Radar, which cannot meet the high responsiveness requirement of autonomous driving systems. This paper develops techniques to fuse surround Radar/Lidar at a working frequency limited only by the faster surround Lidar rather than the slower surround Radar, based on the widely used object detection model MVDNet. The basic idea of our approach is simple: we let MVDNet work with temporally unaligned Radar/Lidar data, so that fusion can take place whenever a new Lidar data frame arrives instead of waiting for the slow Radar data frame. However, directly applying MVDNet to temporally unaligned Radar/Lidar data greatly degrades its object detection accuracy. The key insight of this paper is that we can achieve high output frequency with little accuracy loss by enhancing the training procedure to exploit the temporal redundancy in MVDNet, so that it can tolerate the temporal unalignment of the input data. We explore several different training enhancements and compare them quantitatively through experiments. © 2024 IEEE.
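The training enhancement described in the abstract can be pictured as deliberately pairing each Lidar frame with a stale Radar frame during training, so the fused detector learns to tolerate the offset it will see when fusion runs at the Lidar rate. The PyTorch-style sketch below is one plausible way to simulate that, not the paper's implementation; the UnalignedFusionDataset class, the 20 Hz Lidar / 4 Hz Radar periods, and the parallel lidar_frames/radar_frames lists are all illustrative assumptions.

    import random

    from torch.utils.data import Dataset

    # Hypothetical frame periods; the actual rates depend on the sensors used.
    LIDAR_PERIOD_MS = 50    # e.g. a 20 Hz surround Lidar
    RADAR_PERIOD_MS = 250   # e.g. a 4 Hz surround Radar

    class UnalignedFusionDataset(Dataset):
        """Pairs each Lidar frame with a deliberately stale Radar frame.

        At inference time fusion runs whenever a new Lidar frame arrives,
        so the most recent Radar frame may lag by up to one Radar period.
        Sampling that lag at random during training teaches the detector
        to tolerate temporally unaligned inputs instead of assuming
        synchronized ones.
        """

        def __init__(self, lidar_frames, radar_frames, labels):
            # lidar_frames[i] and radar_frames[i] are assumed to be
            # time-aligned feature tensors; labels[i] are the matching boxes.
            self.lidar_frames = lidar_frames
            self.radar_frames = radar_frames
            self.labels = labels

        def __len__(self):
            return len(self.lidar_frames)

        def __getitem__(self, i):
            # Random staleness of up to one Radar period, quantized to
            # whole Lidar steps so it indexes an actual earlier Radar frame.
            max_lag = RADAR_PERIOD_MS // LIDAR_PERIOD_MS
            lag = random.randrange(max_lag + 1)
            j = max(i - lag, 0)
            return self.lidar_frames[i], self.radar_frames[j], self.labels[i]

A standard training loop over this dataset then optimizes the fusion detector exactly as with aligned data; only the input sampling changes, which is what lets the trained model fuse at the Lidar rate at inference time.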
Original language: English
Title of host publication: Proceedings - 2024 IEEE 30th International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2024
Place of Publication: Los Alamitos, Calif.
Publisher: IEEE
Pages: 31-36
ISBN (Electronic): 9798350387957
ISBN (Print): 9798350387964
DOIs
Publication status: Published - 2024
Event: 30th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2024) - Sokcho, Korea, Republic of
Duration: 21 Aug 2024 - 23 Aug 2024
https://rtcsa2024.github.io/

Publication series

Name: Proceedings - IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA
ISSN (Print): 2325-1271
ISSN (Electronic): 2325-1301

Conference

Conference: 30th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2024)
Abbreviated title: IEEE RTCSA 2024
Place: Korea, Republic of
City: Sokcho
Period: 21/08/24 - 23/08/24
Internet address

Bibliographical note

Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).

Funding

This work is partially supported by Hong Kong GRF under grant nos. 15206221 and 11208522.

RGC Funding Information

  • RGC-funded
