Deep Learning-Based Monocular 3D Object Detection with Refinement of Depth Information
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Hu, Henan; Zhu, Ming; Li, Muyu et al.
Detail(s)
Original language | English |
---|---|
Article number | 2576 |
Journal / Publication | Sensors |
Volume | 22 |
Issue number | 7 |
Online published | 28 Mar 2022 |
Publication status | Published - Apr 2022 |
Link(s)
DOI | DOI |
---|---|
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85127196285&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(5494b9da-9deb-4e2f-9753-a71271ca4fa3).html |
Abstract
Research on monocular 3D target detection based on pseudo-LiDAR data has recently made progress. However, compared with LiDAR-based algorithms, pseudo-LiDAR methods are still less robust. Through in-depth experiments, we found that the main limitations are the inaccuracy of the target position and the uncertainty in the depth distribution of the foreground target, both of which stem from inaccurate depth estimation. To address these problems, we propose two solutions. The first is a novel method based on joint image segmentation and geometric constraints, which predicts the target depth and provides a confidence measure for the depth prediction. The predicted target depth is fused with the overall scene depth to yield the optimal target position. The second uses the target scale, normalized with a Gaussian function, as prior information, which reduces the uncertainty of the depth distribution that manifests as long-tail noise. With the refined depth information, we convert the optimized depth map into a point cloud representation, called a pseudo-LiDAR point cloud, and feed it to a LiDAR-based algorithm to detect the 3D target. Extensive experiments on the challenging KITTI dataset demonstrate that the proposed framework outperforms various state-of-the-art methods by more than 12.37% and 5.34% on the easy and hard settings of the KITTI validation subset, respectively, and by 5.1% and 1.76% on the easy and hard settings of the KITTI test set.
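The pipeline described in the abstract hinges on converting the refined depth map into a pseudo-LiDAR point cloud before passing it to a LiDAR-based detector. The sketch below illustrates the standard pinhole back-projection used for this conversion, plus a hypothetical Gaussian re-weighting of per-object depths to damp long-tail noise; the function names, parameters, and the re-weighting scheme are illustrative assumptions, not code from the paper.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (H x W, metres) into a 3-D point
    cloud in the camera frame using the pinhole model. This is the standard
    pseudo-LiDAR conversion; fx, fy, cx, cy come from the camera calibration
    (e.g. the KITTI calibration files)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column/row indices
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop invalid (non-positive) depths

def suppress_long_tail(obj_depths, sigma_scale=1.0):
    """Illustrative only (not the paper's exact formulation): shrink depth
    values far from an object's median depth using Gaussian weights, which
    damps the long-tail noise trailing behind the foreground target."""
    centre = np.median(obj_depths)
    sigma = sigma_scale * max(obj_depths.std(), 1e-6)
    weights = np.exp(-0.5 * ((obj_depths - centre) / sigma) ** 2)
    return centre + weights * (obj_depths - centre)  # outliers pulled toward the centre
```

In this sketch the resulting N x 3 array can be saved or fed directly to an off-the-shelf LiDAR-based 3D detector; only the depth-refinement stages described in the paper would change, not this conversion step.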
Research Area(s)
- 3D object detection, autonomous driving, deep learning, depth estimation, monocular image, point cloud
Citation Format(s)
Deep Learning-Based Monocular 3D Object Detection with Refinement of Depth Information. / Hu, Henan; Zhu, Ming; Li, Muyu et al.
In: Sensors, Vol. 22, No. 7, 2576, 04.2022.