
Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR

Ziyue Feng, Longlong Jing, Peng Yin, Yingli Tian, Bing Li*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

Self-supervised monocular depth prediction provides a cost-effective solution to obtain the 3D location of each pixel. However, the existing approaches usually lead to unsatisfactory accuracy, which is critical for autonomous robots. In this paper, we propose FusionDepth, a novel two-stage network to advance self-supervised monocular dense depth learning by leveraging low-cost sparse (e.g. 4-beam) LiDAR. Unlike existing methods that use sparse LiDAR mainly through time-consuming iterative post-processing, our model fuses monocular image features and sparse LiDAR features to predict initial depth maps. Then, an efficient feed-forward refinement network is further designed to correct the errors in these initial depth maps in pseudo-3D space with real-time performance. Extensive experiments show that our proposed model significantly outperforms all the state-of-the-art self-supervised methods, as well as the sparse-LiDAR-based methods, on both self-supervised monocular depth prediction and completion tasks. With the accurate dense depth prediction, our model outperforms the state-of-the-art sparse-LiDAR-based method (Pseudo-LiDAR++ [1]) by more than 68% on the downstream task of monocular 3D object detection on the KITTI Leaderboard. Code is available at https://github.com/AutoAILab/FusionDepth. © 2021 Proceedings of Machine Learning Research.
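To make the two-stage idea concrete, the sketch below illustrates (with plain numpy, not the authors' actual network) the common preprocessing and fusion pattern the abstract describes: sparse LiDAR returns are projected into the image plane to form a sparse depth map, which is then combined with a monocular depth estimate to produce an initial fused depth. The function names, the intrinsics matrix `K`, and the blending weight `alpha` are illustrative assumptions, not part of the paper; the real model fuses learned features and refines the result with a feed-forward network.

```python
import numpy as np

def lidar_to_sparse_depth(points, K, h, w):
    """Project 3D LiDAR points (N, 3), given in camera coordinates, into a
    sparse depth map of shape (h, w). Pixels with no LiDAR return stay 0."""
    depth = np.zeros((h, w), dtype=np.float32)
    z = points[:, 2]
    valid = z > 0                            # keep only points in front of the camera
    uvw = (K @ points[valid].T).T            # homogeneous pixel coordinates (M, 3)
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth

def fuse_initial_depth(mono_depth, sparse_depth, alpha=0.7):
    """Stage-1 stand-in: where LiDAR returns exist, blend them with the
    monocular estimate; everywhere else keep the monocular estimate."""
    mask = sparse_depth > 0
    fused = mono_depth.copy()
    fused[mask] = alpha * sparse_depth[mask] + (1 - alpha) * mono_depth[mask]
    return fused
```

In the paper itself the fusion happens in feature space inside a learned encoder-decoder, and the second stage corrects the initial map in pseudo-3D space; this snippet only shows the geometric projection step and a naive depth-level blend for intuition.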
Original language: English
Title of host publication: Conference on Robot Learning
Editors: Aleksandra Faust, David Hsu, Gerhard Neumann
Pages: 685-694
Publication status: Published - Nov 2021
Externally published: Yes
Event: 5th Conference on Robot Learning (CoRL 2021) - Hybrid, London, United Kingdom
Duration: 8 Nov 2021 - 11 Nov 2021
https://proceedings.mlr.press/v164/
https://sites.google.com/robot-learning.org/corl2021

Publication series

Name: Proceedings of Machine Learning Research
Volume: 164
ISSN (Print): 2640-3498

Conference

Conference: 5th Conference on Robot Learning (CoRL 2021)
Abbreviated title: CoRL 2021
Place: United Kingdom
City: London
Period: 8/11/21 - 11/11/21

Research Keywords

  • Depth Prediction
  • Monocular
  • Self-supervised
  • Sparse LiDAR
