Unsupervised learning of depth and camera pose with feature map warping

Ente Guo, Zhifeng Chen*, Yanlin Zhou, Dapeng Oliver Wu

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

4 Citations (Scopus)
43 Downloads (CityUHK Scholars)

Abstract

Estimating image depth and agent egomotion is important for autonomous vehicles and robots to understand the surrounding environment and avoid collisions. Most existing unsupervised methods estimate depth and camera egomotion by minimizing the photometric error between adjacent frames. However, photometric consistency does not always hold in real scenes, for example under brightness changes, moving objects, and occlusion. To reduce the influence of brightness changes, we propose a feature pyramid matching loss (FPML), which captures the trainable feature error between the current frame and its adjacent frames and is therefore more robust than the photometric error. In addition, we propose the occlusion-aware mask (OAM) network, which indicates occlusion from changes in the masks to improve the estimation accuracy of depth and camera pose. The experimental results verify that the proposed unsupervised approach is highly competitive against state-of-the-art methods, both qualitatively and quantitatively. Specifically, our method reduces the absolute relative error (Abs Rel) by 0.017–0.088.
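The abstract describes the FPML as a multi-scale feature error between the current frame and adjacent frames warped into the current view. The following is a minimal sketch of such a loss, assuming a small shared CNN encoder and a precomputed per-level sampling grid derived from the predicted depth and pose; the names (FeatureEncoder, fpml) and the exact weighting are illustrative, not the authors' implementation.

```python
# Minimal sketch of a feature-pyramid matching loss with feature map warping.
# Assumptions: a tiny shared encoder; sampling grids (from depth + pose) are
# supplied externally; identity grids are used in the demo below.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEncoder(nn.Module):
    """Tiny 3-level encoder; each level halves the spatial resolution."""
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in channels:
            layers.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            in_ch = out_ch
        self.levels = nn.ModuleList(layers)

    def forward(self, x):
        feats = []
        for level in self.levels:
            x = level(x)
            feats.append(x)
        return feats  # per-level feature maps, fine to coarse

def fpml(encoder, current, adjacent, grids, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of per-level L1 errors between current-frame features and
    adjacent-frame features warped into the current view.
    grids[i] is an (N, H_i, W_i, 2) sampling grid in [-1, 1], which in the
    full method would come from the predicted depth and camera pose."""
    f_cur = encoder(current)
    f_adj = encoder(adjacent)
    loss = 0.0
    for w, fc, fa, g in zip(weights, f_cur, f_adj, grids):
        warped = F.grid_sample(fa, g, align_corners=True)  # feature map warping
        loss = loss + w * (fc - warped).abs().mean()
    return loss

if __name__ == "__main__":
    enc = FeatureEncoder()
    cur = torch.rand(1, 3, 128, 416)
    adj = torch.rand(1, 3, 128, 416)
    # Identity grids stand in for grids computed from depth and pose.
    grids = [F.affine_grid(torch.eye(2, 3).unsqueeze(0),
                           (1, c, 128 // s, 416 // s), align_corners=True)
             for c, s in zip((16, 32, 64), (2, 4, 8))]
    print(float(fpml(enc, cur, adj, grids)))
```

In this sketch the feature error replaces the raw photometric error, so the comparison is made in a learned feature space that is less sensitive to brightness changes between frames.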
Original language: English
Article number: 923
Journal: Sensors (Switzerland)
Volume: 21
Issue number: 3
Online published: 30 Jan 2021
DOIs
Publication status: Published - Feb 2021
Externally published: Yes

Research Keywords

  • Feature pyramid matching loss
  • Monocular depth estimation
  • Occlusion-aware mask network
  • Single camera egomotion

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/
