Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Detail(s)
| Original language | English |
|---|---|
| Pages (from-to) | 2216-2228 |
| Journal / Publication | IEEE Transactions on Image Processing |
| Volume | 31 |
| Online published | 2 Mar 2022 |
| Publication status | Published - 2022 |
Abstract
Depth estimation is a fundamental problem in 4-D light field processing and analysis. Although recent supervised learning-based light field depth estimation methods have significantly improved upon the accuracy and efficiency of traditional optimization-based ones, they rely on training with light field data paired with ground-truth depth maps, which are challenging to obtain or even unavailable for real-world light field data. Moreover, owing to the inevitable gap (or domain difference) between real-world and synthetic data, models trained on synthetic data may suffer severe performance degradation when generalized to real-world data. By contrast, we propose an unsupervised learning-based method that does not require ground-truth depth as supervision during training. Specifically, based on the unique geometric structure of light field data, we present an occlusion-aware strategy to improve accuracy in occluded areas: we explore the angular coherence among subsets of the light field views to estimate initial depth maps, and employ a constrained unsupervised loss to learn their reliability for final depth prediction. Additionally, we adopt a multi-scale network with a weighted smoothness loss to handle textureless areas. Experimental results on synthetic data show that our method significantly shrinks the performance gap between the previous unsupervised method and supervised ones, and produces depth maps with accuracy comparable to traditional methods at markedly reduced computational cost. Moreover, experiments on real-world datasets show that our method avoids the domain-shift problem present in supervised methods, demonstrating its great potential. The code will be publicly available at https://github.com/jingjin25/LFDE-OccUnNet.
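To give a sense of the weighted smoothness term mentioned in the abstract, the sketch below shows a standard edge-aware formulation commonly used in unsupervised depth estimation. The function name and the `alpha` hyper-parameter are illustrative choices, not details taken from the paper.

```python
import numpy as np

def weighted_smoothness_loss(depth, image, alpha=10.0):
    """Edge-aware (weighted) smoothness loss, a common choice in
    unsupervised depth estimation. Depth gradients are down-weighted
    near image edges, so the depth map is encouraged to be smooth
    mainly in textureless regions. `alpha` controls edge sensitivity
    and is an illustrative value, not one from the paper."""
    # Spatial gradients of the depth map (H x W).
    dd_x = np.abs(np.diff(depth, axis=1))
    dd_y = np.abs(np.diff(depth, axis=0))
    # Image gradients (average over colour channels if present).
    img = image.mean(axis=-1) if image.ndim == 3 else image
    di_x = np.abs(np.diff(img, axis=1))
    di_y = np.abs(np.diff(img, axis=0))
    # Weights: near zero at strong image edges, ~1 in flat areas.
    w_x = np.exp(-alpha * di_x)
    w_y = np.exp(-alpha * di_y)
    return float((w_x * dd_x).mean() + (w_y * dd_y).mean())
```

On a constant depth map the loss is zero, while depth variation in flat (textureless) image regions is penalized at full weight, which is exactly why such a term helps regularize textureless areas.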
Research Area(s)
- Costs, deep learning, depth estimation, Estimation, Geometry, Graphics processing units, Knowledge engineering, Learning systems, Light field, occlusion, Training, unsupervised learning
Citation Format(s)
Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields. / Jin, Jing; Hou, Junhui.
In: IEEE Transactions on Image Processing, Vol. 31, 2022, p. 2216-2228.