Deep Spatial-angular Regularization for Light Field Imaging, Denoising, and Super-resolution

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

44 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 6094-6110
Number of pages: 18
Journal / Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 44
Issue number: 10
Online published: 8 Jun 2021
Publication status: Published - Oct 2022

Abstract

Coded aperture is a promising approach for capturing the 4-D light field (LF), in which the 4-D data are compressively modulated into 2-D coded measurements that are subsequently decoded by reconstruction algorithms. The bottleneck lies in the reconstruction algorithms, which limit the achievable reconstruction quality. To tackle this challenge, we propose a novel learning-based framework for reconstructing high-quality LFs from acquisitions made through learned coded apertures. The proposed method elegantly incorporates the measurement observation model into the deep learning framework, avoiding reliance on purely data-driven priors for LF reconstruction. Specifically, we first formulate compressive LF reconstruction as an inverse problem with an implicit regularization term. We then construct the regularization term with a deep, efficient spatial-angular separable convolutional sub-network in the form of local and global residual learning, comprehensively exploring the signal distribution while avoiding the limited representational ability and inefficiency of deterministic mathematical modeling. Furthermore, we extend this pipeline to LF denoising and spatial super-resolution, which can be regarded as variants of coded aperture imaging equipped with different degradation matrices. Extensive experimental results demonstrate that the proposed methods significantly outperform state-of-the-art approaches, both quantitatively and qualitatively: the reconstructed LFs not only achieve much higher PSNR/SSIM but also better preserve the LF parallax structure on both real and synthetic LF benchmarks. The code will be publicly available at https://github.com/MantangGuo/DRLF.
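For concreteness, the inverse-problem formulation mentioned in the abstract can be sketched in the standard regularized-reconstruction form below. The symbols are introduced here for illustration and are not taken from the paper: y denotes the 2-D coded measurement, x the 4-D LF to be recovered, \Phi the degradation (measurement) matrix induced by the coded aperture, \lambda a weighting parameter, and \mathcal{R} the implicit, learned regularizer realized by the spatial-angular separable sub-network.

    \min_{x} \; \frac{1}{2} \left\| y - \Phi x \right\|_2^2 \; + \; \lambda \, \mathcal{R}(x)

Under this reading, the extensions described in the abstract amount to swapping the degradation matrix: \Phi close to an identity (with additive noise on y) would correspond to LF denoising, and a blurring-and-downsampling operator to spatial super-resolution, consistent with the abstract's description of variants equipped with different degradation matrices.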

Research Area(s)

  • Apertures, Cameras, Coded Aperture, Deep Learning, Denoising, Depth, Image reconstruction, Imaging, Light Field, Noise reduction, Observation Model, Optimization, Reconstruction algorithms, Sensors, Spatial Super-resolution
