Learning Spatial-angular Fusion for Compressive Light Field Imaging in a Cycle-consistent Framework

Research output: Chapters, Conference Papers, Creative and Literary Works (RGC: 12, 32, 41, 45); 32_Refereed conference paper (with ISBN/ISSN); peer-reviewed

Related Research Unit(s)

Detail(s)

Original language: English
Title of host publication: MM '21
Subtitle of host publication: Proceedings of the 29th ACM International Conference on Multimedia
Place of Publication: New York, NY
Publisher: Association for Computing Machinery
Pages: 4613-4621
ISBN (Print): 9781450386517
Publication status: Published - Oct 2021

Publication series

Name: MM - Proceedings of the ACM International Conference on Multimedia

Conference

Title: 29th ACM International Conference on Multimedia (MM 2021)
Location: Hybrid (Onsite and Virtual)
Place: China
City: Chengdu
Period: 20 - 24 October 2021

Abstract

This paper investigates 4-D light field (LF) reconstruction from 2-D measurements captured by a coded aperture camera. To tackle this ill-posed inverse problem, we propose a cycle-consistent reconstruction network (CR-Net). Specifically, based on the intrinsic linear imaging model of the coded aperture, CR-Net reconstructs an LF by progressively eliminating the residuals between the measurements projected from the reconstructed LF and the input measurements. Moreover, to address the crucial issue of extracting representative features from high-dimensional LF data efficiently and effectively, we formulate the problem in a probability space and propose to approximate a posterior distribution over a set of carefully-defined LF processing events, covering both layer-wise spatial-angular feature extraction and network-level feature aggregation. Through drop-path sampling from a densely-connected template network, we derive an adaptively learned spatial-angular fusion strategy, in sharp contrast to existing methods that combine spatial and angular features empirically. Extensive experiments on both simulated measurements and measurements from a real coded aperture camera demonstrate the significant advantage of our method over state-of-the-art ones, improving reconstruction quality by 4.5 dB.
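The abstract describes two ideas that can be made concrete with a toy linear model: the coded aperture compresses the 4-D LF into a 2-D measurement via a linear operator, and reconstruction proceeds by re-projecting the current estimate and correcting with the measurement residual. The sketch below is NOT the authors' CR-Net; it is a minimal NumPy illustration where a plain Landweber gradient step stands in for the learned refinement network, and all dimensions and the aperture code are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: U*V angular views, each an H*W spatial image,
# flattened so the coded-aperture camera is a linear map y = A x.
U, V, H, W = 3, 3, 8, 8
n_views = U * V
x_true = rng.random(n_views * H * W)      # ground-truth 4-D LF, flattened

# Coded aperture: each 2-D measurement pixel is a code-weighted sum
# of the co-located pixels across all angular views.
code = rng.random(n_views)                # one transmittance value per view
A = np.kron(code[None, :], np.eye(H * W)) # (H*W, n_views*H*W) imaging model

y = A @ x_true                            # observed 2-D measurement

# Cycle-consistent residual refinement: re-project the current estimate,
# compare with the input measurement, and correct with the residual.
step = 1.0 / np.linalg.norm(A, 2) ** 2    # stable step size (1 / sigma_max^2)
x = np.zeros_like(x_true)
for _ in range(50):
    residual = y - A @ x                  # residual in the measurement domain
    x = x + step * (A.T @ residual)       # back-project and update estimate

# The measurement residual vanishes even though x itself is not unique:
# the problem is ill-posed (H*W equations, n_views*H*W unknowns), which is
# why a learned prior is needed to pick out a plausible LF.
rel_residual = np.linalg.norm(y - A @ x) / np.linalg.norm(y)
print(rel_residual)
```

Note that driving the measurement residual to zero only enforces cycle consistency; the null space of `A` is what the learned spatial-angular features must resolve.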

Research Area(s)

  • Light Field, Coded Aperture, Deep Learning, Probability Space

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Citation Format(s)

Learning Spatial-angular Fusion for Compressive Light Field Imaging in a Cycle-consistent Framework. / Lyu, Xianqiang; Zhu, Zhiyu; Guo, Mantang; Jin, Jing; Hou, Junhui; Zeng, Huanqiang.

MM '21: Proceedings of the 29th ACM International Conference on Multimedia. New York, NY: Association for Computing Machinery, 2021. p. 4613-4621 (MM - Proceedings of the ACM International Conference on Multimedia).