Learning Spatial-angular Fusion for Compressive Light Field Imaging in a Cycle-consistent Framework
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Detail(s)
Original language | English
---|---
Title of host publication | MM '21
Subtitle of host publication | Proceedings of the 29th ACM International Conference on Multimedia
Place of Publication | New York, NY
Publisher | Association for Computing Machinery
Pages | 4613-4621
ISBN (print) | 9781450386517
Publication status | Published - Oct 2021
Publication series
Name | MM - Proceedings of the ACM International Conference on Multimedia
---|---
Conference
Title | 29th ACM International Conference on Multimedia (MM 2021) |
---|---
Location | Hybrid |
Place | China |
City | Chengdu |
Period | 20 - 24 October 2021 |
Abstract
This paper investigates 4-D light field (LF) reconstruction from 2-D measurements captured by a coded aperture camera. To tackle this ill-posed inverse problem, we propose a cycle-consistent reconstruction network (CR-Net). Specifically, building on the intrinsic linear imaging model of the coded aperture, CR-Net reconstructs an LF by progressively eliminating the residuals between the measurements re-projected from the reconstructed LF and the input measurements. Moreover, to address the crucial issue of efficiently and effectively extracting representative features from high-dimensional LF data, we formulate the problem in a probability space and propose to approximate a posterior distribution over a set of carefully defined LF processing events, covering both layer-wise spatial-angular feature extraction and network-level feature aggregation. By applying drop-path to a densely connected template network, we derive an adaptively learned spatial-angular fusion strategy, in sharp contrast to existing methods that combine spatial and angular features empirically. Extensive experiments on both simulated measurements and measurements from a real coded aperture camera demonstrate the significant advantage of our method over state-of-the-art ones: it improves reconstruction quality by 4.5 dB.
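The core of the cycle-consistent loop described in the abstract is measurement re-projection: reconstruct an LF estimate, re-project it through the linear coded-aperture imaging model, and use the residual against the observed measurement to refine the estimate. The following is a minimal NumPy sketch of that idea only, with a plain gradient step standing in for CR-Net's learned refinement stages; all names, shapes, and the single-shot aperture code are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch of residual-driven LF reconstruction under a linear coded-aperture
# model (assumption: CR-Net replaces the plain gradient step below with
# learned network stages; names here are illustrative).

rng = np.random.default_rng(0)
S, H, W = 9, 8, 8                       # 3x3 angular views, tiny spatial size
lf_true = rng.random((S, H, W))         # ground-truth LF, angular dims flattened
code = rng.random(S)                    # aperture transmittance per angular view

def project(lf, code):
    """Linear imaging model: the sensor sums aperture-coded angular views."""
    return np.tensordot(code, lf, axes=1)   # (H, W) measurement

y = project(lf_true, code)              # observed 2-D measurement

lf_hat = np.zeros_like(lf_true)         # initial reconstruction
step = 0.5 / np.dot(code, code)         # step < 2 / ||code||^2 for stability
for _ in range(200):
    residual = y - project(lf_hat, code)             # re-project and compare
    lf_hat += step * code[:, None, None] * residual  # eliminate the residual

print(np.linalg.norm(y - project(lf_hat, code)))     # residual shrinks toward 0
```

Because the problem is severely underdetermined (one 2-D measurement for S angular views), this loop only enforces measurement consistency; the paper's contribution is the learned spatial-angular prior that selects a plausible LF among the many consistent ones.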
Research Area(s)
- Light Field, Coded Aperture, Deep Learning, Probability Space
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Learning Spatial-angular Fusion for Compressive Light Field Imaging in a Cycle-consistent Framework. / Lyu, Xianqiang; Zhu, Zhiyu; Guo, Mantang et al.
MM '21: Proceedings of the 29th ACM International Conference on Multimedia. New York, NY: Association for Computing Machinery, 2021. p. 4613-4621 (MM - Proceedings of the ACM International Conference on Multimedia).