Light Field Super-resolution via Attention-Guided Fusion of Hybrid Lenses
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Jin, Jing; Hou, Junhui; Chen, Jie et al.
Detail(s)
Original language | English
---|---
Title of host publication | MM'20
Subtitle of host publication | Proceedings of the 28th ACM International Conference on Multimedia
Publisher | Association for Computing Machinery
Pages | 193-201
ISBN (print) | 9781450379885
Publication status | Published - Oct 2020
Publication series
Name | MM - Proceedings of the ACM International Conference on Multimedia
---|---
Conference
Title | 28th ACM International Conference on Multimedia (MM 2020)
---|---
Location | Virtual
Place | United States
City | Seattle
Period | 12 - 16 October 2020
Abstract
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, consisting of a high-resolution camera surrounded by multiple low-resolution cameras. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation; the other constructs an intermediate estimation that maintains the high-frequency textures by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations via learned attention maps, leading to the final high-resolution LF image. Extensive experiments demonstrate the significant superiority of our approach over state-of-the-art methods: our method not only improves the PSNR by more than 2 dB, but also preserves the LF structure much better. To the best of our knowledge, this is the first end-to-end deep learning method for reconstructing a high-resolution LF image from a hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and also be beneficial to LF data storage and transmission. The code is available at https://github.com/jingjin25/LFhybridSR-Fusion.
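To make the fusion idea in the abstract concrete, below is a minimal PyTorch-style sketch of attention-guided fusion of the two intermediate high-resolution estimates. It is not the authors' implementation (that is in the linked repository); the module name, layer widths, and tensor shapes are hypothetical, and only the core operation is shown: a small network predicts per-pixel attention weights that blend the spatially consistent regression estimate with the texture-preserving warping estimate.

```python
# Minimal sketch (assumed names/shapes), not the authors' released code.
import torch
import torch.nn as nn


class AttentionGuidedFusion(nn.Module):
    """Fuse two intermediate HR estimates of one sub-aperture view via learned attention maps."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        # A small CNN predicts a per-pixel, per-channel attention map from the two estimates.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # weights in [0, 1]
        )

    def forward(self, est_regression: torch.Tensor, est_warping: torch.Tensor) -> torch.Tensor:
        # est_regression: spatially consistent estimate from the regression branch.
        # est_warping: texture-preserving estimate propagated from the HR view.
        # Both tensors are (batch, channels, height, width).
        attn = self.attention(torch.cat([est_regression, est_warping], dim=1))
        # Convex combination: where attn is high, trust the regression branch;
        # elsewhere, keep the propagated high-frequency textures.
        return attn * est_regression + (1.0 - attn) * est_warping


if __name__ == "__main__":
    fusion = AttentionGuidedFusion()
    e1 = torch.rand(1, 3, 128, 128)  # hypothetical intermediate estimate 1
    e2 = torch.rand(1, 3, 128, 128)  # hypothetical intermediate estimate 2
    print(fusion(e1, e2).shape)      # torch.Size([1, 3, 128, 128])
```

In this sketch the attention map acts as a per-pixel gate, which is one straightforward way to realize the "leverage the advantages of the two intermediate estimations via learned attention maps" step; the paper's actual network design may differ.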
Research Area(s)
- Light field, hybrid imaging system, deep learning, attention
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Light Field Super-resolution via Attention-Guided Fusion of Hybrid Lenses. / Jin, Jing; Hou, Junhui; Chen, Jie et al.
MM'20: Proceedings of the 28th ACM International Conference on Multimedia. Association for Computing Machinery, 2020. p. 193-201 (MM - Proceedings of the ACM International Conference on Multimedia).