Light Field Spatial Super-Resolution Using Deep Efficient Spatial-Angular Separable Convolution

Research output: Journal Publications and Reviews, RGC 21 - Publication in refereed journal, peer-reviewed

158 Scopus Citations

Author(s)

  • Henry Wing Fung Yeung
  • Junhui Hou
  • Xiaoming Chen
  • Jie Chen
  • Zhibo Chen
  • Yuk Ying Chung

Detail(s)

Original language: English
Pages (from-to): 2319-2330
Journal / Publication: IEEE Transactions on Image Processing
Volume: 28
Issue number: 5
Online published: 5 Dec 2018
Publication status: Published - May 2019

Abstract

Light field (LF) photography is an emerging paradigm for capturing more immersive representations of the real world. However, owing to the inherent trade-off between the angular and spatial dimensions, the spatial resolution of LF images captured by commercial micro-lens-based LF cameras is significantly constrained. In this paper, we propose effective and efficient end-to-end convolutional neural network models for spatially super-resolving LF images. Specifically, the proposed models have an hourglass shape, which allows feature extraction to be performed at the low-resolution level to save both computational and memory costs. To make full use of the 4D structure information of LF data in both the spatial and angular domains, we propose to use 4D convolution to characterize the relationship among pixels. Moreover, as an approximation of 4D convolution, we also propose spatial-angular separable (SAS) convolutions for more computation- and memory-efficient extraction of spatial-angular joint features. Extensive experimental results on 57 test LF images with various challenging natural scenes show significant advantages of the proposed models over state-of-the-art methods: an average PSNR gain of more than 3.0 dB and better visual quality are achieved, and our methods better preserve the LF structure of the super-resolved LF images, which is highly desirable for subsequent applications. In addition, the SAS convolution-based model achieves a threefold speed-up with only a negligible decrease in reconstruction quality compared with the 4D convolution-based one. The source code of our method is available online.
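The spatial-angular separable (SAS) convolution described in the abstract factorizes a full 4D convolution into a 2D convolution over the spatial dimensions followed by a 2D convolution over the angular dimensions. Below is a minimal illustrative sketch of such a block in PyTorch, assuming a light-field tensor laid out as (batch, channels, U, V, H, W); the class and argument names (SASConv, in_ch, out_ch) are hypothetical and not taken from the authors' released code.

# Minimal sketch of a spatial-angular separable (SAS) convolution block.
# Assumes a light-field tensor of shape (batch, channels, U, V, H, W),
# with angular size U x V and spatial size H x W. Illustrative only.
import torch
import torch.nn as nn


class SASConv(nn.Module):
    def __init__(self, in_ch, out_ch, spatial_kernel=3, angular_kernel=3):
        super().__init__()
        # 2D convolution over the spatial dimensions (H, W), shared across views
        self.spatial_conv = nn.Conv2d(in_ch, out_ch, spatial_kernel,
                                      padding=spatial_kernel // 2)
        # 2D convolution over the angular dimensions (U, V), shared across pixels
        self.angular_conv = nn.Conv2d(out_ch, out_ch, angular_kernel,
                                      padding=angular_kernel // 2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        b, c, u, v, h, w = x.shape
        # Fold the angular views into the batch and convolve spatially
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        x = self.relu(self.spatial_conv(x))
        c2 = x.shape[1]
        # Fold the spatial positions into the batch and convolve angularly
        x = x.reshape(b, u, v, c2, h, w).permute(0, 4, 5, 3, 1, 2)
        x = x.reshape(b * h * w, c2, u, v)
        x = self.relu(self.angular_conv(x))
        # Restore the (batch, channels, U, V, H, W) layout
        x = x.reshape(b, h, w, c2, u, v).permute(0, 3, 4, 5, 1, 2)
        return x


# Example: an 8x8-view light field of 32x32 patches with 16 feature channels
lf = torch.randn(1, 16, 8, 8, 32, 32)
print(SASConv(16, 16)(lf).shape)  # torch.Size([1, 16, 8, 8, 32, 32])

Folding one pair of dimensions into the batch for each pass means every layer learns only two 2D kernels instead of one full 4D kernel, which is consistent with the memory and speed savings over 4D convolution reported in the abstract.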

Research Area(s)

  • Light field, super-resolution, convolutional neural networks

Citation Format(s)

Light Field Spatial Super-Resolution Using Deep Efficient Spatial-Angular Separable Convolution. / Yeung, Henry Wing Fung; Hou, Junhui; Chen, Xiaoming et al.
In: IEEE Transactions on Image Processing, Vol. 28, No. 5, 05.2019, p. 2319-2330.
