Content-aware Warping for View Synthesis

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

10 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 9486-9503
Journal / Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 8
Online published: 6 Feb 2023
Publication status: Published - Aug 2023

Abstract

Existing image-based rendering methods usually adopt a depth-based image warping operation to synthesize novel views. In this paper, we argue that the essential limitations of the traditional warping operation are its limited neighborhood and its purely distance-based interpolation weights. To address them, we propose content-aware warping, which adaptively learns the interpolation weights for pixels of a relatively large neighborhood from their contextual information via a lightweight neural network. Based on this learnable warping module, we propose a new end-to-end learning-based framework for novel view synthesis from a set of input source views, in which two additional modules, namely confidence-based blending and feature-assistant spatial refinement, are introduced to handle the occlusion issue and to capture the spatial correlation among pixels of the synthesized view, respectively. In addition, we propose a weight-smoothness loss term to regularize the network. Experimental results on light field datasets with wide baselines and on multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually. The source code will be publicly available at https://github.com/MantangGuo/CW4VS. © 2023 IEEE.
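The abstract describes content-aware warping only at a high level; the sketch below is a minimal, hypothetical PyTorch illustration of the core idea, not the authors' implementation (their code is at the GitHub link above). It assumes the source view has already been coarsely aligned via depth, and it replaces fixed distance-based interpolation weights with weights predicted from contextual features over a KxK neighborhood. The class name ContentAwareWarp, the tensor shapes, and the 1x1-convolution weight network are all illustrative assumptions.

    # Illustrative sketch: a lightweight network predicts per-neighbor
    # interpolation weights from context, instead of fixing them by distance.
    # All names and shapes are assumptions, not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContentAwareWarp(nn.Module):
        def __init__(self, feat_ch: int = 16, k: int = 5):
            super().__init__()
            self.k = k
            # Lightweight network mapping the contextual features of the
            # KxK neighborhood to one interpolation weight per neighbor.
            self.weight_net = nn.Sequential(
                nn.Conv2d(feat_ch * k * k, 64, 1), nn.ReLU(inplace=True),
                nn.Conv2d(64, k * k, 1),
            )

        def forward(self, src: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
            # src:  (B, C, H, W) source view, assumed coarsely aligned by depth
            # feat: (B, F, H, W) contextual features used to predict weights
            b, c, h, w = src.shape
            # Unfold gathers the KxK neighborhood around every pixel.
            nb_src = F.unfold(src, self.k, padding=self.k // 2)    # (B, C*k*k, H*W)
            nb_feat = F.unfold(feat, self.k, padding=self.k // 2)  # (B, F*k*k, H*W)
            nb_feat = nb_feat.view(b, -1, h, w)
            # Predict a weight per neighbor; softmax keeps the result a
            # convex combination, as in classical bilinear interpolation.
            weights = self.weight_net(nb_feat)                     # (B, k*k, H, W)
            weights = F.softmax(weights, dim=1).view(b, 1, self.k * self.k, h * w)
            nb_src = nb_src.view(b, c, self.k * self.k, h * w)
            out = (weights * nb_src).sum(dim=2)                    # (B, C, H*W)
            return out.view(b, c, h, w)

For example, with src = torch.randn(1, 3, 64, 64) and feat = torch.randn(1, 16, 64, 64), ContentAwareWarp(16, 5)(src, feat) returns a (1, 3, 64, 64) tensor. The confidence-based blending, spatial refinement, and weight-smoothness loss mentioned in the abstract are separate components not covered by this sketch.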

Research Area(s)

  • View synthesis, light field, deep learning, image warping, depth/disparity

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.