
View-spatial-temporal post-refinement for view synthesis in 3D video systems

Linwei Zhu, Yun Zhang*, Mei Yu, Gangyi Jiang, Sam Kwong

*Corresponding author for this work

Research output: Journal Publications and Reviews; RGC 21 - Publication in refereed journal; peer-review

Abstract

Depth image based rendering is one of the key techniques for realizing view synthesis in three-dimensional television and free-viewpoint television, which provide high-quality and immersive experiences to end viewers. However, artifacts in rendered images, including holes caused by occlusion/disclosure and boundary artifacts, may degrade both subjective and objective image quality. To handle these problems and improve the quality of rendered images, we present a novel view-spatial-temporal post-refinement method for view synthesis, in which new hole-filling and boundary artifact removal techniques are proposed. In addition, we propose an optimal reference frame selection algorithm for a better trade-off between computational complexity and rendered image quality. Experimental results show that the proposed method achieves a peak signal-to-noise ratio gain of 0.94 dB on average for multiview video test sequences when compared with the benchmark view synthesis reference software. The subjective quality of the rendered images is also improved. © 2013 Elsevier B.V.
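To illustrate the kind of hole problem the abstract refers to: when a reference view is warped to a virtual viewpoint using its depth map, pixels disoccluded at the new viewpoint have no source data and appear as holes. A minimal spatial hole-filling sketch (not the paper's view-spatial-temporal method, which also exploits inter-view and temporal references and depth-guided background selection) simply propagates the nearest valid neighbor along each scanline; the function name, hole marker, and scanline-only scope are illustrative assumptions:

```python
def fill_holes_row(row, hole=-1):
    """Fill hole pixels in one scanline of a rendered image by
    copying the nearest valid neighbor (illustrative only; real
    DIBR hole-filling prefers the background side using depth,
    and may draw on temporal/inter-view references)."""
    out = list(row)
    n = len(out)
    for i in range(n):
        if out[i] == hole:
            left = right = None
            # nearest valid pixel to the left (already-filled values count)
            for j in range(i - 1, -1, -1):
                if out[j] != hole:
                    left = out[j]
                    break
            # nearest valid pixel to the right (original values only)
            for j in range(i + 1, n):
                if row[j] != hole:
                    right = row[j]
                    break
            # prefer the left neighbor, fall back to the right
            out[i] = left if left is not None else right
    return out

print(fill_holes_row([10, -1, -1, 30]))  # → [10, 10, 10, 30]
```

A full refinement pipeline would apply such filling only after trying to recover hole pixels from other views and neighboring frames, which is the trade-off the paper's reference frame selection algorithm addresses.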
Original language: English
Pages (from-to): 1342-1357
Journal: Signal Processing: Image Communication
Volume: 28
Issue number: 10
Online published: 29 Aug 2013
Publication status: Published - Nov 2013

Research Keywords

  • Boundary artifact
  • Depth image based rendering
  • Hole-filling
  • Multiview video
  • View synthesis
