View-spatial-temporal post-refinement for view synthesis in 3D video systems

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

10 Scopus Citations

Author(s)

Zhu, Linwei; Zhang, Yun; Yu, Mei et al.

Related Research Unit(s)

Detail(s)

Original language: English
Pages (from-to): 1342-1357
Journal / Publication: Signal Processing: Image Communication
Volume: 28
Issue number: 10
Online published: 29 Aug 2013
Publication status: Published - Nov 2013

Abstract

Depth image based rendering is one of the key techniques for realizing view synthesis in three-dimensional television and free-viewpoint television, which provide high-quality, immersive experiences to end viewers. However, artifacts in the rendered images, including holes caused by occlusion/disocclusion and boundary artifacts, may degrade both subjective and objective image quality. To address these problems and improve the quality of rendered images, we present a novel view-spatial-temporal post-refinement method for view synthesis, in which new hole-filling and boundary artifact removal techniques are proposed. In addition, we propose an optimal reference frame selection algorithm for a better trade-off between computational complexity and rendered image quality. Experimental results show that the proposed method achieves a peak signal-to-noise ratio gain of 0.94 dB on average for multiview video test sequences compared with the benchmark view synthesis reference software, and the subjective quality of the rendered images is also improved. © 2013 Elsevier B.V.
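The abstract refers to hole filling that draws on both temporal references and spatial neighbourhoods, evaluated by PSNR against a reference view. The sketch below is only a minimal illustration of that general idea under stated assumptions; it is not the paper's view-spatial-temporal method nor the view synthesis reference software. It is a toy NumPy routine that first copies hole pixels from a co-located reference frame and then averages valid neighbours, with a PSNR helper showing the kind of objective measurement the paper reports. The function names (fill_holes, psnr), the boolean hole-mask convention and the demo data are hypothetical.

```python
# Minimal illustrative sketch (NOT the authors' algorithm): a toy
# temporal-then-spatial hole-filling pass for a DIBR-rendered view,
# plus a PSNR helper. All names and conventions here are assumptions
# made purely for illustration.
import numpy as np


def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)


def fill_holes(rendered, hole_mask, reference=None, ref_mask=None):
    """Fill hole pixels of a rendered view.

    rendered  : (H, W) float array, the warped (rendered) image.
    hole_mask : (H, W) bool array, True where warping left no pixel.
    reference : optional (H, W) float array used as a temporal candidate
                (e.g. a previously rendered frame) before spatial filling.
    ref_mask  : optional bool array marking holes in the reference itself.
    """
    out = rendered.astype(np.float64).copy()
    remaining = hole_mask.copy()

    # Temporal candidate: copy co-located pixels from the reference frame
    # wherever that frame has valid content.
    if reference is not None:
        usable = remaining if ref_mask is None else remaining & ~ref_mask
        out[usable] = reference[usable]
        remaining = remaining & ~usable

    # Spatial fallback: repeatedly average valid 8-connected neighbours,
    # so holes are filled inward from their borders.
    while remaining.any():
        progressed = False
        for y, x in zip(*np.nonzero(remaining)):
            y0, y1 = max(y - 1, 0), min(y + 2, out.shape[0])
            x0, x1 = max(x - 1, 0), min(x + 2, out.shape[1])
            valid = ~remaining[y0:y1, x0:x1]
            if valid.any():
                out[y, x] = out[y0:y1, x0:x1][valid].mean()
                remaining[y, x] = False
                progressed = True
        if not progressed:  # no valid neighbours anywhere; give up
            break
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(0.0, 255.0, size=(64, 64))        # toy "ground truth" view
    holes = rng.random((64, 64)) < 0.05                    # 5% missing pixels
    warped = np.where(holes, 0.0, clean)                   # toy rendered view with holes
    temporal_ref = clean + rng.normal(0.0, 2.0, clean.shape)  # noisy temporal reference
    filled = fill_holes(warped, holes, reference=temporal_ref)
    print(f"PSNR before: {psnr(warped, clean):.2f} dB, "
          f"after: {psnr(filled, clean):.2f} dB")
```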

Research Area(s)

  • Boundary artifact, Depth image based rendering, Hole-filling, Multiview video, View synthesis

Citation Format(s)

View-spatial-temporal post-refinement for view synthesis in 3D video systems. / Zhu, Linwei; Zhang, Yun; Yu, Mei et al.
In: Signal Processing: Image Communication, Vol. 28, No. 10, 11.2013, p. 1342-1357.
