
Efficient multiview depth coding optimization based on allowable depth distortion in view synthesis

Yun Zhang, Sam Kwong, Sudeng Hu, Chung-Chieh Jay Kuo

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Depth video is used as the geometrical information of 3D world scenes in 3D view synthesis. Due to the mismatch between the number of depth levels and disparity levels in view synthesis, the relationship between depth distortion and rendering position error can be modeled as a many-to-one mapping function, in which different depth distortion values may be projected to the same geometrical distortion in the synthesized virtual view image. Based on this property, we present an allowable depth distortion (ADD) model for 3D depth map coding. Then, an ADD-based rate-distortion model is proposed for the mode decision and motion/disparity estimation modules, aiming to minimize view synthesis distortion under a given bit rate constraint. In addition, an ADD-based depth bit reduction algorithm is proposed to further reduce the depth bit rate while maintaining the quality of the synthesized images. Experimental results in intra depth coding show that the proposed overall algorithm achieves Bjontegaard delta peak signal-to-noise ratio gains of 1.58 and 2.68 dB on average for half- and integer-pixel rendering precisions, respectively. In addition, the proposed algorithms are also highly efficient for inter depth coding when evaluated with different metrics.
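The many-to-one mapping behind the ADD model can be illustrated with a small sketch. Assuming a standard 8-bit depth map and a linear depth-to-disparity conversion with integer-pixel rendering precision (the function name, camera parameters `f_b`, `z_near`, and `z_far`, and their values below are hypothetical, not the paper's experimental setup), several consecutive depth levels round to the same disparity, so coding distortion that stays inside that interval causes no geometric error in the synthesized view:

```python
# Illustrative sketch of the many-to-one depth-to-disparity mapping that
# motivates allowable depth distortion (ADD). All parameter values are
# hypothetical examples, not taken from the paper.

def depth_to_disparity(v, f_b=1000.0, z_near=10.0, z_far=100.0):
    """Map an 8-bit depth level v (0..255) to an integer-pixel disparity.

    f_b stands in for focal length times camera baseline; z_near and
    z_far bound the scene depth range. With these values, 256 depth
    levels collapse onto only 91 disparity levels (10..100).
    """
    inv_z = (v / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return round(f_b * inv_z)

def allowable_interval(v):
    """Contiguous range of depth levels sharing v's rounded disparity.

    Any depth coding error that keeps the reconstructed value inside
    [lo, hi] projects to the same rendering position (the ADD idea).
    """
    d = depth_to_disparity(v)
    lo = v
    while lo > 0 and depth_to_disparity(lo - 1) == d:
        lo -= 1
    hi = v
    while hi < 255 and depth_to_disparity(hi + 1) == d:
        hi += 1
    return lo, hi
```

For example, `allowable_interval(128)` returns an interval of more than one depth level, showing that a nonzero depth distortion can still yield zero synthesized-view distortion; a rate-distortion optimizer can exploit this slack to spend fewer bits on depth.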
Original language: English
Article number: 2355715
Pages (from-to): 4879-4892
Journal: IEEE Transactions on Image Processing
Volume: 23
Issue number: 11
Online published: 8 Sept 2014
DOIs
Publication status: Published - Nov 2014

Research Keywords

  • 3D video
  • Allowable depth distortion
  • Depth coding
  • Depth no-synthesis-error
  • Rate-distortion optimization
  • View synthesis

