Block-matching translation and zoom motion-compensated prediction by sub-sampling

Ka-Man Wong, Lai-Man Po, Kwok-Wai Cheung, Ka-Ho Ng

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

4 Citations (Scopus)

Abstract

In modern video coding standards, motion compensated prediction (MCP) plays a key role in achieving video compression efficiency. Most standards use block-matching techniques and assume that motion is purely translational. Attempts toward more general motion models are usually too complex to be practical in the near future. In this paper, a new Block-Matching Translation and Zoom Motion-Compensated Prediction (BTZMP) is proposed to extend the pure translational model to a more general model with zooming in a practical way. It accounts for camera zooming and for object motions that appear as zooming when projected onto the video frames. The proposed BTZMP significantly improves motion compensated prediction. Experimental results show that BTZMP can provide a prediction gain of up to 1.09 dB over conventional sub-pixel block-matching MCP. In addition, BTZMP can be combined with the Multiple Reference Frames (MRF) technique for further improvement, as evidenced by prediction gains of up to 2.08 dB in the empirical simulations. ©2009 IEEE.
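The abstract does not spell out the search procedure, so the following is only an illustrative sketch of the general idea: standard translational block matching extended with a small set of zoom factors, where each zoom candidate is realised by (sub-)sampling the reference frame on a scaled grid. The function names, the candidate zoom set, the SAD cost, and the nearest-neighbour sampling are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def zoom_block(ref, cy, cx, size, zoom):
    """Sample a size x size block from `ref` centred at (cy, cx), reading
    samples on a grid scaled by `zoom` (nearest-neighbour).  zoom > 1 reads
    a wider area of the reference (zoom-out prediction), zoom < 1 a narrower one."""
    half = size / 2.0
    ys = np.clip(np.round(cy + (np.arange(size) - half + 0.5) * zoom).astype(int),
                 0, ref.shape[0] - 1)
    xs = np.clip(np.round(cx + (np.arange(size) - half + 0.5) * zoom).astype(int),
                 0, ref.shape[1] - 1)
    return ref[np.ix_(ys, xs)]

def btz_search(cur_block, ref, by, bx, search=8, zooms=(0.95, 1.0, 1.05)):
    """Exhaustive translation + zoom search for one block at (by, bx).
    Returns (best_dy, best_dx, best_zoom, best_sad)."""
    size = cur_block.shape[0]
    cy0, cx0 = by + size / 2.0, bx + size / 2.0
    best = (0, 0, 1.0, float("inf"))
    for z in zooms:                      # zoom candidates realised by sub-sampling
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = zoom_block(ref, cy0 + dy, cx0 + dx, size, z)
                cost = sad(cur_block, cand)
                if cost < best[3]:
                    best = (dy, dx, z, cost)
    return best
```

In a full codec this search would be paired with sub-pixel refinement, multiple reference frames and rate-distortion mode selection; the sketch only shows how zoom candidates can enter an otherwise standard block-matching loop.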
Original language: English
Title of host publication: Proceedings - International Conference on Image Processing, ICIP
Publisher: IEEE Computer Society
Pages: 1597-1600
ISBN (Print): 9781424456543
DOIs
Publication status: Published - 2009
Event: 2009 IEEE International Conference on Image Processing, ICIP 2009 - Cairo, Egypt
Duration: 7 Nov 2009 – 10 Nov 2009

Publication series

Name
ISSN (Print): 1522-4880

Conference

Conference: 2009 IEEE International Conference on Image Processing, ICIP 2009
Place: Egypt
City: Cairo
Period: 7/11/09 – 10/11/09

Research Keywords

  • Motion compensated prediction
  • Translation and zoom motion
  • Video coding
