Block-Matching Translation and Zoom Motion-Compensated Prediction For Next Generation Video Coding Standards
Project: Research
Researcher(s)
Description
In modern video coding standards such as H.26X and MPEG-X, motion-compensated prediction (MCP) plays a vital role in achieving high compression efficiency. All of them use block-matching translational motion compensation, mainly because of its efficiency and simplicity in both hardware and software implementation. Over the last two decades, the coding-efficiency gains of the video coding standards were obtained mainly by improving the accuracy of this purely translational MCP through sub-pixel motion vectors, multiple reference frames and variable block sizes. Translational motion vectors, however, cannot effectively model complex motion such as zoom, rotation and local deformation, and attempts at more general motion models have usually been too complex to be practical in the near future.

In this project, block-matching translation and zoom motion-compensated prediction (BTZMP) techniques will be developed to extend the dominant translational motion model. The translational model assumes rigid bodies moving in a 2-dimensional plane; practical video content, however, is better modelled as rigid bodies moving in 3-dimensional space. By combining translation and zoom motion components, the new BTZMP can better match real motion. In addition, this model is simple and, using a block-matching and multi-frame implementation, can be easily deployed within the existing video coding framework. The results of this project may have a significant impact on further enhancing the rate-distortion performance of the current video coding standards, and can serve as a core technology for the next-generation video coding standard H.265.

Detail(s)
| Project number | 9041501 |
| --- | --- |
| Grant type | GRF |
| Status | Finished |
| Effective start/end date | 1/01/10 → 27/03/13 |