TY - GEN
T1 - Center-biased frame selection algorithms for fast multi-frame motion estimation in H.264
AU - Ting, Chi-Wang
AU - Po, Lai-Man
AU - Cheung, Chun-Ho
PY - 2003
Y1 - 2003
N2 - The upcoming video coding standard, H.264, allows motion estimation to be performed on multiple reference frames. This new feature significantly improves the prediction accuracy of inter-coded blocks, but it is extremely computationally intensive. The reference software adopts a full-search scheme, and the complexity of multi-frame motion estimation increases linearly with the number of reference frames used. However, the distortion gain provided by each reference frame varies with the motion content of the video sequence, so it is inefficient to search through all the candidate frames. In this paper, a novel center-biased frame selection method is proposed to speed up the multi-frame motion estimation process in H.264. We apply a center-biased frame selection path to identify the final reference frame from all the candidates. Simulation results show that the proposed method consistently saves about 77% of the computation while maintaining picture quality similar to that of full search. © 2003 IEEE.
AB - The upcoming video coding standard, H.264, allows motion estimation to be performed on multiple reference frames. This new feature significantly improves the prediction accuracy of inter-coded blocks, but it is extremely computationally intensive. The reference software adopts a full-search scheme, and the complexity of multi-frame motion estimation increases linearly with the number of reference frames used. However, the distortion gain provided by each reference frame varies with the motion content of the video sequence, so it is inefficient to search through all the candidate frames. In this paper, a novel center-biased frame selection method is proposed to speed up the multi-frame motion estimation process in H.264. We apply a center-biased frame selection path to identify the final reference frame from all the candidates. Simulation results show that the proposed method consistently saves about 77% of the computation while maintaining picture quality similar to that of full search. © 2003 IEEE.
U2 - 10.1109/ICNNSP.2003.1281099
DO - 10.1109/ICNNSP.2003.1281099
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 0780377028
SN - 9780780377028
VL - 2
SP - 1258
EP - 1261
BT - Proceedings of 2003 International Conference on Neural Networks and Signal Processing, ICNNSP'03
T2 - 2003 International Conference on Neural Networks and Signal Processing, ICNNSP'03
Y2 - 14 December 2003 through 17 December 2003
ER -