Reinforcement learning based coding unit early termination algorithm for high efficiency video coding

Research output: Journal Publications and Reviews (Publication in refereed journal)

6 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 276-286
Journal / Publication: Journal of Visual Communication and Image Representation
Volume: 60
Online published: 20 Feb 2019
Publication status: Published - Apr 2019

Abstract

In this paper, we propose a Reinforcement Learning (RL) based Coding Unit (CU) early termination algorithm for High Efficiency Video Coding (HEVC). RL is utilized to learn a depth-independent CU early termination classifier for low-complexity video coding. Firstly, we model the CU decision process as a Markov Decision Process (MDP), exploiting the Markov property of CU decision. Secondly, based on this MDP, a depth-independent CU early termination classifier is learned from trajectories of CU decisions across different depths with an end-to-end actor-critic RL algorithm. Finally, a CU decision early termination algorithm is introduced with the learned classifier to reduce the computational complexity of CU decision. We implement the proposed scheme in an RL-based video encoder with two different neural network structures, which reduce video coding complexity by 34.34% and 43.33%, respectively. The corresponding average Bjøntegaard delta peak signal-to-noise ratio and Bjøntegaard delta bit rate results are −0.033 dB and 0.85%, and −0.099 dB and 2.56%, under the low delay B main configuration, compared with the HEVC test model version 16.5.
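The actor-critic idea in the abstract can be sketched as a toy example: treat each CU as a one-step decision between "terminate" and "split", train a linear softmax actor with a linear critic as a baseline, and use the learned policy as the early-termination classifier. All features, rewards, network shapes, and hyperparameters below are illustrative assumptions, not the paper's actual design (the paper uses neural network classifiers and real rate-distortion/complexity statistics).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a state is a small feature vector summarizing a CU
# at some depth (e.g. texture variance), plus a constant bias term.
# Action 0 = terminate (stop splitting), action 1 = split further.
N_FEAT = 4  # 3 features + 1 bias

def policy_probs(theta, s):
    """Softmax actor over {terminate, split} from linear logits."""
    logits = theta @ s
    logits -= logits.max()          # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def value(w, s):
    """Linear state-value estimate (the critic / baseline)."""
    return w @ s

def actor_critic_step(theta, w, s, a, r, lr_a=0.05, lr_c=0.05):
    """One actor-critic update for a one-step (terminal) decision."""
    td_error = r - value(w, s)      # advantage estimate
    w += lr_c * td_error * s        # critic: TD(0) toward the return
    probs = policy_probs(theta, s)
    grad_log = -probs[:, None] * s[None, :]  # d log pi(a|s) / d theta
    grad_log[a] += s
    theta += lr_a * td_error * grad_log      # actor: policy gradient
    return theta, w

def sample_cu():
    """Toy CUs: 'smooth' ones (low feature energy) should terminate,
    'complex' ones should split; stands in for real CU statistics."""
    complex_cu = rng.random() < 0.5
    base = 1.0 if complex_cu else 0.1
    f = base + 0.05 * rng.standard_normal(N_FEAT - 1)
    return np.append(f, 1.0), complex_cu

theta = np.zeros((2, N_FEAT))
w = np.zeros(N_FEAT)
for _ in range(5000):
    s, complex_cu = sample_cu()
    a = rng.choice(2, p=policy_probs(theta, s))
    # Reward proxy for the RD-cost/complexity trade-off: +1 for the
    # right decision (split complex, terminate smooth), -1 otherwise.
    r = 1.0 if (a == 1) == complex_cu else -1.0
    theta, w = actor_critic_step(theta, w, s, a, r)
```

In an encoder, the learned policy would be queried at every CU depth: if the "terminate" probability dominates, the rate-distortion search for further splits is skipped, which is where the complexity saving comes from.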

Research Area(s)

  • Actor-critic, Coding tree unit, Early termination, High efficiency video coding, Markov decision process, Reinforcement learning
