Computational Complexity Optimization for High Efficiency Video Coding


Student thesis: Doctoral Thesis



Award date: 20 Aug 2018


With the development of hardware and internet technologies, video data has become ubiquitous in our daily lives. At the same time, High Definition (HD) and Ultra High Definition (UHD) videos are increasingly popular for their better visual experience. These changes have resulted in dramatic growth of the video data volume, which continually challenges existing video transmission and storage systems. Consequently, high expectations have been placed on video coding. Video coding is a data compression technology that removes redundant information in video data to reduce its volume. The coded video data can then be restored with limited, or even no, information loss.

A video coding standard specifies a set of recognized video coding tools. By following the same standard, coded video data can be correctly decoded by any user. The development of video coding standards has spanned more than 30 years. By introducing more coding patterns to be assessed and a more flexible coding structure, the latest video coding standard, High Efficiency Video Coding (HEVC), significantly improves coding performance compared with H.264. However, these new coding techniques also dramatically increase the coding complexity of HEVC. High coding complexity demands substantial computing resources, and it hinders the adoption of HEVC in real-time and power-limited video coding applications. Research on computational complexity optimization began with the standardization of H.264. Generally speaking, this research area comprises two branches. The first is fast decision, which reduces coding complexity by excluding some alternative coding patterns early, before they are assessed. It tries to keep the coding performance intact, but the amount of complexity reduction is usually limited and content-dependent. The second branch, complexity control, aims to trade off coding complexity against coding performance efficiently. Unlike fast decision, complexity control strives to reach a target complexity while minimizing the performance degradation.

In this thesis, both fast decision and complexity control for HEVC are investigated to optimize the coding complexity. For fast decision, a two-stage fast Coding Unit (CU) decision method is proposed for HEVC Inter coding. For complexity control, three methods are proposed: first, two complexity control methods for HEVC Inter coding; then, a Coding-Tree-Unit-level (CTU-level) complexity control method for HEVC Intra coding.

In the fast CU decision method, the Rate-Distortion (RD) cost of the merge mode is adopted as the feature in the first stage. Based on this feature, CUs are divided into three classes by two thresholds derived with the Bayesian method. Two operations, namely early CU pruning and early CU skipping, are then applied to two of the classes. For CUs in the remaining class, the RD cost of the merge mode is not effective for making early decisions. Hence, in the second stage, these CUs are encoded with all available modes to obtain the final RD cost. Finally, through probabilistic modeling, both the final RD cost and the coding information of neighboring CUs are considered to make decisions for these CUs.
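The first-stage three-way classification can be sketched as below. This is a minimal illustration, not the thesis implementation: the thresholds `t_low` and `t_high` stand in for the Bayesian-derived thresholds, and the mapping of low cost to pruning and high cost to skipping is an assumption for illustration.

```python
def classify_cu(merge_rd_cost, t_low, t_high):
    """Classify a CU by its merge-mode RD cost (illustrative sketch).

    Returns one of:
      'prune'     -- early CU pruning (assumed here: low cost, merge
                     mode is good enough, so further evaluation at
                     this depth is cut short)
      'skip'      -- early CU skipping (assumed here: high cost, this
                     depth is skipped in favor of sub-CUs)
      'undecided' -- deferred to the second stage, where all
                     available modes are evaluated
    """
    # t_low and t_high are placeholders for the thresholds the
    # thesis derives with the Bayesian method.
    if merge_rd_cost <= t_low:
        return 'prune'
    if merge_rd_cost >= t_high:
        return 'skip'
    return 'undecided'
```

CUs classified as `'undecided'` correspond to the remaining class, for which the merge-mode RD cost alone is not discriminative enough.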

The two complexity control schemes for HEVC Inter coding share three main steps: complexity estimation, complexity allocation, and coding process simplification. In complexity estimation, a logarithmic model is proposed to estimate the coding complexity of a CTU. The two schemes differ in the complexity allocation and coding process simplification steps. In the first scheme, the complexity budget is allocated to each CTU according to its estimated coding complexity, and the CTU depth range is then adjusted based on the allocated complexity. To optimize RD performance, the prediction performance of a CTU is considered when deriving the appropriate depth range. In the second scheme, the complexity budget is first allocated to each CTU, and the allocated budget is then further distributed over the CTU depths. All prediction modes are organized into four groups, which are sorted by their occurrence frequencies; according to the complexity allocated to each CTU depth, only some of the mode groups are assessed. In both schemes, a feedback-based error elimination mechanism promptly corrects the complexity error.
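The estimation and allocation steps can be sketched as follows. The logarithmic model form, its parameters `a` and `b`, and the proportional allocation rule are illustrative assumptions, not the fitted model or allocation policy from the thesis.

```python
import math

def estimate_ctu_complexity(feature, a=1.0, b=1.0):
    # Illustrative logarithmic model: estimated coding complexity
    # grows with the log of a content feature of the CTU. The model
    # form and coefficients a, b are placeholders.
    return a * math.log(1.0 + feature) + b

def allocate_budget(estimates, total_budget):
    # Assumed allocation rule: split the frame-level complexity
    # budget across CTUs in proportion to their estimates.
    total = sum(estimates)
    return [total_budget * e / total for e in estimates]
```

A feedback loop, as in the thesis, would compare the actual complexity spent on coded CTUs against their allocations and fold the difference back into the remaining budget.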

Because all Intra coding involves no Inter prediction, the existing complexity control methods for HEVC Inter coding cannot be applied to it. Hence, a CTU-level complexity control method for HEVC Intra coding is proposed. A linear model is first proposed to estimate the coding complexity of each CTU from its Sum of Absolute Transformed Difference (SATD). Then, some Prediction Unit (PU) sizes are excluded based on the complexity allocated to the CTU. The selection of PU sizes takes the PU size preference of a CTU into consideration to reduce the performance degradation.
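The Intra-coding steps above can be sketched as below. The linear-model coefficients, the per-size cost table, and the preference ordering are all illustrative assumptions; the thesis derives the preference from the CTU's characteristics.

```python
def estimate_intra_complexity(satd, k=0.5, c=10.0):
    # Illustrative linear model: complexity ~ k * SATD + c.
    # The coefficients k and c are placeholders, not fitted values.
    return k * satd + c

def select_pu_sizes(allocated, per_size_cost, preference):
    """Keep PU sizes in decreasing preference order until the
    complexity allocated to the CTU is exhausted (sketch).

    preference:    PU sizes ordered by the CTU's assumed preference.
    per_size_cost: assumed cost of assessing each PU size.
    """
    chosen, spent = [], 0.0
    for size in preference:
        cost = per_size_cost[size]
        # Always keep at least the most-preferred size so the CTU
        # can still be coded.
        if spent + cost > allocated and chosen:
            break
        chosen.append(size)
        spent += cost
    return chosen
```

Sizes not returned by `select_pu_sizes` are the excluded ones, so a tighter allocation prunes more of the less-preferred PU sizes first.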

The proposed methods will help promote the adoption of HEVC in various video applications. The fast CU decision method is useful for video storage and video transcoding. Because complexity control places more emphasis on complexity reduction, the proposed complexity control methods are better suited to real-time and power-limited video coding.