Learning Spatiotemporal Interactions for User-Generated Video Quality Assessment

Hanwei Zhu, Baoliang Chen, Lingyu Zhu, Shiqi Wang*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

29 Citations (Scopus)

Abstract

Distortions in the spatial and temporal domains have been identified as the dominant factors governing visual quality. Although both have been studied independently in deep learning-based user-generated content (UGC) video quality assessment (VQA), through frame-wise distortion estimation and temporal quality aggregation, much less work has been dedicated to integrating the two with deep representations. In this paper, we propose a SpatioTemporal Interactive VQA (STI-VQA) model built upon the philosophy that video distortion can be inferred from the integration of spatial characteristics and temporal motion along the flow of time. In particular, at each timestamp, the spatial distortion captured by feature statistics and the local motion captured by feature differences are extracted and fed to a transformer network for motion-aware interaction learning. Meanwhile, the information flow of spatial distortion from shallow to deep layers is constructed adaptively during temporal aggregation. The transformer network offers a distinct advantage for modeling long-range dependencies, leading to superior performance on UGC videos. Experimental results on five UGC video benchmarks demonstrate the effectiveness and efficiency of our STI-VQA model, and the source code will be available online at https://github.com/h4nwei/STI-VQA.
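For readers who want a concrete picture of the pipeline described above, the following is a minimal PyTorch sketch of the general idea: per-frame feature statistics (mean/std pooling of backbone feature maps) as a spatial-distortion cue, consecutive-frame feature differences as a local-motion cue, and a transformer encoder for temporal interaction. The backbone choice (ResNet-18), the fusion by addition, and all module names and dimensions are illustrative assumptions, not the authors' implementation; the official code is at https://github.com/h4nwei/STI-VQA.

# Minimal sketch of the spatiotemporal-interaction idea from the abstract.
# All design choices here (backbone, dimensions, fusion, depth) are
# illustrative assumptions, NOT the authors' STI-VQA code.
import torch
import torch.nn as nn
import torchvision.models as models


class SpatioTemporalInteractionSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Frame-level backbone; mean/std pooling of its feature maps serves
        # as the "feature statistics" proxy for spatial distortion.
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        feat_dim = 512  # channel count of the final ResNet-18 stage
        # mean + std statistics -> 2 * feat_dim features per frame
        self.spatial_proj = nn.Linear(2 * feat_dim, d_model)
        # Difference of statistics between consecutive frames as a
        # local-motion proxy.
        self.motion_proj = nn.Linear(2 * feat_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer,
                                              num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)

    def frame_stats(self, frames):
        # frames: (B*T, 3, H, W) -> per-frame mean/std over spatial dims
        fmap = self.features(frames)           # (B*T, C, h, w)
        mu = fmap.mean(dim=(2, 3))             # (B*T, C)
        sigma = fmap.std(dim=(2, 3))           # (B*T, C)
        return torch.cat([mu, sigma], dim=1)   # (B*T, 2C)

    def forward(self, video):
        # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        stats = self.frame_stats(video.flatten(0, 1)).view(b, t, -1)
        spatial = self.spatial_proj(stats)     # (B, T, d)
        # Temporal feature difference, padded so the length stays T
        # (the difference at t = 0 is zero).
        diff = stats - torch.cat([stats[:, :1], stats[:, :-1]], dim=1)
        motion = self.motion_proj(diff)        # (B, T, d)
        tokens = spatial + motion              # fuse the two cues per frame
        tokens = self.temporal(tokens)         # long-range interactions
        return self.head(tokens.mean(dim=1)).squeeze(-1)  # (B,) scores


if __name__ == "__main__":
    model = SpatioTemporalInteractionSketch()
    clip = torch.rand(2, 8, 3, 224, 224)  # two 8-frame clips
    print(model(clip).shape)              # torch.Size([2])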
Original language: English
Pages (from-to): 1031-1042
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 33
Issue number: 3
Online published: 21 Sept 2022
DOIs
Publication status: Published - Mar 2023

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62022002; in part by the Shenzhen Virtual University Park, the Science, Technology and Innovation Commission of Shenzhen Municipality, under Project 2021Szvup128; and in part by the Hong Kong Research Grants Council General Research Fund (GRF) under Grant 11203220.

Research Keywords

  • Distortion
  • Feature extraction
  • No-reference video quality assessment
  • Quality assessment
  • Spatiotemporal phenomena
  • Three-dimensional displays
  • Transformers
  • user-generated content
  • Video recording
  • vision transformer

RGC Funding Information

  • RGC-funded

