TY - GEN
T1 - U-Shaped Transformer-Based 360-Degree No Reference Image Quality Assessment
AU - Wei, Xuekai
AU - Zhou, Mingliang
AU - Kwong, Sam
PY - 2022/12
Y1 - 2022/12
N2 - Thanks to creative rendering and display techniques, 360-degree images can provide a more immersive and interactive experience for streaming users. However, such features make the perceptual characteristics of 360-degree images more complex than those of fixed-view images, making it impossible to achieve comprehensive and accurate image quality assessment (IQA) with a simple stack of pre-processing, post-processing, compression, and rendering tasks. To thoroughly learn global and local features in 360-degree images, reduce the complexity of multichannel neural network models, and simplify the training process, this paper proposes a user-aware joint architecture and an efficient transformer dedicated to 360-degree no-reference (NR) IQA. The input of the proposed method is a 360-degree cubic mapping projection (CMP) image. In addition, the proposed 360-degree NR IQA method includes a non-overlapping self-attentive selection module based on a dominant map and a feature extraction module based on a U-shaped transformer (U-former) to address perceptual region significance and projection distortion. The transformer-based architecture and a weighted averaging technique are jointly used to predict local perceptual quality. Experimental results obtained on widely used databases show that the proposed model outperforms other state-of-the-art methods in NR 360-degree image quality assessment. In addition, cross-database evaluation and ablation studies demonstrate the intrinsic robustness and generalization of the proposed model. © 2022 IEEE.
KW - 360-degree image
KW - image quality assessment
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85158830202&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85158830202&origin=recordpage
U2 - 10.1109/ICCICC57084.2022.10101609
DO - 10.1109/ICCICC57084.2022.10101609
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 978-1-6654-9085-6
T3 - Proceedings of IEEE International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC
SP - 229
EP - 233
BT - Proceedings of 2022 IEEE 21st International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2022
PB - IEEE
T2 - 21st IEEE International Conference on Cognitive Informatics and Cognitive Computing (ICCI*CC 2022)
Y2 - 8 December 2022 through 10 December 2022
ER -