TY - GEN
T1 - LEARNING TO BLINDLY ASSESS IMAGE QUALITY IN THE LABORATORY AND WILD
AU - Zhang, Weixia
AU - Ma, Kede
AU - Zhai, Guangtao
AU - Yang, Xiaokang
PY - 2020/10
Y1 - 2020/10
N2 - Computational models for blind image quality assessment (BIQA) are typically trained in well-controlled laboratory environments with limited generalizability to realistically distorted images. Similarly, BIQA models optimized for images captured in the wild cannot adequately handle synthetically distorted images. To address this cross-distortion-scenario challenge, we develop a BIQA model and an approach to training it on multiple IQA databases (covering different distortion scenarios) simultaneously. A key step in our approach is to create and combine image pairs within individual databases as the training set, which effectively bypasses the issue of perceptual scale realignment. We compute a continuous quality annotation for each pair from the corresponding human opinions, indicating the probability that one image has better perceptual quality than the other. We train a deep neural network for BIQA on this massive set of image pairs by minimizing the fidelity loss. Experiments on six IQA databases demonstrate that the model optimized with the proposed training strategy is effective in blindly assessing image quality in the laboratory and in the wild, outperforming previous BIQA methods by a large margin.
KW - Blind image quality assessment
KW - database combination
KW - deep neural networks
KW - fidelity loss
UR - http://www.scopus.com/inward/record.url?scp=85095363616&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85095363616&origin=recordpage
U2 - 10.1109/ICIP40778.2020.9191278
DO - 10.1109/ICIP40778.2020.9191278
M3 - RGC 32 - Refereed conference paper (with host publication)
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 111
EP - 115
BT - 2020 IEEE International Conference on Image Processing - Proceedings
PB - IEEE
T2 - 2020 IEEE International Conference on Image Processing, ICIP 2020
Y2 - 25 September 2020 through 28 September 2020
ER -