Perturbation-insensitive cross-domain image enhancement for low-quality face verification

Qianfen Jiao, Jian Zhong, Cheng Liu, Si Wu, Hau-San Wong*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

8 Citations (Scopus)

Abstract

Low-quality face images present two typical problems: perceptual distortion and a significant reduction in verification accuracy. Prevailing works address these problems through face image enhancement. Generally, they require low- and high-quality image pairs or facial priors, e.g., facial landmarks, which are relatively rare in real-world scenarios. In this paper, we introduce a fresh perspective on these problems by hypothesizing that there is a positive correlation between cross-domain image enhancement and adversarial defense. With the incorporation of cross-domain image enhancement, paired images and facial priors are no longer necessary. Furthermore, we introduce two types of adversarial perturbations, namely appearance and semantic perturbations, and defending against them addresses the two aforementioned problems: defending against appearance perturbations reduces perceptual distortion and improves image quality, while defending against semantic perturbations promotes identity preservation during enhancement, which improves verification accuracy. To this end, we propose a collaborative face enhancement module (COFEM) for face verification based on the two types of adversarial perturbation examples. COFEM incorporates three components. First, an adversarial example generator attacks high-quality images (source domain) in two different ways to obtain appearance and semantic perturbation examples. Next, an image enhancement network denoises these perturbation examples and enhances the quality of low-quality images (target domain). Finally, an image reconstruction network preserves the identity of the enhanced image so that it is consistent with that of the corresponding input. Unlike prevailing image enhancement models, which mainly focus on high perceptual quality, COFEM emphasizes identity-related feature preservation, which is vital to face verification. Combined with COFEM, we also design a face verification module to form a complete low-quality face verification approach. Extensive experiments demonstrate the effectiveness of our approach in improving both the quality of low-quality face images and verification accuracy.
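As an illustration only, and not the paper's implementation, the two perturbation types described in the abstract can be sketched as FGSM-style single-step attacks against two different objectives: a reconstruction loss standing in for "appearance" degradation, and an identity cross-entropy loss standing in for "semantic" (identity) corruption. All networks below are toy linear stand-ins for the paper's deep models; the function names, dimensions, and `eps` value are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumption: COFEM uses deep CNNs; linear maps keep
# this sketch self-contained and runnable).
D_PIX, D_EMB, N_ID = 16, 8, 4
W_cls = rng.normal(size=(N_ID, D_PIX))      # toy identity classifier
E = rng.normal(size=(D_EMB, D_PIX)) * 0.3   # toy encoder
D = rng.normal(size=(D_PIX, D_EMB)) * 0.3   # toy decoder

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def identity_loss(x, y):
    # cross-entropy of the toy identity classifier
    return -np.log(softmax(W_cls @ x)[y])

def appearance_loss(x):
    # autoencoder reconstruction error: a proxy for appearance/quality damage
    r = D @ (E @ x)
    return 0.5 * np.sum((r - x) ** 2)

def semantic_perturb(x, y, eps):
    # FGSM-style step increasing the identity loss; for a linear classifier
    # the input gradient of the cross-entropy is W^T (p - onehot(y))
    p = softmax(W_cls @ x)
    grad = W_cls.T @ (p - np.eye(N_ID)[y])
    return x + eps * np.sign(grad)

def appearance_perturb(x, eps):
    # FGSM-style step increasing the reconstruction loss;
    # grad of 0.5||DEx - x||^2 w.r.t. x is (DE - I)^T (DEx - x)
    r = D @ (E @ x)
    grad = (D @ E).T @ (r - x) - (r - x)
    return x + eps * np.sign(grad)

# A "high-quality source-domain image" (toy vector) and its identity label.
x = rng.normal(size=D_PIX)
y = 2
x_sem = semantic_perturb(x, y, eps=0.05)
x_app = appearance_perturb(x, eps=0.05)
```

Under this sketch, an enhancement network would then be trained to map `x_app` and `x_sem` back toward `x`, so that denoising appearance perturbations improves perceptual quality while denoising semantic perturbations keeps the identity embedding consistent with the input, mirroring the two defense objectives described above.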
Original language: English
Pages (from-to): 1183-1201
Journal: Information Sciences
Volume: 608
Online published: 7 Jul 2022
Publication status: Published - Aug 2022

Funding

The research of this paper has been supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Project No. CityU 11201220), in part by City University of Hong Kong (Project No. 7005675), in part by the National Natural Science Foundation of China (Project No. 62072189) and in part by the Natural Science Foundation of Guangdong Province (Project Nos. 2020A1515010484, 2022A1515011160).

Research Keywords

  • Low-quality face verification
  • Image enhancement
  • Domain adaptation
  • Generative adversarial net

