Data Driven Face Image Editing

Student thesis: Doctoral Thesis

Award date: 19 Mar 2018

Abstract

Image editing is an emerging field at the convergence of computer vision, image processing, and computer graphics. Its goal is to process an ordinary image to achieve a desired result. From the perspective of image sensing and aesthetics, applications range from image super-resolution to image style transfer. Among all images, face images receive particular attention because human eyes are highly sensitive to facial appearance. In this thesis, we propose a data-driven framework for face image editing that covers face hallucination, face style transfer, and face sketch synthesis.

Face hallucination aims to generate high-resolution face images from low-resolution inputs. Unlike generic image super-resolution methods, face hallucination exploits specific facial structures and textures, and therefore produces higher-quality face images than generic methods. We propose a two-stage method: we first generate the facial components of the input image using CNNs, and then synthesize fine-grained facial structures from the training data and add them back to the facial components. The generated components approximate the ground-truth global appearance and are further enhanced through detail recovery, which leads to superior performance on benchmark datasets.
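
The two-stage idea can be illustrated with a minimal NumPy sketch. Here a bicubic upsampler stands in for the component-generation CNNs, and a brute-force nearest-neighbour patch search stands in for the detail-synthesis step; the function names, parameters, and toy data are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy import ndimage

def upsample(lr, scale=4):
    """Stage 1 stand-in: bicubic upsampling (the thesis uses CNNs to
    generate facial components; this is only a placeholder)."""
    return ndimage.zoom(lr, scale, order=3)

def transfer_details(coarse, train_hr, patch=8, stride=8):
    """Stage 2 sketch: for each patch of the coarse estimate, find the most
    similar high-resolution training patch and add back its high-frequency
    residual (training patch minus its own smoothed version)."""
    out = coarse.copy()
    cands, residuals = [], []
    for img in train_hr:
        smooth = ndimage.gaussian_filter(img, sigma=2)
        for y in range(0, img.shape[0] - patch + 1, stride):
            for x in range(0, img.shape[1] - patch + 1, stride):
                cands.append(img[y:y+patch, x:x+patch].ravel())
                residuals.append((img - smooth)[y:y+patch, x:x+patch])
    cands = np.stack(cands)
    for y in range(0, coarse.shape[0] - patch + 1, stride):
        for x in range(0, coarse.shape[1] - patch + 1, stride):
            q = coarse[y:y+patch, x:x+patch].ravel()
            best = np.argmin(((cands - q) ** 2).sum(axis=1))
            out[y:y+patch, x:x+patch] += residuals[best]
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_hr = [rng.random((64, 64)) for _ in range(4)]  # toy "training faces"
    lr = rng.random((16, 16))                            # toy low-res input
    hr = transfer_details(upsample(lr, 4), train_hr)
    print(hr.shape)  # (64, 64)
```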

Face style transfer aims to transfer the style of a headshot photo to face images. We propose an algorithm that stylizes a face image using multiple exemplars containing different subjects in the same style. Patch correspondence between the input image and the exemplars is established with a Markov Random Field (MRF), which enables accurate local energy transfer via Laplacian stacks. Artifacts introduced by multiple exemplars around facial component boundaries are suppressed by an edge-preserving filter. Experiments show that our algorithm consistently produces visually pleasing results.
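
The Laplacian-stack energy transfer can be sketched as follows for a single pre-aligned exemplar; the MRF-based multi-exemplar correspondence and the edge-preserving boundary filtering are omitted here, and all names and parameters are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np
from scipy import ndimage

def laplacian_stack(img, levels=4):
    """Differences of progressively blurred copies, kept at full resolution,
    plus a low-frequency residual layer; the layers sum back to the image."""
    stack, prev = [], img
    for i in range(levels):
        blurred = ndimage.gaussian_filter(img, sigma=2 ** (i + 1))
        stack.append(prev - blurred)
        prev = blurred
    stack.append(prev)  # low-frequency residual
    return stack

def local_energy(layer, sigma):
    """Locally averaged squared response, used as the per-pixel energy."""
    return ndimage.gaussian_filter(layer ** 2, sigma) + 1e-8

def transfer_style(inp, exemplar, levels=4, gain_cap=2.8):
    """Single-exemplar simplification: match the local energy of each input
    stack layer to that of the aligned exemplar layer, then re-sum."""
    s_in = laplacian_stack(inp, levels)
    s_ex = laplacian_stack(exemplar, levels)
    out = np.zeros_like(inp)
    for i in range(levels):
        sigma = 2 ** (i + 1)
        gain = np.sqrt(local_energy(s_ex[i], sigma) / local_energy(s_in[i], sigma))
        out += s_in[i] * np.clip(gain, 0.0, gain_cap)  # cap avoids noise blow-up
    out += s_ex[-1]  # adopt the exemplar's low-frequency layer
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    photo = rng.random((128, 128))     # toy grayscale input
    exemplar = rng.random((128, 128))  # toy stylised exemplar, assumed aligned
    print(transfer_style(photo, exemplar).shape)
```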

Face sketch synthesis aims to generate a stylistic face sketch from an input image. Existing data-driven approaches struggle when the input image is captured under lighting conditions that differ from those of the training photos. The critical step that causes the failure is the search for similar patch candidates for each input image patch. We propose a fast preprocessing method that interactively adjusts the lighting of the training and input photos. Our method can be directly integrated into existing data-driven approaches to improve their robustness at negligible computational cost.
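
A simplified, non-interactive stand-in for such lighting preprocessing is plain luminance histogram matching, sketched below with NumPy; the thesis method adjusts lighting interactively, so this is only an assumed approximation of the idea, with illustrative names and toy data.

```python
import numpy as np

def match_lighting(source, reference, bins=256):
    """Monotone intensity remapping (histogram matching) that brings the
    source photo's luminance distribution close to the reference's.
    Both images are assumed to be grayscale in [0, 1]."""
    s_vals, s_counts = np.unique(np.round(source * (bins - 1)), return_counts=True)
    r_vals, r_counts = np.unique(np.round(reference * (bins - 1)), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # Map each source quantile to the reference intensity at the same quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    lut = np.interp(np.round(source * (bins - 1)), s_vals, mapped)
    return lut / (bins - 1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    dark_input = rng.random((64, 64)) * 0.4  # toy under-lit test photo
    training_photo = rng.random((64, 64))    # toy well-lit training photo
    adjusted = match_lighting(dark_input, training_photo)
    print(adjusted.mean(), training_photo.mean())  # means should now be close
```

After such a remapping, the input and training photos are compared under comparable lighting, which is what restores the patch-candidate search step described above.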

Research areas

  • Image processing, Computer vision, Computational photography