2D & 3D Portrait Editing with Differentiable Rendering
Student thesis: Master's Thesis
Detail(s)
Awarding Institution | City University of Hong Kong |
---|---|
Supervisors/Advisors | |
Award date | 21 Jul 2022 |
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(94e1e600-bb22-4d90-b5cc-0d9a2813a777).html |
---|---|
Abstract
Portrait editing in both 2D and 3D remains challenging: edits must preserve physical and biological fidelity while remaining convenient to perform. This thesis presents two portrait editing tasks, one in 2D and one in 3D, showing that the differentiable rendering process from meshes to images helps tackle both challenges.

For the 2D task, a novel two-stage framework for portrait lighting enhancement based on 3D guidance is presented. Whereas existing image lighting enhancement methods fail to handle the delicate geometry of human faces, the proposed framework bridges prior knowledge of face geometry and lighting models into the 2D image translation process via differentiable rendering, achieving more realistic editing. In the first stage, the geometry and lighting of the input are estimated, and a correspondingly optimized lighting is automatically predicted to render a guidance image. In the second stage, an image-to-image translation network with a novel transformer architecture captures the long-range correlations between the input and the guidance and produces the lighting-enhanced result.

For the 3D task, the first framework for 3D portrait style transfer is presented; it generates 3D face models with exaggerated geometry and stylized texture while preserving the identity of the original content. The framework requires only a single arbitrary style image instead of a large set of style examples, provides parameterized and disentangled geometry and texture outputs, and enables further graphics applications. It likewise consists of two stages. The first stage, geometric style transfer, uses facial landmark translation to capture the coarse geometric style and to guide the deformation of the dense 3D face geometry. The second stage, texture style transfer, stylizes the canonical texture by adopting a differentiable renderer to optimize the texture in a multi-view framework.

For both the 2D and 3D tasks, numerical comparisons on available criteria and user studies are conducted against corresponding state-of-the-art methods. Experiments show that the proposed methods robustly achieve good results and outperform existing methods without any special demands on user inputs, demonstrating the superiority of the framework designs as well as the advantage of differentiable rendering.
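The first-stage guidance image depends on a lighting model that is differentiable with respect to its parameters. The abstract does not name the model used; the sketch below is a minimal illustration, assuming the common choice for faces of Lambertian shading under second-order spherical-harmonics (SH) lighting, optimized with plain PyTorch:

```python
import torch

def sh_basis(normals: torch.Tensor) -> torch.Tensor:
    """Second-order spherical-harmonics basis at unit normals.

    normals: (..., 3) unit vectors. Returns (..., 9) basis values.
    """
    x, y, z = normals.unbind(-1)
    return torch.stack([
        torch.ones_like(x) * 0.282095,       # Y_00
        0.488603 * y,                        # Y_1-1
        0.488603 * z,                        # Y_10
        0.488603 * x,                        # Y_11
        1.092548 * x * y,                    # Y_2-2
        1.092548 * y * z,                    # Y_2-1
        0.315392 * (3.0 * z * z - 1.0),      # Y_20
        1.092548 * x * z,                    # Y_21
        0.546274 * (x * x - y * y),          # Y_22
    ], dim=-1)

def shade(albedo, normals, sh_coeffs):
    """Lambertian shading: SH irradiance per pixel times albedo.

    albedo: (H, W, 3), normals: (H, W, 3), sh_coeffs: (9, 3) RGB lighting.
    """
    irradiance = sh_basis(normals) @ sh_coeffs   # (H, W, 3)
    return albedo * irradiance.clamp(min=0.0)

# Toy optimization: recover the lighting coefficients that reproduce a
# target shading (the actual lighting-prediction network is not shown).
H = W = 64
albedo = torch.rand(H, W, 3)
normals = torch.nn.functional.normalize(torch.randn(H, W, 3), dim=-1)
target = shade(albedo, normals, torch.randn(9, 3))

sh = torch.zeros(9, 3, requires_grad=True)
opt = torch.optim.Adam([sh], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(shade(albedo, normals, sh), target)
    loss.backward()   # shading is differentiable w.r.t. the SH lighting
    opt.step()
```

Because the shading is differentiable in the nine SH coefficients per channel, the same machinery serves both to estimate the input's lighting and to render the guidance image under a predicted, better lighting.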
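The second stage's transformer captures long-range correlations between the input and the guidance. The exact architecture is not described in the abstract; the sketch below shows the generic mechanism one would expect, cross-attention from input-image tokens to guidance-image tokens, built on `torch.nn.MultiheadAttention` (all module names and sizes here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class GuidedAttentionBlock(nn.Module):
    """Cross-attention: queries come from the input image, keys/values from
    the guidance image, so every input patch can attend to any guidance
    patch (a long-range correlation, unlike a local convolution)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x_tokens, g_tokens):
        # x_tokens, g_tokens: (B, N, dim) flattened patch features of the
        # input and guidance images respectively.
        q, kv = self.norm_q(x_tokens), self.norm_kv(g_tokens)
        attended, _ = self.attn(q, kv, kv)
        x_tokens = x_tokens + attended          # residual connection
        return x_tokens + self.mlp(x_tokens)

# Usage on 16x16 patch grids of 256-d features:
block = GuidedAttentionBlock()
x = torch.randn(1, 16 * 16, 256)   # input-image tokens
g = torch.randn(1, 16 * 16, 256)   # guidance-image tokens
out = block(x, g)                  # (batch, tokens, dim): guidance-aware features
```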
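In the geometric style transfer stage, translated facial landmarks drive the deformation of the dense 3D face geometry. How the sparse landmark displacements are propagated to all vertices is not specified in the abstract; Gaussian radial-basis-function interpolation, sketched below, is one standard option and purely an assumption:

```python
import numpy as np

def rbf_deform(verts, landmarks, displacements, sigma=0.1):
    """Propagate sparse landmark displacements to all mesh vertices with a
    Gaussian RBF: solve for weights that reproduce the displacements at the
    landmarks, then evaluate the displacement field at every vertex.

    verts: (V, 3), landmarks: (L, 3), displacements: (L, 3).
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    K = kernel(landmarks, landmarks)             # (L, L)
    K += 1e-6 * np.eye(len(landmarks))           # regularize for stability
    weights = np.linalg.solve(K, displacements)  # (L, 3)
    return verts + kernel(verts, landmarks) @ weights

# Toy usage: exaggerate a random "face" by pushing a few landmarks outward.
verts = np.random.randn(500, 3) * 0.5
landmarks = verts[:68]               # pretend the first 68 vertices are landmarks
displacements = 0.2 * landmarks      # move landmarks away from the center
deformed = rbf_deform(verts, landmarks, displacements, sigma=0.5)
```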
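In the texture style transfer stage, a differentiable renderer lets gradients of a per-view style loss flow back into the single canonical texture shared across all views. The sketch below replaces the real renderer with texture lookup at fixed rasterized UV coordinates, and the real style loss with Gram matrices of a tiny random conv layer; both stand-ins are assumptions made to keep the example self-contained (in practice one would use, e.g., a mesh renderer such as PyTorch3D and pretrained VGG features):

```python
import torch
import torch.nn.functional as F

def render_view(texture, uv):
    """Differentiable 'render' of a fixed mesh: with geometry frozen, each
    view reduces to sampling the canonical texture at rasterized UV coords.
    texture: (1, 3, Ht, Wt); uv: (1, H, W, 2) in [-1, 1]. Returns (1, 3, H, W).
    """
    return F.grid_sample(texture, uv, align_corners=False)

def gram(feats):
    """Gram matrix of feature maps, a standard style descriptor."""
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

# A frozen random conv stands in for a pretrained feature extractor.
conv = torch.nn.Conv2d(3, 16, 3, padding=1)
for p in conv.parameters():
    p.requires_grad_(False)

views = [torch.rand(1, 32, 32, 2) * 2 - 1 for _ in range(4)]  # toy UV maps
style_img = torch.rand(1, 3, 32, 32)
style_gram = gram(conv(style_img))

texture = torch.rand(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.01)
for _ in range(100):
    opt.zero_grad()
    # Style loss on every rendered view; gradients flow through the
    # renderer back to the one shared canonical texture.
    loss = sum(F.mse_loss(gram(conv(render_view(texture, uv))), style_gram)
               for uv in views)
    loss.backward()
    opt.step()
```

Optimizing one canonical texture against losses from many views is what keeps the stylized texture consistent when the 3D model is viewed from arbitrary angles.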