Sketch2Human: Deep Human Generation with Disentangled Geometry and Appearance Constraints
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Linzi Qu, Jiaxiang Shang, Hui Ye et al.
Detail(s)
| Original language | English |
| --- | --- |
| Number of pages | 14 |
| Journal / Publication | IEEE Transactions on Visualization and Computer Graphics |
| Publication status | Online published - 23 May 2024 |
Abstract
Geometry- and appearance-controlled full-body human image generation is an interesting but challenging task. Existing solutions are either unconditional or rely on coarse conditions (e.g., pose, text), thus lacking explicit geometry and appearance control over body and garment. Sketching offers such editing ability and has been adopted in various sketch-based face generation and editing solutions. However, directly adapting sketch-based face generation to full-body generation often fails to produce high-fidelity and diverse results because of the high complexity and diversity of pose, body shape, and garment shape and texture. Recent geometrically controllable diffusion-based methods mainly rely on prompts to generate appearance, and it is hard to balance the realism of their results against faithfulness to the sketch when the input is coarse. This work presents Sketch2Human, the first system for controllable full-body human image generation guided by a semantic sketch (for geometry control) and a reference image (for appearance control). Our solution is built on the latent space of StyleGAN-Human, with inverted geometry and appearance latent codes as input. Specifically, we present a sketch encoder trained with a large synthetic dataset sampled from StyleGAN-Human’s latent space and directly supervised by sketches rather than real images. Considering that geometry and texture are partially entangled in StyleGAN-Human and that disentangled datasets are unavailable, we design a novel training scheme that creates geometry-preserved and appearance-transferred training data to tune a generator toward disentangled geometry and appearance control. Although our method is trained with synthetic data, it can also handle hand-drawn sketches. Qualitative and quantitative evaluations demonstrate the superior performance of our method over state-of-the-art methods. We will release the code upon the acceptance of the paper. © 2024 IEEE.
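The abstract describes combining inverted geometry and appearance latent codes in the StyleGAN-Human latent space, and the keywords below list style mixing. The following is a minimal, hedged sketch of that style-mixing idea, assuming a StyleGAN-style W+ latent space; the layer count, code width, crossover index, and the `generator.synthesis` call are illustrative assumptions, not Sketch2Human's actual implementation.

```python
# Illustrative style mixing in a W+ latent space (not the authors' code):
# coarse layers come from an inverted "geometry" code, fine layers from an
# inverted "appearance" code. Dimensions and crossover are assumptions.
import torch

NUM_LAYERS = 18   # assumed W+ depth for a StyleGAN-Human-like generator
CODE_DIM = 512    # assumed per-layer latent width
CROSSOVER = 8     # assumed boundary: layers < 8 drive geometry, >= 8 appearance

def style_mix(w_geometry: torch.Tensor, w_appearance: torch.Tensor,
              crossover: int = CROSSOVER) -> torch.Tensor:
    """Combine two W+ codes: early (coarse) layers from the geometry code,
    late (fine) layers from the appearance code."""
    assert w_geometry.shape == w_appearance.shape == (NUM_LAYERS, CODE_DIM)
    mixed = w_appearance.clone()
    mixed[:crossover] = w_geometry[:crossover]
    return mixed

# Toy usage: random codes stand in for codes inverted from a semantic
# sketch (geometry) and a reference image (appearance).
w_geo = torch.randn(NUM_LAYERS, CODE_DIM)
w_app = torch.randn(NUM_LAYERS, CODE_DIM)
w_mixed = style_mix(w_geo, w_app)
# The mixed code would then be fed to the (fine-tuned) generator, e.g.
# image = generator.synthesis(w_mixed.unsqueeze(0))  # hypothetical call
print(w_mixed.shape)  # torch.Size([18, 512])
```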
Research Area(s)
- Full-body image generation, style-based generator, style mixing, sketch-based generation
Citation Format(s)
Sketch2Human: Deep Human Generation with Disentangled Geometry and Appearance Constraints. / Qu, Linzi; Shang, Jiaxiang; Ye, Hui et al.
In: IEEE Transactions on Visualization and Computer Graphics, 23.05.2024.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review