Multi-Density Sketch-to-Image Translation Network

Jialu Huang, Jing Liao, Zhifeng Tan, Sam Kwong*

*Corresponding author for this work

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

13 Citations (Scopus)

Abstract

Sketch-to-image (S2I) translation plays an important role in image synthesis and manipulation tasks such as photo editing and colorization. Specific S2I translations, including sketch-to-photo and sketch-to-painting, can serve as powerful tools in the art and design industry. However, previous methods support S2I translation at only a single density level, giving users little flexibility in controlling the input sketches. In this work, we propose the first multi-level density sketch-to-image translation framework, which allows the input sketch to range from rough object outlines to micro structures. To tackle the non-continuous representation of multi-level density input sketches, we project the density level into a continuous latent space that can be linearly controlled by a parameter, allowing users to conveniently control the density of input sketches and the generated images. Our method has been successfully verified on various datasets for different applications, including face editing, multi-modal sketch-to-photo translation, and anime colorization, providing coarse-to-fine levels of control in these applications.
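The abstract's key idea of a continuous, linearly controllable density level can be illustrated with a minimal sketch. The following is a hypothetical toy example (not the authors' implementation): it assumes a learned latent code for the coarsest density (`z_coarse`) and one for the finest (`z_fine`), and interpolates between them with a single scalar parameter `t`, mirroring the linear control described in the paper.

```python
import numpy as np

def density_latent(z_coarse, z_fine, t):
    """Linearly interpolate between latent codes of the coarsest and finest
    sketch densities. t=0 -> rough outline, t=1 -> micro structures.
    Hypothetical illustration; the paper learns this latent space from data."""
    t = float(np.clip(t, 0.0, 1.0))  # keep the density parameter in [0, 1]
    return (1.0 - t) * z_coarse + t * z_fine

# Toy latent codes standing in for learned density embeddings
z0 = np.zeros(4)  # coarsest density
z1 = np.ones(4)   # finest density
mid = density_latent(z0, z1, 0.5)  # code for an intermediate density level
```

In the actual framework, such an interpolated code would condition the generator, so sweeping `t` smoothly transitions the accepted sketch density from outlines to fine structures.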
Original language: English
Pages (from-to): 4002-4015
Journal: IEEE Transactions on Multimedia
Volume: 24
Online published: 14 Sept 2021
DOIs
Publication status: Published - 2022

Research Keywords

  • Codes
  • Decoding
  • Faces
  • Image edge detection
  • Image synthesis
  • Task analysis
  • Training
