Text-to-Vector Generation with Neural Path Representation

Peiying ZHANG, Nanxuan ZHAO, Jing LIAO*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Vector graphics are widely used in digital art and highly favored by designers due to their scalability and layer-wise properties. However, the process of creating and editing vector graphics requires creativity and design expertise, making it a time-consuming task. Recent advancements in text-to-vector (T2V) generation have aimed to make this process more accessible. Yet existing T2V methods directly optimize the control points of vector graphics paths, often resulting in intersecting or jagged paths due to the lack of geometric constraints. To overcome these limitations, we propose a novel neural path representation by designing a dual-branch Variational Autoencoder (VAE) that learns the path latent space from both sequence and image modalities. By optimizing combinations of neural paths, we can incorporate geometric constraints while preserving expressivity in the generated SVGs. Furthermore, we introduce a two-stage path optimization method to improve the visual and topological quality of generated SVGs. In the first stage, a pre-trained text-to-image diffusion model guides the initial generation of complex vector graphics through the Variational Score Distillation (VSD) process. In the second stage, we refine the graphics using a layer-wise image vectorization strategy to achieve clearer elements and structure. We demonstrate the effectiveness of our method through extensive experiments and showcase various applications. The project page is https://intchous.github.io/T2V-NPR. © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
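The core idea in the abstract is that each SVG path is represented by a latent code from a dual-branch encoder (one branch for the control-point sequence, one for the rasterized image), and optimization happens in that latent space rather than directly on control points. The following is a minimal, hedged sketch of that data flow only; it uses untrained random linear maps, omits the VAE's stochastic sampling and the diffusion-guided optimization, and all layer sizes and function names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_POINTS = 8      # control points per path (assumed)
LATENT = 4        # latent dimensionality (assumed)
IMG_RES = 16      # rasterized path resolution (assumed)

# Random, untrained weights: this only illustrates the two-branch data flow.
W_seq = rng.normal(size=(N_POINTS * 2, LATENT)) * 0.1
W_img = rng.normal(size=(IMG_RES * IMG_RES, LATENT)) * 0.1
W_dec = rng.normal(size=(LATENT, N_POINTS * 2)) * 0.1

def encode_sequence(points):
    """Sequence branch: flatten (x, y) control points into a latent code."""
    return points.reshape(-1) @ W_seq

def encode_image(raster):
    """Image branch: flatten a rasterized path into a latent code."""
    return raster.reshape(-1) @ W_img

def decode(z):
    """Decoder: map a latent code back to control points."""
    return (z @ W_dec).reshape(N_POINTS, 2)

path = rng.normal(size=(N_POINTS, 2))       # a toy path
raster = rng.normal(size=(IMG_RES, IMG_RES))  # its (fake) rasterization

z_seq = encode_sequence(path)   # latent from the sequence branch
z_img = encode_image(raster)    # latent from the image branch
recon = decode(z_seq)           # reconstructed control points

print(z_seq.shape, z_img.shape, recon.shape)
```

The point of optimizing over `z` instead of raw control points is that decoded paths stay on the learned manifold of plausible shapes, which is how the method avoids the intersecting or jagged paths that direct control-point optimization produces.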
Original language: English
Article number: 36
Journal: ACM Transactions on Graphics
Volume: 43
Issue number: 4
Online published: 19 Jul 2024
DOIs
Publication status: Published - Jul 2024

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Funding

The work described in this paper was substantially supported by a GRF grant from the Research Grants Council (RGC) of the Hong Kong Special Administrative Region, China [Project No. CityU 11216122].

Research Keywords

  • diffusion model
  • SVG
  • text-guided generation
  • vector graphics

RGC Funding Information

  • RGC-funded
