Mannequin2Real: A Two-Stage Generation Framework for Transforming Mannequin Images Into Photorealistic Model Images for Clothing Display

Haijun Zhang, Xiangyu Mu, Guojian Li, Zhenhao Xu, Xinrui Yu, Jianghong Ma*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

7 Citations (Scopus)

Abstract

The rapid development of e-commerce has significantly influenced consumer behavior, and online clothing purchases have been increasing. To effectively showcase clothing items to consumers, merchants often require high-quality fashion display images, which are typically acquired by hiring human models for photography at a high cost. Leveraging the power of generative models, this study develops an automated generation framework called Mannequin2Real to translate mannequin images into photorealistic model images for fashion display purposes. The designed framework comprises two stages: model head generation and skin generation. In the head generation stage, the relevant features of the model head region are first extracted and used as inputs to the head generation network, which synthesizes a photorealistic head image. Subsequently, in the skin generation stage, the skin mask and pose features of a model body image are extracted and fed into the skin generation network, which generates photorealistic skin. Finally, the synthesized head region and skin region are combined to produce a photorealistic model image. To examine the effectiveness of the developed Mannequin2Real model, we first evaluated it on a high-resolution virtual try-on dataset. In addition, we constructed a dataset of mannequin images captured in real-world scenarios. The experimental results demonstrate the effectiveness of our approach compared with other image generation algorithms. © 2024 IEEE.
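The two-stage pipeline in the abstract can be sketched as follows. This is a minimal illustrative outline, not the paper's implementation: the function names, the mask-based compositing, and the toy array inputs are all assumptions standing in for the actual GAN-based head and skin generation networks.

```python
import numpy as np

def generate_head(head_features: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the stage-1 head generation network."""
    return np.clip(head_features, 0.0, 1.0)  # placeholder transform

def generate_skin(skin_mask: np.ndarray, pose_features: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the stage-2 skin generation network."""
    return skin_mask * np.clip(pose_features, 0.0, 1.0)  # placeholder transform

def mannequin2real(mannequin_img, head_features, skin_mask, pose_features, head_mask):
    """Sketch of the two-stage pipeline: generate head, generate skin, composite."""
    head = generate_head(head_features)             # stage 1: head region
    skin = generate_skin(skin_mask, pose_features)  # stage 2: skin region
    # Composite: keep garment/background pixels, paste generated head and skin.
    out = mannequin_img * (1 - head_mask) * (1 - skin_mask)
    out += head * head_mask + skin * skin_mask
    return out

# Toy 4x4 single-channel "images" just to exercise the pipeline end to end.
H = W = 4
img = np.full((H, W), 0.5)
head_mask = np.zeros((H, W)); head_mask[:1, :] = 1.0   # top row = head region
skin_mask = np.zeros((H, W)); skin_mask[2:, :] = 1.0   # bottom rows = skin region
result = mannequin2real(img, np.ones((H, W)), skin_mask, np.ones((H, W)), head_mask)
print(result.shape)  # (4, 4)
```

The compositing step mirrors the abstract's final stage: the synthesized head and skin regions replace the corresponding masked areas while the garment pixels from the input image are preserved.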
Original language: English
Pages (from-to): 2773-2783
Journal: IEEE Transactions on Consumer Electronics
Volume: 70
Issue number: 1
Online published: 20 Feb 2024
Publication status: Published - Feb 2024

Funding

This work was supported in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021B1515020088, and in part by the National Natural Science Foundation of China under Grant 62202122 and Grant 62073272.

Research Keywords

  • clothing display
  • generative adversarial networks
  • image synthesis
  • mannequin image
