Image Synthesis and Image-to-Image Translation based on Generative Adversarial Network

基於對抗神經網絡的圖像生成與圖樣轉換

Student thesis: Doctoral Thesis

Award date: 16 Aug 2021

Abstract

Image-to-Image (I2I) translation is an emerging topic in academia, and it has also been applied in real-world industry for tasks such as image synthesis, super-resolution, and colorization. Traditional I2I translation methods usually train on data from two or more domains jointly, which requires substantial computational resources; the results tend to be of lower quality and contain more artifacts. The training process can also be unstable when the data in different domains are imbalanced, making mode collapse more likely. In this work, we first summarize the current state of I2I translation and synthesis based on generative adversarial networks (GANs). We then propose a series of methods that measurably improve the controllability of I2I translation by enabling users to locally control the generated output. We also demonstrate that our methods can be applied to a wide range of applications, including image editing, colorization, and super-resolution. Next, we propose a new I2I translation method that generates a new model in the target domain via a series of model transformations on a pre-trained StyleGAN2 model in the source domain. We then develop an inversion method that converts an image into its latent vector; by feeding the latent vector into the generated model, we can perform I2I translation between the source and target domains. Both qualitative and quantitative evaluations were conducted to verify that the proposed method achieves better performance than state-of-the-art works in terms of controllability, image quality, diversity, and semantic similarity to the input and reference images.
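The pipeline outlined above (a pre-trained source-domain generator, a target-domain generator derived from it, and an inversion step mapping an image to a latent vector) can be illustrated with a minimal sketch. The code below is not the thesis implementation: the StyleGAN2 generator is replaced by a toy stand-in, the "series of model transformations" is reduced to copying and lightly perturbing the source generator's weights, and inversion is done by plain latent optimization against a pixel reconstruction loss; all class and function names are hypothetical.

```python
# Illustrative sketch only: toy stand-ins for the generators and a simplified
# "model transformation" and inversion step, not the method proposed in the thesis.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGenerator(nn.Module):
    """Stand-in for a StyleGAN2-like generator: latent vector -> image."""
    def __init__(self, latent_dim: int = 64, img_size: int = 32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 3, self.img_size, self.img_size)


def invert_image(generator: nn.Module, image: torch.Tensor,
                 latent_dim: int = 64, steps: int = 200) -> torch.Tensor:
    """Optimize a latent vector so the generator reproduces the given image."""
    z = torch.randn(image.size(0), latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.mse_loss(generator(z), image)  # pixel reconstruction loss only
        loss.backward()
        optimizer.step()
    return z.detach()


def derive_target_generator(source_gen: ToyGenerator) -> ToyGenerator:
    """Toy 'model transformation': copy the source generator and perturb its
    last linear layer, standing in for adapting it to the target domain."""
    target_gen = ToyGenerator()
    target_gen.load_state_dict(source_gen.state_dict())
    with torch.no_grad():
        last_linear = target_gen.net[-2]
        last_linear.weight.add_(0.01 * torch.randn_like(last_linear.weight))
    return target_gen


if __name__ == "__main__":
    source_gen = ToyGenerator().eval()                 # pretrained source-domain generator (assumed)
    target_gen = derive_target_generator(source_gen).eval()

    source_image = torch.rand(1, 3, 32, 32) * 2 - 1    # a source-domain input image in [-1, 1]
    z = invert_image(source_gen, source_image)          # image -> latent vector
    translated = target_gen(z)                          # latent -> target-domain image
    print(translated.shape)                             # torch.Size([1, 3, 32, 32])
```

In practice, GAN inversion methods typically operate in an extended latent space (e.g. StyleGAN2's W+) and combine perceptual and pixel losses, but the control flow is the same as above: invert with the source-domain generator, then decode the latent vector with the derived target-domain generator.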