KTransGAN : Variational Inference-Based Knowledge Transfer for Unsupervised Conditional Generative Learning

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) · Publication in refereed journal · Peer-reviewed



Detail(s)

Original language: English
Pages (from-to): 3318-3331
Journal / Publication: IEEE Transactions on Multimedia
Volume: 23
Online published: 14 Sep 2020
Publication status: Published - 2021

Abstract

Class-conditional generative models have gained popularity owing to their ability to learn disentangled representations. However, these models typically require labeled examples for training. In this paper, we explore the feasibility of training such models on completely unlabeled data, under the assumption that we have access to other labeled data that share the same label space but come from a shifted domain. Our model, which we refer to as KTransGAN, incorporates a classifier that transfers knowledge from the labeled data and learns collaboratively with the conditional generator. With these measures, KTransGAN approximates the conditional distribution of the unlabeled data and, at the same time, offers a new solution to the unsupervised domain adaptation problem. To mitigate the training difficulty of our generative adversarial network-based model, variational encoding and feature matching are also employed. Empirically, KTransGAN performs strongly on several synthetic datasets and multiple real-world benchmarks. The quality of the synthesized instances far exceeds that of a pure variational autoencoding model: on the CIFAR-10 dataset, for example, our model scores 35.3 in FID, while the other model scores 128.45. Moreover, the synthesis quality is close to that obtained when the model is trained in a fully supervised setting for the same number of training iterations. Regarding classification performance, our model surpasses the best state-of-the-art result (89.19%) by a large margin, achieving 95.31% test accuracy on unlabeled SVHN data with MNIST serving as the labeled data. These results highlight the effectiveness of the proposed framework.
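The knowledge-transfer ingredient described in the abstract can be illustrated with a minimal, hypothetical sketch (this is not the authors' implementation, and all names and hyperparameters below are illustrative): a classifier trained on labeled source data assigns pseudo-labels to unlabeled data from a shifted target domain; in a KTransGAN-style setup, such labels would then condition the generator during collaborative training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled "source" domain: two 2-D Gaussian classes (illustrative stand-in
# for MNIST in the MNIST -> SVHN example from the abstract).
Xs = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
ys = np.array([0] * 200 + [1] * 200)

# Toy unlabeled "target" domain: same label space, but shifted (domain shift).
shift = np.array([0.5, -0.5])
Xt = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))]) + shift
yt_true = np.array([0] * 200 + [1] * 200)  # hidden during training; used only to evaluate

# Train a tiny softmax (logistic-regression) classifier on the labeled source data.
W = np.zeros((2, 2))
b = np.zeros(2)
for _ in range(300):
    logits = Xs @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p - np.eye(2)[ys]                 # softmax cross-entropy gradient
    W -= 0.1 * Xs.T @ grad / len(Xs)
    b -= 0.1 * grad.mean(axis=0)

# Pseudo-label the unlabeled target data. In a KTransGAN-style model, these
# pseudo-labels would condition the generator, which in turn would refine the
# classifier (the "collaborative learning" loop).
pseudo = (Xt @ W + b).argmax(axis=1)
acc = (pseudo == yt_true).mean()
print(f"pseudo-label accuracy on the shifted target domain: {acc:.2f}")
```

Because the toy classes remain well separated after the shift, the source-trained classifier transfers cleanly here; the paper's contribution lies in making this transfer work under much harder, real-world domain shifts.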

Research Area(s)

  • Adaptation models, Data models, domain adaptation, Gallium nitride, Generative adversarial networks, generative learning, Generators, image classification, knowledge transfer, Task analysis, Training