Adversarially Smoothed Feature Alignment for Visual Domain Adaptation

Research output: Chapters, Conference Papers, Creative and Literary Works; RGC 32 - Refereed conference paper (with host publication); peer-review


Detail(s)

Original language: English
Title of host publication: 2021 International Joint Conference on Neural Networks (IJCNN) Proceedings
Publisher: Institute of Electrical and Electronics Engineers
ISBN (electronic): 978-1-6654-3900-8, 978-0-7381-3366-9
ISBN (print): 978-1-6654-4597-9
Publication status: Published - 2021

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
ISSN (print): 2161-4393
ISSN (electronic): 2161-4407

Conference

Title: 2021 International Joint Conference on Neural Networks (IJCNN 2021)
Location: Virtual
Period: 18 - 22 July 2021

Abstract

Recent approaches to unsupervised domain adaptation focus on transferring knowledge from source (labeled) data to target (unlabeled) data. Both data types share the same class space but originate from different domains. One way to achieve this transfer is to task two classifiers with detecting target features that diverge from source features. Meanwhile, a feature generator is adversarially refined to match the detected features with the source ones. However, the aligned features are still subject to ambiguity. Samples that are not smoothly distributed on the latent manifold are often missed in training. Moreover, target data may not be sufficient for adversarial learning. To overcome these problems, our proposed Adversarially Smoothed Feature Alignment (AdvSFA) model is designed to identify ambiguous target inputs by maximizing classifier discrepancy in an extended class space. This enables the generator to receive valuable feedback from the classifiers and consequently learn a more discriminative and smoother representation. Imposing smoothness on the latent manifold is a desirable property for improving model generalization and avoiding neighboring samples of different classes. To further promote this property, we task the generator with conducting feature alignment not only on target examples but also in between them. By adopting these constraints, our method shows a remarkable improvement across different adaptation tasks on two benchmark datasets.
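The two ingredients named in the abstract, a discrepancy signal between the two classifiers and feature alignment in between target examples, can be sketched roughly as follows. This is an illustrative NumPy sketch only, not the paper's implementation: the linear classifiers, the feature shapes, the L1 discrepancy measure, and the mixing coefficient `lam` are all assumptions made for the example.

```python
import numpy as np

def softmax(z):
    # row-wise softmax over class logits
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(p1, p2):
    # mean L1 distance between the two classifiers' predictions;
    # large values flag ambiguous target inputs
    return np.abs(p1 - p2).mean()

rng = np.random.default_rng(0)

# hypothetical generator features for a batch of 8 target samples
features = rng.normal(size=(8, 16))

# two hypothetical linear classifiers over a 10-class space
W1 = rng.normal(size=(16, 10))
W2 = rng.normal(size=(16, 10))
p1 = softmax(features @ W1)
p2 = softmax(features @ W2)

# the classifiers would maximize this; the generator would minimize it
d = classifier_discrepancy(p1, p2)

# smoothing constraint: also align interpolated (in-between) target
# features, here a simple convex combination of sample pairs
lam = 0.5
f_mix = lam * features[:4] + (1 - lam) * features[4:]
d_mix = classifier_discrepancy(softmax(f_mix @ W1), softmax(f_mix @ W2))
```

In the adversarial game suggested by the abstract, `d` and `d_mix` would be maximized when updating the classifiers and minimized when updating the generator, so that both the target samples and the points between them map to regions where the two classifiers agree.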

Citation Format(s)

Adversarially Smoothed Feature Alignment for Visual Domain Adaptation. / Azzam, Mohamed; Wu, Si; Zhang, Yang et al.
2021 International Joint Conference on Neural Networks (IJCNN) Proceedings. Institute of Electrical and Electronics Engineers, 2021. (Proceedings of the International Joint Conference on Neural Networks).
