TY - GEN
T1 - Cross-domain Semantic Feature Learning via Adversarial Adaptation Networks
AU - Li, Rui
AU - Cao, Wenming
AU - Qian, Sheng
AU - Wong, Hau-San
AU - Wu, Si
PY - 2018
Y1 - 2018
N2 - Existing domain adaptation approaches generalize models trained on labeled source-domain data to unlabeled target-domain data by forcing the feature distributions of the two domains closer together. However, these approaches tend to ignore semantic information during feature alignment between the source and target domains. In this paper, we propose a new unsupervised domain adaptation framework that learns cross-domain features and disentangles semantic information concurrently. Specifically, we first combine task-specific classification and domain-adversarial learning to obtain cross-domain features by mapping the data of both domains through a shared feature extractor. Second, we integrate domain-adversarial learning with within-domain reconstruction to disentangle semantic information from domain information. Third, we include a cross-domain transformation to further refine the feature extractor, which in turn improves the performance of the task classifier. We compare the proposed model to previous state-of-the-art methods on domain adaptation digit classification tasks. Experimental results show that our model outperforms these counterparts, demonstrating its effectiveness.
KW - Unsupervised domain adaptation
KW - deep neural network
KW - adversarial learning
UR - http://www.scopus.com/inward/record.url?scp=85059741881&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2018.8545300
DO - 10.1109/ICPR.2018.8545300
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 9781538637890
T3 - Proceedings - International Conference on Pattern Recognition
SP - 37
EP - 42
BT - 2018 24th International Conference on Pattern Recognition (ICPR)
PB - IEEE
T2 - 24th International Conference on Pattern Recognition (ICPR 2018)
Y2 - 20 August 2018 through 24 August 2018
ER -