Semi-supervised representation learning via dual autoencoders for domain adaptation

08/04/2019
by   Shuai Yang, et al.

Domain adaptation, which exploits knowledge from a source domain to improve learning tasks in a target domain, plays a critical role in real-world applications. Recently, many deep learning approaches based on autoencoders have achieved significant performance in domain adaptation. However, most existing methods focus on minimizing the distribution divergence by pooling the source and target data to learn global feature representations, and do not take into account the local relationships between instances of the same category across domains. To address this problem, we propose a novel Semi-Supervised Representation Learning framework via Dual Autoencoders for domain adaptation, named SSRLDA. More specifically, we extract richer feature representations by learning global and local feature representations simultaneously with two novel autoencoders, referred to as the marginalized denoising autoencoder with adaptation distribution (MDA_ad) and the multi-class marginalized denoising autoencoder (MMDA), respectively. Meanwhile, we adopt an iterative strategy that makes full use of label information to optimize the feature representations. Experimental results show that our proposed approach outperforms several state-of-the-art baseline methods.
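The abstract does not give the details of MDA_ad or MMDA, but both are variants of the standard marginalized denoising autoencoder (mDA) of Chen et al. (2012), which marginalizes out feature dropout corruption and yields a closed-form reconstruction mapping. As background, here is a minimal sketch of one classic mDA layer; the function name, corruption probability `p`, and ridge term `reg` are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def mda_layer(X, p=0.5, reg=1e-5):
    """One classic marginalized denoising autoencoder layer (background
    for the paper's MDA_ad/MMDA variants; not their exact formulation).

    X: (d, n) data matrix, columns are instances.
    p: probability that each feature is corrupted (set to zero).
    Returns the reconstruction mapping W and the hidden representation.
    """
    d, n = X.shape
    # append a constant row so W can absorb a bias term
    Xb = np.vstack([X, np.ones((1, n))])
    S = Xb @ Xb.T                        # scatter matrix E-free sums
    q = np.full(d + 1, 1.0 - p)          # survival probability per feature
    q[-1] = 1.0                          # the bias row is never corrupted
    # expected second moments under the marginalized corruption:
    # Q = E[x~ x~^T], P = E[x x~^T]
    Q = S * np.outer(q, q)
    np.fill_diagonal(Q, q * np.diag(S))
    P = S[:d, :] * q                     # target is the clean input X
    # closed-form least-squares mapping W = P Q^{-1}, with a ridge term
    W = np.linalg.solve(Q + reg * np.eye(d + 1), P.T).T
    H = np.tanh(W @ Xb)                  # nonlinear hidden features
    return W, H
```

Because the corruption is marginalized analytically, no sampling of noisy copies is needed; stacking several such layers (feeding `H` into the next) gives the deep representations that frameworks like SSRLDA then adapt across domains.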
