Contradistinguisher: Applying Vapnik's Philosophy to Unsupervised Domain Adaptation

05/25/2020
by Sourabh Balgi, et al.

A complex combination of simultaneous supervised and unsupervised learning is believed to underlie humans' ability to perform tasks seamlessly across multiple domains. This phenomenon of cross-domain learning has been studied extensively in the domain adaptation literature. Recent domain adaptation works take an indirect approach: first align the source and target domain distributions, then train a classifier on the labeled source domain to classify the target domain. The main drawback of this approach is that obtaining a near-perfect alignment of the domains may itself be difficult or impossible (e.g., for language domains). To address this, we follow Vapnik's principle of statistical learning, which states that any desired problem should be solved in the most direct way rather than by solving a more general intermediate task, and we propose a direct approach to domain adaptation that does not require domain alignment. We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly contradistinguish the unlabeled target domain in an unsupervised way and classify in a supervised way on the labeled source domain. We demonstrate the superiority of our approach by achieving state-of-the-art results on eleven visual and four language benchmark datasets in both single-source and multi-source domain adaptation settings.
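The joint objective described above can be sketched as a sum of a supervised loss on the labeled source domain and an unsupervised loss on the unlabeled target domain. The sketch below is a minimal, hypothetical illustration with NumPy: it uses standard cross-entropy on the source and, as a stand-in for the paper's actual contradistinguish loss (whose exact form is not given in this abstract), a generic confidence term that pushes the classifier to commit to its own most-probable label on target samples. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(W, x_src, y_src, x_tgt):
    """Supervised source loss + unsupervised target loss (illustrative)."""
    # Supervised cross-entropy on labeled source samples.
    p_src = softmax(x_src @ W)
    sup = -np.mean(np.log(p_src[np.arange(len(y_src)), y_src] + 1e-12))

    # Unsupervised term on unlabeled target samples: maximize the
    # classifier's confidence in its own most-probable label.
    # NOTE: a generic placeholder, not the paper's contradistinguish loss.
    p_tgt = softmax(x_tgt @ W)
    unsup = -np.mean(np.log(p_tgt.max(axis=1) + 1e-12))

    return sup + unsup

# Tiny synthetic example: 4 features, 3 classes, linear classifier W.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
x_src = rng.normal(size=(8, 4))
y_src = rng.integers(0, 3, size=8)
x_tgt = rng.normal(size=(5, 4))
print(joint_loss(W, x_src, y_src, x_tgt))  # a finite positive scalar
```

In a real training loop both terms would be minimized jointly over the classifier parameters, which is what lets the model classify the target domain directly without first aligning the two distributions.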
