Adversarial Unsupervised Domain Adaptation with Conditional and Label Shift: Infer, Align and Iterate

07/28/2021
by Xiaofeng Liu, et al.

In this work, we propose an adversarial unsupervised domain adaptation (UDA) approach under inherent conditional and label shifts, in which we aim to align the distributions w.r.t. both p(x|y) and p(y). Since labels are inaccessible in the target domain, conventional adversarial UDA assumes p(y) is invariant across domains and relies on aligning p(x) as a surrogate for aligning p(x|y). To address this, we provide a thorough theoretical and empirical analysis of conventional adversarial UDA methods under both conditional and label shifts, and propose a novel, practical alternating optimization scheme for adversarial UDA. Specifically, we infer the marginal p(y) and align p(x|y) iteratively during training, and precisely align the posterior p(y|x) at test time. Our experimental results demonstrate its effectiveness on classification and segmentation UDA, as well as partial UDA.
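The abstract only describes the alternating scheme at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of one way such an "infer p(y), then class-reweighted adversarial alignment of p(x|y)" loop could look; every module name (FeatureNet, Classifier, DomainDiscriminator), the plug-in estimator for the target label marginal, and the importance-weighted losses are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only: names, estimator, and loss weighting are assumptions,
# not the paper's actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):           # hypothetical feature extractor
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
    def forward(self, x): return self.net(x)

class Classifier(nn.Module):           # hypothetical label predictor for p(y|x)
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)
    def forward(self, f): return self.fc(f)

class DomainDiscriminator(nn.Module):  # hypothetical domain critic for adversarial alignment
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, f): return self.net(f).squeeze(-1)

@torch.no_grad()
def infer_target_label_marginal(feat_net, classifier, target_loader, num_classes):
    """Step 1: infer p_t(y) on unlabeled target data by averaging soft predictions
    (a simple plug-in estimator; the paper may use a different one)."""
    p_y, n = torch.zeros(num_classes), 0
    for x_t in target_loader:          # loader yields unlabeled target batches
        p_y += F.softmax(classifier(feat_net(x_t)), dim=1).sum(0)
        n += x_t.size(0)
    return p_y / n

def alignment_step(feat_net, classifier, discriminator, opt_g, opt_d,
                   x_s, y_s, x_t, p_s_y, p_t_y):
    """Step 2: class-reweighted adversarial alignment, using the inferred label
    marginals as importance weights p_t(y)/p_s(y) on source samples."""
    w = (p_t_y / p_s_y.clamp_min(1e-6))[y_s]
    f_s, f_t = feat_net(x_s), feat_net(x_t)

    # (a) train the discriminator to separate (reweighted) source from target features
    d_loss = F.binary_cross_entropy_with_logits(
                 discriminator(f_s.detach()), torch.ones(len(x_s)), weight=w) + \
             F.binary_cross_entropy_with_logits(
                 discriminator(f_t.detach()), torch.zeros(len(x_t)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # (b) train feature net + classifier: reweighted source supervision plus an
    #     adversarial term that pushes target features to fool the discriminator
    cls_loss = (w * F.cross_entropy(classifier(f_s), y_s, reduction="none")).mean()
    adv_loss = F.binary_cross_entropy_with_logits(discriminator(f_t), torch.ones(len(x_t)))
    opt_g.zero_grad(); (cls_loss + adv_loss).backward(); opt_g.step()
```

Training would then alternate between these two steps, re-inferring p_t(y) as the classifier improves; at test time, one plausible reading of "align the posterior p(y|x)" is to rescale the softmax outputs by the inferred ratio p_t(y)/p_s(y) and renormalize, though the paper's exact correction may differ.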
