Temporal Attentive Alignment for Large-Scale Video Domain Adaptation

07/30/2019
by Min-Hung Chen, et al.

Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose two large-scale video DA datasets with much larger domain discrepancy: UCF-HMDB_full and Kinetics-Gameplay. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on four video DA datasets (e.g. 7.9% accuracy gain on "HMDB → UCF" and 10.3% gain on "Kinetics → Gameplay"). The code and data are released at http://github.com/cmhungsteve/TA3N.
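To make the core idea of "attending to temporal dynamics using domain discrepancy" concrete, below is a minimal PyTorch sketch of one plausible reading: per-segment features are weighted by how confidently a small domain classifier separates them (low domain-prediction entropy suggests large domain discrepancy), with a residual connection so no segment is fully suppressed. The module name, shapes, entropy-based weighting, and aggregation are illustrative assumptions, not the authors' released implementation; see the TA3N repository for the actual method.

```python
# Hedged sketch (not the released TA3N code): temporal attention driven by
# domain discrepancy, estimated via the entropy of per-segment domain predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalDomainAttention(nn.Module):
    """Illustrative module: weight temporal segments by estimated domain discrepancy."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Small per-segment domain classifier (source vs. target).
        self.domain_classifier = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 2)
        )

    def forward(self, feats: torch.Tensor):
        # feats: (batch, num_segments, feat_dim), e.g. frame- or relation-level features.
        domain_logits = self.domain_classifier(feats)                      # (B, T, 2)
        domain_prob = F.softmax(domain_logits, dim=-1)
        entropy = -(domain_prob * torch.log(domain_prob + 1e-8)).sum(-1)   # (B, T)
        # Low entropy => confident domain prediction => larger discrepancy => larger weight.
        attention = 1.0 - entropy / torch.log(torch.tensor(2.0))           # in [0, 1]
        # Residual attention: emphasize discrepant segments but keep the original signal.
        attended = feats * (attention.unsqueeze(-1) + 1.0)
        # Aggregate over time; domain_logits can also feed an adversarial domain loss.
        video_feat = attended.mean(dim=1)
        return video_feat, domain_logits


if __name__ == "__main__":
    model = TemporalDomainAttention(feat_dim=256)
    segments = torch.randn(4, 5, 256)   # 4 videos, 5 temporal segments each
    video_feat, domain_logits = model(segments)
    print(video_feat.shape, domain_logits.shape)  # (4, 256) and (4, 5, 2)
```

In an adversarial setup of this kind, the per-segment domain logits would typically also be trained through a gradient-reversal domain loss, so the same classifier that drives the attention weights also drives feature alignment.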

