A weakly supervised adaptive triplet loss for deep metric learning

09/27/2019
by   Xiaonan Zhao, et al.

We address the problem of distance metric learning in visual similarity search, defined as learning an image embedding model that projects images into a Euclidean space in which semantically and visually similar images are closer together and dissimilar images are farther apart. We present a weakly supervised adaptive triplet loss (ATL) capable of capturing fine-grained semantic similarity, which encourages the learned image embedding models to generalize well on cross-domain data. The method uses weakly labeled product description data to implicitly determine fine-grained semantic classes, avoiding the need to annotate large amounts of training data. We evaluate on the Amazon fashion retrieval benchmark and the DeepFashion in-shop retrieval data. The method boosts the performance of the triplet loss baseline by 10.6% and out-performs the state-of-the-art model on all evaluation metrics.
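The paper's exact ATL formulation is not given in this abstract; as a point of reference, the sketch below shows a standard triplet loss alongside a hypothetical "adaptive" variant in which the margin is scaled by weak similarity scores (e.g., derived from product descriptions). The function names, the margin-scaling rule, and the `sim_pos`/`sim_neg` inputs are illustrative assumptions, not the authors' method.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss with a fixed margin.

    Encourages the anchor-positive distance to be smaller than the
    anchor-negative distance by at least `margin`.
    """
    # Squared Euclidean distances in the embedding space
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

def adaptive_triplet_loss(anchor, positive, negative,
                          sim_pos, sim_neg, base_margin=0.2):
    """Hypothetical adaptive variant (illustrative, not the paper's ATL).

    The margin grows with how much more similar the positive is than the
    negative under weak labels, so fine-grained semantic gaps demand a
    larger separation in the embedding space.
    """
    # Assumed scaling rule: margin proportional to the weak-label
    # similarity gap between positive and negative examples.
    margin = base_margin * max(0.0, sim_pos - sim_neg)
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)
```

With a close negative, the fixed-margin loss is positive; with a distant negative it vanishes, and the adaptive variant shrinks the required margin when weak labels deem the positive and negative nearly equally similar.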
