Few-shot Learning with Weakly-supervised Object Localization

03/02/2020
by Jinfu Lin, et al.

Few-shot learning (FSL) aims to learn novel visual categories from very few samples, which is a challenging problem in real-world applications. Many data generation methods have improved the performance of FSL models, but they require large numbers of annotated images to train a specialized network (e.g., a GAN) dedicated to hallucinating new samples. We argue that localization is a more efficient approach because it provides the most discriminative regions without using extra samples. In this paper, we propose a novel method that addresses the FSL task by performing weakly-supervised object localization jointly with few-shot classification. To this end, we design (i) a triplet-input module to obtain the initial object seeds and (ii) an Image-To-Class-Distance (ITCD) based localizer that activates the deep descriptors of the key objects, thus obtaining more discriminative representations for few-shot classification. Extensive experiments show that our method outperforms state-of-the-art methods on benchmark datasets under various settings. Moreover, our method achieves superior performance over previous methods when the model is trained on miniImageNet and evaluated on different datasets (e.g., Stanford Dogs), demonstrating its strong generalization capacity. Additional visualizations show that the proposed method localizes the key objects accurately.
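The abstract does not spell out how the image-to-class distance is computed, but such distances are commonly defined over deep local descriptors: each descriptor of a query image is matched against the pooled descriptors of a class's support images, and the per-descriptor similarities are aggregated. The sketch below is a hypothetical minimal version of that idea (the function name, tensor shapes, and the choice of cosine similarity with top-k aggregation are assumptions, not the paper's exact formulation).

```python
import torch
import torch.nn.functional as F

def image_to_class_distance(query_desc: torch.Tensor,
                            class_desc: torch.Tensor,
                            k: int = 3) -> torch.Tensor:
    """Hypothetical image-to-class score over deep local descriptors.

    query_desc: (n_q, d) local descriptors of one query image
    class_desc: (n_c, d) pooled local descriptors of one class's support set
    Returns a scalar score; a higher value means the query is closer to the class.
    """
    q = F.normalize(query_desc, dim=1)       # unit-normalize each descriptor
    c = F.normalize(class_desc, dim=1)
    sim = q @ c.t()                          # (n_q, n_c) cosine similarities
    topk = sim.topk(k, dim=1).values         # k best class descriptors per query descriptor
    return topk.sum()                        # aggregate into one image-to-class score

# Usage sketch: classify a query image by the class with the highest score.
# query_desc = backbone(query_image)                           # e.g. (H*W, d) flattened feature map
# scores = [image_to_class_distance(query_desc, cd) for cd in per_class_descriptors]
# prediction = int(torch.stack(scores).argmax())
```

In a localization-aware variant such as the one the paper describes, descriptors judged to belong to the key object would be up-weighted before this aggregation, so background descriptors contribute less to the final distance.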
