Self Paced Adversarial Training for Multimodal Few-shot Learning

11/22/2018
by Frederik Pahde, et al.

State-of-the-art deep learning algorithms yield remarkable results in many visual recognition tasks. However, they still fail to provide satisfactory results in scarce-data regimes. To a certain extent, this lack of data can be compensated for by multimodal information: missing information in one modality of a data point (e.g., an image) can be made up for in another modality (e.g., a textual description). We therefore design a few-shot learning task that is multimodal during training (i.e., image and text) but single-modal at test time (i.e., image only). To address it, we propose a self-paced, class-discriminative generative adversarial network that incorporates multimodality in the context of few-shot learning. The proposed approach builds on the idea of cross-modal data generation to alleviate the data-sparsity problem. We improve few-shot learning accuracies on the fine-grained CUB and Oxford-102 datasets.
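To make the cross-modal generation idea more concrete, below is a minimal PyTorch sketch. All module names, dimensions, and the `self_paced_select` helper are hypothetical illustrations, not the authors' actual architecture: a generator conditioned on a text embedding synthesizes image features for data-scarce classes, a discriminator with an auxiliary classification head makes the GAN class-discriminative, and a self-paced selection step keeps only the generated samples the current model is most confident about.

```python
import torch
import torch.nn as nn


class CrossModalGenerator(nn.Module):
    """Maps a text embedding (plus noise) to a synthetic image feature."""
    def __init__(self, text_dim=300, noise_dim=100, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, text_emb, noise):
        return self.net(torch.cat([text_emb, noise], dim=1))


class ClassDiscriminator(nn.Module):
    """Outputs a real/fake score and class logits (class-discriminative GAN)."""
    def __init__(self, feat_dim=2048, num_classes=200):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(feat_dim, 1024), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(1024, 1)             # real vs. generated
        self.cls_head = nn.Linear(1024, num_classes)   # class prediction

    def forward(self, feat):
        h = self.body(feat)
        return self.adv_head(h), self.cls_head(h)


def self_paced_select(generator, discriminator, text_emb, labels, keep_ratio=0.5):
    """Generate candidate features and keep only the 'easiest' ones,
    i.e. those assigned the highest confidence for their target class --
    a simple self-paced curriculum (hypothetical selection rule)."""
    noise = torch.randn(text_emb.size(0), 100)
    with torch.no_grad():
        fake_feats = generator(text_emb, noise)
        _, logits = discriminator(fake_feats)
        conf = logits.softmax(dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    k = max(1, int(keep_ratio * text_emb.size(0)))
    idx = conf.topk(k).indices
    return fake_feats[idx], labels[idx]


# Usage sketch: text embeddings stand in for, e.g., encoded captions.
gen = CrossModalGenerator()
disc = ClassDiscriminator()
texts = torch.randn(16, 300)
labels = torch.randint(0, 200, (16,))
feats, kept_labels = self_paced_select(gen, disc, texts, labels)
```

In the full method, the selected synthetic features would augment the real few-shot training set before fitting the final classifier, which at test time sees images only.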

