Exploring Visual Interpretability for Contrastive Language-Image Pre-training

09/15/2022
by   Yi Li, et al.

Contrastive Language-Image Pre-training (CLIP) learns rich representations via readily available supervision from natural language. It can improve general performance on downstream vision tasks, including but not limited to zero-shot classification, long-tail recognition, segmentation, retrieval, captioning, and video tasks. However, to the best of our knowledge, the visual interpretability of CLIP has not yet been studied. To provide visual explanations of its predictions, we propose the Image-Text Similarity Map (ITSM). Based on it, we surprisingly find that CLIP prefers background regions over the foreground, producing erroneous visualizations that contradict human understanding. Experimentally, we find the devil is in the pooling part, where inappropriate pooling methods lead to a phenomenon we call semantic shift. To correct and improve the visualization results, we propose Masked Max Pooling, which uses attention maps from a self-supervised image encoder. Meanwhile, the interpretability and recognition tasks require different representations. To address this, we propose dual projections to cater to both requirements. We integrate the above methods as Interpretable Contrastive Language-Image Pre-training (ICLIP). Experiments suggest that ICLIP greatly improves interpretability; for example, the nontrivial improvements are 32.85% and 49.10%, respectively, on the VOC 2012 dataset.
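The abstract does not give implementation details, but the two core ideas can be sketched. The following is a hypothetical PyTorch illustration (not the authors' code): an Image-Text Similarity Map computed as cosine similarities between patch-level image features and a text embedding, plus a masked max pooling step in the spirit of the described Masked Max Pooling. All shapes, function names, and the random attention mask are assumptions for illustration only.

```python
# Hypothetical sketch, not the paper's implementation: ITSM-style similarity
# map and a masked max pooling step. Shapes and names are assumed.
import torch
import torch.nn.functional as F

def image_text_similarity_map(patch_feats, text_feat):
    """patch_feats: (N, D) patch-level image features; text_feat: (D,) text embedding.
    Returns (N,) cosine similarities between each patch and the text."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    return patch_feats @ text_feat

def masked_max_pool(similarity, attn_mask):
    """similarity: (N,) patch-text similarities; attn_mask: (N,) bool mask
    (e.g. derived from a self-supervised attention map) keeping salient patches.
    Max-pools only over unmasked patches."""
    masked = similarity.masked_fill(~attn_mask, float("-inf"))
    return masked.max()

# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    N, D = 196, 512                     # e.g. 14x14 patches, 512-d embeddings (assumed)
    patch_feats = torch.randn(N, D)
    text_feat = torch.randn(D)
    sim = image_text_similarity_map(patch_feats, text_feat)
    itsm = sim.reshape(14, 14)          # reshape the per-patch similarities into a 2-D map
    attn_mask = torch.rand(N) > 0.5     # stand-in for a self-supervised attention mask
    score = masked_max_pool(sim, attn_mask)
    print(itsm.shape, score.item())
```

The key design point suggested by the abstract is that the pooling choice determines which regions dominate the pooled image-text score; restricting the max pool to attention-selected patches is one way to keep the score tied to foreground regions.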
