Focus Longer to See Better: Recursively Refined Attention for Fine-Grained Image Classification

05/22/2020
by   Prateek Shroff, et al.

Deep neural networks have made great strides in coarse-grained image classification, in part due to their strong ability to extract discriminative feature representations from images. However, the marginal visual differences between classes in fine-grained images make this task much harder. In this paper, we focus on these marginal differences to extract more representative features. Similar to human vision, our network repetitively focuses on parts of the image to spot small discriminative regions that distinguish the classes. Moreover, we show through interpretability techniques how our network's focus shifts from coarse to fine details. Through our experiments, we also show that a simple attention model can aggregate (weighted) these finer details to attend to the most dominant discriminative part of the image. Our network uses only image-level labels and does not need bounding box or part annotations. Further, the simplicity of our network makes it an easy plug-and-play module. Apart from providing interpretability, our network boosts performance (by up to 2%) over its baseline counterparts. Our codebase is available at https://github.com/TAMU-VITA/Focus-Longer-to-See-Better
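To make the idea concrete, the sketch below illustrates (in PyTorch) how a feature map can be recursively re-attended and how a simple attention can fuse the per-step features. This is a minimal, hypothetical illustration under assumed module and parameter names (e.g. `RecursiveAttention`, `attend`, `step_score`), not the authors' released implementation; see the GitHub repository above for the actual code.

```python
# Minimal sketch (not the authors' code) of recursively refined attention:
# a backbone feature map is re-weighted over several steps, and a simple
# softmax attention aggregates the per-step pooled features.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class RecursiveAttention(nn.Module):
    def __init__(self, num_classes: int, num_steps: int = 3):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # keep layers up to the final conv feature map -> (B, 512, H, W)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.num_steps = num_steps
        # 1x1 conv producing a spatial attention map at each refinement step
        self.attend = nn.Conv2d(512, 1, kernel_size=1)
        # scalar score per step, used to weight the per-step features
        self.step_score = nn.Linear(512, 1)
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        feat = self.features(x)                      # (B, 512, H, W)
        step_vecs = []
        for _ in range(self.num_steps):
            attn = torch.sigmoid(self.attend(feat))  # (B, 1, H, W)
            feat = feat * attn                       # refine: re-weight the map
            step_vecs.append(F.adaptive_avg_pool2d(feat, 1).flatten(1))
        steps = torch.stack(step_vecs, dim=1)        # (B, T, 512)
        # simple attention: softmax over step scores, weighted sum of features
        weights = F.softmax(self.step_score(steps), dim=1)  # (B, T, 1)
        fused = (weights * steps).sum(dim=1)                # (B, 512)
        return self.classifier(fused)


if __name__ == "__main__":
    model = RecursiveAttention(num_classes=200)      # e.g. CUB-200-2011
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)                              # torch.Size([2, 200])
```

Because the module only needs image-level labels and wraps an off-the-shelf backbone, a sketch like this can be dropped into an existing training loop in place of the backbone's usual pooling-plus-classifier head.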

