Self-Supervised Representation Learning on Document Images

04/18/2020
by Adrian Cosma, et al.

This work analyses the impact of self-supervised pre-training on document images. While previous approaches explore the effect of self-supervision on natural images, we show that patch-based pre-training performs poorly on text document images because of their different structural properties and limited intra-sample semantic information. We propose two context-aware alternatives that improve performance. We also propose a novel self-supervision method that exploits the inherent multi-modality of documents (image and text), and which outperforms other popular self-supervised methods as well as supervised ImageNet pre-training.
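The abstract does not spell out the multi-modal objective. As a minimal illustrative sketch, assuming the pretext task is to regress a document image's features onto an embedding of its own (OCR-extracted) text, one could set it up as below; the encoder choice, embedding dimension, and cosine loss are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of a multi-modal self-supervised objective for document images:
# predict a text embedding of the document from the image alone.
# All architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class ImageToTextEmbedding(nn.Module):
    """CNN encoder mapping a document image to a text-embedding vector."""

    def __init__(self, text_dim: int = 300):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()            # keep 2048-d pooled features
        self.backbone = backbone
        self.head = nn.Linear(2048, text_dim)  # project into the text space

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(images))


def pretraining_step(model, images, text_targets, optimizer):
    """One self-supervised step: align the predicted vector with a
    precomputed text embedding of the same document (e.g. averaged
    word vectors over the OCR output -- an assumed target here)."""
    optimizer.zero_grad()
    pred = model(images)
    loss = nn.functional.cosine_embedding_loss(
        pred, text_targets,
        torch.ones(images.size(0), device=images.device),
    )
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pre-training, the backbone's features would be reused for downstream document-image tasks, in the same way ImageNet-pre-trained weights are typically reused.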
