arXiv Preprint
Recent advances in self-supervised learning (SSL), in which large models learn
visual representations from natural images, are rapidly closing the gap between
SSL and fully supervised learning on downstream vision tasks. Inspired by these
advances, and motivated primarily by the emergence of tabular and structured
document image applications, we investigate which self-supervised pretraining
objectives, architectures, and fine-tuning strategies are most effective. To
address these questions, we
introduce RegCLR, a new self-supervised framework that combines contrastive and
regularized methods and is compatible with the standard Vision Transformer
architecture. RegCLR is then instantiated by integrating masked autoencoders as
a representative contrastive method and enhanced Barlow Twins as a
representative regularized method, with configurable input image augmentations
in both branches.
Several real-world table recognition scenarios (e.g., extracting tables from
document images), ranging from standard Word and LaTeX documents to the more
challenging setting of electronic health record (EHR) computer screen images,
benefit greatly from the representations learned by this new framework, with
detection average precision (AP) improving by a relative 4.8% for Table, 11.8%
for Column, and 11.1% for GUI objects over a previous fully supervised baseline
on real-world EHR screen images.
Weiyao Wang, Byung-Hak Kim, Varun Ganapathi
2022-11-02