Proc. SPIE 12039, Medical Imaging 2022: Digital and Computational
Pathology, 120391O (4 April 2022)
Whole Slide Image (WSI) analysis is a powerful method to facilitate the
diagnosis of cancer in tissue samples. Automating this diagnosis poses various
issues, most notably caused by the immense image resolution and limited
annotations. WSIs commonly exhibit resolutions of 100,000 × 100,000 pixels. Annotating
cancerous areas in WSIs on the pixel level is prohibitively labor-intensive and
requires a high level of expert knowledge. Multiple instance learning (MIL)
alleviates the need for expensive pixel-level annotations. In MIL, learning is
performed from slide-level labels, where a pathologist indicates only
whether a slide contains cancerous tissue. Here, we propose Self-ViT-MIL,
a novel approach for classifying and localizing cancerous areas based on
slide-level annotations, eliminating the need for pixel-wise annotated training
data. Self-ViT-MIL is pre-trained in a self-supervised setting to learn rich
feature representations without relying on any labels. The recent Vision
Transformer (ViT) architecture serves as the feature extractor of Self-ViT-MIL.
For localizing cancerous regions, a MIL aggregator with global attention is
used. To the best of our knowledge, Self-ViT-MIL is the first approach to
introduce self-supervised ViTs in MIL-based WSI analysis tasks. We showcase the
effectiveness of our approach on the widely used Camelyon16 dataset. Self-ViT-MIL
surpasses existing state-of-the-art MIL-based approaches in terms of accuracy
and area under the curve (AUC).
Ahmet Gokberk Gul, Oezdemir Cetin, Christoph Reich, Tim Prangemeier, Nadine Flinner, Heinz Koeppl
2022-10-17
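The abstract describes the aggregation step only at a high level: patch embeddings from the self-supervised ViT are pooled with global attention, so the attention weights double as a localization map. Under the usual MIL assumption, a slide is labeled positive if at least one of its patches is cancerous. The sketch below is a minimal, assumed reading based on the standard attention-based MIL pooling of Ilse et al. (2018), not the authors' implementation; the embedding size (384, matching a DINO ViT-S/16 output), the module names, and the two-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionMILAggregator(nn.Module):
    """Attention-based MIL pooling over a bag of patch embeddings.

    A slide is treated as a bag of N patch embeddings (e.g., produced by a
    frozen, self-supervised ViT). The learned attention weights give a
    per-patch relevance score that can be rendered as a localization heatmap.
    """

    def __init__(self, embed_dim: int = 384, attn_dim: int = 128, num_classes: int = 2):
        super().__init__()
        # Two-layer attention network producing one score per patch.
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        # Slide-level classifier on the attention-pooled embedding.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings: torch.Tensor):
        # patch_embeddings: (N, embed_dim) -- one bag (slide) of N patches.
        scores = self.attention(patch_embeddings)                   # (N, 1)
        weights = torch.softmax(scores, dim=0)                      # attention over patches
        slide_embedding = (weights * patch_embeddings).sum(dim=0)   # (embed_dim,)
        logits = self.classifier(slide_embedding)                   # slide-level prediction
        return logits, weights.squeeze(-1)                          # weights ~ localization map

# Hypothetical usage with embeddings from a frozen self-supervised ViT.
aggregator = AttentionMILAggregator(embed_dim=384)
bag = torch.randn(1000, 384)         # 1000 patch embeddings from one WSI
logits, attn = aggregator(bag)
print(logits.shape, attn.shape)      # torch.Size([2]) torch.Size([1000])
```

Because the softmax weights sum to one over the patches of a slide, plotting them at the patch coordinates yields a heatmap of candidate cancerous regions, which is how slide-level supervision alone can still localize disease.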