arXiv Preprint
In this paper, we consider the problem of enhancing self-supervised
vision-language pre-training (VLP) with medical-specific knowledge, by
exploiting the paired image-text reports from daily radiological practice.
In particular, we make the following contributions: First, unlike existing
works that directly process the raw reports, we adopt a novel report filter to
extract the medical entities, avoiding unnecessary complexity from language
grammar and enhancing the supervision signals; Second, we propose a novel
entity embedding module that queries an external knowledge description base,
exploiting the rich context of additional information that the medical domain
affords and implicitly building relationships between entities in the language
embedding space; Third, we propose a novel Transformer-based fusion model for
spatially aligning entity descriptions with visual signals at the image
patch level using only self-supervised learning, thus enabling spatial
grounding; Fourth, we conduct thorough experiments to validate the
effectiveness of our proposed architecture, evaluating on numerous public
benchmarks, e.g., ChestX-ray14, RSNA Pneumonia, SIIM-ACR Pneumothorax, COVIDx
CXR-2, COVID Rural, and EdemaSeverity. In both zero-shot and fine-tuning
settings, our model demonstrates strong performance compared with prior
methods on disease classification and grounding.
Chaoyi Wu, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, Weidi Xie
2023-01-05
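
The sketch below illustrates the fusion idea described in the abstract: knowledge-grounded entity embeddings act as queries in a Transformer decoder that cross-attends to image patch features, so entity-level predictions (and, in principle, the cross-attention maps) are tied to patch locations. This is a minimal illustration under assumed shapes and module choices, not the authors' released implementation; all dimensions and names (EntityPatchFusion, cls_head) are hypothetical.

```python
# Minimal sketch (illustrative assumptions, not the paper's official code):
# entity description embeddings serve as decoder queries and cross-attend to
# image patch features; a per-entity head predicts presence. Reading out the
# cross-attention weights (omitted here) would give patch-level grounding maps.
import torch
import torch.nn as nn

class EntityPatchFusion(nn.Module):
    def __init__(self, dim=256, num_heads=8, num_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.cls_head = nn.Linear(dim, 1)  # per-entity presence logit

    def forward(self, entity_emb, patch_feats):
        # entity_emb:  (B, num_entities, dim) - embedded entity descriptions
        # patch_feats: (B, num_patches, dim)  - features from an image encoder
        fused = self.decoder(tgt=entity_emb, memory=patch_feats)
        return self.cls_head(fused).squeeze(-1)  # (B, num_entities)

# Toy usage with random tensors and assumed sizes.
model = EntityPatchFusion()
entities = torch.randn(2, 75, 256)      # e.g., 75 medical entities
patches = torch.randn(2, 14 * 14, 256)  # e.g., a 14x14 patch grid
print(model(entities, patches).shape)   # torch.Size([2, 75])
```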