arXiv Preprint
An important component of human analysis of medical images and their context
is the ability to relate newly seen cases to similar instances in our memory.
In this paper we mimic this ability by using multi-modal retrieval augmentation
and apply it to several tasks in chest X-ray analysis. By retrieving similar
images and/or radiology reports, we expand and regularize the case at hand with
additional knowledge while maintaining factual consistency. The
method consists of two components. First, vision and language modalities are
aligned using a pre-trained CLIP model. To ensure that retrieval focuses on
detailed disease-related content rather than on global visual appearance, the
model is fine-tuned using disease class information. Subsequently, we construct
a non-parametric retrieval index, which achieves state-of-the-art retrieval
performance. We use this index in our downstream tasks to augment image
representations through multi-head attention for disease classification and
report retrieval. We show that retrieval augmentation gives considerable
improvements on these tasks. Our downstream report retrieval even proves to be
competitive with dedicated report generation methods, paving the way for this
method in medical imaging.
Tom van Sonsbeek, Marcel Worring
2023-02-22
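
The abstract describes the first component only at a high level, so the
following is a minimal sketch of one plausible reading, not the authors'
released implementation: a pre-trained CLIP model is fine-tuned with a
supervised contrastive loss in which image/report pairs sharing a disease
class count as positives, and the resulting embeddings form an exact
nearest-neighbour index. The batch contents, the 14-class label space, and
all hyper-parameters are illustrative assumptions.

import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _preprocess = clip.load("ViT-B/32", device=device)
model = model.float()  # train in fp32 for simplicity in this sketch

def supervised_contrastive_loss(img_emb, txt_emb, labels, temperature=0.07):
    """Align images with reports, using disease labels to define positives.

    Pairs that share a disease class count as positives, so the learned
    similarity is driven by disease content rather than global appearance.
    """
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature               # (B, B)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)  # log-softmax
    loss = -(log_prob * positives).sum(1) / positives.sum(1).clamp(min=1)
    return loss.mean()

# Illustrative stand-in batch; a real pipeline would load chest X-rays,
# their reports, and disease labels (e.g. 14 CheXpert-style classes).
images = torch.randn(8, 3, 224, 224, device=device)
reports = clip.tokenize(["no acute cardiopulmonary findings"] * 8).to(device)
labels = torch.randint(0, 14, (8,), device=device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)
loss = supervised_contrastive_loss(model.encode_image(images),
                                   model.encode_text(reports), labels)
loss.backward()
optimizer.step()

# Non-parametric index: just a matrix of normalized corpus embeddings;
# retrieval is exact cosine-similarity nearest-neighbour search.
with torch.no_grad():
    corpus = F.normalize(model.encode_image(images), dim=-1)   # (N, d)

def retrieve(query_emb, k=3):
    sims = F.normalize(query_emb, dim=-1) @ corpus.t()         # (B, N)
    return sims.topk(k, dim=-1).indices                        # neighbour ids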
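
The second component, augmenting image representations with retrieved cases
through multi-head attention, can likewise be pictured as the query embedding
attending over its retrieved neighbours before classification. The module
below is a hypothetical sketch; the dimensions, the residual connection, and
the multi-label head are assumptions rather than details taken from the paper.

import torch
import torch.nn as nn

class RetrievalAugmentedClassifier(nn.Module):
    """Fuse an image embedding with retrieved neighbours via attention."""

    def __init__(self, dim=512, num_heads=8, num_classes=14):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)  # multi-label disease logits

    def forward(self, query_emb, retrieved_emb):
        # query_emb:     (B, d)    embedding of the input X-ray
        # retrieved_emb: (B, k, d) embeddings of its k retrieved neighbours
        q = query_emb.unsqueeze(1)                   # (B, 1, d)
        fused, _ = self.attn(q, retrieved_emb, retrieved_emb)
        fused = self.norm(q + fused).squeeze(1)      # residual keeps the
        return self.head(fused)                      # original case dominant

classifier = RetrievalAugmentedClassifier()
logits = classifier(torch.randn(4, 512), torch.randn(4, 5, 512))
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (4, 14)).float())   # dummy multi-label targets

A single attention layer with a residual connection is the simplest way to
let retrieved cases refine, rather than replace, the representation of the
case at hand.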