ArXiv Preprint
Medical phrase grounding (MPG) aims to locate the most relevant region in a
medical image given a phrase query describing specific medical findings; it is
an important task for medical image analysis and radiological diagnosis.
However, existing visual grounding methods rely on general visual features for
identifying objects in natural images and are not capable of capturing the
subtle and specialized features of medical findings, leading to sub-optimal
performance in MPG. In this paper, we propose MedRPG, an end-to-end approach
for MPG. MedRPG is built on a lightweight vision-language transformer encoder
and directly predicts the box coordinates of the mentioned medical findings; it
can be trained with limited medical data, making it a valuable tool for medical
image analysis. To enable MedRPG to locate nuanced medical findings with better
region-phrase correspondences, we further propose Tri-attention Context
contrastive alignment (TaCo). TaCo seeks context alignment to pull both the
features and attention outputs of relevant region-phrase pairs close together
while pushing those of irrelevant regions far away. This ensures that the final
box prediction depends more on the corresponding finding-specific regions and phrases.
Experimental results on three MPG datasets demonstrate that our MedRPG
outperforms state-of-the-art visual grounding approaches by a large margin.
Additionally, the proposed TaCo strategy is effective in enhancing finding
localization ability and reducing spurious region-phrase correlations.
Zhihao Chen, Yang Zhou, Anh Tran, Junting Zhao, Liang Wan, Gideon Ooi, Lionel Cheng, Choon Hua Thng, Xinxing Xu, Yong Liu, Huazhu Fu
2023-03-14
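
The abstract does not give TaCo's exact formulation, so the following is only a minimal, generic sketch of the "pull matched region-phrase pairs together, push mismatched ones apart" idea, written in PyTorch. The InfoNCE-style form, the in-batch negative scheme, the temperature, and all names are assumptions for illustration; the paper's actual TaCo objective additionally aligns attention outputs (the "tri-attention context"), which this sketch does not cover.

```python
# Minimal sketch (not the authors' code): a generic contrastive alignment loss
# that pulls matched region-phrase embeddings together and pushes mismatched
# in-batch pairs apart. All design choices here are assumptions.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(region_emb, phrase_emb, temperature=0.07):
    """region_emb, phrase_emb: (B, D) tensors where row i of each forms a
    matched region-phrase pair; other rows in the batch serve as negatives."""
    region_emb = F.normalize(region_emb, dim=-1)
    phrase_emb = F.normalize(phrase_emb, dim=-1)
    # Cosine similarities between every region and every phrase in the batch.
    logits = region_emb @ phrase_emb.t() / temperature          # (B, B)
    targets = torch.arange(region_emb.size(0), device=logits.device)
    # Symmetric cross-entropy: matched pairs lie on the diagonal (positives).
    loss_r2p = F.cross_entropy(logits, targets)
    loss_p2r = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_r2p + loss_p2r)
```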