
ArXiv Preprint

To facilitate both the detection and the interpretation of findings in chest X-rays, comparison with a previous image of the same patient is very valuable to radiologists. Today, the most common deep learning approach to automatically inspecting chest X-rays disregards patient history and classifies only single images as normal or abnormal. Several methods that assist in this comparison task through image registration have nevertheless been proposed in the past. However, as we illustrate, they tend to miss specific types of pathological change, such as cardiomegaly and effusion. Because of their assumptions about fixed anatomical structures or their measures of registration quality, they tend to produce unnaturally deformed warp fields that impair visualization of the difference image between the moving and fixed images. To overcome these limitations, we are the first to use a new paradigm based on individual rib pair segmentation for anatomy-penalized registration, which proves to be a natural way to limit folding of the warp field and is especially beneficial for image pairs with large pathological changes. We show that it is possible to develop a deep-learning-powered solution that can visualize what other methods overlook on a large data set of paired public images, starting from fewer than 25 fully labeled and 50 partly labeled training images, by employing sequential instance memory segmentation with hole dropout, weak labeling, coarse-to-fine refinement, and Gaussian mixture model histogram matching. We statistically evaluate the benefits of our method over the state of the art (SOTA) and highlight the limits of currently used metrics for registration of chest X-rays.
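As a hypothetical illustration of the folding criterion the abstract refers to (not code from the paper), the sketch below counts the pixels of a 2-D displacement field whose Jacobian determinant is non-positive, a common proxy for a physically implausible warp. The function name and the displacement-field layout are assumptions made for the example.

```python
# Minimal sketch, assuming a dense displacement field in pixel units
# with shape (H, W, 2): warp[..., 0] along rows (y), warp[..., 1] along columns (x).
import numpy as np


def folding_fraction(warp: np.ndarray) -> float:
    """Fraction of pixels where the deformation's Jacobian determinant is <= 0."""
    # The deformation is identity + displacement, so its Jacobian is
    # I plus the spatial gradient of the displacement field.
    dy_dy, dy_dx = np.gradient(warp[..., 0])  # derivatives of the y-displacement
    dx_dy, dx_dx = np.gradient(warp[..., 1])  # derivatives of the x-displacement
    jac_det = (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy
    return float(np.mean(jac_det <= 0.0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A mild, near-identity warp should produce (almost) no folded pixels.
    smooth_warp = rng.normal(scale=0.1, size=(256, 256, 2))
    print(f"folded pixels: {folding_fraction(smooth_warp):.4%}")
```

A low folding fraction only indicates that the warp is locally invertible; as the abstract notes, such generic metrics can still hide clinically relevant failures, which motivates the anatomy-penalized registration the authors propose.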

Astrid Berg, Eva Vandersmissen, Maria Wimmer, David Major, Theresa Neubauer, Dimitrios Lenis, Jeroen Cant, Annemiek Snoeckx, Katja Bühler

2023-01-23