
In IEEE Transactions on Medical Imaging; h5-index 74.0

Main coronary segmentation from X-ray angiography images is important for computer-aided diagnosis and treatment of coronary disease. However, it is challenging at three image granularities (the semantic, surrounding, and local levels): semantic confusion between the main and collateral vessels, low contrast between the foreground vessel and the background surroundings, and local ambiguity near the vessel boundaries. Traditional hand-crafted feature-based methods are often insufficient because they lack semantic relationship information and may fail to distinguish the main vessel from collateral vessels. Existing deep learning-based methods are limited by deficiencies in capturing long-distance semantic relationships, adapting to foreground and background interference, and preserving boundary detail. To address these challenges, we propose the progressive perception learning (PPL) framework, which inspects the three image granularities through context, interference, and boundary perception modules. The context perception module focuses on the main coronary vessel by capturing semantic dependence among different coronary segments. The interference perception module purifies the feature maps by enhancing the foreground vessel and suppressing background artifacts. The boundary perception module highlights boundary details by extracting boundary features from the intersection between the foreground and background predictions. Extensive experiments on 1085 subjects show that PPL is effective (e.g., overall Dice greater than 95%) and superior to thirteen state-of-the-art coronary segmentation methods.
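The abstract only describes the three perception modules at a high level, so the following is a minimal, hypothetical PyTorch-style sketch of how such a three-branch design could be composed; the attention, gating, and prediction heads below are illustrative stand-ins (assumed, not taken from the paper), intended only to make the roles of the context, interference, and boundary perception modules concrete.

```python
import torch
import torch.nn as nn


class ContextPerception(nn.Module):
    """Illustrative stand-in: capture long-distance semantic dependence
    among coronary segments, approximated here with self-attention."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feat):                          # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        ctx, _ = self.attn(tokens, tokens, tokens)
        return (tokens + ctx).transpose(1, 2).reshape(b, c, h, w)


class InterferencePerception(nn.Module):
    """Illustrative stand-in: enhance foreground vessel responses and
    suppress background artifacts via simple channel re-weighting."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )

    def forward(self, feat):
        return feat * self.gate(feat)


class BoundaryPerception(nn.Module):
    """Illustrative stand-in: highlight boundary details using the
    intersection (overlap) of foreground and background predictions."""
    def __init__(self, channels):
        super().__init__()
        self.fg_head = nn.Conv2d(channels, 1, 1)
        self.bg_head = nn.Conv2d(channels, 1, 1)

    def forward(self, feat):
        fg = torch.sigmoid(self.fg_head(feat))        # foreground probability
        bg = torch.sigmoid(self.bg_head(feat))        # background probability
        boundary = fg * bg                            # large only where the two predictions overlap, i.e. near the boundary
        return fg, boundary
```

As a rough usage check, passing a feature map of shape (1, 64, 128, 128) through the three modules in sequence yields a foreground probability map and a boundary map of shape (1, 1, 128, 128); the reported 95% overall Dice refers to the standard Dice overlap between the predicted and ground-truth main-vessel masks.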

Zhang Hongwei, Gao Zhifan, Zhang Dong, Hau William Kongto, Zhang Heye

2022-Nov-03