In Computerized Medical Imaging and Graphics: the official journal of the Computerized Medical Imaging Society
Accurate whole heart segmentation (WHS) of multi-modality medical images, including magnetic resonance imaging (MRI) and computed tomography (CT), plays an important role in many clinical applications, such as accurate preoperative diagnosis and planning and intraoperative treatment. Since the shape information of each component of the whole heart is complementary across modalities, multi-modality features can be extracted and the final segmentation obtained by fusing MRI and CT images. In this paper, we propose a multi-modality transfer learning network with adversarial training (MMTLNet) for 3D multi-modality whole heart segmentation. First, the network transfers the source domain (MRI domain) to the target domain (CT domain) by reconstructing the MRI images with a generator network and optimizing the reconstructed MRI images with a discriminator network, which enables us to fuse the MRI images with the CT images and fully utilize the useful information from the multi-modality images for the segmentation task. Second, to retain useful information and remove redundant information for accurate segmentation, we introduce a spatial attention mechanism into the backbone connections of the UNet to optimize feature extraction between layers, and add a channel attention mechanism at the skip connections to optimize the information extracted from the low-level feature maps. Third, we propose a new loss function for the adversarial training by introducing a weighting coefficient that distributes the proportion between the Dice coefficient loss and the generator loss, which not only ensures that images are correctly transferred from the MRI domain to the CT domain, but also achieves accurate segmentation in the transferred domain. We extensively evaluated our method on the data set of the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017.
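The weighted loss described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the convex-combination form, the weighting coefficient `lam`, and the function names are assumptions; the abstract only states that a weighted coefficient distributes the proportion between the Dice coefficient loss and the generator loss.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), with eps for stability."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, gen_loss, lam=0.5):
    """Hypothetical weighted total loss: lam trades off the segmentation
    (Dice) term against the generator (adversarial) term."""
    return lam * dice_loss(pred, target) + (1.0 - lam) * gen_loss
```

A perfect segmentation drives the Dice term to zero, so the total loss is then governed entirely by the adversarial term, scaled by `1 - lam`.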
The Dice scores for whole heart segmentation are 0.914 (CT images) and 0.890 (MRI images), both higher than the state-of-the-art.
Liao Xiangyun, Qian Yinling, Chen Yilong, Xiong Xueying, Wang Qiong, Heng Pheng-Ann
Deep learning, Multi-modality whole heart segmentation, Transfer learning