arXiv Preprint
Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. In medicine, segmentation is still often performed manually by experts, which is a laborious, expensive, and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and it is subject to considerable intra- and inter-observer variation. It is therefore of great significance to develop a method that can automatically segment tumor target regions. In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET with the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of tumor detection in PET and CT: it uses multi-scale convolution operations to extract feature information, highlighting tumor-region location information while suppressing non-tumor-region location information. In addition, our network takes dual-channel inputs in the encoding stage and fuses them in the decoding stage, which exploits the differences and complementarities between PET and CT. We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that the ISA-Net method achieves better segmentation performance and better generalization. Conclusions: The method proposed in this paper performs multi-modal medical image tumor segmentation and can effectively utilize the differences and complementarities of different modalities. With appropriate adjustments, it can also be applied to other multi-modal or single-modal data.
Zhengyong Huang, Sijuan Zou, Guoshuai Wang, Zixiang Chen, Hao Shen, Haiyan Wang, Na Zhang, Lu Zhang, Fan Yang, Haining Wang, Dong Liang, Tianye Niu, Xiaohua Zhu, Zhanli Hu
2022-11-04
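
To make the abstract's description of the improved spatial attention idea more concrete, the following is a minimal PyTorch-style sketch of a multi-scale spatial attention block: parallel convolutions with different receptive fields are fused into a single-channel attention map that re-weights the input features. The class name MultiScaleSpatialAttention, the 3/5/7 kernel sizes, and all layer choices are illustrative assumptions for 2D feature maps, not the authors' released implementation.

# Hypothetical sketch of a multi-scale spatial attention block, assuming the
# general design described in the abstract; kernel sizes and layer shapes are
# illustrative, not the ISA-Net authors' exact configuration.
import torch
import torch.nn as nn

class MultiScaleSpatialAttention(nn.Module):
    """Builds a per-pixel attention map from multi-scale convolutions
    and uses it to re-weight the input features."""
    def __init__(self, channels: int):
        super().__init__()
        # Parallel convolutions with different receptive fields (assumed scales).
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        # Collapse the concatenated multi-scale features to a one-channel map.
        self.to_map = nn.Conv2d(3 * channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1
        )
        attention = torch.sigmoid(self.to_map(multi_scale))  # values in (0, 1)
        # Emphasize likely tumor locations, suppress background responses.
        return x * attention

if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)   # e.g. encoder features of one modality
    block = MultiScaleSpatialAttention(32)
    print(block(feats).shape)            # torch.Size([1, 32, 64, 64])

In a dual-branch layout of the kind the abstract describes, a block like this could be applied to the PET and CT encoder features separately before their outputs are fused in the decoding stage; that wiring is likewise an assumption rather than a statement of the published architecture.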