Deep-learning-based image processing has emerged as a valuable tool in recent
years owing to its high performance. However, the quality of
deep-learning-based methods relies heavily on the amount of training data, and
the cost of acquiring a large amount of data is often prohibitive in medical
fields. We therefore developed a deep-learning-based method for CT modality
conversion that requires only a small number of unpaired images. The proposed
method is based on generative adversarial networks (GANs) with several
extensions tailored for CT images, emphasizing both the preservation of
anatomical structure in the processed images and a reduction in the amount of
training data required.
The method was applied to convert megavoltage computed tomography (MVCT)
images to kilovoltage computed tomography (kVCT) images. Training
was performed using several datasets acquired from patients with head and neck
cancer. The size of the datasets ranged from 16 slices (for two patients) to
2745 slices (for 137 patients) of MVCT and 2824 slices of kVCT for 98 patients.
The quality of the processed MVCT images was considerably enhanced, and the
structural changes in the images were minimized. As the size of the training
data increased, the image quality converged satisfactorily at a few hundred
slices. In addition to statistical and visual evaluations, these
results were clinically evaluated by medical doctors in terms of the accuracy
of contouring. In summary, we developed a deep-learning-based MVCT-to-kVCT
conversion model that can be trained using a few hundred unpaired images, and
we demonstrated the stability of the model with respect to training-data size.
This research promotes the reliable use of deep learning in clinical medicine
by partially answering the commonly asked questions: "Is our data enough? How
much data must we prepare?"
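The unpaired training described above is in the spirit of cycle-consistent GANs. The following toy sketch illustrates how cycle-consistency and identity terms can be computed from unpaired MVCT and kVCT slices; it is not the authors' actual architecture, and the linear generators, loss weight, and array shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two generators G: MVCT -> kVCT and F: kVCT -> MVCT.
# Real models would be convolutional networks; linear maps keep the sketch runnable.
def G(x):  # hypothetical MVCT -> kVCT mapping
    return 1.1 * x + 0.05

def F(y):  # hypothetical kVCT -> MVCT mapping
    return (y - 0.05) / 1.1

def cycle_consistency_loss(x, y):
    """L1 cycle loss encouraging F(G(x)) ~ x and G(F(y)) ~ y.
    This constraint is what allows training from unpaired slices."""
    return np.mean(np.abs(F(G(x)) - x)) + np.mean(np.abs(G(F(y)) - y))

def identity_loss(x, y):
    """Identity term sometimes used to discourage unnecessary change:
    mapping an image toward its own modality should alter it little."""
    return np.mean(np.abs(G(y) - y)) + np.mean(np.abs(F(x) - x))

mvct = rng.random((4, 8, 8))  # unpaired MVCT slices (toy data)
kvct = rng.random((4, 8, 8))  # unpaired kVCT slices (toy data)

# 0.5 is an illustrative loss weight; the adversarial terms supplied by the
# discriminators are omitted here for brevity.
total = cycle_consistency_loss(mvct, kvct) + 0.5 * identity_loss(mvct, kvct)
print(round(float(total), 4))
```

Because the toy F is the exact inverse of G, the cycle term is numerically zero here; with learned networks it is only driven toward zero by training.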
Sho Ozaki, Shizuo Kaji, Kanabu Nawa, Toshikazu Imae, Atsushi Aoki, Takahiro Nakamoto, Takeshi Ohta, Yuki Nozawa, Hideomi Yamashita, Akihiro Haga, Keiichi Nakagawa