In Physical and Engineering Sciences in Medicine
BACKGROUND : Optical scanning technologies are increasingly being used to supplement treatment workflows in radiation oncology, such as surface-guided radiotherapy and the 3D printing of custom bolus. One limitation of optical scanning devices is that they capture no internal anatomical information about the patient being scanned. As a result, conventional radiation therapy treatment planning using this imaging modality alone is not feasible. Deep learning is useful for automating various manual tasks in radiation oncology, most notably organ segmentation and treatment planning. Deep learning models have also been used to transform MRI datasets into synthetic CT datasets, facilitating the development of MRI-only radiation therapy planning.
AIMS : To train a pix2pix generative adversarial network to transform 3D optical scan data into estimated MRI datasets for a given patient, providing additional anatomical data for selected radiation therapy treatment sites. The proposed network may provide useful anatomical information for treatment planning of, for example, surface mould brachytherapy, total body irradiation, and total skin electron therapy, without delivering any imaging dose.
METHODS : A 2D pix2pix GAN was trained on 15,000 axial MRI slices of healthy adult brains paired with corresponding external mask slices. The model was validated on a further 5,000 previously unseen external mask slices. The predictions were compared with the "ground-truth" MRI slices using the multi-scale structural similarity index (MSSI) metric. A certified neuro-radiologist was subsequently consulted to provide an independent review of the model's performance in terms of anatomical accuracy and consistency. The network was then applied to a 3D photogrammetry scan of a test subject to demonstrate the feasibility of this novel technique.
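The validation step above scores each predicted slice against its ground-truth MRI slice with a structural-similarity metric. As a minimal illustration of that building block, the sketch below implements single-scale SSIM in NumPy/SciPy; the study itself used the multi-scale variant (MSSI), which aggregates this same term over several image resolutions. All function names here are illustrative and not taken from the paper's code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(x: np.ndarray, y: np.ndarray, win: int = 7, L: float = 1.0) -> float:
    """Mean structural similarity between two same-sized grayscale images in [0, L]."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    # Local means over a win x win window.
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    # Local (biased) variances and covariance via E[x^2] - E[x]^2.
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    s = ((2 * mx * my + C1) * (2 * cxy + C2)) / \
        ((mx * mx + my * my + C1) * (vx + vy + C2))
    return float(s.mean())

# Toy check on a synthetic "slice": identical images score 1.0,
# and added noise lowers the score.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
noisy = np.clip(truth + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(ssim(truth, truth))
print(ssim(truth, noisy))
```

In a validation loop of this kind, the per-slice scores over the held-out masks would then be averaged to report a mean and standard deviation, as done for the 5,000 validation images.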
RESULTS : The trained pix2pix network predicted MRI slices with a mean MSSI of 0.831 ± 0.057 for the 5,000 validation images, indicating that a significant proportion of a patient's gross cranial anatomy can be estimated from the exterior contour alone. When independently reviewed by a certified neuro-radiologist, the model's performance was described as "quite amazing, but there are limitations in the regions where there is wide variation within the normal population." When the trained network was applied to a 3D model of a human subject acquired using optical photogrammetry, the network could estimate the corresponding MRI volume for that subject with good qualitative accuracy. However, a ground-truth MRI baseline was not available for quantitative comparison.
CONCLUSIONS : A deep learning model was developed to transform 3D optical scan data of a patient into an estimated MRI volume, potentially increasing the usefulness of optical scanning in radiation therapy planning. This work has demonstrated that much of the human cranial anatomy can be predicted from the external shape of the head and may provide an additional source of valuable imaging data. Further research is required to investigate the feasibility of this approach in a clinical setting and to further improve the model's accuracy.
Douglass Michael, Gorayski Peter, Patel Sandy, Santos Alexandre
2023-Feb-08
3D scan, Deep learning, GAN, MRI, Photogrammetry, Radiation oncology, Synthetic, Treatment planning, pix2pix