
In Journal of Neurosurgery; h5-index 64.0

OBJECTIVE: Automatic segmentation of vestibular schwannomas (VSs) from MRI could significantly improve clinical workflow and assist in patient management. Accurate tumor segmentation and volumetric measurements provide the best indicators for detecting subtle VS growth, but current techniques are labor intensive and dedicated software is not readily available within the clinical setting. The authors aimed to develop a novel artificial intelligence (AI) framework, embedded in the clinical routine, for automatic delineation and volumetry of VS.

METHODS: Imaging data (contrast-enhanced T1-weighted [ceT1] and high-resolution T2-weighted [hrT2] MR images) from all patients meeting the study's inclusion/exclusion criteria who had a single sporadic VS treated with Gamma Knife stereotactic radiosurgery were used to create a model. The authors developed a novel AI framework based on a 2.5D convolutional neural network (CNN) to exploit the different in-plane and through-plane resolutions encountered in standard clinical imaging protocols. They used a computational attention module to enable the CNN to focus on the small VS target and proposed supervision on the attention map for more accurate segmentation. The manually segmented target tumor volume (also tested for interobserver variability) was used as the ground truth for training and evaluation of the CNN. The authors quantitatively measured the Dice score, average symmetric surface distance (ASSD), and relative volume error (RVE) of the automatic segmentation results in comparison to manual segmentations to assess the model's accuracy.
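The Dice score and RVE reported here are standard overlap and volume metrics for binary segmentation masks. As a minimal sketch (not the authors' implementation), both can be computed directly from voxel counts; the function names and toy masks below are illustrative:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks, in percent:
    100 * 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * intersection / (pred.sum() + gt.sum())

def relative_volume_error(pred, gt):
    """Relative volume error in percent: 100 * |V_pred - V_gt| / V_gt."""
    return 100.0 * abs(int(pred.sum()) - int(gt.sum())) / gt.sum()

# Toy example: two overlapping square masks on a 10x10 grid
gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True          # 36 "voxels" of ground truth
pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 3:8] = True        # 25 predicted voxels, all inside the ground truth

print(round(dice_score(pred, gt), 2))            # 2*25/(25+36) -> 81.97
print(round(relative_volume_error(pred, gt), 2))  # |25-36|/36   -> 30.56
```

The ASSD additionally requires surface extraction and distance transforms (e.g., as provided by medical imaging libraries), so it is omitted from this sketch.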

RESULTS: Imaging data from all eligible patients (n = 243) were randomly split into 3 nonoverlapping groups for training (n = 177), hyperparameter tuning (n = 20), and testing (n = 46). Dice, ASSD, and RVE scores were measured on the testing set for the respective input data types as follows: ceT1 93.43%, 0.203 mm, 6.96%; hrT2 88.25%, 0.416 mm, 9.77%; combined ceT1/hrT2 93.68%, 0.199 mm, 7.03%. Given a margin of 5% for the Dice score, the automated method was shown to achieve statistically equivalent performance in comparison to an annotator using ceT1 images alone (p = 4e-13) and combined ceT1/hrT2 images (p = 7e-18) as inputs.
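The abstract does not specify which equivalence test produced these p-values. One common way to test equivalence within a fixed margin is a one-sided t-test on the paired Dice differences against that margin (half of a TOST procedure); the sketch below assumes this approach and uses invented toy data:

```python
import numpy as np
from scipy import stats

def equivalence_p_value(auto_dice, manual_dice, margin=5.0):
    """One-sided paired test that the automated Dice is within `margin`
    percentage points of the manual Dice.
    H0: mean(manual - auto) >= margin; H1: mean(manual - auto) < margin.
    A small p-value supports equivalence within the margin."""
    diff = np.asarray(manual_dice, dtype=float) - np.asarray(auto_dice, dtype=float)
    result = stats.ttest_1samp(diff, popmean=margin, alternative="less")
    return result.pvalue

# Toy per-case Dice scores (percent); NOT the study's data
manual = [94.0, 92.5, 95.1, 93.0, 94.4, 92.8]
auto = [93.1, 91.0, 94.6, 91.8, 93.9, 91.5]

p = equivalence_p_value(auto, manual, margin=5.0)
print(p < 0.05)  # differences well inside the 5% margin -> True
```

The actual study would apply this kind of bound per input modality (ceT1 alone and combined ceT1/hrT2) over the 46 test cases.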

CONCLUSIONS: The authors developed a robust AI framework for automatically delineating and calculating VS tumor volume and achieved excellent results, equivalent to those of an independent human annotator. This promising AI technology has the potential to improve the management of patients with VS and may extend to other brain tumors.

Shapey Jonathan, Wang Guotai, Dorent Reuben, Dimitriadis Alexis, Li Wenqi, Paddick Ian, Kitchen Neil, Bisdas Sotirios, Saeed Shakeel R, Ourselin Sebastien, Bradford Robert, Vercauteren Tom

2019-Dec-06

AI = artificial intelligence, ASSD = average symmetric surface distance, CNN = convolutional neural network, DL = deep learning, GK = Gamma Knife, HDL = hardness-weighted Dice loss, MRI, RVE = relative volume error, SRS = stereotactic radiosurgery, SpvA = supervised attention module, VS = vestibular schwannoma, artificial intelligence, ceT1 = contrast-enhanced T1-weighted, convolutional neural network, hrT2 = high-resolution T2-weighted, oncology, segmentation, tumor, vestibular schwannoma