ArXiv Preprint
Surgery for gliomas (intrinsic brain tumors), especially when low-grade, is
challenging due to the infiltrative nature of the lesion. Currently, no
real-time, intra-operative, label-free, and wide-field tool is available to
assist and guide the surgeon in finding the relevant demarcations for these
tumors. While marker-based methods exist for the high-grade glioma case, there
is no convenient solution available for the low-grade case; thus, marker-free
optical techniques represent an attractive option. Although RGB imaging is a
standard tool in surgical microscopes, it does not contain sufficient
information for tissue differentiation. We leverage the richer information from
hyperspectral imaging (HSI), acquired with a snapscan camera in the 468–787 nm
range, coupled to a surgical microscope, to build a deep-learning-based
diagnostic tool for cancer resection with potential for intra-operative
guidance. However, the main limitation of the HSI snapscan camera is its image
acquisition time, which restricts its widespread deployment in the operating theater.
Here, we investigate the effect of HSI channel reduction and pre-selection to
scope the design space for the development of cheaper and faster sensors.
Neural networks are used to identify the most important spectral channels for
tumor tissue differentiation, optimizing the trade-off between the number of
channels and precision to enable real-time intra-surgical application. We
evaluate the performance of our method on a clinical dataset that was acquired
during surgery on five patients. By demonstrating the possibility of
efficiently detecting low-grade glioma, these results can lead to better cancer
resection demarcations, potentially improving treatment effectiveness and
patient outcomes.
Tommaso Giannantonio, Anna Alperovich, Piercosimo Semeraro, Manfredo Atzori, Xiaohan Zhang, Christoph Hauger, Alexander Freytag, Siri Luthman, Roeland Vandebriel, Murali Jayapala, Lien Solie, Steven de Vleeschouwer
2023-02-06
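
The abstract states that neural networks are used to identify the most important spectral channels for tumor tissue differentiation, but it does not detail the mechanism. The sketch below is one generic way such a channel ranking could be obtained, not the authors' method: a per-pixel spectral classifier with a learnable gate per channel and an L1 penalty that drives unimportant gates toward zero. The channel count, network size, and training data are illustrative placeholders.

    # Hypothetical sketch (not the paper's method): rank HSI channels by
    # importance via a learnable per-channel gate with an L1 sparsity penalty.
    import torch
    import torch.nn as nn

    N_CHANNELS = 30   # assumed number of spectral bands (illustrative)
    N_CLASSES = 2     # e.g. tumor vs. healthy tissue (illustrative)

    class GatedSpectralClassifier(nn.Module):
        """Per-pixel spectral classifier with one learnable gate per channel."""
        def __init__(self, n_channels: int, n_classes: int):
            super().__init__()
            self.gate = nn.Parameter(torch.ones(n_channels))  # channel importance weights
            self.mlp = nn.Sequential(
                nn.Linear(n_channels, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, n_channels) per-pixel spectra; gating rescales each band
            return self.mlp(x * self.gate)

    model = GatedSpectralClassifier(N_CHANNELS, N_CLASSES)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Synthetic stand-in for labelled per-pixel spectra.
    x = torch.randn(512, N_CHANNELS)
    y = torch.randint(0, N_CLASSES, (512,))

    for _ in range(200):
        optimizer.zero_grad()
        # L1 term pushes gates of uninformative channels toward zero.
        loss = criterion(model(x), y) + 1e-2 * model.gate.abs().sum()
        loss.backward()
        optimizer.step()

    # Channels with the largest |gate| are candidates to keep in a reduced sensor.
    ranking = torch.argsort(model.gate.detach().abs(), descending=True)
    print("Channel importance ranking:", ranking.tolist())

In a setup like this, the channels whose gates survive the sparsity penalty would be the natural candidates for a cheaper, faster sensor with fewer bands, which is the trade-off between channel count and precision that the abstract describes.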