In Computers in Biology and Medicine
BACKGROUND AND OBJECTIVES : Predicting patients' response to treatment and survival is a key step toward precision medicine in oncology. To this end, radiomics has been proposed as a field of study in which images are used instead of invasive methods. The first step in radiomic analysis in oncology is lesion segmentation. However, this task is time-consuming and subject to inter-physician variability. Automated tools based on supervised deep learning have made great progress in assisting physicians, but they are data-hungry, and annotation remains a major bottleneck in the medical field, where only a small subset of images is annotated.
METHODS : In this work, we propose a multi-task, multi-scale learning framework to predict patients' survival and treatment response. We show that the encoder can leverage multiple tasks to extract meaningful, powerful features that improve radiomic performance. We also show that subsidiary tasks serve as an inductive bias, allowing the model to generalize better.
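The multi-task idea described above can be sketched as a weighted sum of per-task losses computed on the outputs of task-specific heads that share one encoder. The sketch below is illustrative only: the loss functions and the weights `w_seg` and `w_cls` are hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask (subsidiary task)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(prob, label, eps=1e-7):
    """Binary cross-entropy for the response/survival classification head."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return -(label * np.log(prob) + (1 - label) * np.log(1 - prob))

def multitask_loss(seg_pred, seg_target, cls_prob, cls_label,
                   w_seg=1.0, w_cls=0.5):
    """Weighted sum of task losses trained jointly through a shared encoder.
    The segmentation task acts as an inductive bias for the classifier.
    Weights are illustrative, not taken from the paper."""
    return (w_seg * dice_loss(seg_pred, seg_target)
            + w_cls * bce_loss(cls_prob, cls_label))

# Toy example: a perfect segmentation and a confident correct classification
# yield a small combined loss.
seg = np.array([[0.0, 1.0], [1.0, 0.0]])
loss = multitask_loss(seg, seg, cls_prob=0.9, cls_label=1)
```

In practice, both heads backpropagate through the same encoder, so gradients from the segmentation task regularize the features used for response and survival prediction.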
RESULTS : Our model was tested and validated for treatment response and survival prediction in esophageal and lung cancers, with areas under the ROC curve of 77% and 71%, respectively, outperforming single-task learning methods.
CONCLUSIONS : Multi-task multi-scale learning enables higher performance of radiomic analysis by extracting rich information from intratumoral and peritumoral regions.
Amyar Amine, Modzelewski Romain, Vera Pierre, Morard Vincent, Ruan Su
2022-Oct-18
Deep learning, Image classification, Image segmentation, Multi-task learning, Positron Emission Tomography, Radiomics