In Computerized Medical Imaging and Graphics: The Official Journal of the Computerized Medical Imaging Society

Classification of subtype and grade is imperative in the clinical diagnosis and prognosis of cancer. Many deep learning-based studies of cancer classification draw on pathology and genomics. However, most of them rely on late fusion and require full supervision for pathology image analysis. To address these problems, we present an integrated framework for cancer classification with pathology and genomics data. This framework consists of two major parts: a weakly supervised model for extracting patch features from whole slide images (WSIs), and a hierarchical multimodal fusion model. The weakly supervised model makes full use of WSI-level labels and mitigates the effect of label noise through a self-training strategy. The generic multimodal fusion model captures deep interaction information through multi-level attention mechanisms and controls the expressiveness of each modal representation. We validate our approach on glioma and lung cancer datasets from The Cancer Genome Atlas (TCGA). The results demonstrate that the proposed method outperforms state-of-the-art methods, with competitive AUCs of 0.872 and 0.977 on the two datasets, respectively. This paper offers insight into how to build deep networks on multimodal biomedical data and proposes a more general framework for pathology image analysis without pixel-level annotation.
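As a rough illustration of the ideas in the abstract (not the authors' actual architecture), the combination of attention pooling over WSI patch features with a gated fusion of pathology and genomics embeddings might be sketched as follows. All function names, dimensions, and weights here are hypothetical placeholders, using NumPy for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(patches, w):
    # patches: (n_patches, d) features from a weakly supervised patch model
    # w: (d,) hypothetical attention scoring vector
    scores = softmax(patches @ w)        # (n_patches,) attention weights
    return scores @ patches              # (d,) slide-level embedding

def gated_fusion(h_path, h_gene, g_path, g_gene):
    # sigmoid gates control the expressiveness of each modal representation
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return np.concatenate([sig(g_path) * h_path, sig(g_gene) * h_gene])

rng = np.random.default_rng(0)
d = 8
patches = rng.normal(size=(100, d))      # stand-in patch features from one WSI
h_gene = rng.normal(size=d)              # stand-in genomics embedding
h_path = attention_pool(patches, rng.normal(size=d))
fused = gated_fusion(h_path, h_gene, rng.normal(size=d), rng.normal(size=d))
print(fused.shape)                       # (16,) fused multimodal representation
```

In a full model the attention vector and gates would be learned, and the fused vector would feed a classification head; this sketch only shows the data flow the abstract describes.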

Qiu Lu, Zhao Lu, Hou Runping, Zhao Wangyuan, Zhang Shunan, Lin Zefan, Teng Haohua, Zhao Jun

2023-Jan-10

Cancer classification, Multimodal fusion, Weakly supervised learning, Whole-slide image analysis