
General

CeCILE - An Artificial Intelligence Based Cell-Detection for the Evaluation of Radiation Effects in Eucaryotic Cells.

In Frontiers in oncology

The fundamental basis for the development of novel radiotherapy methods is in-vitro cellular studies. To assess different endpoints of cellular reactions to irradiation, such as proliferation, cell cycle arrest, and cell death, several assays are used as standard methods in radiobiological research. For example, the colony forming assay investigates cell survival, and the Caspase3/7-Sytox assay cell death. The major limitation of these assays is that analysis occurs at a fixed timepoint after irradiation; thus, little is known about the reactions before or after the assay is performed. Additionally, these assays require special treatments, which influence cell behavior and health. In this study, a completely new method is proposed to tackle these challenges: a deep-learning algorithm called CeCILE (Cell Classification and In-vitro Lifecycle Evaluation), which detects and analyzes cells in videos obtained from phase-contrast microscopy. With this method, we can observe and analyze the behavior and health conditions of single cells over several days after treatment, up to a sample size of 100 cells per image frame. To train CeCILE, we built a dataset by labeling cells on microscopic images and assigning class labels to each cell, which define the cell states in the cell cycle. After successful training of CeCILE, we irradiated CHO-K1 cells with 4 Gy protons, imaged them for 2 days with a microscope equipped with a live-cell-imaging set-up, and analyzed the videos both with CeCILE and by hand. From this analysis, we gained information about cell numbers, cell divisions, and cell deaths over time. In this first proof of principle, we showed that results similar to those of the colony forming and Caspase3/7-Sytox assays were achieved. Therefore, CeCILE has the potential to assess the same endpoints as state-of-the-art assays while giving extra information about the evolution of cell numbers, cell state, and cell cycle.
Additionally, CeCILE will be extended to track individual cells and their descendants throughout the whole video to follow the behavior of each cell and its progeny after irradiation. This tracking capability can take radiobiologic research to the next level by giving a better understanding of cellular reactions to radiation.
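As a rough illustration of the per-frame aggregation such a detector enables (the class labels and the detection data layout here are invented for the sketch, not CeCILE's actual output format):

```python
from collections import Counter

# Hypothetical per-frame detections: each frame is a list of
# (cell_id, class_label) tuples, as a detector like CeCILE might emit.
frames = [
    [(1, "interphase"), (2, "interphase"), (3, "mitosis")],
    [(1, "interphase"), (2, "mitosis"), (3, "dead")],
    [(1, "mitosis"), (2, "interphase"), (4, "interphase")],
]

def summarize(frames):
    """Aggregate cell counts and per-state counts for each frame."""
    summary = []
    for t, detections in enumerate(frames):
        states = Counter(label for _, label in detections)
        summary.append({"frame": t, "n_cells": len(detections), **states})
    return summary

timeline = summarize(frames)
```

From such a timeline, trends in cell numbers, divisions (mitosis events), and deaths over time can be read off directly.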

Rudigkeit Sarah, Reindl Julian B, Matejka Nicole, Ramson Rika, Sammer Matthias, Dollinger Günther, Reindl Judith


cell-tracking, deep-learning, lifecycle analysis, phase-contrast microscopy, radiobiology

Radiology

Deep Neural Network Analysis of Pathology Images With Integrated Molecular Data for Enhanced Glioma Classification and Grading.

In Frontiers in oncology

Gliomas are primary brain tumors that originate from glial cells. Classification and grading of these tumors are critical to prognosis and treatment planning. The current criteria for glioma classification in the central nervous system (CNS) were introduced by the World Health Organization (WHO) in 2016. These criteria require the integration of histology with genomics. In 2017, the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW) was established to provide up-to-date recommendations for CNS tumor classification, which the WHO is expected to adopt in its upcoming edition. In this work, we propose a novel glioma analytical method that, for the first time in the literature, integrates a cellularity feature derived from the digital analysis of brain histopathology images with molecular features following the latest WHO criteria. We first propose a novel over-segmentation strategy for region-of-interest (ROI) selection in large histopathology whole slide images (WSIs). A Deep Neural Network (DNN)-based classification method then fuses molecular features with cellularity features to improve tumor classification performance. We evaluate the proposed method with 549 patient cases from The Cancer Genome Atlas (TCGA) dataset. The cross-validated classification accuracies are 93.81% for lower-grade glioma (LGG) vs. high-grade glioma (HGG) using a regular DNN, and 73.95% for LGG II vs. LGG III using a residual neural network (ResNet) DNN, respectively. Our experiments suggest that the type of deep learning network has a significant impact on tumor subtype discrimination between LGG II and LGG III. These results outperform state-of-the-art methods in classifying LGG II vs. LGG III and offer competitive performance in distinguishing LGG vs. HGG in the literature. In addition, we investigate molecular subtype classification using pathology images and cellularity information.
Finally, for the first time in the literature, this work shows promise for cellularity quantification to predict brain tumor grading for LGGs with IDH mutations.
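A minimal sketch of the early-fusion idea, concatenating cellularity and molecular feature vectors before a small classification network (all dimensions, weights, and the two-layer architecture are illustrative placeholders, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: cellularity features derived from WSI analysis
# and molecular features (e.g. mutation indicators); sizes are made up.
cellularity = rng.random((4, 3))   # 4 cases, 3 cellularity features
molecular = rng.random((4, 5))     # 4 cases, 5 molecular features

# Early fusion: concatenate the per-case feature vectors.
fused = np.concatenate([cellularity, molecular], axis=1)

# A tiny one-hidden-layer network stands in for the DNN classifier.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 2)), np.zeros(2)
h = np.maximum(fused @ W1 + b1, 0.0)                        # ReLU layer
logits = h @ W2 + b2
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
```

The design choice shown is feature-level (early) fusion; fusing at the decision level (averaging per-modality predictions) is the common alternative.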

Pei Linmin, Jones Karra A, Shboul Zeina A, Chen James Y, Iftekharuddin Khan M


IDH mutation, brain tumor classification and grading, cellularity, central nervous system tumor, deep neural network, glioma, molecular, radiomics

Oncology

Potential and limitations of radiomics in neuro-oncology.

In Journal of clinical neuroscience : official journal of the Neurosurgical Society of Australasia

Radiomics seeks to apply classical methods of image processing to obtain quantitative parameters from imaging. Derived features are subsequently fed into algorithmic models to aid clinical decision making. The application of radiomics and machine learning techniques to clinical medicine remains in its infancy. The great potential of radiomics lies in its objective, granular approach to investigating clinical imaging. In neuro-oncology, advanced machine learning techniques, particularly deep learning, are at the forefront of new discoveries in the field. However, despite the great promise of machine learning aided radiomic approaches, the current use remains confined to scholarly research, without real-world deployment in neuro-oncology. The paucity of data, inconsistencies in preprocessing, radiomic feature instability, and the rarity of the events of interest are critical barriers to clinical translation. In this article, we will outline the major steps in the process of radiomics, as well as review advances and challenges in the field as they pertain to neuro-oncology.
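To illustrate the quantitative-feature extraction step that radiomics starts from, here is a minimal sketch computing a few common first-order features from an ROI (the feature set and the 32-bin histogram are illustrative choices, not a standardized definition):

```python
import numpy as np

def first_order_features(roi):
    """Compute a few first-order radiomic features from a 2-D ROI."""
    flat = np.asarray(roi, dtype=float).ravel()
    mean, std = flat.mean(), flat.std()
    # Discretize intensities to estimate a probability distribution.
    hist, _ = np.histogram(flat, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": mean,
        "std": std,
        "skewness": ((flat - mean) ** 3).mean() / std ** 3 if std else 0.0,
        "entropy": float(-(p * np.log2(p)).sum()),
    }

roi = np.array([[0.0, 1.0], [2.0, 3.0]])
features = first_order_features(roi)
```

The feature-instability concern raised above shows up exactly here: choices such as the bin count or ROI boundary can shift these values, which is why consistent preprocessing is critical for clinical translation.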

Taha Birra, Boley Daniel, Sun Ju, Chen Clark


Deep learning, Imaging, Machine learning, Neuro-oncology, Radiomics

General

Fusion of AI techniques to tackle COVID-19 pandemic: models, incidence rates, and future trends.

In Multimedia systems

The COVID-19 pandemic is rapidly spreading across the globe, having infected millions of people and claimed hundreds of thousands of lives. Over the years, the role of artificial intelligence (AI) has been on the rise as its algorithms have become more and more accurate, and its role in strengthening the existing healthcare system is thought to be the most profound. Moreover, the pandemic has brought an opportunity to showcase the potential of AI-healthcare integration, as the current infrastructure worldwide is overwhelmed and crumbling. Due to AI's flexibility and adaptability, it can be used as a tool to tackle COVID-19. Motivated by these facts, in this paper, we survey how AI techniques can handle the COVID-19 pandemic situation and present the merits and demerits of these techniques. This paper presents a comprehensive end-to-end review of the AI techniques that can be used to tackle all areas of the pandemic. Further, we systematically discuss the issues of COVID-19 and, based on the literature review, suggest potential countermeasures using AI techniques. In the end, we analyze various open research issues and challenges associated with integrating AI techniques into the COVID-19 response.

Shah Het, Shah Saiyam, Tanwar Sudeep, Gupta Rajesh, Kumar Neeraj


AI, COVID-19, Deep learning, Healthcare, Machine learning

General

MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning

ArXiv Preprint

Healthcare representation learning on the Electronic Health Record (EHR) is seen as crucial for predictive analytics in the medical field. Many natural language processing techniques, such as word2vec, RNN, and self-attention, have been adapted for use in hierarchical and time-stamped EHR data, but fail when they lack either general or task-specific data. Hence, some recent works train healthcare representations by incorporating medical ontology (a.k.a. knowledge graph) through self-supervised tasks like diagnosis prediction, but (1) the small-scale, monotonous ontology is insufficient for robust learning, and (2) critical contexts or dependencies underlying patient journeys are never exploited to enhance ontology learning. To address this, we propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO), for healthcare representation learning and predictive analytics. Specifically, it consists of task-specific representation learning and graph-embedding modules to learn both patient journey and medical ontology interactively. Consequently, this creates a mutual integration that benefits both healthcare representation learning and medical ontology embedding. Moreover, such integration is achieved by joint training of both task-specific predictive and ontology-based disease typing tasks based on fused embeddings of the two modules. Experiments conducted on two real-world diagnosis prediction datasets show that our healthcare representation model MIMO not only achieves better predictive results than previous state-of-the-art approaches, regardless of sufficient or insufficient training data, but also derives more interpretable embeddings of diagnoses.
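A toy sketch of the fused-embedding and joint-loss idea described above (the embedding sizes, the simple concatenation fusion, and the weighting `alpha` are assumptions for illustration, not MIMO's actual design):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in outputs of the two modules (sizes are illustrative):
journey_emb = rng.random((2, 16))    # task-specific patient-journey module
ontology_emb = rng.random((2, 16))   # graph-embedding (ontology) module

# Fusion sketched here as concatenation of the two modules' embeddings.
fused = np.concatenate([journey_emb, ontology_emb], axis=1)

def joint_loss(task_loss, typing_loss, alpha=0.5):
    """Joint objective over the predictive task and disease typing.

    A weighted sum is the simplest way to train both tasks at once;
    alpha trades off the ontology-based typing term.
    """
    return task_loss + alpha * typing_loss
```

Joint training on the fused embedding is what couples the two modules: gradients from both tasks flow into both embeddings, which is the "mutual integration" the abstract refers to.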

Xueping Peng, Guodong Long, Tao Shen, Sen Wang, Zhendong Niu, Chengqi Zhang


General

Easily Created Prediction Model Using Automated Artificial Intelligence Framework (Prediction One, Sony Network Communications Inc., Tokyo, Japan) for Subarachnoid Hemorrhage Outcomes Treated by Coiling and Delayed Cerebral Ischemia.

In Cureus

Introduction Reliable prediction models of subarachnoid hemorrhage (SAH) outcomes and delayed cerebral ischemia (DCI) are needed to decide the treatment strategy. Automated artificial intelligence (AutoAI) is attractive, but there are few reports on AutoAI-based models for SAH functional outcomes and DCI. We herein made models using an AutoAI framework, Prediction One (Sony Network Communications Inc., Tokyo, Japan), and compared them to previous statistical prediction scores. Methods We used an open dataset of 298 SAH patients with non-severe neurological grades who were treated by coiling. A modified Rankin Scale score of 0-3 at six months was defined as a favorable functional outcome, and DCI occurrence as the other outcome. We randomly divided the patients into a 248-patient training dataset and a 50-patient test dataset. Prediction One built the models using the training dataset with 5-fold cross-validation. We evaluated the models using the test dataset and compared the areas under the curve (AUCs) of the created models with those of the modified SAFIRE score and the Fisher computed tomography (CT) scale for predicting the outcomes. Results The AUCs of the AutoAI-based models for functional outcome in the training and test datasets were 0.994 and 0.801, and those for DCI occurrence were 0.969 and 0.650. The AUCs for functional outcome calculated using the modified SAFIRE score were 0.844 and 0.892, and those for DCI occurrence calculated using the Fisher CT scale were 0.577 and 0.544. Conclusions We easily and quickly made AutoAI-based prediction models, and their AUCs were not inferior to those of the previous prediction models despite the ease of creation.
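The comparison above rests on computing AUCs on a held-out test set. Independent of any particular framework, the ROC AUC can be computed with the pairwise (Mann-Whitney U) formulation, sketched here with made-up labels and scores:

```python
def roc_auc(y_true, scores):
    """ROC AUC as the probability that a random positive case is
    scored higher than a random negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: 1 = favorable outcome, scores = predicted risk.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

This pairwise definition makes the abstract's comparison concrete: a model's AUC can be computed the same way whether the scores come from an AutoAI model or from a clinical score such as the modified SAFIRE score.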

Katsuki Masahito, Kawamura Shin, Koh Akihito


automated artificial intelligence (autoai), deep learning (dl), delayed cerebral ischemia (dci), machine learning (ml), subarachnoid hemorrhage (sah)