
Ophthalmology

Deep learning-enabled medical computer vision.

In NPJ digital medicine

A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges that must be overcome for real-world clinical deployment of these technologies.

Esteva Andre, Chou Katherine, Yeung Serena, Naik Nikhil, Madani Ali, Mottaghi Ali, Liu Yun, Topol Eric, Dean Jeff, Socher Richard

2021-Jan-08

Surgery

Development and validation of an interpretable neural network for prediction of postoperative in-hospital mortality.

In NPJ digital medicine

While deep neural networks (DNNs) and other machine learning models often achieve higher accuracy than simpler models like logistic regression (LR), they are often considered "black box" models, and this lack of interpretability and transparency is a challenge for clinical adoption. In healthcare, intelligible models not only help clinicians understand the problem and create more targeted action plans, but also help to gain clinicians' trust. One way to overcome the limited interpretability of more complex models is to use Generalized Additive Models (GAMs). Standard GAMs simply model the target response as a sum of univariate models. Inspired by GAMs, the same idea can be applied to neural networks through an architecture referred to as Generalized Additive Models with Neural Networks (GAM-NNs). In this manuscript, we present the development and validation of a model applying the concept of GAM-NNs to allow for interpretability by visualizing the learned feature patterns related to the risk of in-hospital mortality for patients undergoing surgery under general anesthesia. The data consist of 59,985 patients with a set of 46 features extracted at the end of surgery, to which we added previously unincluded features: total anesthesia case time (1 feature); the time in minutes spent with mean arterial pressure (MAP) below 40, 45, 50, 55, 60, and 65 mmHg during surgery (6 features); and Healthcare Cost and Utilization Project (HCUP) code descriptions of the primary Current Procedural Terminology (CPT) codes (33 features), for a total of 86 features. All data were randomly split into 80% for training (n = 47,988) and 20% for testing (n = 11,997) prior to model development. Model performance was compared to a standard LR model using the same features as the GAM-NN. The occurrence of in-hospital mortality was 0.81% in the training set and 0.72% in the testing set.
The GAM-NN model with HCUP features had the highest area under the curve (AUC), 0.921 (0.895-0.95). Overall, both GAM-NN models had higher AUCs than the LR models but lower average precisions. The LR model without HCUP features had the highest average precision, 0.217 (0.136-0.31). To assess the interpretability of the GAM-NNs, we visualized their learned contributions and compared them against those of the LR models with HCUP features. Overall, we were able to demonstrate that our proposed GAM-NN architecture is able to (1) leverage a neural network's ability to learn nonlinear patterns in the data, which is more clinically intuitive, (2) be interpreted easily, making it more clinically useful, and (3) maintain model performance as compared to previously published DNNs.
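The GAM-NN idea above can be sketched in a few lines: each feature is passed through its own small subnetwork, and the per-feature outputs are summed before a sigmoid, so each contribution can be inspected on its own. The following NumPy forward pass is a minimal, illustrative sketch; the network sizes, weights, and data are invented, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureNet:
    """One univariate subnetwork: 1 input -> hidden layer -> 1 output."""
    def __init__(self, hidden=8):
        self.w1 = rng.normal(size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(size=(hidden, 1))

    def __call__(self, x):
        # x: (n, 1) column holding a single feature
        h = np.tanh(x @ self.w1 + self.b1)
        return h @ self.w2          # (n, 1) additive contribution

def gam_nn_forward(X, nets, bias=0.0):
    # Per-feature contributions are kept separate, which is what makes
    # the model interpretable: each can be plotted against its feature.
    contribs = np.hstack([nets[j](X[:, j:j+1]) for j in range(X.shape[1])])
    logits = contribs.sum(axis=1) + bias
    probs = 1.0 / (1.0 + np.exp(-logits))
    return probs, contribs

X = rng.normal(size=(5, 3))          # 5 hypothetical patients, 3 features
nets = [FeatureNet() for _ in range(X.shape[1])]
probs, contribs = gam_nn_forward(X, nets)
print(probs.shape, contribs.shape)   # (5,) (5, 3)
```

Plotting each column of `contribs` against its input feature is what gives a model of this shape its GAM-style interpretability.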

Lee Christine K, Samad Muntaha, Hofer Ira, Cannesson Maxime, Baldi Pierre

2021-Jan-08

Pathology

Convolutional autoencoder based model HistoCAE for segmentation of viable tumor regions in liver whole-slide images.

In Scientific reports ; h5-index 158.0

Liver cancer is one of the leading causes of cancer deaths in Asia and Africa. Hepatocellular carcinoma (HCC), a malignant tumor and the most common histological type of primary liver cancer, accounts for almost 90% of all cases. The detection and evaluation of viable tumor regions in HCC carry important clinical significance, as they are a key step in assessing the response to chemoradiotherapy and the tumor cell proportion in genetic tests. Recent advances in computer vision, digital pathology, and microscopy imaging enable automatic histopathology image analysis for cancer diagnosis. In this paper, we present HistoCAE, a multi-resolution deep learning model for viable tumor segmentation in whole-slide liver histopathology images. We propose a convolutional autoencoder (CAE)-based framework with a customized reconstruction loss function for image reconstruction, followed by a classification module that classifies each image patch as tumor versus non-tumor. The resulting patch-based predictions are spatially combined to generate the final segmentation result for each whole-slide image (WSI). Additionally, the spatially organized encoded feature map derived from small image patches is used to compress the gigapixel whole-slide images. In extensive experiments, our proposed model presents superior performance to other benchmark models, suggesting its efficacy for viable tumor area segmentation in liver whole-slide images.
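The patch-stitching step described above, in which patch-level tumor/non-tumor predictions are spatially combined into a whole-slide mask, can be sketched as follows. The grid, patch size, and random scores are hypothetical stand-ins for the CAE classifier's output, not the paper's pipeline.

```python
import numpy as np

def stitch_patches(scores, grid_shape, patch=4):
    """Place patch probabilities (row-major order) back on the slide grid
    and threshold them into a binary tumor mask."""
    rows, cols = grid_shape
    mask = np.zeros((rows * patch, cols * patch))
    for idx, s in enumerate(scores):
        r, c = divmod(idx, cols)
        mask[r*patch:(r+1)*patch, c*patch:(c+1)*patch] = s >= 0.5
    return mask

rng = np.random.default_rng(1)
scores = rng.uniform(size=6)          # mock scores for a 2x3 patch grid
mask = stitch_patches(scores, (2, 3))
print(mask.shape)                     # (8, 12)
```

In a real WSI pipeline the grid would span thousands of patches, but the reassembly logic is the same.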

Roy Mousumi, Kong Jun, Kashyap Satyananda, Pastore Vito Paolo, Wang Fusheng, Wong Ken C L, Mukherjee Vandana

2021-Jan-08

Public Health

Breath biopsy of breast cancer using sensor array signals and machine learning analysis.

In Scientific reports ; h5-index 158.0

Breast cancer causes metabolic alterations, and volatile metabolites in the breath of patients may be used to diagnose breast cancer. The objective of this study was to develop a new breath test for breast cancer by analyzing volatile metabolites in exhaled breath. We collected alveolar air from breast cancer patients and non-cancer controls and analyzed the volatile metabolites with an electronic nose composed of 32 carbon nanotube sensors. We used machine learning techniques to build prediction models for breast cancer and its molecular phenotyping. Between July 2016 and June 2018, we enrolled a total of 899 subjects. Using the random forest model, the prediction accuracy of breast cancer in the test set was 91% (95% CI: 0.85-0.95), sensitivity was 86%, specificity was 97%, positive predictive value was 97%, negative predictive value was 97%, the area under the receiver operating characteristic curve was 0.99 (95% CI: 0.99-1.00), and the kappa value was 0.83. The leave-one-out cross-validated discrimination accuracy and reliability of molecular phenotyping of breast cancer were 88.5 ± 12.1% and 0.77 ± 0.23, respectively. Breath tests with electronic noses can be applied intraoperatively to discriminate breast cancer and its molecular subtype and to support medical staff in choosing the best therapeutic decision.
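The modelling workflow above, a random forest over 32-channel sensor readings evaluated by sensitivity and specificity, can be sketched with scikit-learn. The data below are synthetic stand-ins for the electronic-nose measurements, so only the workflow, not the numbers, mirrors the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 32))                    # 32 mock sensor channels
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic cancer label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)

# Sensitivity and specificity from the test-set confusion matrix
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(round(sensitivity, 2), round(specificity, 2))
```

The study additionally reports leave-one-out cross-validation for the molecular phenotyping model; scikit-learn's `LeaveOneOut` splitter follows the same fit/predict pattern.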

Yang Hsiao-Yu, Wang Yi-Chia, Peng Hsin-Yi, Huang Chi-Hsiang

2021-Jan-08

General

Measurement of emotional states of zebrafish through integrated analysis of motion and respiration using bioelectric signals.

In Scientific reports ; h5-index 158.0

Fear, anxiety, and preference in fish are generally evaluated by video-based behavioural analyses. We previously proposed a system that can measure bioelectrical signals, called ventilatory signals, using a 126-electrode array placed at the bottom of an aquarium, and achieved cameraless real-time analysis of motion and ventilation. In this paper, we propose a method to evaluate the emotional state of fish by combining the motion and ventilatory indices obtained with the proposed system. In the experiments, fear/anxiety and appetitive behaviour were induced using an alarm pheromone and ethanol, respectively. We found that the emotional state of the zebrafish can be expressed in the principal component (PC) space extracted from the defined indices. The three emotional states were discriminated by a model-based machine learning method fed with the PCs. Based on discrimination performed every 5 s, the F-scores for the three emotional states were as follows: 0.84 for the normal state, 0.76 for the fear/anxiety state, and 0.59 for appetitive behaviour. These results indicate the effectiveness of combining physiological and motion indices to discriminate the emotional states of zebrafish.
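The projection of combined motion and ventilatory indices onto a principal component space, as used above, can be sketched with a plain SVD-based PCA. The index names and synthetic data are illustrative, not the paper's measurements.

```python
import numpy as np

def pca_scores(X, k=2):
    """Project rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)                 # centre each index
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # (n, k) PC scores

rng = np.random.default_rng(2)
# rows: 5-second windows; columns: e.g. swimming speed, turn rate,
# ventilation rate, ventilation amplitude (hypothetical indices)
X = rng.normal(size=(60, 4))
scores = pca_scores(X, k=2)
print(scores.shape)                         # (60, 2)
```

Each window's pair of PC scores is then what a downstream classifier would consume to label the window as one of the emotional states.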

Soh Zu, Matsuno Motoki, Yoshida Masayuki, Furui Akira, Tsuji Toshio

2021-Jan-08

General

Modular machine learning for Alzheimer's disease classification from retinal vasculature.

In Scientific reports ; h5-index 158.0

Alzheimer's disease is the leading cause of dementia. Its long progression period provides a possibility for patients to receive early treatment through routine screening. However, current clinical diagnostic imaging tools do not meet the specific requirements of screening procedures due to high cost and limited availability. In this work, we evaluated the retina, especially the retinal vasculature, as an alternative for screening for dementia caused by Alzheimer's disease. Highly modular machine learning techniques were employed throughout the whole pipeline. Using data from the UK Biobank, the pipeline achieved an average classification accuracy of 82.44%. Beyond the high classification accuracy, we also added a saliency analysis to strengthen the pipeline's interpretability. The saliency analysis indicated that, within retinal images, small vessels carry more information for diagnosing Alzheimer's disease, which aligns with related studies.
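The saliency idea above, asking which input pixels most change the prediction, can be illustrated with a toy gradient computation. The logistic model and the data are hypothetical, not the paper's pipeline: for a linear logit the input gradient has a closed form, whereas a deep model would use backpropagation.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=64)              # weights over a mock 8x8 "image"
x = rng.normal(size=64)              # one flattened input image

# Prediction of a logistic model: p = sigmoid(w . x)
p = 1.0 / (1.0 + np.exp(-(w @ x)))

# Input-gradient saliency: |dp/dx_i| = p * (1 - p) * |w_i|
saliency = np.abs(w * p * (1 - p))
heatmap = saliency.reshape(8, 8)     # map back to image layout
print(heatmap.shape)                 # (8, 8)
```

Overlaying such a heatmap on the retinal image is what lets one check whether small vessels, rather than background, drive the classification.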

Tian Jianqiao, Smith Glenn, Guo Han, Liu Boya, Pan Zehua, Wang Zijie, Xiong Shuangyu, Fang Ruogu

2021-Jan-08