Oncology

Piloting a Deep Learning Model for Predicting Nuclear BAP1 Immunohistochemical Expression of Uveal Melanoma from Hematoxylin-and-Eosin Sections.

In Translational Vision Science & Technology

Background: Uveal melanoma (UM) is the most common primary intraocular malignancy in adults. Monosomy 3 and BAP1 mutation are strong prognostic factors predicting metastatic risk in UM. Nuclear BAP1 (nBAP1) expression is a close immunohistochemical surrogate for both genetic alterations. Not all laboratories perform routine BAP1 immunohistochemistry or genetic testing, however, relying instead mainly on clinical information and anatomic/morphologic analyses for UM prognostication. The purpose of our study was to pilot deep learning (DL) techniques to predict nBAP1 expression on whole slide images (WSIs) of hematoxylin and eosin (H&E) stained UM sections.

Methods: One hundred forty H&E-stained UMs were scanned at 40× magnification using commercially available WSI scanners. The training cohort comprised 66 BAP1+ and 74 BAP1- UMs with known chromosome 3 status and clinical outcomes. For comparison, nonoverlapping patches of three different sizes (512 × 512, 1024 × 1024, and 2048 × 2048 pixels) were extracted from tumor regions in each WSI and resized to 256 × 256 pixels. Deep convolutional neural networks (ResNet18 pre-trained on ImageNet) and autoencoder-decoders (U-Net) were trained to predict nBAP1 expression from these patches. The trained models were then tested on patches cropped from a test cohort of WSIs from 16 BAP1+ and 28 BAP1- UM cases.
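As a rough illustration of this patch-based set-up, a minimal PyTorch sketch for fine-tuning an ImageNet-pretrained ResNet18 on 256 × 256 patches for binary nBAP1 prediction might look as follows. The directory layout, hyperparameters, and training loop are our own illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a patch-based ResNet18 pipeline as described above.
# Dataset paths, hyperparameters, and class-folder names are illustrative
# assumptions, not the authors' actual implementation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Patches are assumed pre-cropped from tumor regions and stored in class
# folders ("BAP1_pos" / "BAP1_neg"); all are resized to 256x256.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),  # ImageNet statistics
])
train_ds = ImageFolder("patches/train", transform=transform)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# ResNet18 pre-trained on ImageNet, with a new binary classification head.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Slide-level predictions could then be obtained by aggregating the patch-level probabilities of each WSI, for example by averaging.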

Results: The best-performing trained model achieved area under the curve (AUC) values of 0.90 at the patch level and 0.93 at the slide level on the test set.

Conclusions: Our results show the effectiveness of DL for predicting nBAP1 expression in UM from H&E-stained sections alone.

Translational Relevance: Our pilot demonstrates the high capacity of artificial intelligence techniques for automated prediction from histomorphology, and may be translatable into routine histology laboratories.

Zhang Hongrun, Kalirai Helen, Acha-Sagredo Amelia, Yang Xiaoyun, Zheng Yalin, Coupland Sarah E

2020-Sep

BAP1, artificial intelligence, choroidal melanoma, deep learning, hematoxylin-and-eosin (H&E), prognostication, uveal melanoma, whole slide imaging

General

Applications of deep convolutional neural networks to predict length, circumference, and weight from mostly dewatered images of fish.

In Ecology and Evolution

Simple biometric data of fish aid fishery management tasks such as monitoring the structure of fish populations and regulating recreational harvest. While these data are foundational to fishery research and management, collecting length and weight data through physical handling is challenging, as it is time-consuming for personnel and can be stressful for the fish. Recent advances in imaging technology and machine learning now offer alternatives for capturing biometric data. To investigate the potential of deep convolutional neural networks to predict biometric data, several regressors were trained and evaluated on data stemming from the FishL™ Recognition System and manual measurements of length, girth, and weight. The dataset consisted of 694 fish from 22 different species common to the Laurentian Great Lakes. Even with such a diverse dataset and variety of presentations by the fish, the regressors proved robust and achieved competitive mean percent errors in the range of 5.5% to 7.6% for length and girth on an evaluation dataset. Potential applications of this work could increase the efficiency and accuracy of routine survey work by fishery professionals and provide a means for longer-term automated collection of fish biometric data.
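A minimal sketch of such a biometric regressor is shown below, assuming a pretrained convolutional backbone feeding a three-output regression head (length, girth, weight). The backbone choice, loss, and dummy data are illustrative assumptions, not the FishL™ system's implementation.

```python
# Illustrative sketch of a CNN regressor for fish biometrics
# (length, girth, weight). Architecture, loss, and data are
# assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

class BiometricRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet34(pretrained=True)
        backbone.fc = nn.Identity()      # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, 3)    # length, girth, weight

    def forward(self, x):
        return self.head(self.backbone(x))

model = BiometricRegressor()
criterion = nn.MSELoss()  # a percent-error-style loss is another option

# One dummy training step on random tensors, just to show the shapes.
images = torch.randn(8, 3, 224, 224)   # batch of fish images
targets = torch.randn(8, 3)            # measured length, girth, weight
loss = criterion(model(images), targets)
loss.backward()
```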

Bravata Nicholas, Kelly Dylan, Eickholt Jesse, Bryan Janine, Miehls Scott, Zielinski Dan

2020-Sep

General

From leaf to label: A robust automated workflow for stomata detection.

In Ecology and Evolution

Plant leaf stomata are the gatekeepers of the atmosphere-plant interface and are essential building blocks of land surface models, as they control transpiration and photosynthesis. Although more stomatal trait data are needed to significantly reduce the error in these model predictions, recording these traits is time-consuming, and no standardized protocol is currently available. Some attempts have been made to automate stomatal detection from photomicrographs; however, these approaches rely on classic image processing or target a narrow taxonomic entity, which makes them less robust and less generalizable to other plant species. We propose an easy-to-use and adaptable workflow from leaf to label. A methodology for automatic stomata detection was developed using state-of-the-art deep neural networks, and its applicability was demonstrated across the phylogeny of the angiosperms. We used a patch-based approach for training/tuning three different deep learning architectures. For training, we used 431 micrographs taken from leaf prints made according to the nail polish method from herbarium specimens of 19 species. The best-performing architecture was tested on 595 images of 16 additional species spread across the angiosperm phylogeny. The nail polish method was successfully applied in 78% of the species sampled here. The VGG19 architecture slightly outperformed the basic shallow and deep architectures, with a confidence threshold of 0.7 yielding an optimal trade-off between precision and recall. Applying this threshold, the VGG19 architecture obtained average F-scores of 0.87, 0.89, and 0.67 on the training, validation, and unseen test sets, respectively. The average accuracy was very high (94%) for computed stomatal counts on unseen images of species used for training. The leaf-to-label pipeline is an easy-to-use workflow for researchers of different areas of expertise interested in detecting stomata more efficiently. The described methodology was based on multiple species and well-established methods, so it can serve as a reference for future work.
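The thresholded patch-classification step could be sketched as follows, assuming a VGG19 backbone with a two-class (stoma/background) head and the 0.7 confidence cut-off reported above; all names, preprocessing details, and the patch size are illustrative assumptions.

```python
# Sketch of patch-based stomata detection with a VGG19 classifier and a
# confidence threshold of 0.7. Names and preprocessing are assumptions,
# not the authors' released code.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(pretrained=True)
model.classifier[6] = nn.Linear(4096, 2)  # two classes: stoma / background

def detect_stomata(patches, threshold=0.7):
    """Return indices of patches whose stoma probability exceeds threshold."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(patches), dim=1)[:, 1]  # P(stoma)
    return (probs > threshold).nonzero(as_tuple=True)[0]

# Example: 16 micrograph patches, resized to 224x224.
patches = torch.randn(16, 3, 224, 224)
kept = detect_stomata(patches)
```

Counting the retained detections per micrograph would then yield the stomatal counts (and, given the imaged leaf area, the stomatal density).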

Meeus Sofie, Van den Bulcke Jan, Wyffels Francis

2020-Sep

VGG19, deep learning, deep neural networks, detection, herbarium, optical microscope images, plants, stomata, stomatal density

General

Applications of Artificial Intelligence and Big Data Analytics in m-Health: A Healthcare System Perspective.

In Journal of Healthcare Engineering

Mobile health (m-health) refers to monitoring health using mobile phones, patient monitoring devices, and similar technologies, and is often deemed a substantial technological breakthrough of the modern era. Recently, artificial intelligence (AI) and big data analytics have been applied within m-health to provide effective healthcare systems. Various types of data, such as electronic health records (EHRs), medical images, and complicated text, which are diverse, poorly interpreted, and extensively unstructured, have been used in modern medical research. The emergence of mobile applications alongside healthcare systems is an important cause of these unorganized and unstructured datasets. In this paper, a systematic review is carried out on the application of AI and big data analytics to improve the m-health system. Various AI-based algorithms and big data frameworks are discussed with respect to the source of data, the techniques used, and the area of application. The paper explores applications of AI and big data analytics that provide insights to users and enable them to plan and use resources for the specific challenges in m-health, and proposes a model for m-health based on AI and big data analytics. The findings of this paper will guide the development of techniques that combine AI and big data to handle m-health data more effectively.

Khan Z Faizal, Alotaibi Sultan Refa

2020

General

Classification of Alzheimer's Disease and Mild Cognitive Impairment Based on Cortical and Subcortical Features from MRI T1 Brain Images Utilizing Four Different Types of Datasets.

In Journal of Healthcare Engineering

Alzheimer's disease (AD) is one of the most common neurodegenerative illnesses (dementias) among the elderly. Recently, researchers have developed new methods for the automated analysis of AD based on machine learning and its subfield, deep learning. Recent state-of-the-art techniques consider multimodal diagnosis, which has been shown to achieve high accuracy compared to a unimodal prognosis. Furthermore, many studies have used structural magnetic resonance imaging (MRI) to measure brain volumes and the volumes of subregions, as well as to search for diffuse changes in white/gray matter in the brain. In this study, T1-weighted structural MRI was used for the early classification of AD. MRI yields high-intensity visible features, making preprocessing and segmentation easy. To use this image modality, we acquired four datasets from their respective servers: 326 subjects from the National Research Center for Dementia homepage, 123 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) homepage, 121 subjects from the Alzheimer's Disease Repository Without Borders homepage, and 131 subjects from the National Alzheimer's Coordinating Center homepage. In our experiment, we used the multiatlas label propagation with expectation-maximization-based refinement segmentation method. We segmented the images into 138 anatomical morphometric features (40 subcortical volumes and 98 cortical thickness measures). The entire dataset was split into a 70:30 training/testing ratio before classification. Principal component analysis was used for dimensionality reduction, and a support vector machine with a radial basis function kernel was used for classification between two pairs of groups: AD versus healthy control (HC), and early mild cognitive impairment (EMCI) versus late MCI (LMCI). The proposed method performed very well on all four datasets. For instance, for the AD versus HC comparison, the classifier achieved an area under the curve (AUC) of more than 89% for each dataset; for EMCI versus LMCI, it achieved an AUC of more than 80% for every dataset. Moreover, we calculated Cohen's kappa and Jaccard index statistics for all datasets to evaluate classification reliability. Finally, we compared our results with those of recently published state-of-the-art methods.
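The feature-based pipeline described above (a 70:30 split, PCA for dimensionality reduction, and an RBF-kernel SVM) can be sketched in scikit-learn as follows; the placeholder data, the standardization step, and the retained-variance setting are our own assumptions.

```python
# Minimal sketch of the classification pipeline: 138 cortical/subcortical
# features per subject, 70:30 split, PCA, RBF-kernel SVM, AUC evaluation.
# Feature values here are random placeholders, not real MRI data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(326, 138)        # 138 morphometric features per subject
y = np.random.randint(0, 2, 326)    # e.g., AD vs. HC labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),          # keep 95% of variance (an assumption)
    SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```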

Toshkhujaev Saidjalol, Lee Kun Ho, Choi Kyu Yeong, Lee Jang Jae, Kwon Goo-Rak, Gupta Yubraj, Lama Ramesh Kumar

2020

Oncology

An integrative deep learning framework for classifying molecular subtypes of breast cancer.

In Computational and Structural Biotechnology Journal

Classification of breast cancer subtypes using multi-omics profiles is a difficult problem, since the data sets are high-dimensional and highly correlated. Deep neural network (DNN) learning has demonstrated advantages over traditional methods, as it does not require hand-crafted features but rather automatically extracts features from raw data and efficiently analyzes high-dimensional, correlated data. We aim to develop an integrative deep learning framework for classifying molecular subtypes of breast cancer. We collect copy number alteration and gene expression data measured on the same breast cancer patients from the Molecular Taxonomy of Breast Cancer International Consortium. We propose a deep learning model that integrates the omics datasets to predict their molecular subtypes, and compare the performance of our proposed DNN model with several baseline models. Furthermore, we evaluate the misclassification of the subtypes using the learned deep features and explore their usefulness for clustering breast cancer patients. We demonstrate that our proposed integrative deep learning model is superior to other deep learning and non-deep-learning-based models. In particular, we obtain the best prediction result among the deep learning-based integration models when the two data sources are integrated through a concatenation layer without sharing weights. Using the learned deep features, we identify six breast cancer subgroups and show that Her2-enriched samples can be classified into more than one tumor subtype. Overall, the integrated model shows better performance than those trained on individual data sources.
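A minimal sketch of the concatenation-based integration highlighted above: two independent encoders (no weight sharing) for gene expression and copy number alteration, fused by concatenation before the subtype classifier. Layer sizes, input dimensions, and the number of subtypes are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a two-branch integrative DNN: separate encoders per data
# source (weights NOT shared), concatenated before classification.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class IntegrativeSubtypeNet(nn.Module):
    def __init__(self, expr_dim, cna_dim, n_subtypes=5):
        super().__init__()
        # Independent branches: one per omics data source.
        self.expr_branch = nn.Sequential(
            nn.Linear(expr_dim, 256), nn.ReLU(), nn.Dropout(0.5))
        self.cna_branch = nn.Sequential(
            nn.Linear(cna_dim, 256), nn.ReLU(), nn.Dropout(0.5))
        # Concatenation fuses the two learned representations.
        self.classifier = nn.Sequential(
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_subtypes))

    def forward(self, expr, cna):
        fused = torch.cat([self.expr_branch(expr),
                           self.cna_branch(cna)], dim=1)
        return self.classifier(fused)

model = IntegrativeSubtypeNet(expr_dim=1000, cna_dim=1000)
logits = model(torch.randn(4, 1000), torch.randn(4, 1000))
```

The learned fused representation (the concatenated features) is also what one would feed to a clustering method to look for subgroups, as the authors do.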

Mohaiminul Islam Md, Huang Shujun, Ajwad Rasif, Chi Chen, Wang Yang, Hu Pingzhao

2020

Breast cancer, Classification, Data integration, Deep learning, Omics data