Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

Category articles

General General

Automatic Crack Detection on Road Pavements Using Encoder-Decoder Architecture.

In Materials (Basel, Switzerland)

Automatic crack detection from images is an important task adopted to ensure road safety and durability for Portland cement concrete (PCC) and asphalt concrete (AC) pavements. Pavement failure has a number of causes, including water intrusion, stress from heavy loads, and climate effects. Cracks are generally the first distress to arise on road surfaces, so proper monitoring and maintenance to prevent cracks from forming or spreading is important. Conventional algorithms to identify cracks on road pavements are extremely time-consuming and costly. Many cracks show complicated topological structures, oil stains, poor continuity, and low contrast, which make crack features difficult to define. An automated crack detection algorithm is therefore a key tool to improve the results. Inspired by the development of deep learning in computer vision and object detection, the proposed algorithm uses an encoder-decoder architecture with hierarchical feature learning and dilated convolution, named U-Hierarchical Dilated Network (U-HDN), to perform crack detection in an end-to-end manner. The network automatically learns crack characteristics with multiple levels of context information and performs end-to-end crack detection. A multi-dilation module embedded in the encoder-decoder architecture is proposed: crack features of multiple context sizes are integrated in the multi-dilation module by dilated convolution with different dilation rates, which captures much richer crack information. Finally, a hierarchical feature learning module is designed to obtain multi-scale features from the high- to low-level convolutional layers, which are integrated to predict pixel-wise crack detection. Experiments on public crack databases using 118 images were performed and the results were compared with those obtained with other methods on the same images. The results show that the proposed U-HDN method achieves high performance because it can extract and fuse feature maps of different context sizes and different levels better than the other algorithms.
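
For illustration only, the PyTorch block below sketches the multi-dilation idea described in the abstract (parallel dilated convolutions fused into one feature map). The channel sizes and dilation rates are assumptions, not the authors' exact U-HDN:

import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        # one 3x3 convolution per dilation rate; padding=r keeps the spatial size
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))   # fuse multiple context sizes

# quick check on a dummy encoder feature map
block = MultiDilationBlock(64, 64)
print(block(torch.randn(1, 64, 32, 32)).shape)       # torch.Size([1, 64, 32, 32])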

Fan Zhun, Li Chong, Chen Ying, Wei Jiahong, Loprencipe Giuseppe, Chen Xiaopeng, Di Mascio Paola

2020-Jul-02

U-net, automatic crack detection, deep learning, dilated convolution, encoder-decoder, hierarchical feature, pavement cracking

General General

Consistency of Medical Data Using Intelligent Neuron Faster R-CNN Algorithm for Smart Health Care Application.

In Healthcare (Basel, Switzerland)

The purpose of this study is to respond to the increased interest in health as human life expectancy grows in modern society. Hospitals produce large volumes of medical data (EMR, PACS, OCS, EHR, MRI, X-ray) during and after treatment, stored as structured and unstructured data. However, much medical data is subject to errors, omissions and mistakes in the process of reading. Reading is critical when dealing with human life, and physician errors sometimes lead to medical accidents. Therefore, this research uses a CNN intelligent agent cloud architecture to verify errors in reading existing medical image data. To reduce the error rate when reading medical image data, a Faster R-CNN intelligent agent cloud architecture is proposed; it shows an improvement over existing error reading of more than 1.4 times (140%). In particular, the algorithm analyses actual stored medical data through a Conv feature map using a deep ConvNet and ROI projection. The data were verified using about 120,000 records and concern examinations of human lungs. The experimental environment was built to exploit high GPU performance, using NVIDIA SLI, a multi-OS setup, and multiple Quadro GPUs. In this experiment, the verification data were randomly extracted from about 120,000 medical records, and similarity to the original data was measured by comparing about 40% of the extracted images. The overall aim is to reduce and verify the error rate of medical data reading.
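
The abstract gives no implementation details, so the snippet below is only a hedged sketch of running a generic Faster R-CNN detector from torchvision (assuming torchvision >= 0.13) on an image tensor; the two-class setup and input size are assumptions, not the authors' intelligent-agent cloud system:

import torch
import torchvision

# untrained detector with two classes (background + finding); no weights are downloaded
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)
model.eval()                                   # inference mode returns boxes, labels, scores

with torch.no_grad():
    image = torch.rand(3, 512, 512)            # stand-in for a lung radiograph
    prediction = model([image])[0]             # dict with 'boxes', 'labels', 'scores'
print(prediction["boxes"].shape)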

Kim Seong-Kyu, Huh Jun-Ho

2020-Jun-25

Intelligent agent, artificial intelligence, cloud architecture, electronic medical record, health care system, neuron computer

General General

An Artificial Intelligence Approach for Italian EVOO Origin Traceability through an Open Source IoT Spectrometer.

In Foods (Basel, Switzerland)

Extra virgin olive oil (EVOO) is a crucial ingredient of the Mediterranean diet. As a first-choice product, consumers should be guaranteed its quality and geographical origin, justifying the high purchasing cost. For this reason, it is important to have new reliable tools able to classify products according to their geographical origin. The aim of this work was to demonstrate the efficiency of an open source visible and near-infrared (VIS-NIR) spectrophotometer, relying on a specific app, in assessing olive oil geographical origin. Thus, 67 Italian and 25 foreign EVOO samples were analyzed and their spectral data were processed through an artificial intelligence algorithm. The multivariate analysis of variance (MANOVA) results reported significant differences (p < 0.001) between the Italian and foreign EVOO VIS-NIR matrices. The artificial neural network (ANN) model with an external test showed a correct classification percentage of 94.6%. Both the MANOVA and ANN methods indicated that the most important spectral wavelength ranges for origin determination are 308-373 nm and 594-605 nm. These are related to the absorption of phenolic components, carotenoids, chlorophylls, and anthocyanins. The proposed tool allows the assessment of EVOO samples' origin and thus could help protect the "Made in Italy" label from fraud and adulteration in its commerce.
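
As a rough, hedged illustration of the classification step (the spectra below are synthetic and the network size is an assumption, not the authors' ANN), a scikit-learn pipeline could look like this:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 300))          # 92 samples x 300 wavelengths (synthetic spectra)
y = np.array([1] * 67 + [0] * 25)       # 1 = Italian, 0 = foreign, as in the study design

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("external-test accuracy:", model.score(X_te, y_te))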

Violino Simona, Ortenzi Luciano, Antonucci Francesca, Pallottino Federico, Benincasa Cinzia, Figorilli Simone, Costa Corrado

2020-Jun-25

ANN, VIS-NIR, antioxidants, artificial intelligence (AI), made in Italy, minor components, non-destructive techniques, pigments, ready-to-use, spectral signature

Public Health Public Health

Forecasting Weekly Influenza Outpatient Visits Using a Two-Dimensional Hierarchical Decision Tree Scheme.

In International journal of environmental research and public health ; h5-index 73.0

Influenza is a serious public health issue, as it can cause acute suffering and even death, social disruption, and economic loss. Effective forecasting of influenza outpatient visits is beneficial to anticipate and prevent medical resource shortages. This study uses regional data on influenza outpatient visits to propose a two-dimensional hierarchical decision tree scheme for forecasting influenza outpatient visits. The Taiwan weekly influenza outpatient visit data were collected from the national infectious disease statistics system and used for an empirical example. The 788 data points start in the first week of 2005 and end in the second week of 2020. The empirical results revealed that the proposed forecasting scheme outperformed five competing models and was able to forecast one to four weeks of anticipated influenza outpatient visits. The scheme may be an effective and promising alternative for forecasting one to four steps (weeks) ahead of nationwide influenza outpatient visits in Taiwan. Our results also suggest that, for forecasting nationwide influenza outpatient visits in Taiwan, one- and two-time lag information and regional information from the Taipei, North, and South regions are significant.
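
A minimal sketch of one-step-ahead forecasting with a decision tree on lagged weekly values (illustrative only: the series is synthetic, and the paper's two-dimensional hierarchical scheme and regional inputs are not reproduced):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

series = np.sin(np.linspace(0, 60, 788)) * 1000 + 5000   # synthetic weekly outpatient visits

def make_lagged(y, n_lags=2):
    # rows: (y[t-2], y[t-1]) -> target y[t], mirroring one- and two-week lag information
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

X, y = make_lagged(series, n_lags=2)
model = DecisionTreeRegressor(max_depth=5).fit(X[:-52], y[:-52])   # hold out the last year
next_week = model.predict(series[-2:].reshape(1, -1))[0]           # forecast from the latest lags
print("one-step-ahead forecast:", next_week)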

Lee Tian-Shyug, Chen I-Fei, Chang Ting-Jen, Lu Chi-Jie

2020-Jul-01

decision tree, forecasting, hierarchical structure, influenza outpatient visits, public health

Surgery Surgery

Deep Neural Networks for Dental Implant System Classification.

In Biomolecules

In this study, we used panoramic X-ray images to classify different dental implant brands and clarify the achievable accuracy via deep convolutional neural networks (CNNs) with transfer-learning strategies. For objective labeling, 8859 implant images of 11 implant systems were used from digital panoramic radiographs obtained from patients who underwent dental implant treatment at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2019. Five deep CNN models (specifically, a basic CNN with three convolutional layers, VGG16 and VGG19 transfer-learning models, and finely tuned VGG16 and VGG19) were evaluated for implant classification. Among the five models, the finely tuned VGG16 model exhibited the highest implant classification performance. The finely tuned VGG19 was second best, followed by the normal transfer-learning VGG16. We confirmed that the finely tuned VGG16 and VGG19 CNNs could accurately classify the 11 types of dental implant systems from panoramic X-ray images.
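
A hedged torchvision sketch of the VGG16 transfer-learning setup (the authors' exact fine-tuning depth, optimizer, and input pipeline are not specified in the abstract; the choices below are assumptions):

import torch.nn as nn
import torchvision

# ImageNet-pretrained backbone (weights are downloaded on first use)
model = torchvision.models.vgg16(weights="IMAGENET1K_V1")

# plain transfer learning: freeze the convolutional features, replace the head
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 11)        # new head for 11 implant systems

# "finely tuned" variant: additionally unfreeze the last convolutional block
for p in model.features[24:].parameters():
    p.requires_grad = True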

Sukegawa Shintaro, Yoshii Kazumasa, Hara Takeshi, Yamashita Katsusuke, Nakano Keisuke, Yamamoto Norio, Nagatsuka Hitoshi, Furuki Yoshihiko

2020-Jul-01

artificial intelligence, classification, convolutional neural networks, deep learning, dental implant

Pathology Pathology

AI-PLAX: AI-based placental assessment and examination using photos.

In Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society

Post-delivery analysis of the placenta is useful for evaluating health risks of both the mother and baby. In the U.S., however, only about 20% of placentas are assessed by pathology exams, and placental data is often missed in pregnancy research because of the additional time, cost, and expertise needed. A computer-based tool that can be used in any delivery setting at the time of birth to provide an immediate and comprehensive placental assessment would have the potential not only to improve health care, but also to radically improve medical knowledge. In this paper, we tackle the problem of automatic placental assessment and examination using photos. More concretely, we first address morphological characterization, which includes the tasks of placental image segmentation, umbilical cord insertion point localization, and maternal/fetal side classification. We also tackle clinically meaningful feature analysis of placentas, which comprises detection of retained placenta (i.e., incomplete placenta), umbilical cord knot, meconium, abruption, chorioamnionitis, and hypercoiled cord, and categorization of umbilical cord insertion type. We curated a dataset consisting of approximately 1300 placenta images taken at Northwestern Memorial Hospital, with hand-labeled pixel-level segmentation maps, cord insertion points, and other information extracted from the associated pathology reports. We developed the AI-based Placental Assessment and Examination system (AI-PLAX), which is a novel two-stage photograph-based pipeline for fully automated analysis. In the first stage, we use three encoder-decoder convolutional neural networks with a shared encoder to address the morphological characterization tasks by employing a transfer-learning training strategy. In the second stage, we employ distinct sub-models to solve different feature analysis tasks by using both the photograph and the output of the first stage. We evaluated the effectiveness of our pipeline by using the curated dataset as well as the pathology reports in the medical record. Through extensive experiments, we demonstrate that our system is able to produce accurate morphological characterization and very promising performance on the aforementioned feature analysis tasks, all of which may possess clinical impact and contribute to future pregnancy research. This work is the first to provide comprehensive, automated, computer-based placental analysis and will serve as a launchpad for potentially multiple future innovations.

Chen Yukun, Zhang Zhuomin, Wu Chenyan, Davaasuren Dolzodmaa, Goldstein Jeffery A, Gernand Alison D, Wang James Z

2020-Jun-01

Deep learning, Pathology, Photo image analysis, Placenta, Transfer learning

General General

Application of Artificial Intelligence in COVID-19 drug repurposing.

In Diabetes & metabolic syndrome

BACKGROUND AND AIM : The COVID-19 outbreak has created havoc, and a quick cure for the disease would be a therapeutic medicine that already has a usage history in patients. With technological advancements in Artificial Intelligence (AI) coupled with increased computational power, AI-empowered drug repurposing can prove beneficial in the COVID-19 scenario.

METHODS : The recent literature was studied and analyzed from various sources such as the Scopus, Google Scholar, PubMed, and IEEE Xplore databases. The search terms used were 'COVID-19', 'AI', and 'Drug Repurposing'.

RESULTS : AI is implemented in the field of drug design through the generation of learning-prediction models and performs quick virtual screening to accurately display the output. With a drug-repositioning strategy, AI can quickly detect drugs that can fight against emerging diseases such as COVID-19. This technology has the potential to improve drug discovery, planning, treatment, and reported outcomes for COVID-19 patients, being an evidence-based medical tool.

CONCLUSIONS : Thus, the application of the AI approach to drug discovery appears feasible. Given prior usage experience in patients, a few of the old drugs, if shown to be active against SARS-CoV-2, could be readily applied to treat COVID-19 patients. With the collaboration of AI and pharmacology, the efficiency of drug repurposing can improve significantly.

Mohanty Sweta, Harun Ai Rashid Md, Mridul Mayank, Mohanty Chandana, Swayamsiddha Swati

2020-Jul-03

Artificial intelligence, COVID-19, Coronavirus, Deep learning, Drug repositioning, Drug repurposing, Machine learning

General General

AVNet: A retinal artery/vein classification network with category-attention weighted fusion.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE : Automatic artery/vein (A/V) classification in retinal images is of great importance in detecting vascular abnormalities, which may provide biomarkers for early diagnosis of many systemic diseases. It is intuitive to apply a popular deep semantic segmentation network to A/V classification. However, the model is required to provide powerful representation ability, since vessels are much more complex than general objects. Moreover, a deep network may produce inconsistent classification results for the same vessel due to the lack of a structured optimization objective.

METHODS : In this paper, we propose a novel segmentation network named AVNet, which effectively enhances the classification ability of the model by integrating a category-attention weighted fusion (CWF) module, significantly improving pixel-level A/V classification results. Then, a graph-based vascular structure reconstruction (VSR) algorithm is employed to reduce segment-wise inconsistency, verifying the effect of the graph model on noisy vessel segmentation results.

RESULTS : The proposed method has been verified on three datasets, i.e. DRIVE, LES-AV and WIDE. AVNet achieves pixel-level accuracies of 90.62%, 90.34%, and 93.16%, respectively, and VSR further improves the performance by 0.19%, 1.85% and 0.64%, achieving the state-of-the-art results on these three datasets.

CONCLUSION : The proposed method achieves competitive performance in A/V classification task.

Kang Hong, Gao Yingqi, Guo Song, Xu Xia, Li Tao, Wang Kai

2020-Jun-25

Artery/vein classification, Deep learning, Graph model, Retinal images

Radiology Radiology

Functional Imaging using Radiomic Features in Assessment of Lymphoma.

In Methods (San Diego, Calif.)

Lymphomas are typically large, well-defined, and relatively homogeneous tumors, and therefore represent ideal targets for the use of radiomics. Of the available functional imaging tests, [18F]FDG-PET for body lymphoma and diffusion-weighted MRI (DWI) for central nervous system (CNS) lymphoma are of particular interest. The current literature suggests that two main applications for radiomics in lymphoma show promise: differentiation of lymphomas from other tumors, and lymphoma treatment response and outcome prognostication. In particular, encouraging results reported in the limited number of presently available studies that utilize functional imaging suggest that (1) MRI-based radiomics enables differentiation of CNS lymphoma from glioblastoma, and (2) baseline [18F]FDG-PET radiomics could be useful for survival prognostication, adding to or even replacing commonly used metrics such as standardized uptake values and metabolic tumor volume. However, due to differences in biological and clinical characteristics of different lymphoma subtypes and an increasing number of treatment options, more data are required to support these findings. Furthermore, a consensus on several critical steps in the radiomics workflow (most importantly, image reconstruction and post-processing, lesion segmentation, and choice of classification algorithm) is desirable to ensure comparability of results between research institutions.

Mayerhoefer Marius E, Umutlu Lale, Schöder Heiko

2020-Jul-04

Artificial intelligence, Lymphoma, Magnetic resonance imaging, Positron emission tomography, Radiomics

Radiology Radiology

Detecting caries lesions of different radiographic extension on bitewings using deep learning.

In Journal of dentistry ; h5-index 59.0

OBJECTIVES : We aimed to apply deep learning to detect caries lesions of different radiographic extension on bitewings, hypothesizing it to be significantly more accurate than individual dentists.

METHODS : 3,686 bitewing radiographs were assessed by four experienced dentists. Caries lesions were marked in a pixelwise fashion. The union of all pixels was defined as the reference test. The data were divided into a training (3,293), validation (252) and test (141) dataset. We applied a convolutional neural network (U-Net) and used Intersection-over-Union as the validation metric. The performance of the trained neural network on the test dataset was compared against that of seven independent dentists using tooth-level accuracy metrics. Stratification according to lesion depth (enamel lesions E1/2, dentin lesions into the middle or inner third D2/3) was applied.
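
Since Intersection-over-Union is used as the validation metric, a minimal sketch of that computation for binary caries masks (illustrative only, not the authors' evaluation code):

import numpy as np

def iou(pred, target):
    # pred, target: boolean arrays of the same shape (pixel-wise lesion masks)
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0   # two empty masks count as perfect overlap

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(iou(a, b))                                    # 4 shared pixels / 6 total = 0.666...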

RESULTS : The neural network showed an accuracy of 0.80; the dentists' mean accuracy was significantly lower at 0.71 (min-max: 0.61-0.78, p < 0.05). The neural network was significantly more sensitive than the dentists (0.75 versus 0.36 (0.19-0.65); p = 0.006), while its specificity (0.83) was not significantly lower than that of the dentists (0.91 (0.69-0.98); p > 0.05). The neural network showed robust sensitivities at or above 0.70 for both initial and advanced lesions. Dentists largely showed low sensitivities for initial lesions (all except one dentist showed sensitivities below 0.25), while those for advanced ones were between 0.40 and 0.75.

CONCLUSIONS : To detect caries lesions on bitewing radiographs, a deep neural network was significantly more accurate than dentists.

CLINICAL SIGNIFICANCE : Deep learning may assist dentists to detect especially initial caries lesions on bitewings. The impact of using such models on decision-making should be explored.

Cantu Anselmo Garcia, Gehrung Sascha, Krois Joachim, Chaurasia Akhilanand, Rossi Jesus Gomez, Gaudin Robert, Elhennawy Karim, Schwendicke Falk

2020-Jul-04

Artificial Intelligence, Caries, Digital imaging/radiology, Mathematical modeling, Radiography

Oncology Oncology

A deep learning MR-based radiomic nomogram may predict survival for nasopharyngeal carcinoma patients with stage T3N1M0.

In Radiotherapy and oncology : journal of the European Society for Therapeutic Radiology and Oncology

PURPOSE : To estimate the prognostic value of deep learning (DL) magnetic resonance (MR)-based radiomics for stage T3N1M0 nasopharyngeal carcinoma (NPC) patients receiving induction chemotherapy (ICT) prior to concurrent chemoradiotherapy (CCRT).

METHODS : A total of 638 stage T3N1M0 NPC patients (training cohort: n = 447; test cohort: n = 191) were enrolled and underwent MRI scans before receiving ICT+CCRT. From the pretreatment MR images, DL-based radiomic signatures were developed to predict disease-free survival (DFS) in an end-to-end way. Incorporating independent clinical prognostic parameters and radiomic signatures, a radiomic nomogram was built through multivariable Cox proportional hazards method. The discriminative performance of the radiomic nomogram was assessed using the concordance index (C-index) and the Kaplan-Meier estimator.
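
For illustration, a hedged sketch of the multivariable Cox step and C-index evaluation using the lifelines library (the data below are synthetic stand-ins; the actual DL-based radiomic signatures and clinical covariates are not reproduced):

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "radiomic_signature": rng.normal(size=n),    # stand-in for a DL-based signature
    "clinical_factor": rng.normal(size=n),       # stand-in for a clinical prognostic parameter
})
risk = 0.8 * df["radiomic_signature"] + 0.3 * df["clinical_factor"]
df["time"] = rng.exponential(scale=np.exp(-risk))   # synthetic disease-free survival times
df["event"] = rng.integers(0, 2, size=n)            # 1 = recurrence/progression observed

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print("C-index:", cph.concordance_index_)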

RESULTS : Three DL-based radiomic signatures were significantly correlated with DFS in the training (C-index: 0.695-0.731, all p < 0.001) and test (C-index: 0.706-0.755, all p < 0.001) cohorts. Integrating the radiomic signatures with clinical factors significantly improved the predictive value compared to the clinical model in the training (C-index: 0.771 vs. 0.640, p < 0.001) and test (C-index: 0.788 vs. 0.625, p = 0.001) cohorts. Furthermore, risk stratification using the radiomic nomogram demonstrated that the high-risk group exhibited shorter DFS than the low-risk group in the training cohort (hazard ratio [HR]: 6.12, p < 0.001), which was validated in the test cohort (HR: 6.90, p < 0.001).

CONCLUSIONS : Our DL-based radiomic nomogram may serve as a noninvasive and useful tool for pretreatment prognostic prediction and risk stratification in stage T3N1M0 NPC.

Zhong Lian-Zhen, Fang Xue-Liang, Dong Di, Peng Hao, Fang Meng-Jie, Huang Cheng-Long, He Bing-Xi, Lin Li, Ma Jun, Tang Ling-Long, Tian Jie

2020-Jul-04

Deep learning, Induction chemotherapy, MRI-based treatment planning, Nasopharyngeal cancer, Survival analysis

General General

Update on therapeutic approaches and emerging therapies for SARS-CoV-2 virus.

In European journal of pharmacology ; h5-index 57.0

The global pandemic of coronavirus disease 2019 (COVID-19), caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has resulted in over 7,273,958 cases and more than 413,372 deaths worldwide as per WHO situational report 143 on COVID-19. There are no treatment regimens or vaccines with proven efficacy thus far, posing an unprecedented challenge to identify effective drugs and vaccines for prevention and treatment. The urgency for its prevention and cure has resulted in an increased number of proposed treatment options. The high rate and volume of emerging clinical trials on therapies for COVID-19 need to be compared and evaluated to provide scientific evidence for effective medical options. Other emerging non-conventional drug discovery techniques such as bioinformatics and cheminformatics, structure-based drug design, network-based methods for prediction of drug-target interactions, artificial intelligence (AI) and machine learning (ML), and phage techniques could provide alternative routes to discovering potent anti-SARS-CoV-2 drugs. While drugs are being repurposed and discovered for COVID-19, novel drug delivery systems will be paramount for efficient delivery and avoidance of possible drug resistance. This review describes the proposed drug targets for therapy and the outcomes of clinical trials that have been reported. It also identifies the adopted treatment modalities that are showing promise and those that have failed as drug candidates. It further highlights various emerging therapies and future strategies for the treatment of COVID-19 and the delivery of anti-SARS-CoV-2 drugs.

Omolo Calvin A, Soni Nikki, Fasiku Victoria Oluwaseun, Mackraj Irene, Govender Thirumala

2020-Jul-04

COVID-19, Clinical trials, Drug targets, Re-purposing, SARS-CoV2, Vaccines

General General

The Psychopathology and Neuroanatomical Markers of Depression in Early Psychosis.

In Schizophrenia bulletin ; h5-index 79.0

Depression frequently occurs in first-episode psychosis (FEP) and predicts longer-term negative outcomes. It is possible that this depression is seen primarily in a distinct subgroup, which if identified could allow targeted treatments. We hypothesize that patients with recent-onset psychosis (ROP) and comorbid depression would be identifiable by symptoms and neuroanatomical features similar to those seen in recent-onset depression (ROD). Data were extracted from the multisite PRONIA study: 154 ROP patients (FEP within 3 months of treatment onset), of whom 83 were depressed (ROP+D) and 71 were not depressed (ROP-D), 146 ROD patients, and 265 healthy controls (HC). Analyses included (1) a principal component analysis that established the similar symptom structure of depression in ROD and ROP+D, (2) supervised machine learning (ML) classification with repeated nested cross-validation based on depressive symptoms separating ROD vs ROP+D, which achieved a balanced accuracy (BAC) of 51%, and (3) neuroanatomical ML-based classification, using regions of interest generated from ROD subjects, which achieved a BAC of 50% (no better than chance) for separation of ROP+D vs ROP-D. We conclude that depression at a symptom level is broadly similar with or without psychosis status in recent-onset disorders; however, this is not driven by a separable depressed subgroup in FEP. Depression may be intrinsic to early stages of psychotic disorder, and thus treating depression could produce widespread benefit.
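
As a hedged illustration of repeated nested cross-validation (synthetic features and an SVM stand-in; the study's actual pipeline and symptom features are assumptions), a minimal scikit-learn loop looks like this:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

scores = []
for repeat in range(5):                                           # repeated nesting
    inner = StratifiedKFold(5, shuffle=True, random_state=repeat)          # hyperparameter tuning
    outer = StratifiedKFold(5, shuffle=True, random_state=repeat + 100)    # unbiased performance estimate
    clf = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)
    scores.extend(cross_val_score(clf, X, y, cv=outer, scoring="balanced_accuracy"))
print("mean balanced accuracy:", np.mean(scores))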

Upthegrove Rachel, Lalousis Paris, Mallikarjun Pavan, Chisholm Katharine, Griffiths Sian Lowri, Iqbal Mariam, Pelton Mirabel, Reniers Renate, Stainton Alexandra, Rosen Marlene, Ruef Anne, Dwyer Dominic B, Surman Marian, Haidl Theresa, Penzel Nora, Kambeitz-Llankovic Lana, Bertolino Alessandro, Brambilla Paolo, Borgwardt Stefan, Kambeitz Joseph, Lencer Rebekka, Pantelis Christos, Ruhrmann Stephan, Schultze-Lutter Frauke, Salokangas Raimo K R, Meisenzahl Eva, Wood Stephen J, Koutsouleris Nikolaos

2020-Jul-07

depression, gray matter volume, machine learning, psychopathology, psychosis, schizophrenia

General General

Nanoparticle Recognition on Scanning Probe Microscopy Images Using Computer Vision and Deep Learning.

In Nanomaterials (Basel, Switzerland)

Identifying, counting and measuring particles is an important component of many research studies. Images with particles are usually processed by hand using a software ruler. Automated processing based on conventional image processing methods (edge detection, segmentation, etc.) is not universal, can only be used on good-quality images, and requires a number of parameters to be set empirically. In this paper, we present results from the application of deep learning to the automated recognition of metal nanoparticles deposited on highly oriented pyrolytic graphite in images obtained by scanning tunneling microscopy (STM). We used the Cascade Mask R-CNN neural network. Training was performed on a dataset containing 23 STM images with 5157 nanoparticles. Three images containing 695 nanoparticles were used for verification. As a result, the trained neural network recognized nanoparticles in the verification set with 0.93 precision and 0.78 recall. Refining the predicted contours with a 2D Gaussian function was proposed as an option. The accuracies for mean particle size calculated from predicted contours compared with ground truth were in the range of 0.87-0.99. The results were compared with outcomes from other generally available software based on conventional image processing methods. The advantages of deep learning methods for automatic particle recognition were clearly demonstrated. We developed a free open-access web service, "ParticlesNN", based on the trained neural network, which can be used by any researcher in the world.

Okunev Alexey G, Mashukov Mikhail Yu, Nartova Anna V, Matveev Andrey V

2020-Jun-30

deep neural networks, particle recognition, particles, scanning tunneling microscopy

General General

Analysis of the role and robustness of artificial intelligence in commodity image recognition under deep learning neural network.

In PloS one ; h5-index 176.0

To explore the application of an image recognition model based on a multi-stage convolutional neural network (MS-CNN) to the intelligent recognition of commodity images, and to assess the recognition performance of the method, the color, shape, and texture features of commodity images are first analyzed, together with the basic structure of the deep convolutional neural network (CNN) model. Then, a dataset of 50,000 pictures containing different commodities is constructed to verify the recognition effect of the model. Finally, the MS-CNN model is taken as the research object for improvement, exploring the influence of different parameter settings (convolution kernel size, dropout rate) and of label errors with different probabilities (p = 0.03, 0.05, 0.07, 0.09, 0.12) on the recognition accuracy of the MS-CNN model. At the same time, a commodity image recognition (CIR) system platform based on the MS-CNN model is built, the recognition performance on salt-and-pepper noise images with different SNR (0, 0.03, 0.05, 0.07, 0.1) is compared, and the performance of the algorithm in an actual image recognition test is evaluated. The results show that the recognition accuracy is highest (97.8%) when the convolution kernel sizes in the MS-CNN model are 2*2 and 3*3, and the average recognition accuracy is highest (97.8%) when the dropout rate is 0.1; when the error probability of the picture labels is 12%, the recognition accuracy of the constructed model remains above 96%. Finally, the commodity image database constructed in this study is used to identify and verify the model. The recognition accuracy of the proposed algorithm is significantly higher than that of the mini-batch stochastic gradient descent algorithm under different SNR conditions, and the recognition accuracy is highest when SNR = 0 (99.3%). The test results show that the proposed model performs well in identifying commodity images in scenes with local occlusion, different perspectives, different backgrounds, and different light intensities, with a recognition accuracy of 97.1%. In summary, the CIR platform based on the MS-CNN model has high recognition accuracy and robustness, which can lay a foundation for subsequent intelligent commodity recognition technology.

Chen Rui, Wang Meiling, Lai Yi

2020

General General

An Improved Performance of Deep Learning Based on Convolution Neural Network to Classify the Hand Motion by Evaluating Hyper Parameter.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society

High accuracy in pattern recognition based on electromyography (EMG) contributes to the effectiveness of prosthetic hand development. This study aimed to improve performance and simplify the pre-processing of deep learning based on the convolutional neural network (CNN) algorithm for classifying ten hand motions from two raw EMG signals. The main contribution of this study is the simplicity of the pre-processing stage in the classifier: for instance, no feature extraction process is required. Furthermore, the performance of the classifier was improved by evaluating the best hyperparameters in the deep learning architecture. To validate the performance of deep learning, a public dataset from ten subjects was evaluated. The performance of the proposed method was compared to other conventional machine learning methods, specifically LDA, SVM, and KNN. The CNN can discriminate the ten hand motions based on the raw EMG signal without handcrafted feature extraction. The results of the evaluation showed that the CNN outperformed the other classifiers. The average accuracy for all motions ranged between 0.77 and 0.93. A statistical t-test between using two channels (CH1 and CH2) and a single channel (CH2) shows that there is no significant difference in accuracy, with p-value > 0.05. The proposed method is useful in the study of prosthetic hands, which requires a simple machine learning architecture and high classification performance.
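
A minimal PyTorch sketch of a 1D CNN that maps two raw EMG channels directly to ten motion classes (layer sizes and window length are illustrative assumptions, not the paper's tuned hyperparameters):

import torch
import torch.nn as nn

class EMGNet(nn.Module):
    def __init__(self, n_channels=2, n_classes=10, window=400):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * (window // 16), n_classes)

    def forward(self, x):                        # x: (batch, channels, samples)
        return self.head(self.features(x).flatten(1))

logits = EMGNet()(torch.randn(8, 2, 400))        # raw signal windows, no handcrafted features
print(logits.shape)                              # torch.Size([8, 10])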

Triwiyanto Triwiyanto, Pawana I Putu Alit, Purnomo Mauridhi Hery

2020-Jul

General General

Learning, Generalization, and Scalability of Abstract Myoelectric Control.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society

Motor learning-based methods offer an alternative paradigm to machine learning-based methods for controlling upper-limb prosthetics. Within this paradigm, the patterns of muscular activity used for control can differ from those which control biological limbs. Practice expedites the learning of these new, functional patterns of muscular activity. We envisage that these methods can result in enhanced control without increasing device complexity. However, key questions about training protocols, generalisation and scalability of motor learning-based methods have remained. In this work, we pursue three objectives: 1) to validate the motor learning-based abstract myoelectric control approach with people with upper-limb difference for the first time; 2) to test whether, after training, participants can generalize their learning to tasks of increased difficulty; and 3) to show that abstract myoelectric control scales with additional input signals, offering a larger control range. In three experiments, 25 limb-intact participants and 8 people with a limb difference (congenital and acquired) experienced a motor learning-based myoelectric controlled interface. We show that participants with upper-limb difference can learn to control the interface and that performance increases with experience. Across experiments, participant performance on easier lower target density tasks generalized to more difficult higher target density tasks. A proof-of-concept study demonstrates that learning-based control scales with additional myoelectric channels. Our results show that human motor learning-based approaches can enhance the number of distinct outputs from the musculature, thereby increasing the functionality of prosthetic hands and providing a viable alternative to machine learning.

Dyson Matthew, Dupan Sigrid, Jones Hannah, Nazarpour Kianoush

2020-Jul

General General

Young's Modulus and Tensile Strength of Ti3C2 MXene Nanosheets as Revealed by in situ TEM Probing, AFM Nanomechanical Mapping and Theoretical Calculations.

In Nano letters ; h5-index 188.0

Two-dimensional transition metal carbides, i.e. MXenes, and especially Ti3C2, attract attention due to their excellent combination of properties. Ti3C2 nanosheets could be the material of choice for future flexible electronics, energy storage and electromechanical nanodevices. There has been limited information available on the mechanical properties of Ti3C2, which is essential for their utilization. We have fabricated Ti3C2 nanosheets and studied their mechanical properties using direct in situ tensile tests inside a transmission electron microscope, quantitative nanomechanical mapping and theoretical calculations employing machine-learning derived potentials. Young's modulus in the direction perpendicular to the Ti3C2 basal plane was found to be 80-100 GPa. The tensile strength of Ti3C2 nanosheets reached up to 670 MPa for ~40 nm thin nanoflakes, while a strong dependence of tensile strength on nanosheet thickness was demonstrated. Theoretical calculations allowed us to study mechanical characteristics of Ti3C2 as a function of nanosheet geometrical parameters and structural defects concentration.

Firestein Konstantin L, von Treifeldt Joel E, Kvashnin Dmitry G, Fernando Joseph F S, Zhang Chao, Kvashnin Alexander G, Podryabinkin Evgeny V, Shapeev Alexander V, Siriwardena Dumindu P, Sorokin Pavel B, Golberg Dmitri

2020-Jul-07

General General

A Temperature Sensor with A Water-Dissolvable Ionic Gel for Ionic Skin.

In ACS applied materials & interfaces ; h5-index 147.0

In the era of trillion sensors, a tremendous number of sensors will be consumed to collect information for big data analysis. Once they are installed in a harsh environment or implanted in a human or animal body, we cannot easily retrieve the sensors; the sensors for these applications are left unattended but are expected to decay after use. In this paper, a disposable temperature sensor that disappears on contact with water is reported. The gel electrolyte based on an ionic liquid and a water-soluble polymer, a so-called ionic gel, exhibits a Young's modulus of 96 kPa, which is compatible with human muscle, skin, and organs for wearable devices or soft robotics. A study of the electrical characteristics of the sensor at various temperatures reveals that the ionic conductivity and capacitance increase by 12 times and 4.8 times, respectively, as the temperature varies from 30 °C to 80 °C. The temperature sensor exhibits a short response time of 1.4 s, allowing real-time monitoring of temperature changes. Furthermore, sensors in an array format can obtain the spatial distribution of temperature. The developed sensor was found to fully dissolve in water within 16 hours. The water-dissolvability enables practical applications including healthcare, artificial intelligence, and environmental sensing.

Yamada Shunsuke, Toshiyoshi Hiroshi

2020-Jul-07

General General

Novel prognostic prediction model constructed through machine learning on the basis of methylation-driven genes in kidney renal clear cell carcinoma.

In Bioscience reports

Kidney renal clear cell carcinoma (KIRC) is a common tumor with poor prognosis and is closely related to many aberrant gene expressions. DNA methylation is an important epigenetic modification mechanism and a novel research target. Thus, exploring the relationship between methylation-driven genes and KIRC prognosis is important. The methylation profile, methylation-driven genes, and methylation characteristics in KIRC were revealed through the integration of KIRC methylation, RNA-seq, and clinical information data from The Cancer Genome Atlas. Lasso regression was used to establish a prognosis model on the basis of the methylation-driven genes. Then, a trans-omics prognostic nomogram was constructed and evaluated by combining clinical information with the methylation prognosis model. A total of 242 methylation-driven genes were identified. The Gene Ontology terms of these methylation-driven genes mainly clustered in the activation, adhesion, and proliferation of immune cells. The methylation prognosis prediction model established using Lasso regression included four genes from the methylation data, namely FOXI2, USP44, EVI2A, and TRIP13. The areas under the receiver operating characteristic curve for 1-, 3-, and 5-year survival were 0.810, 0.824, and 0.799, respectively, in the training group and 0.794, 0.752, and 0.731, respectively, in the testing group. A simple trans-omics nomogram was successfully established. The C-indices of the nomogram in the training and testing groups were 0.8015 and 0.8389, respectively. This study reveals the overall landscape of methylation-driven genes in KIRC, can help in evaluating the prognosis of KIRC patients, and provides new clues for further study.
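
As a simplified, hedged illustration of the sparsity-based gene selection idea (the study applied Lasso regression to survival data; the sketch below instead uses an L1-penalized logistic model on entirely synthetic methylation values, with made-up gene names):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
genes = [f"gene_{i}" for i in range(242)]            # 242 candidate methylation-driven genes
X = rng.uniform(0, 1, size=(300, len(genes)))        # beta-values in [0, 1]
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.3, size=300) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X_tr, y_tr)
selected = [g for g, w in zip(genes, model.coef_[0]) if w != 0]   # L1 keeps only a small panel
print("selected genes:", selected)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))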

Tang Weihao, Cao Yiling, Ma Xiaoke

2020-Jul-07

KIRC, methylation, methylation-driven gene, prognostic prediction model

Surgery Surgery

Design of an integrated model for diagnosis and classification of pediatric acute leukemia using machine learning.

In Proceedings of the Institution of Mechanical Engineers. Part H, Journal of engineering in medicine

Applying artificial intelligence techniques to diagnosing diseases in hospitals often provides advanced medical services to patients, such as the diagnosis of leukemia. On the other hand, surgery and bone marrow sampling, especially in the diagnosis of childhood leukemia, are complex and difficult, resulting in increased human error and procedure time, decreased patient satisfaction, and increased costs. This study investigates the use of neuro-fuzzy systems and the group method of data handling for the diagnosis of acute leukemia in children based on the complete blood count test. Furthermore, principal component analysis is applied to increase the accuracy of the diagnosis. The results show that distinguishing between patient and non-patient individuals can easily be done with an adaptive neuro-fuzzy inference system, whereas classifying between the types of the disease themselves may require more pre-processing operations, such as feature reduction. The proposed approach may help to distinguish between two types of leukemia, acute lymphoblastic leukemia and acute myeloid leukemia. Based on the sensitivity of the diagnosis, experts can use the proposed algorithm to help identify the disease earlier and lessen the cost.

Fathi Ehsan, Rezaee Mustafa Jahangoshai, Tavakkoli-Moghaddam Reza, Alizadeh Azra, Montazer Aynaz

2020-Jul-07

Cancer classification, diagnosis of acute lymphoblastic and myeloid leukemia, group method of data handling, neuro-fuzzy inference system

General General

Probing the characteristics and biofunctional effects of disease-affected cells and drug response via machine learning applications.

In Critical reviews in biotechnology

Drug-induced transformations in disease characteristics at the cellular and molecular level offer the opportunity to predict and evaluate the efficacy of pharmaceutical ingredients whilst enabling the optimal design of new and improved drugs with enhanced pharmacokinetics and pharmacodynamics. Machine learning is a promising in-silico tool used to simulate cells with specific disease properties and to determine their response toward drug uptake. Differences in the properties of normal and infected cells, including biophysical, biochemical and physiological characteristics, play a key role in developing fundamental cellular probing platforms for machine learning applications. Cellular features can be extracted periodically from drug-treated, infected, and normal cells via image segmentation in order to probe dynamic differences in cell behavior. Cellular segmentation can be evaluated to reflect the level of drug effect on a distinct cell or group of cells via probability scoring. This article provides an account of the use of machine learning methods to probe differences in the biophysical, biochemical and physiological characteristics of infected cells in response to pharmacokinetic uptake of drug ingredients, for application in cancer, diabetes and neurodegenerative disease therapies.

Mudali Deborah, Jeevanandam Jaison, Danquah Michael K

2020-Jul-07

Machine learning, cellular properties, drug uptake, image segmentation, principal component analysis

Radiology Radiology

Deep learning based detection of intracranial aneurysms on digital subtraction angiography: A feasibility study.

In The neuroradiology journal

BACKGROUND : Digital subtraction angiography is the gold standard for detecting and characterising aneurysms. Here, we assess the feasibility of commercial-grade deep learning software for the detection of intracranial aneurysms on whole-brain anteroposterior and lateral 2D digital subtraction angiography images.

MATERIAL AND METHODS : Seven hundred and six digital subtraction angiography images were included from a cohort of 240 patients (157 female, mean age 59 years, range 20-92; 83 male, mean age 55 years, range 19-83). Three hundred and thirty-five (47%) single frame anteroposterior and lateral images of a digital subtraction angiography series of 187 aneurysms (41 ruptured, 146 unruptured; average size 7±5.3 mm, range 1-5 mm; total 372 depicted aneurysms) and 371 (53%) aneurysm-negative study images were retrospectively analysed regarding the presence of intracranial aneurysms. The 2D data was split into testing and training sets in a ratio of 4:1 with 3D rotational digital subtraction angiography as gold standard. Supervised deep learning was performed using commercial-grade machine learning software (Cognex, ViDi Suite 2.0). Monte Carlo cross validation was performed.
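
For context, Monte Carlo cross-validation simply repeats random train/test splits; a minimal scikit-learn sketch is shown below (the study's commercial software and angiography images are not reproduced, and the classifier, split size, and synthetic data are assumptions):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

# 706 synthetic "images": roughly 47% aneurysm-positive, 53% negative, as in the cohort
X, y = make_classification(n_samples=706, weights=[0.53, 0.47], random_state=0)

mc_cv = ShuffleSplit(n_splits=45, test_size=0.2, random_state=0)   # 45 random repeats
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=mc_cv, scoring="roc_auc")
print("mean AUC over 45 Monte Carlo splits:", scores.mean())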

RESULTS : Intracranial aneurysms were detected with a sensitivity of 79%, a specificity of 79%, a precision of 0.75, a F1 score of 0.77, and a mean area-under-the-curve of 0.76 (range 0.68-0.86) after Monte Carlo cross-validation, run 45 times.

CONCLUSION : The commercial-grade deep learning software allows for detection of intracranial aneurysms on whole-brain, 2D anteroposterior and lateral digital subtraction angiography images, with results being comparable to more specifically engineered deep learning techniques.

Hainc Nicolin, Mannil Manoj, Anagnostakou Vaia, Alkadhi Hatem, Blüthgen Christian, Wacht Lorenz, Bink Andrea, Husain Shakir, Kulcsár Zsolt, Winklhofer Sebastian

2020-Jul-07

Central nervous system, aneurysms, interventional

General General

Fuzzy Inspired Deep Belief Network for the Traffic Flow Prediction in Intelligent Transportation System Using Flow Strength Indicators.

In Big data

Intelligent transportation systems (ITS) are an advanced, leading-edge technology that aims to deliver innovative services to different modes of transport and traffic management. Traffic flow prediction (TFP) is one of the key macroscopic parameters of traffic that supports traffic management in ITS. The growth of real-time transportation data from various modern equipment, technologies, and other resources has generated big data, which poses a huge processing concern. Recently, deep learning (DL) techniques have demonstrated the capability to efficiently extract comprehensive features from such huge raw, unstructured, and nonlinear data using multiple hidden layers. Nonlinearity in traffic data is the major cause of inaccuracy in TFP. In this article, we propose a flow strength indicator-based Chronological Dolphin Echolocation-Fuzzy method, a bio-inspired optimization method with fuzzy logic for incremental learning of a deep belief network. Technical indicators provide flow strength features as input to the model. Hidden layers of the DL architecture consequently learn more features and propagate them as input to the next layer for supervised learning. The degree of membership of the features is identified by the membership functions, followed by weight optimization using the Dolphin Echolocation algorithm to fit the model to the nonlinear data. Experiments performed on two different data sets, namely Traffic-major roads and the performance measurement system-San Francisco (PEMS-SF), show good results for the proposed deep architecture. The analysis of the proposed method using log mean square error and log root mean square deviation yields minimum values of 2.4141 and 0.61 for the Traffic-major roads database, taken over a time step duration of 1 year, and minimum values of 1.6691 and 0.5208 for the PEMS-SF data set with a time step interval of 5 minutes. These positive results demonstrate the key importance of our traffic flow model for the transportation system.

George Shiju, Santra Ajit Kumar

2020-Jul-06

Dolphin Echolocation algorithm, chronological concept, deep belief network, flow strength indicators, fuzzy theory, intelligent transportation system, traffic flow prediction

Pathology Pathology

Bioinformatics Pipeline for Human Papillomavirus Short Read Genomic Sequences Classification Using Support Vector Machine.

In Viruses ; h5-index 58.0

We recently developed a test based on the Agilent SureSelect target enrichment system capturing genomic fragments from 191 human papillomavirus (HPV) types for Illumina sequencing. This enriched whole genome sequencing (eWGS) assay provides an approach to identify all HPV types in a sample. Here we present a machine learning algorithm that calls HPV types based on the eWGS output. The algorithm, based on the support vector machine (SVM) technique, was trained on eWGS data from 122 control samples with known HPV types. The new algorithm demonstrated good performance in HPV type detection for designed samples with 25 or more HPV plasmid copies per sample. We compared the HPV typing results made by the new algorithm for 261 residual epidemiologic samples with the typing delivered by the standard HPV Linear Array (LA). The agreement between methods (97.4%) was substantial (kappa = 0.783). However, the new algorithm additionally identified 428 instances of HPV types not detectable by the LA assay by design. Overall, we have demonstrated that the bioinformatics pipeline is an accurate tool for calling HPV types by analyzing data generated by eWGS processing of DNA fragments extracted from control and epidemiological samples.
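
A hedged sketch of the SVM typing step and the kappa agreement check in scikit-learn (the feature vectors below are synthetic; the eWGS-derived features, 122-sample training design, and Linear Array comparator are not reproduced):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in: 50 features per sample, 4 "HPV types"
X, y = make_classification(n_samples=400, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm_calls = SVC(kernel="linear").fit(X_tr, y_tr).predict(X_te)
reference_calls = y_te                            # stand-in for the comparator assay's calls
print("kappa:", cohen_kappa_score(svm_calls, reference_calls))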

Lomsadze Alexandre, Li Tengguo, Rajeevan Mangalathu S, Unger Elizabeth R, Borodovsky Mark

2020-Jun-30

HPV typing, HPV whole genome sequencing, bioinformatics pipeline, h classification, target enrichment

General General

MCU-Net: A framework towards uncertainty representations for decision support system patient referrals in healthcare contexts

ArXiv Preprint

Incorporating a human-in-the-loop system when deploying automated decision support is critical in healthcare contexts to create trust, as well as to provide reliable performance on a patient-to-patient basis. Deep learning methods, while having high performance, do not allow for this patient-centered approach due to the lack of uncertainty representation. Thus, we present a framework of uncertainty representation evaluated for medical image segmentation, using MCU-Net, which combines a U-Net with Monte Carlo Dropout, evaluated with four different uncertainty metrics. The framework augments this by adding a human-in-the-loop aspect based on an uncertainty threshold for automated referral of uncertain cases to a medical professional. We demonstrate that MCU-Net, combined with epistemic uncertainty and an uncertainty threshold tuned for this application, maximizes automated performance on an individual patient level yet refers truly uncertain cases. This is a step towards uncertainty representations when deploying machine learning based decision support in healthcare settings.
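
A minimal sketch of the Monte Carlo Dropout idea with an uncertainty-based referral rule (the toy network below is a stand-in, not MCU-Net, and the threshold value is an assumption):

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(32, 2))

def mc_predict(model, x, n_samples=20):
    model.train()                          # keep dropout active at inference time (MC Dropout)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)     # predictive mean and per-class spread (uncertainty)

x = torch.randn(1, 16)                     # stand-in for image-derived features
mean, std = mc_predict(net, x)
REFER_THRESHOLD = 0.15                     # assumed threshold, tuned per application
decision = "refer to clinician" if std.max() > REFER_THRESHOLD else mean.argmax().item()
print(decision)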

Nabeel Seedat

2020-07-08

General General

Predicting microRNA-disease associations from lncRNA-microRNA interactions via Multiview Multitask Learning.

In Briefings in bioinformatics

MOTIVATION : Identifying microRNAs that are associated with different diseases as biomarkers is a problem of great medical significance. Existing computational methods for uncovering such microRNA-disease associations (MDAs) are mostly developed under the assumption that similar microRNAs tend to associate with similar diseases. Since such an assumption is not always valid, these methods may not always be applicable to all kinds of MDAs. Considering that the relationship between long noncoding RNA (lncRNA) and different diseases and the co-regulation relationships between the biological functions of lncRNA and microRNA have been established, we propose here a multiview multitask method to make use of known lncRNA-microRNA interactions to predict MDAs on a large scale. The investigation is performed in the absence of complete information about microRNAs and without any similarity measurement for them, and to the best of our knowledge the work represents the first attempt to discover MDAs based on lncRNA-microRNA interactions.

RESULTS : In this paper, we propose to develop a deep learning model called MVMTMDA that can create a multiview representation of microRNAs. The model is trained based on an end-to-end multitasking approach to machine learning so that, based on it, missing data in the side information can be determined automatically. Experimental results show that the proposed model yields an average area under ROC curve of 0.8410+/-0.018, 0.8512+/-0.012 and 0.8521+/-0.008 when k is set to 2, 5 and 10, respectively. In addition, we also propose here a statistical approach to predicting lncRNA-disease associations based on these associations and the MDA discovered using MVMTMDA.

AVAILABILITY : Python code and the datasets used in our studies are made available at https://github.com/yahuang1991polyu/MVMTMDA/.

Huang Yu-An, Chan Keith C C, You Zhu-Hong, Hu Pengwei, Wang Lei, Huang Zhi-An

2020-Jul-07

lncRNA–microRNA interaction, microRNA-disease association, multiview multitask learning

General General

[New Trends in Breast Imaging].

In Therapeutische Umschau. Revue therapeutique

The examination of the breast, especially as a screening examination for breast cancer, has so far been carried out primarily by means of mammography and occasionally supplementary ultrasound. These check-ups have become established because early diagnosis of breast cancer increases the chances of recovery. Breast cancer is the most common cancer in women (approximately every 8th woman is affected). While the MRI examination, which offers a high level of sensitivity and specificity, has so far established itself for further work-up, new examination methods have emerged in the recent past which, on the one hand, make the examination more pleasant for women (e.g. no compression of the mammary gland tissue, as is the case with mammography) and which could potentially be diagnostically equivalent. In particular, this article discusses automated breast ultrasound (ABUS) and computed tomography of the breast (breast CT). In the future, programs with artificial intelligence could also help confirm diagnoses or increase accuracy so that no relevant lesion is overlooked.

Boss Andreas, Rohrer Lysiane, Berger Nicole

2020

Surgery Surgery

Distinct differences in gut microbial composition and functional potential from lean to morbidly obese subjects.

In Journal of internal medicine

INTRODUCTION : The gut microbiome may contribute to the development of obesity. So far, the extent of microbiome variation in people with obesity has not been determined in large cohorts and for a wide range of body mass index (BMI). Here, we aimed to investigate whether the faecal microbial metagenome can explain the variance in several clinical phenotypes associated with morbid obesity.

METHODS : Caucasian subjects were recruited at our hospital. Blood pressure and anthropometric measurements were taken. Dietary intake was determined using questionnaires. Shotgun metagenomic sequencing was performed on faecal samples from 177 subjects.

RESULTS : Subjects without obesity (n = 82, BMI 24.7 ± 2.9 kg m-2 ) and subjects with obesity (n = 95, BMI 38.6 ± 5.1 kg m-2 ) could be clearly distinguished based on microbial composition and microbial metabolic pathways. A total of 52 bacterial species differed significantly between people with and without obesity. Independent of dietary intake, we found that microbial pathways involved in the biosynthesis of amino acids were enriched in subjects with obesity, whereas pathways involved in the degradation of amino acids were depleted. Machine learning models showed that more than half of the variance in body fat composition, followed by BMI, could be explained by the gut microbiome composition and microbial metabolic pathways, compared to 6% of variation explained in triglycerides and 9% in HDL.

CONCLUSION : Based on the faecal microbiota composition, we were able to separate subjects with and without obesity. In addition, we found strong associations between gut microbial amino acid metabolism and specific microbial species in relation to clinical features of obesity.

Meijnikman A S, Aydin O, Prodan A, Tremaroli V, Herrema H, Levin E, Acherman Y, Bruin S, Gerdes V E, Backhed F, Groen A K, Nieuwdorp M

2020-Jul-07

amino acids, gut microbiome, histidine, lipids, machine learning, metabolism, obesity

General General

A-learning: A new formulation of associative learning theory.

In Psychonomic bulletin & review

We present a new mathematical formulation of associative learning focused on non-human animals, which we call A-learning. Building on current animal learning theory and machine learning, A-learning is composed of two learning equations, one for stimulus-response values and one for stimulus values (conditioned reinforcement). A third equation implements decision-making by mapping stimulus-response values to response probabilities. We show that A-learning can reproduce the main features of: instrumental acquisition, including the effects of signaled and unsignaled non-contingent reinforcement; Pavlovian acquisition, including higher-order conditioning, omission training, autoshaping, and differences in form between conditioned and unconditioned responses; acquisition of avoidance responses; acquisition and extinction of instrumental chains and Pavlovian higher-order conditioning; Pavlovian-to-instrumental transfer; Pavlovian and instrumental outcome revaluation effects, including insight into why these effects vary greatly with training procedures and with the proximity of a response to the reinforcer. We discuss the differences between current theory and A-learning, such as its lack of stimulus-stimulus and response-stimulus associations, and compare A-learning with other temporal-difference models from machine learning, such as Q-learning, SARSA, and the actor-critic model. We conclude that A-learning may offer a more convenient view of associative learning than current mathematical models, and point out areas that need further development.
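
Because the abstract describes A-learning as two value-update equations plus a softmax-style decision rule, a minimal Python sketch of that general structure may help readers; the specific update forms, parameter names and toy task below are illustrative assumptions, not the authors' exact equations.

```python
import numpy as np

# Minimal sketch of an A-learning-style agent (illustrative, not the authors' exact equations).
# v[s, r]: stimulus-response values; w[s]: stimulus values (conditioned reinforcement).
n_stimuli, n_responses = 3, 2
alpha_v, alpha_w, beta = 0.1, 0.1, 2.0   # learning rates and decision temperature (assumed)
v = np.zeros((n_stimuli, n_responses))
w = np.zeros(n_stimuli)

def choose(s):
    """Map stimulus-response values to response probabilities (softmax decision rule)."""
    p = np.exp(beta * v[s])
    p /= p.sum()
    return np.random.choice(n_responses, p=p)

def update(s, r, u, s_next):
    """One learning step after responding r to stimulus s, receiving primary value u,
    and observing the next stimulus s_next."""
    target = u + w[s_next]                 # primary value plus conditioned reinforcement
    v[s, r] += alpha_v * (target - v[s, r])
    w[s]    += alpha_w * (target - w[s])

# Toy instrumental acquisition: response 1 to stimulus 0 is reinforced.
for _ in range(500):
    s = 0
    r = choose(s)
    u = 1.0 if r == 1 else 0.0
    update(s, r, u, s_next=1)              # s_next: an arbitrary post-outcome stimulus
print(v[0])                                # the value of response 1 should dominate
```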

Ghirlanda Stefano, Lind Johan, Enquist Magnus

2020-Jul-06

Associative learning, Conditioned reinforcement, Instrumental conditioning, Mathematical model, Outcome revaluation, Pavlovian conditioning

General General

Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles.

In Science and engineering ethics

Autonomous vehicles (AVs)-and accidents they are involved in-attest to the urgent need to consider the ethics of artificial intelligence (AI). The question dominating the discussion so far has been whether we want AVs to behave in a 'selfish' or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The agent-deed-consequence (ADC) model (Dubljević and Racine in AJOB Neurosci 5(4):3-20, 2014a, Behav Brain Sci 37(5):487-488, 2014b) provides a promising descriptive and normative account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the agent, deed, and consequence in any given situation. These intuitive evaluations combine to produce a positive or negative judgment of moral acceptability. For example, the overall judgment of moral acceptability in a situation in which someone committed a deed that is judged as negative (e.g., breaking a law) would be mitigated if the agent had good intentions and the action had a good consequence. This explains the considerable flexibility and stability of human moral judgment that has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on ethics of AI in general.

Dubljević Veljko

2020-Jul-06

Agent–deed–consequence (ADC) model, Artificial intelligence (AI), Artificial morality, Artificial neural networks, Autonomous vehicles (AVs), Neuroethics

General General

Correcting the Brain? The Convergence of Neuroscience, Neurotechnology, Psychiatry, and Artificial Intelligence.

In Science and engineering ethics

The incorporation of neural-based technologies into psychiatry offers novel means to use neural data in patient assessment and clinical diagnosis. However, an over-optimistic technologisation of neuroscientifically-informed psychiatry risks the conflation of technological and psychological norms. Neurotechnologies promise fast, efficient, broad psychiatric insights not readily available through conventional observation of patients. Recording and processing brain signals provides information from 'beneath the skull' that can be interpreted as an account of neural processing and that can provide a basis to evaluate general behaviour and functioning. But it ought not to be forgotten that the use of such technologies is part of a human practice of neuroscience informed psychiatry. This paper notes some challenges in the integration of neural technologies into psychiatry and suggests vigilance particularly in respect to normative challenges. In this way, psychiatry can avoid a drift toward reductive technological approaches, while nonetheless benefitting from promising advances in neuroscience and technology.

Rainey Stephen, Erden Yasemin J

2020-Jul-06

Artificial intelligence, Ethics, Neurotechnology, Normativity, Psychiatry, Psychology

General General

AI-based investigation of molecular biomarkers of longevity.

In Biogerontology

In this paper, I build deep neural networks of various structures and hyperparameters in order to predict human chronological age based on open-access biochemical indicators and their specifications from the NHANES database. In total, 1152 neural networks are trained and tested. The algorithms are trained and tested on incomplete data: missing values in data records are extrapolated by mean or median values for each parameter. I select the best neural networks in terms of validation accuracy (coefficient of determination and mean absolute error). It turns out that the most accurate results are delivered by multilayer networks (6 layers) with recurrent layers. Neural network types are selected by trial and error. The algorithms reached a coefficient of determination of 0.78 and a mean absolute error of 6.5. I also list empirically determined features of neural networks that increase accuracy for the task of chronological age prediction. The obtained results can be considered as an approximation of human biological age. Parameters in the training datasets are selected as broadly as possible: all potentially relevant parameters (926) from the NHANES database are used. Although the networks are trained on incomplete data, they demonstrated the ability to make reasonable predictions (with R2 > 0.7) based on no more than 100 biochemical indicators. Hence, for practical purposes the full data on each of the 926 indicators are not required, although the analysis of the impact of each indicator is useful for theoretical developments.
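
As a reader's aid, here is a minimal sketch of the workflow described above (mean imputation of missing indicators, a multilayer network, and evaluation with the coefficient of determination and mean absolute error) on synthetic data; the plain feed-forward network stands in for the author's deeper, partly recurrent architectures, and none of the fields correspond to actual NHANES variables.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

# Synthetic stand-in for NHANES biochemical indicators with missing values.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 100))
age = 40 + X[:, :10].sum(axis=1) * 2 + rng.normal(scale=5, size=2000)
X[rng.random(X.shape) < 0.2] = np.nan          # ~20% missing entries

# Missing values are extrapolated by the per-parameter mean, as in the abstract.
X_imp = SimpleImputer(strategy="mean").fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_imp, age, random_state=0)

# A plain multilayer perceptron as a simplified stand-in for the paper's networks.
model = MLPRegressor(hidden_layer_sizes=(128, 64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))
```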

Kendiukhov Ihor

2020-Jul-06

AI in biogerontology, Age prediction, Deep neural networks, Longevity biomarkers, Machine learning

Radiology Radiology

Artificial Intelligence and Myocardial Contrast Enhancement Pattern.

In Current cardiology reports

PURPOSE OF REVIEW : Machine learning (ML) and deep learning (DL) are two important categories of AI algorithms. Nowadays, AI technology has been gradually applied to cardiac magnetic resonance imaging (CMRI), covering the fields of myocardial contrast enhancement (MCE) pattern analysis and automatic ventricular segmentation. This paper mainly discusses AI-based machine learning and deep learning approaches to MCE patterns in CMRI.

RECENT FINDINGS : It was found that some histogram and GLCM parameters used in ML algorithms showed statistically significant differences in the diagnosis of cardiomyopathy and in the differentiation of fibrotic from normal myocardial tissue. Among DL algorithms, there was no significant difference between CNNs and human observers in measuring myocardial fibrosis. The rapid development of texture parameter analysis methods will push AI-based medical imaging into a new era. Histogram and GLCM parameters are a research hotspot of unsupervised learning on MCE images. CNNs have a great advantage in automatically identifying and quantifying the myocardial fibrosis reflected by LGE images.
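
For readers unfamiliar with the texture parameters mentioned above, a small sketch of extracting histogram and GLCM features from an image patch follows; it uses scikit-image (the functions are named greycomatrix/greycoprops in older releases) and a synthetic patch, so it illustrates the feature types rather than the reviewed studies' pipelines.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older scikit-image

# Synthetic 8-bit patch as a stand-in for a late-gadolinium-enhancement region.
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Gray-level co-occurrence matrix over a few offsets and angles.
glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

# Typical GLCM texture parameters reported in such studies.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# First-order histogram statistics of the same patch.
features.update(mean=patch.mean(), std=patch.std())
print(features)
```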

Tang Fang, Bai Chen, Zhao Xin-Xiang, Yuan Wei-Feng

2020-Jul-07

AI, CMRI, Cardiomyocyte, Contrast enhancement, Fibrosis, Necrosis

General General

Endocytoscopy: technology and clinical application in the lower GI tract.

In Translational gastroenterology and hepatology

Endocytoscopy (EC) is now one of the valuable technologies for diagnosing colorectal tumors. Providing ultra-high-resolution white light images (520×), endocytoscopy attains so-called virtual histology or optical biopsy, making it a promising tool to diagnose colorectal lesions. Studies of artificial intelligence (AI) and computer-aided diagnosis (CAD) applied to endocytoscopy are also increasingly reported. We investigate the current applications of endocytoscopy, as well as the benefit of AI and CAD. Furthermore, we performed a meta-analysis comparing the diagnostic performance of endocytoscopy and magnified chromoendoscopy. In conclusion, this systematic review and meta-analysis supports recent findings indicating the higher diagnostic performance of endocytoscopy in the depth assessment of colorectal neoplasms.

Takamaru Hiroyuki, Wu Shih Yea Sylvia, Saito Yutaka

2020

Endocytoscopy, artificial intelligence (AI), computer aided diagnosis (CAD), depth diagnosis

General General

Artificial intelligence and COVID-19: A multidisciplinary approach.

In Integrative medicine research ; h5-index 20.0

The COVID-19 pandemic is taking a colossal toll in human suffering and lives. A significant amount of new scientific research and data sharing is underway due to the pandemic, which is still rapidly spreading. There is now a growing amount of coronavirus-related datasets as well as published papers that must be leveraged along with artificial intelligence (AI) to fight this pandemic by driving new approaches to drug discovery, vaccine development, and public awareness. AI can be used to mine this avalanche of new data and papers to extract new insights by cross-referencing papers and searching for patterns, through which AI algorithms could help discover possible new treatments or aid vaccine development. Drug discovery is not a trivial task, and AI technologies like deep learning can help accelerate this process by helping predict which existing drugs or brand-new drug-like molecules could treat COVID-19. AI techniques can also help disseminate vital information across the globe and reduce the spread of false information about COVID-19. The positive power and potential of AI must be harnessed in the fight to slow the spread of COVID-19 in order to save lives and limit the economic havoc due to this horrific disease.

Ahuja Abhimanyu S, Reddy Vineet Pasam, Marques Oge

2020-Sep

Artificial intelligence, COVID-19, Drug Discovery, Integrative medicine, Vaccine development

General General

Diagnostic performance of artificial intelligence to detect genetic diseases with facial phenotypes: A protocol for systematic review and meta analysis.

In Medicine

BACKGROUND : Many genetic diseases are known to have distinctive facial phenotypes, which are highly informative and provide an opportunity for automated detection. However, the diagnostic performance of artificial intelligence in identifying genetic diseases with facial phenotypes requires further investigation. The objectives of this systematic review and meta-analysis are to evaluate the diagnostic accuracy of artificial intelligence in identifying genetic diseases with facial phenotypes and to determine the best-performing algorithm.

METHODS : The systematic review will be conducted in accordance with the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols" guidelines. The following electronic databases will be searched: PubMed, Web of Science, IEEE, Ovid, Cochrane Library, EMBASE and China National Knowledge Infrastructure. Two reviewers will screen and select the titles and abstracts of the studies retrieved independently during the database searches and perform full-text reviews and extract available data. The main outcome measures include diagnostic accuracy, as defined by accuracy, recall, specificity, and precision. The descriptive forest plot and summary receiver operating characteristic curves will be used to represent the performance of diagnostic tests. Subgroup analysis will be performed for different algorithms aided diagnosis tests. The quality of study characteristics and methodology will be assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 tool. Data will be synthesized by RevMan 5.3 and Meta-disc 1.4 software.

RESULTS : The findings of this systematic review and meta-analysis will be disseminated in a relevant peer-reviewed journal and academic presentations.

CONCLUSION : To our knowledge, there have not been any systematic review or meta-analysis relating to diagnosis performance of artificial intelligence in identifying the genetic diseases with face phenotypes. The findings would provide evidence to formulate a comprehensive understanding of applications using artificial intelligence in identifying the genetic diseases with face phenotypes and add considerable value in the future of precision medicine.

OSF REGISTRATION : DOI 10.17605/OSF.IO/P9KUH.

Qin Bosheng, Quan Qiyao, Wu Jingchao, Liang Letian, Li Dongxiao

2020-Jul-02

General General

Update on therapeutic approaches and emerging therapies for SARS-CoV-2 virus.

In European journal of pharmacology ; h5-index 57.0

The global pandemic of coronavirus disease 2019 (COVID-19), caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has resulted in over 7,273,958 cases and over 413,372 deaths worldwide as per WHO situation report 143 on COVID-19. There are thus far no treatment regimens or vaccines with proven efficacy, posing an unprecedented challenge to identify effective drugs and vaccines for prevention and treatment. The urgency for its prevention and cure has resulted in an increased number of proposed treatment options. The high rate and volume of emerging clinical trials on therapies for COVID-19 need to be compared and evaluated to provide scientific evidence for effective medical options. Other emerging non-conventional drug discovery techniques such as bioinformatics and cheminformatics, structure-based drug design, network-based methods for prediction of drug-target interactions, artificial intelligence (AI) and machine learning (ML), and phage techniques could provide alternative routes to discovering potent anti-SARS-CoV-2 drugs. While drugs are being repurposed and discovered for COVID-19, novel drug delivery systems will be paramount for efficient delivery and avoidance of possible drug resistance. This review describes the proposed drug targets for therapy and the outcomes of clinical trials that have been reported. It also identifies the adopted treatment modalities that are showing promise, and those that have failed as drug candidates. It further highlights various emerging therapies and future strategies for the treatment of COVID-19 and delivery of anti-SARS-CoV-2 drugs.

Omolo Calvin A, Soni Nikki, Fasiku Victoria Oluwaseun, Mackraj Irene, Govender Thirumala

2020-Jul-04

COVID-19, Clinical trials, Drug targets, Re-purposing, SARS-CoV2, Vaccines

Radiology Radiology

Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment

ArXiv Preprint

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy.
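
A minimal sketch of the evaluation idea described above (rank test samples by predicted uncertainty, reject the most uncertain fraction, and recompute ROC-AUC on what remains) follows; the entropy-based uncertainty proxy and toy predictions are assumptions for illustration, not the authors' learned uncertainty measure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_after_rejection(y_true, p_pred, uncertainty, reject_rate=0.25):
    """ROC-AUC on the samples kept after rejecting the most uncertain fraction."""
    keep = uncertainty <= np.quantile(uncertainty, 1.0 - reject_rate)
    return roc_auc_score(y_true[keep], p_pred[keep])

# Toy predictions: probabilities plus an entropy-style uncertainty proxy.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.25, size=1000), 0.01, 0.99)
u = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # predictive entropy as uncertainty

print("AUC, all samples: ", roc_auc_score(y, p))
print("AUC, 25% rejected:", auc_after_rejection(y, p, u))
```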

Florin C. Ghesu, Bogdan Georgescu, Awais Mansoor, Youngjin Yoo, Eli Gibson, R. S. Vishwanath, Abishek Balachandran, James M. Balter, Yue Cao, Ramandeep Singh, Subba R. Digumarthy, Mannudeep K. Kalra, Sasa Grbic, Dorin Comaniciu

2020-07-08

General General

Sequence-specific prediction of the efficiencies of adenine and cytosine base editors.

In Nature biotechnology ; h5-index 151.0

Base editors, including adenine base editors (ABEs)1 and cytosine base editors (CBEs)2,3, are widely used to induce point mutations. However, determining whether a specific nucleotide in its genomic context can be edited requires time-consuming experiments. Furthermore, when the editable window contains multiple target nucleotides, various genotypic products can be generated. To develop computational tools to predict base-editing efficiency and outcome product frequencies, we first evaluated the efficiencies of an ABE and a CBE and the outcome product frequencies at 13,504 and 14,157 target sequences, respectively, in human cells. We found that there were only modest asymmetric correlations between the activities of the base editors and Cas9 at the same targets. Using deep-learning-based computational modeling, we built tools to predict the efficiencies and outcome frequencies of ABE- and CBE-directed editing at any target sequence, with Pearson correlations ranging from 0.50 to 0.95. These tools and results will facilitate modeling and therapeutic correction of genetic diseases by base editing.

Song Myungjae, Kim Hui Kwon, Lee Sungtae, Kim Younggwang, Seo Sang-Yeon, Park Jinman, Choi Jae Woo, Jang Hyewon, Shin Jeong Hong, Min Seonwoo, Quan Zhejiu, Kim Ji Hun, Kang Hoon Chul, Yoon Sungroh, Kim Hyongbum Henry

2020-Jul-06

General General

Metagenome-wide association analysis identifies microbial determinants of post-antibiotic ecological recovery in the gut.

In Nature ecology & evolution

Loss of diversity in the gut microbiome can persist for extended periods after antibiotic treatment, impacting microbiome function, antimicrobial resistance and probably host health. Despite widespread antibiotic use, our understanding of the species and metabolic functions contributing to gut microbiome recovery is limited. Using data from 4 discovery cohorts in 3 continents comprising >500 microbiome profiles from 117 individuals, we identified 21 bacterial species exhibiting robust association with ecological recovery post antibiotic therapy. Functional and growth-rate analysis showed that recovery is supported by enrichment in specific carbohydrate-degradation and energy-production pathways. Association rule mining on 782 microbiome profiles from the MEDUSA database enabled reconstruction of the gut microbial 'food web', identifying many recovery-associated bacteria as keystone species, with the ability to use host- and diet-derived energy sources, and support repopulation of other gut species. Experiments in a mouse model recapitulated the ability of recovery-associated bacteria (Bacteroides thetaiotaomicron and Bifidobacterium adolescentis) to promote recovery with synergistic effects, providing a boost of two orders of magnitude to microbial abundance in early time points and faster maturation of microbial diversity. The identification of specific species and metabolic functions promoting recovery opens up opportunities for rationally determining pre- and probiotic formulations offering protection from long-term consequences of frequent antibiotic usage.

Chng Kern Rei, Ghosh Tarini Shankar, Tan Yi Han, Nandi Tannistha, Lee Ivor Russel, Ng Amanda Hui Qi, Li Chenhao, Ravikrishnan Aarthi, Lim Kar Mun, Lye David, Barkham Timothy, Raman Karthik, Chen Swaine L, Chai Louis, Young Barnaby, Gan Yunn-Hwen, Nagarajan Niranjan

2020-Jul-06

Surgery Surgery

Automated spheroid generation, drug application and efficacy screening using a deep learning classification: a feasibility study.

In Scientific reports ; h5-index 158.0

The last two decades saw the establishment of three-dimensional (3D) cell cultures as an acknowledged tool to investigate cell behaviour in a tissue-like environment. Cells growing in spheroids differentiate and develop different characteristics in comparison to their two-dimensionally grown counterparts and are hence seen to exhibit a more in vivo-like phenotype. However, generating, treating and analysing spheroids in high quantities remains labour intensive, which limits the applicability of spheroids in drug and compound research. Here we present a fully automated pipetting robot that is able to (a) seed hanging drops from single cell suspensions, (b) treat the spheroids formed in these hanging drops with drugs and (c) analyse the viability of the spheroids with an image-based deep convolutional neural network (CNN). The model is trained to classify between 'unaffected', 'mildly affected' and 'affected' spheroids after drug exposure. All corresponding spheroids are initially analysed by viability flow cytometry analysis to build a labelled training set for the CNN and thereby reduce the number of misclassifications. Hence, this approach allows efficient examination of the efficacy of drug combinatorics or new compounds in 3D cell culture. Additionally, it may provide a valuable instrument to screen for new and individualized systemic therapeutic strategies in second and third line treatment of solid malignancies using patient derived primary cells.

Benning Leo, Peintner Andreas, Finkenzeller Günter, Peintner Lukas

2020-Jul-06

oncology Oncology

Predicting breast cancer risk using interacting genetic and demographic factors and machine learning.

In Scientific reports ; h5-index 158.0

Breast cancer (BC) is a multifactorial disease and the most common cancer in women worldwide. We describe a machine learning approach to identify a combination of interacting genetic variants (SNPs) and demographic risk factors for BC, especially factors related to both familial history (Group 1) and oestrogen metabolism (Group 2), for predicting BC risk. This approach identifies the best combinations of interacting genetic and demographic risk factors that yield the highest BC risk prediction accuracy. In tests on the Kuopio Breast Cancer Project (KBCP) dataset, our approach achieves a mean average precision (mAP) of 77.78 in predicting BC risk by using interacting genetic and Group 1 features, which is better than the mAPs of 74.19 and 73.65 achieved using only Group 1 features and interacting SNPs, respectively. Similarly, using interacting genetic and Group 2 features yields a mAP of 78.00, which outperforms the system based on only Group 2 features, which has a mAP of 72.57. Furthermore, the gene interaction maps built from genes associated with SNPs that interact with demographic risk factors indicate important BC-related biological entities, such as angiogenesis, apoptosis and oestrogen-related networks. The results also show that demographic risk factors are individually more important than genetic variants in predicting BC risk.

Behravan Hamid, Hartikainen Jaana M, Tengström Maria, Kosma Veli-Matti, Mannermaa Arto

2020-Jul-06

Radiology Radiology

β-amyloid and tau drive early Alzheimer's disease decline while glucose hypometabolism drives late decline.

In Communications biology

Clinical trials focusing on therapeutic candidates that modify β-amyloid (Aβ) have repeatedly failed to treat Alzheimer's disease (AD), suggesting that Aβ may not be the optimal target for treating AD. The evaluation of Aβ, tau, and neurodegenerative (A/T/N) biomarkers has been proposed for classifying AD. However, it remains unclear whether disturbances in each arm of the A/T/N framework contribute equally throughout the progression of AD. Here, using the random forest machine learning method to analyze participants in the Alzheimer's Disease Neuroimaging Initiative dataset, we show that A/T/N biomarkers show varying importance in predicting AD development, with elevated biomarkers of Aβ and tau better predicting early dementia status, and biomarkers of neurodegeneration, especially glucose hypometabolism, better predicting later dementia status. Our results suggest that AD treatments may also need to be disease stage-oriented with Aβ and tau as targets in early AD and glucose metabolism as a target in later AD.
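
As an illustration of the kind of analysis described (a random forest ranking A/T/N biomarkers by their importance for predicting dementia status), a small sketch on synthetic data follows; the feature names are hypothetical stand-ins, not ADNI variables, and the toy labels are constructed arbitrarily.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for A/T/N biomarkers (names are illustrative, not ADNI fields).
rng = np.random.default_rng(0)
n = 600
X = pd.DataFrame({
    "amyloid_pet":     rng.normal(size=n),
    "csf_ptau":        rng.normal(size=n),
    "fdg_pet":         rng.normal(size=n),   # glucose metabolism
    "hippocampal_vol": rng.normal(size=n),
})
# Toy label: dementia status driven mostly by hypometabolism in this fake data.
y = (0.2 * X["amyloid_pet"] + 0.3 * X["csf_ptau"] - 1.0 * X["fdg_pet"]
     + rng.normal(scale=0.5, size=n)) > 0.5

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>15s}  importance={imp:.3f}")
```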

Hammond Tyler C, Xing Xin, Wang Chris, Ma David, Nho Kwangsik, Crane Paul K, Elahi Fanny, Ziegler David A, Liang Gongbo, Cheng Qiang, Yanckello Lucille M, Jacobs Nathan, Lin Ai-Ling

2020-Jul-06

Pathology Pathology

Improving the accuracy of gastrointestinal neuroendocrine tumor grading with deep learning.

In Scientific reports ; h5-index 158.0

The Ki-67 index is an established prognostic factor in gastrointestinal neuroendocrine tumors (GI-NETs) and defines tumor grade. It is currently estimated by microscopically examining tumor tissue single-immunostained (SS) for Ki-67 and counting the number of Ki-67-positive and Ki-67-negative tumor cells within a subjectively picked hot-spot. Intraobserver variability in this procedure as well as difficulty in distinguishing tumor from non-tumor cells can lead to inaccurate Ki-67 indices and possibly incorrect tumor grades. We introduce two computational tools that utilize Ki-67 and synaptophysin double-immunostained (DS) slides to improve the accuracy of Ki-67 index quantitation in GI-NETs: (1) Synaptophysin-KI-Estimator (SKIE), a pipeline automating Ki-67 index quantitation via whole-slide image (WSI) analysis and (2) deep-SKIE, a deep learner-based approach where a Ki-67 index heatmap is generated throughout the tumor. Ki-67 indices for 50 GI-NETs were quantitated using SKIE and compared with DS slide assessments by three pathologists using a microscope and a fourth pathologist via manually ticking off each cell, the latter of which was deemed the gold standard (GS). Compared to the GS, SKIE achieved a grading accuracy of 90% and substantial agreement (linear-weighted Cohen's kappa 0.62). Using DS WSIs, deep-SKIE displayed a training, validation, and testing accuracy of 98.4%, 90.9%, and 91.0%, respectively, significantly higher than using SS WSIs. Since DS slides are not standard clinical practice, we also integrated a cycle generative adversarial network into our pipeline to transform SS into DS WSIs. The proposed methods can improve accuracy and potentially save a significant amount of time if implemented into clinical practice.
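
For context, the quantity being automated here is simple to state: the Ki-67 index is the fraction of Ki-67-positive tumor cells counted in a hot-spot, and the index maps to a tumor grade. The sketch below computes it and applies the commonly cited WHO cut-offs (<3% G1, 3-20% G2, >20% G3); the thresholds and cell counts are included as illustrative background, not taken from the paper.

```python
def ki67_index(positive_tumor_cells: int, negative_tumor_cells: int) -> float:
    """Ki-67 index (%): fraction of Ki-67-positive tumor cells in the counted hot-spot."""
    total = positive_tumor_cells + negative_tumor_cells
    return 100.0 * positive_tumor_cells / total

def who_grade(index_percent: float) -> str:
    """Map a Ki-67 index (%) to a GI-NET grade using commonly cited WHO cut-offs
    (assumed here for illustration; see the WHO classification for authoritative rules)."""
    if index_percent < 3:
        return "G1"
    if index_percent <= 20:
        return "G2"
    return "G3"

idx = ki67_index(positive_tumor_cells=57, negative_tumor_cells=443)  # 500 cells counted
print(f"Ki-67 index = {idx:.1f}% -> grade {who_grade(idx)}")          # 11.4% -> G2
```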

Govind Darshana, Jen Kuang-Yu, Matsukuma Karen, Gao Guofeng, Olson Kristin A, Gui Dorina, Wilding Gregory E, Border Samuel P, Sarder Pinaki

2020-Jul-06

oncology Oncology

DoseGAN: a generative adversarial network for synthetic dose prediction using attention-gated discrimination and generation.

In Scientific reports ; h5-index 158.0

Deep learning algorithms have recently been developed that utilize patient anatomy and raw imaging information to predict radiation dose, as a means to increase treatment planning efficiency and improve radiotherapy plan quality. Current state-of-the-art techniques rely on convolutional neural networks (CNNs) that use pixel-to-pixel loss to update network parameters. However, stereotactic body radiotherapy (SBRT) dose is often heterogeneous, making it difficult to model using pixel-level loss. Generative adversarial networks (GANs) utilize adversarial learning that incorporates image-level loss and is better suited to learn from heterogeneous labels. However, GANs are difficult to train and rely on compromised architectures to facilitate convergence. This study suggests an attention-gated generative adversarial network (DoseGAN) to improve learning, increase model complexity, and reduce network redundancy by focusing on relevant anatomy. DoseGAN was compared to alternative state-of-the-art dose prediction algorithms using heterogeneity index, conformity index, and various dosimetric parameters. All algorithms were trained, validated, and tested using 141 prostate SBRT patients. DoseGAN was able to predict more realistic volumetric dosimetry compared to all other algorithms and achieved statistically significant improvement compared to all alternative algorithms for the V100 and V120 of the PTV, V60 of the rectum, and heterogeneity index.
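
As background for the dosimetric parameters mentioned above, the sketch below computes Vx-style metrics (e.g., V100: the fraction of a structure receiving at least 100% of the prescription dose) from a voxel dose array; the dose values, mask and prescription level are assumed toy inputs, not plan data from the study.

```python
import numpy as np

def v_x(dose, mask, prescription, x_percent):
    """Fraction (%) of a structure's voxels receiving at least x% of the prescription dose."""
    structure_dose = dose[mask]
    return 100.0 * np.mean(structure_dose >= prescription * x_percent / 100.0)

# Toy 3D dose distribution and a PTV mask (assumed shapes, not real plan data).
rng = np.random.default_rng(0)
dose = rng.normal(loc=36.0, scale=3.0, size=(40, 40, 40))   # Gy
ptv = np.zeros_like(dose, dtype=bool)
ptv[15:25, 15:25, 15:25] = True
prescription = 36.25   # Gy; a commonly used prostate SBRT prescription level (assumed)

print("PTV V100 =", round(v_x(dose, ptv, prescription, 100), 1), "%")
print("PTV V120 =", round(v_x(dose, ptv, prescription, 120), 1), "%")
```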

Kearney Vasant, Chan Jason W, Wang Tianqi, Perry Alan, Descovich Martina, Morin Olivier, Yom Sue S, Solberg Timothy D

2020-Jul-06

General General

Adversarial super-resolution of climatological wind and solar data.

In Proceedings of the National Academy of Sciences of the United States of America

Accurate and high-resolution data reflecting different climate scenarios are vital for policy makers when deciding on the development of future energy resources, electrical infrastructure, transportation networks, agriculture, and many other societally important systems. However, state-of-the-art long-term global climate simulations are unable to resolve the spatiotemporal characteristics necessary for resource assessment or operational planning. We introduce an adversarial deep learning approach to super resolve wind velocity and solar irradiance outputs from global climate models to scales sufficient for renewable energy resource assessment. Using adversarial training to improve the physical and perceptual performance of our networks, we demonstrate up to a [Formula: see text] resolution enhancement of wind and solar data. In validation studies, the inferred fields are robust to input noise, possess the correct small-scale properties of atmospheric turbulent flow and solar irradiance, and retain consistency at large scales with coarse data. An additional advantage of our fully convolutional architecture is that it allows for training on small domains and evaluation on arbitrarily-sized inputs, including global scale. We conclude with a super-resolution study of renewable energy resources based on climate scenario data from the Intergovernmental Panel on Climate Change's Fifth Assessment Report.

Stengel Karen, Glaws Andrew, Hettinger Dylan, King Ryan N

2020-Jul-06

adversarial training, climate downscaling, deep learning

General General

Universal inference.

In Proceedings of the National Academy of Sciences of the United States of America

We propose a general method for constructing confidence sets and hypothesis tests that have finite-sample guarantees without regularity conditions. We refer to such procedures as "universal." The method is very simple and is based on a modified version of the usual likelihood-ratio statistic that we call "the split likelihood-ratio test" (split LRT) statistic. The (limiting) null distribution of the classical likelihood-ratio statistic is often intractable when used to test composite null hypotheses in irregular statistical models. Our method is especially appealing for statistical inference in these complex setups. The method we suggest works for any parametric model and also for some nonparametric models, as long as computing a maximum-likelihood estimator (MLE) is feasible under the null. Canonical examples arise in mixture modeling and shape-constrained inference, for which constructing tests and confidence sets has been notoriously difficult. We also develop various extensions of our basic methods. We show that in settings when computing the MLE is hard, for the purpose of constructing valid tests and intervals, it is sufficient to upper bound the maximum likelihood. We investigate some conditions under which our methods yield valid inferences under model misspecification. Further, the split LRT can be used with profile likelihoods to deal with nuisance parameters, and it can also be run sequentially to yield anytime-valid P values and confidence sequences. Finally, when combined with the method of sieves, it can be used to perform model selection with nested model classes.
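
A minimal worked example of the split LRT for a simple Gaussian-mean null may make the construction concrete: split the data, compute the MLE on one half, and reject H0 when the likelihood ratio evaluated on the other half exceeds 1/alpha. This is a reader's sketch of the idea as described in the abstract, not the authors' code.

```python
import numpy as np
from scipy import stats

def split_lrt_reject(x, mu0, alpha=0.05, rng=None):
    """Split likelihood-ratio test of H0: mu = mu0 for a Gaussian with known sd = 1.
    Reject when L0(MLE from D1) / L0(mu0) >= 1/alpha; validity follows from Markov's
    inequality because the ratio has expectation 1 under H0."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(x))
    d0, d1 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    mu_hat1 = d1.mean()                                    # MLE from the second half

    def loglik(mu):                                        # likelihood on the first half
        return stats.norm.logpdf(d0, loc=mu, scale=1.0).sum()

    return loglik(mu_hat1) - loglik(mu0) >= np.log(1.0 / alpha)

rng = np.random.default_rng(1)
null_data = rng.normal(loc=0.0, size=200)
alt_data = rng.normal(loc=0.4, size=200)
print("reject under H0:", split_lrt_reject(null_data, mu0=0.0))   # rarely True (<= alpha)
print("reject under H1:", split_lrt_reject(alt_data, mu0=0.0))    # usually True
```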

Wasserman Larry, Ramdas Aaditya, Balakrishnan Sivaraman

2020-Jul-06

confidence sequence, irregular models, likelihood, testing

oncology Oncology

A serum protein classifier identifying patients with advanced non-small cell lung cancer who derive clinical benefit from treatment with immune checkpoint inhibitors.

In Clinical cancer research : an official journal of the American Association for Cancer Research

PURPOSE : Pretreatment selection of non-small-cell lung cancer (NSCLC) patients who derive clinical benefit from treatment with immune checkpoint inhibitors would fulfill an unmet clinical need by reducing unnecessary toxicities from treatment and result in substantial health care savings.

PATIENTS AND METHODS : In a retrospective study, mass spectrometry (MS) based proteomic analysis was performed on pretreatment sera derived from advanced NSCLC patients treated with nivolumab as part of routine clinical care (n=289). Machine learning combined spectral and clinical data to stratify patients into three groups with good ("sensitive"), intermediate and poor ("resistant") outcomes following treatment in the second-line setting. The test was applied to three independent patient cohorts and its biology investigated using protein set enrichment analyses (PSEA).

RESULTS : A signature consisting of 274 MS features derived from a development set of 116 patients was associated with progression free survival (PFS) and overall survival (OS) across 2 validation cohorts (n=98 and n=75). In pooled analysis, significantly better OS was demonstrated for "sensitive" relative to "not sensitive" patients treated with nivolumab, HR 0.58 (95% CI 0.38-0.87, p=0.009). There was no significant association with clinical factors including PD-L1 expression, available from 133/289 patients. The test demonstrated no significant association with PFS or OS in a historical cohort (n=68) of second-line NSCLC patients treated with docetaxel. PSEA revealed proteomic classification to be significantly associated with complement and wound healing cascades.

CONCLUSIONS : This serum-derived protein signature successfully stratified outcomes in cohorts of advanced NSCLC patients treated with second line PD-1 checkpoint inhibitors and deserves further prospective study.

Muller Mirte, Hummelink Karlijn, Hurkmans Daan P, Niemeijer Anna-Larissa N, Monkhorst Kim, Roder Joanna, Oliveira Carlos, Roder Heinrich, Aerts Joachim G, Smit Egbert F

2020-Jul-06

Radiology Radiology

A deep learning-based automated diagnostic system for classifying mammographic lesions.

In Medicine

BACKGROUND : Screening mammography has led to reduced breast cancer-specific mortality and is recommended worldwide. However, the resultant doctors' workload of reading mammographic scans needs to be addressed. Although computer-aided detection (CAD) systems have been developed to support readers, the findings are conflicting regarding whether traditional CAD systems improve reading performance. Rapid progress in the artificial intelligence (AI) field has led to the advent of newer CAD systems using deep learning-based algorithms which have the potential to reach human performance levels. Those systems, however, have been developed using mammography images mainly from women in western countries. Because Asian women characteristically have higher-density breasts, it is uncertain whether those AI systems can apply to Japanese women. In this study, we will construct a deep learning-based CAD system trained using mammography images from a large number of Japanese women with high quality reading.

METHODS : We will collect digital mammography images taken for screening or diagnostic purposes at multiple institutions in Japan. A total of 15,000 images, consisting of 5000 images with breast cancer and 10,000 images with benign lesions, will be collected. At least 1000 images of normal breasts will also be collected for use as reference data. With these data, we will construct a deep learning-based AI system to detect breast cancer on mammograms. The primary endpoint will be the sensitivity and specificity of the AI system with the test image set.

DISCUSSION : When the ability of AI reading is shown to be on a par with that of human reading, images of normal breasts or benign lesions that do not have to be read by a human can be selected by AI beforehand. Our AI might work well in Asian women who have similar breast density, size, and shape to those of Japanese women.

TRIAL REGISTRATION : UMIN, trial number UMIN000039009. Registered 26 December 2019, https://www.umin.ac.jp/ctr/.

Yamaguchi Takeshi, Inoue Kenichi, Tsunoda Hiroko, Uematsu Takayoshi, Shinohara Norimitsu, Mukai Hirofumi

2020-Jul-02

Radiology Radiology

Labelling imaging datasets on the basis of neuroradiology reports: a validation study

ArXiv Preprint

Natural language processing (NLP) shows promise as a means to automate the labelling of hospital-scale neuroradiology magnetic resonance imaging (MRI) datasets for computer vision applications. To date, however, there has been no thorough investigation into the validity of this approach, including determining the accuracy of report labels compared to image labels as well as examining the performance of non-specialist labellers. In this work, we draw on the experience of a team of neuroradiologists who labelled over 5000 MRI neuroradiology reports as part of a project to build a dedicated deep learning-based neuroradiology report classifier. We show that, in our experience, assigning binary labels (i.e. normal vs abnormal) to images from reports alone is highly accurate. In contrast to the binary labels, however, the accuracy of more granular labelling is dependent on the category, and we highlight reasons for this discrepancy. We also show that downstream model performance is reduced when labelling of training reports is performed by a non-specialist. To allow other researchers to accelerate their research, we make our refined abnormality definitions and labelling rules available, as well as our easy-to-use radiology report labelling app which helps streamline this process.

David A. Wood, Sina Kafiabadi, Aisha Al Busaidi, Emily Guilhem, Jeremy Lynch, Matthew Townend, Antanas Montvila, Juveria Siddiqui, Naveen Gadapa, Matthew Benger, Gareth Barker, Sebastian Ourselin, James H. Cole, Thomas C. Booth

2020-07-08

General General

MASS: predict the global qualities of individual protein models using random forests and novel statistical potentials.

In BMC bioinformatics

BACKGROUND : Protein model quality assessment (QA) is an essential procedure in protein structure prediction. QA methods can predict the qualities of protein models and identify good models from decoys. Clustering-based methods need a certain number of models as input. However, if a pool of models are not available, methods that only need a single model as input are indispensable.

RESULTS : We developed MASS, a QA method to predict the global qualities of individual protein models using random forests and various novel energy functions. We designed six novel energy functions or statistical potentials that can capture the structural characteristics of a protein model, which can also be used in other protein-related bioinformatics research. MASS potentials demonstrated higher importance than the energy functions of RWplus, GOAP, DFIRE and Rosetta when the scores they generated are used as machine learning features. MASS outperforms almost all of the four CASP11 top-performing single-model methods for global quality assessment in terms of all of the four evaluation criteria officially used by CASP, which measure the abilities to assign relative and absolute scores, identify the best model from decoys, and distinguish between good and bad models. MASS has also achieved comparable performances with the leading QA methods in CASP12 and CASP13.

CONCLUSIONS : MASS and the source code for all MASS potentials are publicly available at http://dna.cs.miami.edu/MASS/ .

Liu Tong, Wang Zheng

2020-Jul-06

Protein energy potentials, Protein model quality assessment, Random forests, Single-model QA

General General

Drug-target interaction prediction using semi-bipartite graph model and deep learning.

In BMC bioinformatics

BACKGROUND : Identifying drug-target interactions is a key element in drug discovery. In silico prediction of drug-target interactions can speed up the process of identifying unknown interactions between drugs and target proteins. In recent studies, handcrafted features, similarity metrics and machine learning methods have been proposed for predicting drug-target interactions. However, these methods cannot fully learn the underlying relations between drugs and targets. In this paper, we propose a new framework for drug-target interaction prediction that learns latent features from the drug-target interaction network.

RESULTS : We present a framework to utilize the network topology and identify interacting and non-interacting drug-target pairs. We model the problem as a semi-bipartite graph in which we are able to use drug-drug and protein-protein similarity in a drug-protein network. We have then used a graph labeling method for vertex ordering in our graph embedding process. Finally, we employed deep neural network to learn the complex pattern of interacting pairs from embedded graphs. We show our approach is able to learn sophisticated drug-target topological features and outperforms other state-of-the-art approaches.

CONCLUSIONS : The proposed learning model on semi-bipartite graph model, can integrate drug-drug and protein-protein similarities which are semantically different than drug-protein information in a drug-target interaction network. We show our model can determine interaction likelihood for each drug-target pair and outperform other heuristics.

Eslami Manoochehri Hafez, Nourani Mehrdad

2020-Jul-06

Deep learning, Drug-target interaction, Link prediction, Weisfeiler-Lehman algorithm

General General

Comparison of smartphone-based retinal imaging systems for diabetic retinopathy detection using deep learning.

In BMC bioinformatics

BACKGROUND : Diabetic retinopathy (DR), the most common cause of vision loss, is caused by damage to the small blood vessels in the retina. If untreated, it may result in varying degrees of vision loss and even blindness. Since DR is a silent disease that may cause no symptoms or only mild vision problems, annual eye exams, in which fundus cameras are used to capture retinal images, are crucial for early detection and improve the chances of effective treatment. However, fundus cameras are too big and heavy to be transported easily and too costly to be purchased by every health clinic, so they are an inconvenient tool for widespread screening. Recent technological developments have enabled the use of smartphones in designing small-sized, low-power, and affordable retinal imaging systems that perform DR screening and automated DR detection using image processing methods. In this paper, we investigate the smartphone-based portable retinal imaging systems available on the market and compare their image quality and automatic DR detection accuracy using a deep learning framework.

RESULTS : Based on the results, the iNview retinal imaging system has the largest field of view and better image quality compared with the iExaminer, D-Eye, and Peek Retina systems. The overall classification accuracies of the smartphone-based systems are 61%, 62%, 69%, and 75% for iExaminer, D-Eye, Peek Retina, and iNview images, respectively. We observed that DR detection performance decreases as the field of view of the smartphone-based retinal imaging system gets smaller, with iNview having the largest field of view and iExaminer the smallest.

CONCLUSIONS : The smartphone-based retina imaging systems can be used as an alternative to the direct ophthalmoscope. However, the field of view of the smartphone-based retina imaging systems plays an important role in determining the automatic DR detection accuracy.

Karakaya Mahmut, Hacisoftaoglu Recep E

2020-Jul-06

D-Eye, Deep learning, Diabetic retinopathy, Peek retina, Retinal imaging, iExaminer, iNview

General General

DUGMO: tool for the detection of unknown genetically modified organisms with high-throughput sequencing data for pure bacterial samples.

In BMC bioinformatics

BACKGROUND : The European Community has adopted very restrictive policies regarding the dissemination and use of genetically modified organisms (GMOs). In fact, a maximum threshold of 0.9% of contaminating GMOs is tolerated for a "GMO-free" label. In recent years, imports of undescribed GMOs have been detected. Their sequences are not described and therefore not detectable by conventional approaches, such as PCR.

RESULTS : We developed DUGMO, a bioinformatics pipeline for the detection of genetically modified (GM) bacteria, including unknown GM bacteria, based on Illumina paired-end sequencing data. The method is currently focused on the detection of GM bacteria with - possibly partial - transgenes in pure bacterial samples. In the preliminary steps, coding sequences (CDSs) are aligned through two successive BLASTN searches against the host pangenome with relevant tuned parameters to discriminate CDSs belonging to the wild type genome (wgCDS) from potential GM coding sequences (pgmCDSs). Then, Bray-Curtis distances are calculated between the wgCDS and each pgmCDS, based on the difference in genomic vocabulary. Finally, two machine learning methods, namely the Random Forest and Generalized Linear Model, are carried out to target true GM CDS(s), based on six variables including Bray-Curtis distances and GC content. Tests carried out on a GM Bacillus subtilis showed 25 positive CDSs corresponding to the chloramphenicol resistance gene and CDSs of the inserted plasmids. On a wild type B. subtilis, no false positive sequences were detected.
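
To illustrate two of the per-CDS variables described above, the sketch below computes a Bray-Curtis distance between k-mer ("genomic vocabulary") frequency profiles and the GC content of a candidate sequence; the k-mer size, helper names and toy sequences are assumptions for illustration, not DUGMO's implementation.

```python
from collections import Counter
from scipy.spatial.distance import braycurtis

def kmer_counts(seq: str, k: int = 4) -> Counter:
    """k-mer counts of a coding sequence (its 'genomic vocabulary')."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def frequency_vector(counts: Counter, vocabulary) -> list:
    total = sum(counts.values())
    return [counts[km] / total for km in vocabulary]

def gc_content(seq: str) -> float:
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy sequences: a host-like CDS and a plasmid-like candidate insert (illustrative only).
wild_cds = "ATGGCTGCTAAAGGTGCTGCTGGTATCGCTGCTGGTAAAGCTGCTTAA"
candidate_cds = "ATGCATCATCACCATCACCATGGATCCGAATTCGAGCTCCGTCGACTAA"

wc, cc = kmer_counts(wild_cds), kmer_counts(candidate_cds)
vocab = sorted(set(wc) | set(cc))
distance = braycurtis(frequency_vector(wc, vocab), frequency_vector(cc, vocab))
print(f"Bray-Curtis distance = {distance:.3f}, GC(candidate) = {gc_content(candidate_cds):.2f}")
```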

CONCLUSION : DUGMO detects exogenous CDSs as well as truncated, fused or highly mutated wild-type CDSs in high-throughput sequencing data, and was shown to be efficient at detecting GM sequences; it might also be employed for the identification of recent horizontal gene transfers.

Hurel Julie, Schbath Sophie, Bougeard Stéphanie, Rolland Mathieu, Petrillo Mauro, Touzain Fabrice

2020-Jul-06

Bacteria, Detection, Illumina sequencing data, Machine learning, Unknown GMO

Public Health Public Health

Pride and prejudice - What can we learn from peer review?

In Medical teacher

Objectives: Peer review is a powerful tool that steers the education and practice of medical researchers but may allow biased critique by anonymous reviewers. We explored factors unrelated to research quality that may influence peer review reports, and assessed the possibility that sub-types of reviewers exist. Our findings could potentially improve the peer review process.

Methods: We evaluated the harshness, constructiveness and positiveness in 596 reviews from journals with open peer review, plus 46 reviews from colleagues' anonymously reviewed manuscripts. We considered possible influencing factors, such as number of authors and seasonal trends, on the content of the review. Finally, using machine learning we identified latent types of reviewer with differing characteristics.

Results: Reviews provided during a northern-hemisphere winter were significantly harsher, suggesting a seasonal effect on language. Reviews for articles in journals with an open peer review policy were significantly less harsh than those with an anonymous review process. Further, we identified three types of reviewers: nurturing, begrudged, and blasé.

Conclusion: Nurturing reviews were in a minority and our findings suggest that more widespread open peer reviewing could improve the educational value of peer review, increase the constructive criticism that encourages researchers, and reduce pride and prejudice in editorial processes.

Le Sueur Helen, Dagliati Arianna, Buchan Iain, Whetton Anthony D, Martin Glen P, Dornan Tim, Geifman Nophar

2020-Jul-06

Peer review, bias, feedback, machine learning, sentiment, subgroup discovery

Surgery Surgery

Recent Advances in the Application of Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery.

In Clinical and experimental otorhinolaryngology

Objectives : To present an up-to-date survey of the use of artificial intelligence (AI) in the field of otorhinolaryngology, with respect to opportunities, research challenges, and research directions.

Methods : We searched PubMed, the Cochrane Central Register of Controlled Trials, Embase, and the Web of Science. We initially retrieved 458 articles; we excluded non-English publications and duplicates, which resulted in a total of 90 remaining studies. These 90 studies were divided into those analyzing medical images, voice, medical devices, and clinical diagnoses and treatments.

Results : Most studies (42.22%, 38/90) used AI for image-based analysis, followed by clinical diagnosis and treatments (24 studies); each of the remaining two subcategories included 14 studies.

Conclusion : Machine and deep learning have been extensively applied in the field of otorhinolaryngology. However, performance varies and research challenges remain.

Tama Bayu Adhi, Kim Do Hyun, Kim Gyuwon, Lee Seungchul, Kim Soo Whan

2020-Jun-18

Artificial intelligence, deep learning, machine learning, otorhinolaryngology

General General

Deep learning assisted Shack-Hartmann wavefront sensor for direct wavefront detection.

In Optics letters

The conventional Shack-Hartmann wavefront sensor (SHWS) requires wavefront slope measurements of every micro-lens for wavefront reconstruction. In this Letter, we applied deep learning on the SHWS to directly predict the wavefront distributions without wavefront slope measurements. The results show that our method could provide a lower root mean square wavefront error in high detection speed. The performance of the proposed method is also evaluated on challenging wavefronts, while the conventional approaches perform insufficiently. This Letter provides a new approach, to the best of our knowledge, to perform direct wavefront detection in SHWS-based applications.

Hu Lejia, Hu Shuwen, Gong Wei, Si Ke

2020-Jul-01

General General

Optical patching scheme for optical convolutional neural networks based on wavelength-division multiplexing and optical delay lines.

In Optics letters

Recent progress on optical neural networks (ONNs) heralds a new future for efficient deep learning accelerators, and novel, to the best of our knowledge, architectures of optical convolutional neural networks (CNNs) provide potential solutions to the widely adopted convolutional models. So far in optical CNNs, the data patching (a necessary process in the convolutional layer) is mostly executed with electronics, resulting in a demand for large input modulator arrays. Here we experimentally demonstrate an optical patching scheme to release the burden of electronic data processing and to cut down the scale of the input modulator array for optical CNNs. Optical delay lines replace electronics to execute data processing, which can reduce the scale of the input modulator array. The adoption of wavelength-division multiplexing enables a single group of optical delay lines to simultaneously process multiple input data, reducing the system complexity. The optical patching scheme provides a new solution to the problem of data input, which is challenging and concerned with the field of ONNs.

Xu Shaofu, Wang Jing, Zou Weiwen

2020-Jul-01

General General

Combining nonlinear Fourier transform and neural network-based processing in optical communications.

In Optics letters

We propose a method to improve the performance of the nonlinear Fourier transform (NFT)-based optical transmission system by applying neural network post-processing of the nonlinear spectrum at the receiver. We demonstrate, through numerical modeling, a bit error rate improvement of about one order of magnitude and compare this method with machine learning processing based on the classification of the received symbols. The proposed approach also offers a way to improve the numerical accuracy of the inverse NFT; therefore, it can find a range of applications beyond optical communications.

Kotlyar Oleksandr, Pankratova Maryna, Kamalian-Kopae Morteza, Vasylchenkova Anastasiia, Prilepsky Jaroslaw E, Turitsyn Sergei K

2020-Jul-01

Radiology Radiology

Multiparametric MRI for Prostate Cancer Characterization: Combined Use of Radiomics Model with PI-RADS and Clinical Parameters.

In Cancers

Radiomics is an emerging field of image analysis with potential applications in patient risk stratification. This study developed and evaluated machine learning models using quantitative radiomic features extracted from multiparametric magnetic resonance imaging (mpMRI) to detect and classify prostate cancer (PCa). In total, 191 patients that underwent prostatic mpMRI and combined targeted and systematic fusion biopsy were retrospectively included. Segmentations of the whole prostate glands and index lesions were performed manually in apparent diffusion coefficient (ADC) maps and T2-weighted MRI. Radiomic features were extracted from regions corresponding to the whole prostate gland and index lesion. The best performing combination of feature setup and classifier was selected to compare its predictive ability of the radiologist's evaluation (PI-RADS), mean ADC, prostate specific antigen density (PSAD) and digital rectal examination (DRE) using receiver operating characteristic (ROC) analysis. Models were evaluated using repeated 5-fold cross-validation and a separate independent test cohort. In the test cohort, an ensemble model combining a radiomics model, with models for PI-RADS, PSAD and DRE achieved high predictive AUCs for the differentiation of (i) malignant from benign prostatic lesions (AUC = 0.889) and of (ii) clinically significant (csPCa) from clinically insignificant PCa (cisPCa) (AUC = 0.844). Our combined model was numerically superior to PI-RADS for cancer detection (AUC = 0.779; p = 0.054) as well as for clinical significance prediction (AUC = 0.688; p = 0.209) and showed a significantly better performance compared to mADC for csPCa prediction (AUC = 0.571; p = 0.022). In our study, radiomics accurately characterizes prostatic index lesions and shows performance comparable to radiologists for PCa characterization. Quantitative image data represent a potential biomarker, which, when combined with PI-RADS, PSAD and DRE, predicts csPCa more accurately than mADC. Prognostic machine learning models could assist in csPCa detection and patient selection for MRI-guided biopsy.

Woźnicki Piotr, Westhoff Niklas, Huber Thomas, Riffel Philipp, Froelich Matthias F, Gresser Eva, von Hardenberg Jost, Mühlberg Alexander, Michel Maurice Stephan, Schoenberg Stefan O, Nörenberg Dominik

2020-Jul-02

PI-RADS, PSA, artificial intelligence, machine learning, magnetic resonance imaging, prostatic neoplasm, radiomics

Public Health Public Health

Efficient GAN-based Chest Radiographs (CXR) augmentation to diagnose coronavirus disease pneumonia.

In International journal of medical sciences

Background: As 2019 ended, coronavirus disease began spreading all over the world. It is a highly transmissible disease that can affect the respiratory tract and can lead to organ failure. In 2020 it was declared by the World Health Organization a "Public health emergency of international concern". The current situation of COVID-19 and chest-related diseases has already gone through radical change with the advancement of image processing tools. There is no effective method that can accurately identify all chest-related diseases and tackle multiple-class problems with reliable results. Method: There are many potentially impactful applications of deep learning to fighting COVID-19 from chest X-ray/CT images; however, most are still in their early stages because a lack of data sharing continues to inhibit overall progress in a variety of medical research problems. Based on COVID-19 radiographical changes in CT images, this work aims to detect the possibility of COVID-19 in the patient. This work provides a significant contribution in terms of GAN-based synthetic data and four different types of deep learning-based models, which provided results comparable to the state of the art. Results: A deep neural network model provides a significant contribution in terms of detecting COVID-19 and provides effective analysis of chest-related diseases with respect to age and gender. Our model achieves 89% accuracy using the GAN-based synthetic data and the four deep learning-based models. Conclusion: If the gap in identifying all viral pneumonias is not filled with effective automation of chest disease detection, the healthcare industry may face unfavorable circumstances.

Albahli Saleh

2020

Chest diseases, Coronavirus, Deep learning, Inception-V3, ResNet-152, X-ray

General General

Deep Learning for Massive MIMO Channel State Acquisition and Feedback.

In Journal of the Indian Institute of Science

Massive multiple-input multiple-output (MIMO) systems are a main enabler of the demanding throughput requirements in 5G and future generation wireless networks, as they can serve many users simultaneously with high spectral and energy efficiency. To achieve this, massive MIMO systems require accurate and timely channel state information (CSI), which is acquired by a training process that involves pilot transmission, CSI estimation, and feedback. This training process incurs a training overhead, which scales with the number of antennas, users, and subcarriers. Reducing the training overhead in massive MIMO systems has been a major topic of research since the emergence of the concept. Recently, deep learning (DL)-based approaches have been proposed and shown to provide significant reduction in the CSI acquisition and feedback overhead in massive MIMO systems compared to traditional techniques. In this paper, we present an overview of the state-of-the-art DL architectures and algorithms used for CSI acquisition and feedback, and provide further research directions.

Boloursaz Mashhadi Mahdi, Gündüz Deniz

2020

Channel state information, Deep learning, Massive MIMO

Ophthalmology Ophthalmology

Artificial intelligence method to classify ophthalmic emergency severity based on symptoms: a validation study.

In BMJ open

OBJECTIVES : We investigated the usefulness of machine learning artificial intelligence (AI) in classifying the severity of ophthalmic emergency for timely hospital visits.

STUDY DESIGN : This retrospective study analysed the patients who first visited the Armed Forces Daegu Hospital between May and December 2019. General patient information, events and symptoms were input variables. Events, symptoms, diagnoses and treatments were output variables. The output variables were classified into four classes (red, orange, yellow and green, indicating immediate to no emergency cases). About 200 cases of the class-balanced validation data set were randomly selected before all training procedures. An ensemble AI model using combinations of fully connected neural networks with the synthetic minority oversampling technique algorithm was adopted.

PARTICIPANTS : A total of 1681 patients were included.

MAJOR OUTCOMES : Model performance was evaluated using accuracy, precision, recall and F1 scores.

RESULTS : The accuracy of the model was 99.05%. The precision of each class (red, orange, yellow and green) was 100%, 98.10%, 92.73% and 100%. The recalls of each class were 100%, 100%, 98.08% and 95.33%. The F1 scores of each class were 100%, 99.04%, 95.33% and 96.00%.

CONCLUSIONS : We provided support for an AI method to classify ophthalmic emergency severity based on symptoms.
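
As a rough illustration of the pipeline described in the methods above (SMOTE oversampling feeding an ensemble of fully connected networks), the sketch below uses scikit-learn and imbalanced-learn on synthetic data; the feature count, class balance and network sizes are assumptions, not the study's configuration.

```python
# Hedged sketch: an ensemble of fully connected networks trained on
# SMOTE-balanced data, standing in for the triage classifier described above.
# The synthetic features, labels and layer sizes are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1681, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1,
                           weights=[0.1, 0.2, 0.3, 0.4], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority triage classes before training.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Soft-voting ensemble of differently sized fully connected networks.
ensemble = VotingClassifier(
    estimators=[(f"mlp{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=500, random_state=i))
                for i, h in enumerate([(64,), (64, 32), (128, 64)])],
    voting="soft",
)
ensemble.fit(X_bal, y_bal)
print(classification_report(y_test, ensemble.predict(X_test),
                            target_names=["red", "orange", "yellow", "green"]))
```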

Ahn Hyunmin

2020-Jul-05

accident & emergency medicine, biotechnology & bioinformatics, ophthalmology

Surgery Surgery

Residents Think in the "Now" and Supervisors Think Ahead in the Operating Room. A Survey Study About Task Perception of Residents and Supervising Surgeons.

In Journal of surgical education

OBJECTIVE : Progressive autonomous task performance is the cornerstone of teaching residents in the operating room, where they are entrusted with autonomy when they meet their supervisors' preferences. To optimize the teaching, supervisors need to be aware of how residents experience parts of the procedure. This study provides insight into how supervisors and residents perceive different tasks of a single surgical procedure.

DESIGN : In this qualitative survey study a cognitive task analysis (CTA) of supervisors and residents for the 47 tasks of an uncemented total hip arthroplasty was executed. Both groups rated the level of attention they would assign to each task and were asked to explain attention scores of 4 or 5.

SETTING : University Medical Centre Groningen (the Netherlands) and its 5 affiliated teaching hospitals.

PARTICIPANTS : Seventeen supervising surgeons and 21 residents.

RESULTS : Normal attention (median attention score 3) was assigned by supervisors to 34 tasks (72.3%) and by residents to 35 tasks (74.5 %). Supervisors rated 12 tasks (25.6%) and residents 9 tasks (19.1%) with a median attention score of 4. In general, supervisors associated high attention with patient outcome and prevention of complications, while residents associated high attention with "effort."

CONCLUSIONS : Supervisors and residents assigned attention to tasks for different reasons. Supervisors think ahead and emphasize patient outcome and prevention of complications when they indicate high attention, while residents think in the "now" and raise attention to execute the tasks themselves. The results of this study allow residents and supervisors to anticipate preferences: residents are able to appreciate why supervisors increase attention to specific tasks, and supervisors obtain information on which tasks require individual guidance of residents. This information can contribute to improve the learning climate in the operating room and task-specific procedural training.

Nieboer Patrick, Cnossen Fokie, Stevens Martin, Huiskes Mike, Bulstra Sjoerd K, Jaarsma Debbie Adc

2020-Jul-02

Faculty development, Intraprocedural variation, Surgical education, Workplace-based learning and teaching

Cardiology Cardiology

In-hospital Prognostic Value of Electrocardiographic Parameters Except ST-Segment Changes in Acute Myocardial Infarction: Literature Review and Future Perspectives.

In Heart, lung & circulation

Electrocardiography (ECG) remains an irreplaceable tool in the management of patients with myocardial infarction, with evaluation of the QRS and ST segment being the present major focus. Several ECG parameters have already been proposed to have prognostic value with regard to both in-hospital and long-term follow-up of patients. In this review, we discuss various ECG parameters other than ST segment changes, particularly with regard to their in-hospital prognostic importance. Our review not only evaluates the prognostic segments and parts of the ECG, but also highlights the need for an integrative, big-data approach to re-assess the parameters reported to predict in-hospital prognosis. The evolving importance of artificial intelligence in the evaluation of ECG, particularly with regard to predicting prognosis, and its potential integration with other patient characteristics to predict prognosis, are discussed.

Hayıroğlu Mert İlker, Lakhani Ishan, Tse Gary, Çınar Tufan, Çinier Göksel, Tekkeşin Ahmet İlker

2020-Jun-11

Electrocardiography, In-hospital mortality, P wave, QRS morphology, QT interval, T wave

Pathology Pathology

Instance Segmentation for Whole Slide Imaging: End-to-End or Detect-Then-Segment

ArXiv Preprint

Automatic instance segmentation of glomeruli within kidney Whole Slide Imaging (WSI) is essential for clinical research in renal pathology. In computer vision, the end-to-end instance segmentation methods (e.g., Mask-RCNN) have shown their advantages relative to detect-then-segment approaches by performing complementary detection and segmentation tasks simultaneously. As a result, the end-to-end Mask-RCNN approach has been the de facto standard method in recent glomerular segmentation studies, where downsampling and patch-based techniques are used to properly evaluate the high resolution images from WSI (e.g., >10,000x10,000 pixels on 40x). However, in high resolution WSI, a single glomerulus itself can be more than 1,000x1,000 pixels in original resolution, which yields significant information loss when the corresponding feature maps are downsampled via the Mask-RCNN pipeline. In this paper, we assess if the end-to-end instance segmentation framework is optimal for high-resolution WSI objects by comparing Mask-RCNN with our proposed detect-then-segment framework. Beyond such a comparison, we also comprehensively evaluate the performance of our detect-then-segment pipeline through: 1) two of the most prevalent segmentation backbones (U-Net and DeepLab_v3); 2) six different image resolutions (from 512x512 to 28x28); and 3) two different color spaces (RGB and LAB). Our detect-then-segment pipeline, with the DeepLab_v3 segmentation framework operating on previously detected glomeruli of 512x512 resolution, achieved a 0.953 dice similarity coefficient (DSC), compared with a 0.902 DSC from the end-to-end Mask-RCNN pipeline. Further, we found that neither RGB nor LAB color spaces yield better performance when compared against each other in the context of a detect-then-segment framework. Overall, the detect-then-segment pipeline achieved better segmentation performance than the end-to-end method.

Aadarsh Jha, Haichun Yang, Ruining Deng, Meghan E. Kapp, Agnes B. Fogo, Yuankai Huo

2020-07-07

Surgery Surgery

Baseline Analysis of Patients Presenting for Surgical Review of Anterior Cruciate Ligament Rupture Reveals Heterogeneity in Patient-Reported Outcome Measures.

In The journal of knee surgery

Despite the establishment of successful surgical techniques and rehabilitation protocols for anterior cruciate ligament (ACL) reconstruction, published return to sport rates are less than satisfactory. This has led orthopaedic surgeons and researchers to develop more robust patient selection methods, and investigate prognostic patient characteristics. No previous studies have integrated baseline characteristics and responses to patient-reported outcome measures (PROMs) of patients with ACL rupture presenting for surgical review. Patients electing to undergo ACL reconstruction under the care of a single orthopaedic surgeon at a metropolitan public hospital were enrolled in a clinical quality registry. Patients completed Veterans RAND 12-item Health Survey (VR-12) Physical Component Summary and Mental Component Summary scores, Tegner activity scale, and International Knee Documentation Committee (IKDC) questionnaires at presentation. Total scores were extracted from the electronic registry, and a machine learning approach (k-means) was used to identify subgroups based on similarity of questionnaire responses. The average scores in each cluster were compared using analysis of variance (ANOVA; Kruskal-Wallis) and nominal logistic regression was performed to determine relationships between cluster membership and patient age, gender, body mass index (BMI), and injury-to-examination delay. A sample of 107 patients with primary ACL rupture were extracted, with 97 (91%) available for analysis with complete datasets. Four clusters were identified with distinct patterns of PROMs responses. These ranged from lowest (Cluster 1) to highest scores for VR-12 and IKDC (Cluster 4). In particular, Cluster 4 returned median scores within 6 points of the patient acceptable symptom state for the IKDC score for ACL reconstruction (70.1, interquartile range: 59-78). Significant (p < 0.05) differences in PROMs between clusters were observed using ANOVA, with variance explained ranging from 40 to 69%. However, cluster membership was not significantly associated with patient age, gender, BMI, or injury-to-examination delay. Patients electing to undergo ACL reconstruction do not conform to a homogenous group but represent a spectrum of knee function, general physical and mental health, and preinjury activity levels, which may not lend itself to uniform treatment and rehabilitation protocols. The factors driving these distinct responses to PROMs remain unknown but are unrelated to common demographic variables.
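
The clustering step described above (k-means on standardized PROM totals followed by Kruskal-Wallis comparisons) could look roughly like the following sketch; the column names and synthetic values are illustrative placeholders rather than registry data.

```python
# Hedged sketch of the clustering analysis: k-means on standardized PROM totals
# (VR-12 PCS/MCS, Tegner, IKDC), then a Kruskal-Wallis test per score across clusters.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
proms = pd.DataFrame({
    "vr12_pcs": rng.normal(40, 10, 97),   # synthetic stand-ins for 97 complete cases
    "vr12_mcs": rng.normal(50, 10, 97),
    "tegner":   rng.integers(2, 10, 97),
    "ikdc":     rng.normal(55, 15, 97),
})

z = StandardScaler().fit_transform(proms)
proms["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

# Compare each PROM across the four clusters (Kruskal-Wallis, as in the study).
for col in ["vr12_pcs", "vr12_mcs", "tegner", "ikdc"]:
    groups = [g[col].values for _, g in proms.groupby("cluster")]
    stat, p = kruskal(*groups)
    print(f"{col}: H={stat:.2f}, p={p:.3f}")
```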

Ting Chee Han, Scholes Corey, Zbrojkiewicz David, Bell Christopher

2020-Jul-06

Radiology Radiology

Hematopoietic stem-cell senescence and myocardial repair - Coronary artery disease genotype/phenotype analysis of post-MI myocardial regeneration response induced by CABG/CD133+ bone marrow hematopoietic stem cell treatment in RCT PERFECT Phase 3.

In EBioMedicine

BACKGROUND : Bone marrow stem cell clonal dysfunction by somatic mutation is suspected to affect post-infarction myocardial regeneration after coronary bypass surgery (CABG).

METHODS : Transcriptome and variant expression analysis was studied in the phase 3 PERFECT trial of post-myocardial infarction CABG and CD133+ bone marrow-derived hematopoietic stem cells, comparing left ventricular ejection fraction (∆LVEF) myocardial regeneration Responders (n=14; ∆LVEF +16% day 180/0) and Non-responders (n=9; ∆LVEF -1.1% day 180/0). Subsequently, the findings were validated in an independent patient cohort (n=14) as well as in two preclinical mouse models investigating SH2B3/LNK antisense or knockout deficient conditions.

FINDINGS : 1. Clinical: R differed from NR in a total of 161 genes in differential expression (n=23, q<0.05) and 872 genes in coexpression analysis (n=23, q<0.05). Machine learning clustering analysis revealed distinct RvsNR preoperative gene-expression signatures in peripheral blood correlated to SH2B3 (p<0.05). Mutation analysis revealed increased specific variants in RvsNR (R: 48 genes; NR: 224 genes). 2. Preclinical: SH2B3/LNK-silenced hematopoietic stem cell (HSC) clones displayed significant overgrowth of myeloid and immune cells in bone marrow, peripheral blood, and tissue at day 160 after competitive bone-marrow transplantation into mice. SH2B3/LNK-/- mice demonstrated enhanced cardiac repair through augmenting the kinetics of bone marrow-derived endothelial progenitor cells, increased capillary density in ischemic myocardium, and reduced left ventricular fibrosis with preserved cardiac function.

VALIDATION : Evaluation analysis in 14 additional patients revealed 85% RvsNR (12/14 patients) prediction accuracy for the identified biomarker signature.

INTERPRETATION : Myocardial repair is affected by HSC gene response and somatic mutation. Machine Learning can be utilized to identify and predict pathological HSC response.

FUNDING : German Ministry of Research and Education (BMBF): Reference and Translation Center for Cardiac Stem Cell Therapy - FKZ0312138A and FKZ031L0106C, German Ministry of Research and Education (BMBF): Collaborative research center - DFG:SFB738 and Center of Excellence - DFG:EC-REBIRTH), European Social Fonds: ESF/IV-WM-B34-0011/08, ESF/IV-WM-B34-0030/10, and Miltenyi Biotec GmbH, Bergisch-Gladbach, Germany. Japanese Ministry of Health : Health and Labour Sciences Research Grant (H14-trans-001, H17-trans-002) TRIAL REGISTRATION: ClinicalTrials.gov NCT00950274.

Wolfien Markus, Klatt Denise, Salybekov Amankeldi A, Ii Masaaki, Komatsu-Horii Miki, Gaebel Ralf, Philippou-Massier Julia, Schrinner Eric, Akimaru Hiroshi, Akimaru Erika, David Robert, Garbade Jens, Gummert Jan, Haverich Axel, Hennig Holger, Iwasaki Hiroto, Kaminski Alexander, Kawamoto Atsuhiko, Klopsch Christian, Kowallick Johannes T, Krebs Stefan, Nesteruk Julia, Reichenspurner Hermann, Ritter Christian, Stamm Christof, Tani-Yokoyama Ayumi, Blum Helmut, Wolkenhauer Olaf, Schambach Axel, Asahara Takayuki, Steinhoff Gustav

2020-Jul-03

Angiogenesis induction, CABG, CHIP, Cardiac stem cell therapy, Clonal hematopoiesis of indeterminate pathology, Coronary bypass surgery, Machine learning, Myocardial regeneration, Post myocardial infarction heart failure, SH2B3

Radiology Radiology

Prognostic nomogram in patients with metastatic adenoid cystic carcinoma of the salivary glands.

In European journal of cancer (Oxford, England : 1990)

BACKGROUND : Distant metastases in adenoid cystic carcinoma (ACC) are common. There is no consensus on the management of metastatic disease because no therapeutic approach has demonstrated improvement in overall survival (OS) and because of prolonged life expectancy. The aim of this study is to build and validate a prognostic nomogram for metastatic ACC patients.

METHODS : The study end-point was OS, measured from the date of first metastatic presentation to death/last follow-up. A retrospective analysis including metastatic ACC patients was performed to build the prognostic nomogram at the INT (Milan, Italy). The model was validated on an independent cohort of patients with similar characteristics treated at Leuven (Belgium). Outcome data and covariates were modelled by resorting to a random forest method. This machine-learning approach was used to guide and benchmark the subsequent use of more conventional modelling methods. Cox model performance was assessed in terms of discrimination (Harrell's c-index).

RESULTS : Two hundred ninety-eight patients with metastatic ACC (testing set 259 INT, validation set 39 Leuven) were studied. Akaike Information Criterion-based backward selection yielded a 5-factor model showing a bias-corrected c-index of 0.730. Five independent prognostic factors were found: gender, disease-free interval and presence of lung, liver or bone metastases. Nomogram discrimination in the validation series was c = 0.701.

CONCLUSION : This retrospective analysis allowed the building of an externally validated prognostic nomogram. This tool might help clinicians to discriminate patients requiring prompt management from those who can benefit from 'watchful waiting'. In addition, the nomogram might be useful to stratify patients in clinical trials.
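
A minimal sketch of the modelling step reported above, assuming a Cox proportional hazards fit (here via the lifelines package) on the five listed prognostic factors and Harrell's c-index for discrimination; the data frame is synthetic and only mimics the variable types used in the study.

```python
# Hedged sketch: Cox model on gender, disease-free interval and lung/liver/bone
# metastasis indicators, evaluated by Harrell's c-index. Data are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 298
df = pd.DataFrame({
    "os_months":  rng.exponential(36, n),   # time from first metastasis to death/censoring
    "death":      rng.integers(0, 2, n),    # event indicator
    "male":       rng.integers(0, 2, n),
    "dfi_months": rng.exponential(24, n),   # disease-free interval
    "lung_met":   rng.integers(0, 2, n),
    "liver_met":  rng.integers(0, 2, n),
    "bone_met":   rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()
print("Harrell's c-index:", cph.concordance_index_)
```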

Cavalieri Stefano, Mariani Luigi, Vander Poorten Vincent, Van Breda Laure, Cau Maria C, Lo Vullo Salvatore, Alfieri Salvatore, Resteghini Carlo, Bergamini Cristiana, Orlandi Ester, Calareso Giuseppina, Clement Paul, Hauben Esther, Platini Francesca, Bossi Paolo, Licitra Lisa, Locati Laura D

2020-Jul-03

Adenoid cystic carcinoma, Nomogram, Prognosis, Salivary gland cancer

General General

A machine learning approach to select features important to stroke prognosis.

In Computational biology and chemistry

Ischemic stroke is a common neurological disorder, and is still the principal cause of serious long-term disability in the world. Selection of features related to stroke prognosis is highly valuable for effective intervention and treatment. In this study, an integrated machine learning approach was used to select features as prognosis factors of stroke on The International Stroke Trial (IST) dataset. We considered the common problems of feature selection and prediction in medical datasets. Firstly, the importance of features was ranked by the Shapiro-Wilk algorithm and the Pearson correlations between features were analyzed. Then, we used Recursive Feature Elimination with Cross-Validation (RFECV), which incorporated linear SVC, Random-Forest-Classifier, Extra-Trees-Classifier, AdaBoost-Classifier, and Multinomial-Naïve-Bayes-Classifier as estimators, respectively, to select robust features. Furthermore, the importance of the selected features was determined by the Random-Forest-Classifier and the Shapiro-Wilk algorithm. Finally, twenty-three selected features were used by SVC, MLP, Random-Forest, and AdaBoost-Classifier to predict RVISINF (infarct visible on CT) of acute stroke on the IST dataset. The results suggest that the selected features can be used to infer the long-term prognosis of acute stroke with high accuracy and can also be used to extract factors related to RVISINF, which is associated with large artery occlusion (LAO) in ischemic stroke patients.
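
A hedged sketch of the RFECV stage described above, keeping only features retained by every estimator; the Multinomial Naive Bayes estimator is omitted here for simplicity, and a synthetic binary-outcome data set stands in for the IST columns.

```python
# Hedged sketch: RFECV wrapped around several estimators, intersecting the
# selected feature masks to obtain a "robust" feature subset.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=40, n_informative=23, random_state=0)

estimators = {
    "LinearSVC": LinearSVC(dual=False, max_iter=5000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

# Keep a feature only if it survives recursive elimination for every estimator.
selected = None
for name, est in estimators.items():
    rfecv = RFECV(estimator=est, step=1, cv=5, scoring="accuracy").fit(X, y)
    mask = rfecv.support_
    selected = mask if selected is None else (selected & mask)
    print(f"{name}: {mask.sum()} features retained")
print("Robust features (selected by all estimators):", selected.sum())
```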

Fang Gang, Liu Wenbin, Wang Lixin

2020-Jun-23

Feature Selection, IST, Ischemic stroke, Machine learning

General General

The role of chronobiology in drug-resistance epilepsy: The potential use of a variability and chronotherapy-based individualized platform for improving the response to anti-seizure drugs.

In Seizure

Despite progress in the development of anti-seizure drugs, drug-resistant epilepsy (DRE) occurs in a third of patients. DRE is associated with poor quality of life and increased risk of sudden, unexplained death. The autonomic nervous system and chronobiology play a role in DRE. In the present paper, we provide a narrative review of the mechanisms that underlie DRE and characterize some of the autonomic- and chronotherapy-associated parameters that contribute to the degree of response to therapy. Variability describes the functions of many biological systems, which are dynamic and continuously change over time. These systems are required for responses to continuing internal and external triggers, in order to maintain homeostasis and normal function. Both intra- and inter-subject variability in biological systems have been described. We present a platform, which comprises a personalized machine learning closed-loop algorithm built on epilepsy-related signatures, autonomic signals, and chronotherapy, as a means for overcoming DRE, improving the response, and reducing the toxicity of current therapies.

Potruch Assaf, Khoury Salim T, Ilan Yaron

2020-Jul-02

Anti-seizure, Autonomic nervous system, Drug resistant epilepsy, Variability

General General

The application of novel connected vehicles emulated data on real-time crash potential prediction for arterials.

In Accident; analysis and prevention

Real-time crash potential prediction could provide valuable information for Active Traffic Management Systems. Fixed infrastructure-based vehicle detection devices were widely used in previous studies to obtain different types of data for crash potential prediction. However, it was difficult to obtain data over a large range through these devices due to the costs of installation and maintenance. This paper introduces novel connected vehicle (CV) emulated data for real-time crash potential prediction. Different from the fixed devices' data, CV emulated data have high flexibility and can be obtained continuously at relatively low cost. Crash and CV emulated data were collected from two urban arterials in Orlando, USA. Crash data were archived by the Signal for Analytics system (S4A), while the CV emulated data were obtained through the data collection API at a high frequency. Different data cleaning and preparation techniques were implemented, and various speed-related variables were generated from the CV emulated data. A Long Short-Term Memory (LSTM) neural network was trained to predict the crash potential in the next 5-10 min. The results from the model illustrate the feasibility of using novel CV emulated data to predict real-time crash potential. The average and 50th percentile speed were the two most important variables for crash potential prediction. In addition, the proposed LSTM outperformed Bayesian logistic regression and XGBoost in terms of sensitivity, Area Under the Curve (AUC), and false alarm rate. With the rapid development of connected vehicle systems, the results from this paper can be extended to other types of vehicles and data, which can significantly enhance traffic safety.
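
A minimal sketch of the prediction model, assuming short sequences of aggregated speed features and a binary crash-potential label; the sequence length, feature count and layer sizes are illustrative, not the paper's configuration.

```python
# Hedged sketch: an LSTM over short windows of speed-derived features predicting
# crash potential for the next 5-10 minutes. Data below are random placeholders.
import numpy as np
import tensorflow as tf

n_samples, timesteps, n_features = 5000, 12, 8   # e.g. 12 aggregation windows per sample
X = np.random.rand(n_samples, timesteps, n_features).astype("float32")
y = (np.random.rand(n_samples) < 0.05).astype("float32")  # rare crash-prone windows

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# Class weights counteract the rarity of crash-prone windows.
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2,
          class_weight={0: 1.0, 1: 20.0}, verbose=2)
```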

Li Pei, Abdel-Aty Mohamed, Cai Qing, Yuan Cheng

2020-Jul-03

Connected vehicle emulated data, Deep learning, Real-time crash potential prediction, Urban arterials

General General

Quantifying drug-induced structural toxicity in hepatocytes and cardiomyocytes derived from hiPSCs using a deep learning method.

In Journal of pharmacological and toxicological methods

Cardiac and hepatic toxicity result from induced disruption of the functioning of cardiomyocytes and hepatocytes, respectively, which is tightly related to the organization of their subcellular structures. Cellular structure can be analyzed from microscopy imaging data. However, subtle or complex structural changes that are not easily perceived may be missed by conventional image-analysis techniques. Here we report the evaluation of PhenoTox, an image-based deep-learning method of quantifying drug-induced structural changes using human hepatocytes and cardiomyocytes derived from human induced pluripotent stem cells. We assessed the ability of the deep learning method to detect variations in the organization of cellular structures from images of fixed or live cells. We also evaluated the power and sensitivity of the method for detecting toxic effects of drugs by conducting a set of experiments using known toxicants and other methods of screening for cytotoxic effects. Moreover, we used PhenoTox to characterize the effects of tamoxifen and doxorubicin-which cause liver toxicity-on hepatocytes. PhenoTox revealed differences related to loss of cytochrome P450 3A4 activity, for which it showed greater sensitivity than a caspase 3/7 assay. Finally, PhenoTox detected structural toxicity in cardiomyocytes, which was correlated with contractility defects induced by doxorubicin, erlotinib, and sorafenib. Taken together, the results demonstrated that PhenoTox can capture the subtle morphological changes that are early signs of toxicity in both hepatocytes and cardiomyocytes.

Maddah Mahnaz, Mandegar Mohammad A, Dame Keri, Grafton Francis, Loewke Kevin, Ribeiro Alexandre J S

2020-Jul-03

Artificial intelligence, Cardiotoxicity, Deep learning, Drug safety, Hepatotoxicity, High-content microscopy, Human iPSC, In vitro, Structural toxicity

General General

Prediction of hERG potassium channel blockage using ensemble learning methods and molecular fingerprints.

In Toxicology letters ; h5-index 49.0

The human ether-a-go-go-related gene (hERG) encodes a tetrameric potassium channel called Kv11.1. This channel can be blocked by certain drugs, which leads to long QT syndrome, causing cardiotoxicity. This is a significant problem during drug development. Using computer models to predict compound cardiotoxicity during the early stages of drug design will help to solve this problem. In this study, we used a dataset of 1,865 compounds exhibiting known hERG inhibitory activities as a training set. Thirty cardiotoxicity classification models were established using three machine learning algorithms based on molecular fingerprints and molecular descriptors. Using these models as base classifiers, a new cardiotoxicity classification model with better predictive performance was developed using an ensemble learning method. The accuracy of the best base classifier, which was generated using the XGBoost method with molecular descriptors, was 84.8%, and the area under the receiver-operating characteristic curve (AUC) was 0.876 in five-fold cross-validation. However, all of the ensemble models that we developed had higher predictive performance than the base classifiers in five-fold cross-validation. The best predictive performance was achieved by the Ensemble-Top7 model, with an accuracy of 84.9% and an AUC of 0.887. We also tested the ensemble model using external validation data and achieved an accuracy of 85.0% and an AUC of 0.786. Furthermore, we identified several hERG-related substructures, which provide valuable information for designing drug candidates.
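
The sketch below illustrates one plausible base classifier of the kind described above: Morgan fingerprints computed with RDKit and an XGBoost model scored by five-fold cross-validated AUC. The SMILES strings and labels are dummies, not the 1,865-compound training set.

```python
# Hedged sketch of a single base classifier: molecular fingerprints + XGBoost.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

smiles = ["CCO", "c1ccccc1", "CCN(CC)CC", "CC(=O)Oc1ccccc1C(=O)O"] * 50
labels = np.tile([0, 1, 0, 1], 50)   # 1 = hERG blocker, 0 = non-blocker (dummy labels)

def morgan_bits(smi, n_bits=2048):
    """Radius-2 Morgan fingerprint as a dense bit vector."""
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

X = np.vstack([morgan_bits(s) for s in smiles])
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="auc")
print("5-fold AUC:", cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean())
```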

Liu Miao, Zhang Li, Li Shimeng, Yang Tianzhou, Liu Lili, Zhao Jian, Liu Hongsheng

2020-Jul-03

Ensemble model, Machine learning, Molecular descriptor, Molecular fingerprint, hERG

General General

Storing, combining and analysing turkey experimental data in the Big Data era.

In Animal : an international journal of animal bioscience

With the increasing availability of large amounts of data in the livestock domain, we face the challenge to store, combine and analyse these data efficiently. With this study, we explored the use of a data lake for storing and analysing data to improve scalability and interoperability. Data originated from a 2-day animal experiment in which the gait score of approximately 200 turkeys was determined through visual inspection by an expert. Additionally, inertial measurement units (IMUs), a 3D-video camera and a force plate (FP) were installed to explore the effectiveness of these sensors in automating the visual gait scoring. We deployed a data lake using the IMU and FP data of a single day of that animal experiment. This encompassed data from 84 turkeys, which we preprocessed by performing an 'extract, transform and load' (ETL) procedure. To test scalability of the ETL procedure, we simulated increasing volumes of the available data from this animal experiment and computed the 'wall time' (elapsed real time) for converting FP data into comma-separated files and storing these files. With a simulated data set of 30 000 turkeys, the wall time reduced from 1 h to less than 15 min when 12 cores were used compared to 1 core. This demonstrated the ETL procedure to be scalable. Subsequently, a machine learning (ML) pipeline was developed to test the potential of a data lake to automatically distinguish between two classes, that is, very bad gait scores versus other scores. In conclusion, we have set up a dedicated customized data lake, loaded data and developed a prediction model via the creation of an ML pipeline. A data lake appears to be a useful tool to face the challenge of storing, combining and analysing increasing volumes of data of varying nature in an effective manner.
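
A rough sketch of the scalability test, assuming raw force-plate files are converted to CSV by a parallel ETL step whose wall time is measured as the worker count grows; the paths and parsing details are hypothetical placeholders.

```python
# Hedged sketch: parallel ETL of raw force-plate recordings with wall-time measurement.
import time
from multiprocessing import Pool
from pathlib import Path

import pandas as pd

def etl_one_file(path: Path) -> Path:
    """Extract-transform-load a single raw force-plate recording to CSV."""
    df = pd.read_csv(path, sep=";")              # extract (assumed raw delimiter)
    df = df.dropna().rename(columns=str.lower)   # transform
    out = path.with_suffix(".csv")
    df.to_csv(out, index=False)                  # load into the data lake zone
    return out

if __name__ == "__main__":
    raw_files = sorted(Path("datalake/raw/forceplate").glob("*.txt"))  # hypothetical layout
    for workers in (1, 4, 12):
        start = time.perf_counter()
        with Pool(processes=workers) as pool:
            pool.map(etl_one_file, raw_files)
        print(f"{workers:2d} cores: wall time {time.perf_counter() - start:.1f} s")
```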

Schokker D, Athanasiadis I N, Visser B, Veerkamp R F, Kamphuis C

2020-Jun-22

data lake, extract, transform and load, machine learning, scalability, sensors

Pathology Pathology

Divide-and-Rule: Self-Supervised Learning for Survival Analysis in Colorectal Cancer

ArXiv Preprint

With the long-term rapid increase in incidences of colorectal cancer (CRC), there is an urgent clinical need to improve risk stratification. The conventional pathology report is usually limited to only a few histopathological features. However, most of the tumor microenvironments used to describe patterns of aggressive tumor behavior are ignored. In this work, we aim to learn histopathological patterns within cancerous tissue regions that can be used to improve prognostic stratification for colorectal cancer. To do so, we propose a self-supervised learning method that jointly learns a representation of tissue regions as well as a metric of the clustering to obtain their underlying patterns. These histopathological patterns are then used to represent the interaction between complex tissues and predict clinical outcomes directly. We furthermore show that the proposed approach can benefit from linear predictors to avoid overfitting in patient outcomes predictions. To this end, we introduce a new well-characterized clinicopathological dataset, including a retrospective collective of 374 patients, with their survival time and treatment information. Histomorphological clusters obtained by our method are evaluated by training survival models. The experimental results demonstrate statistically significant patient stratification, and our approach outperformed the state-of-the-art deep clustering methods.

Christian Abbet, Inti Zlobec, Behzad Bozorgtabar, Jean-Philippe Thiran

2020-07-07

Cardiology Cardiology

Artificial Intelligence and Machine Learning in Arrhythmias and Cardiac Electrophysiology.

In Circulation. Arrhythmia and electrophysiology

Artificial intelligence (AI) and machine learning (ML) in medicine are currently areas of intense exploration, showing potential to automate human tasks and even perform tasks beyond human capabilities. Literacy and understanding of AI/ML methods are becoming increasingly important to researchers and clinicians. The first objective of this review is to provide the novice reader with literacy of AI/ML methods and provide a foundation for how one might conduct an ML study. We provide a technical overview of some of the most commonly used terms, techniques, and challenges in AI/ML studies, with reference to recent studies in cardiac electrophysiology to illustrate key points. The second objective of this review is to use examples from recent literature to discuss how AI and ML are changing clinical practice and research in cardiac electrophysiology, with emphasis on disease detection and diagnosis, prediction of patient outcomes, and novel characterization of disease. The final objective is to highlight important considerations and challenges for appropriate validation, adoption, and deployment of AI technologies into clinical practice.

Feeny Albert K, Chung Mina K, Madabhushi Anant, Attia Zachi I, Cikes Maja, Firouznia Marjan, Friedman Paul A, Kalscheur Matthew M, Kapa Suraj, Narayan Sanjiv M, Noseworthy Peter A, Passman Rod S, Perez Marco V, Peters Nicholas S, Piccini Jonathan P, Tarakji Khaldoun G, Thomas Suma A, Trayanova Natalia A, Turakhia Mintu P, Wang Paul J

2020-Jul-06

artificial intelligence, machine learning

General General

Ultraflexible and Mechanically Strong Double-Layered Aramid Nanofiber-Ti3C2Tx MXene/Silver Nanowire Nanocomposite Papers for High-Performance Electromagnetic Interference Shielding.

In ACS nano ; h5-index 203.0

High-performance electromagnetic interference (EMI) shielding materials with ultraflexibility, outstanding mechanical properties and superior EMI shielding performances are highly desirable for modern integrated electronic and telecommunication systems in areas such as aerospace, military, artificial intelligence, smart and wearable electronics. Herein, ultraflexible and mechanically strong aramid nanofiber-Ti3C2Tx MXene/Ag nanowire (ANF-MXene/AgNW) nanocomposite papers with double-layered structures are fabricated via the facile two-step vacuum assisted filtration (TVAF) followed by hot-pressing approach. The resultant double-layered nanocomposite papers with a low MXene/AgNW content of 20 wt% exhibit excellent electrical conductivity of 922.0 S·cm-1, outstanding mechanical properties with tensile strength of 235.9 MPa and fracture strain of 24.8%, superior EMI shielding effectiveness (EMI SE) of 48.1 dB and high EMI SE/t of 10688.9 dB·cm-1, benefiting from the highly efficient double-layered structures, high-performance ANF substrate and extensive hydrogen bonding interactions. Particularly, the nanocomposite papers show the maximum electrical conductivity of 3725.6 S·cm-1 and EMI shielding effectiveness (EMI SE) of ~80 dB at the MXene/AgNW content of 80 wt% with an absorption-dominant shielding mechanism owing to the massive ohmic losses in highly conductive MXene/AgNW layer, multiple internal reflections between Ti3C2Tx MXene nanosheets and polarization relaxation of localized defects and abundant terminal groups. Compared with the homogeneously-blended ones, the double-layered nanocomposite papers possess greater advantages in electrical, mechanical and EMI shielding performances. Moreover, the multifunctional double-layered nanocomposite papers exhibit excellent thermal management performances such as high Joule heating temperature at low supplied voltages, rapid response time, sufficient heating stability and reliability. The results indicate that the double-layered nanocomposite papers have excellent potential for high-performance EMI shielding and thermal management applications in aerospace, military and artificial intelligence, smart and wearable electronics.

Ma Zhonglei, Kang Songlei, Ma Jianzhong, Shao Liang, Zhang Yali, Liu Chao, Wei Ajing, Xiang Xiaolian, Wei Linfeng, Gu Junwei

2020-Jul-06

General General

KRAS, NRAS, and BRAF mutation prevalence, clinicopathological association, and their application in a predictive model in Mexican patients with metastatic colorectal cancer: A retrospective cohort study.

In PloS one ; h5-index 176.0

Mutations in KRAS, NRAS, and BRAF (RAS/BRAF) genes are the main predictive biomarkers for the response to anti-EGFR monoclonal antibodies (MAbs) targeted therapy in metastatic colorectal cancer (mCRC). This retrospective study aimed to report the mutational status prevalence of these genes, explore their possible associations with clinicopathological features, and build and validate a predictive model. To achieve these objectives, 500 mCRC Mexican patients were screened for clinically relevant mutations in RAS/BRAF genes. Fifty-two percent of these specimens harbored clinically relevant mutations in at least one screened gene. Among these, 86% had a mutation in KRAS, 7% in NRAS, 6% in BRAF, and 2% in both NRAS and BRAF. Only tumor location in the proximal colon exhibited a significant correlation with KRAS and BRAF mutational status (p-value = 0.0414 and 0.0065, respectively). Further t-SNE analyses were applied to 191 specimens to reveal patterns among patients based on clinical parameters and KRAS mutational status. Then, directed by the results from classical statistical tests and t-SNE analysis, neural network models utilized entity embeddings to learn patterns and build predictive models using a minimal number of trainable parameters. This study could be the first step toward predicting RAS/BRAF mutational status from tumoral features and could lead the way to a more detailed and more diverse dataset that could benefit from machine learning methods.

Sanchez-Ibarra Hector Eduardo, Jiang Xianli, Gallegos-Gonzalez Elena Yareli, Cavazos-González Adriana Carolina, Chen Yenho, Morcos Faruck, Barrera-Saldaña Hugo Alberto

2020

General General

Aptamer based proteomic pilot study reveals a urine signature indicative of pediatric urinary tract infections.

In PloS one ; h5-index 176.0

OBJECTIVE : Current urinary tract infection (UTI) diagnostic strategies that rely on leukocyte esterase have limited accuracy. We performed an aptamer-based proteomics pilot study to identify urine protein levels that could differentiate a culture proven UTI from culture negative samples, regardless of pyuria status.

METHODS : We analyzed urine from 16 children with UTIs, 8 children with culture negative pyuria and 8 children with negative urine culture and no pyuria. The urine levels of 1,310 proteins were quantified using the Somascan™ platform and normalized to urine creatinine. Machine learning with support vector machine (SVM)-based feature selection was performed to determine the combination of urine biomarkers that optimized diagnostic accuracy.

RESULTS : Eight candidate urine protein biomarkers met the filtering criteria: B-cell lymphoma protein, C-X-C motif chemokine 6, C-X-C motif chemokine 13, cathepsin S, heat shock 70 kDa protein 1A, mitogen-activated protein kinase, protein E7 HPV18 and transgelin. AUCs ranged from 0.91 to 0.95. The best prediction was achieved by the SVMs with a radial basis function kernel.

CONCLUSIONS : A biomarker panel can be identified by the emerging technologies of aptamer-based proteomics and machine learning, which offer the potential to increase UTI diagnostic accuracy and thereby limit unneeded antibiotics.
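
A hedged sketch of the analysis described in the methods above: a linear-SVM feature ranking to pick a small protein panel, then an RBF-kernel SVM evaluated by cross-validated AUC. The protein matrix is synthetic and the panel size is an assumption.

```python
# Hedged sketch: SVM-based feature selection followed by an RBF-kernel SVM.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(0)
X = rng.lognormal(size=(32, 1310))       # 32 children x 1,310 urine proteins (synthetic)
y = np.array([1] * 16 + [0] * 16)        # 1 = culture-proven UTI

# Rank proteins with a linear SVM and keep a small candidate panel.
rfe = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=8, step=0.2).fit(X, y)
X_panel = X[:, rfe.support_]

# RBF-kernel SVM on the selected panel, evaluated by cross-validated AUC.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
proba = cross_val_predict(model, X_panel, y, cv=4, method="predict_proba")[:, 1]
print("Cross-validated AUC:", round(roc_auc_score(y, proba), 3))
```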

Dong Liang, Watson Joshua, Cao Sha, Arregui Samuel, Saxena Vijay, Ketz John, Awol Abduselam K, Cohen Daniel M, Caterino Jeffrey M, Hains David S, Schwaderer Andrew L

2020

Ophthalmology Ophthalmology

A deep learning approach to predict visual field using optical coherence tomography.

In PloS one ; h5-index 176.0

We developed a deep learning architecture based on Inception V3 to predict visual field using optical coherence tomography (OCT) imaging and evaluated its performance. Two OCT images, macular ganglion cell-inner plexiform layer (mGCIPL) and peripapillary retinal nerve fibre layer (pRNFL) thicknesses, were acquired and combined. A convolutional neural network architecture was constructed to predict visual field using this combined OCT image. The root mean square error (RMSE) between the actual and predicted visual fields was calculated to evaluate the performance. Globally (the entire visual field area), the RMSE for all patients was 4.79 ± 2.56 dB, with 3.27 dB and 5.27 dB for the normal and glaucoma groups, respectively. The RMSE of the macular region (4.40 dB) was higher than that of the peripheral region (4.29 dB) for all subjects. In normal subjects, the RMSE of the macular region (2.45 dB) was significantly lower than that of the peripheral region (3.11 dB), whereas in glaucoma subjects, the RMSE was higher (5.62 dB versus 5.03 dB, respectively). The deep learning method effectively predicted the visual field 24-2 using the combined OCT image. This method may help clinicians determine visual fields, particularly for patients who are unable to undergo a physical visual field exam.
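
As a rough sketch of the regression setup, the example below maps a two-channel thickness image (mGCIPL and pRNFL) to the 52 thresholds of a 24-2 visual field and tracks RMSE; the input size, network depth and data are assumptions (the study uses an Inception-V3-based architecture).

```python
# Hedged sketch: a small CNN regressing visual-field thresholds from combined OCT input.
import numpy as np
import tensorflow as tf

X = np.random.rand(200, 128, 128, 2).astype("float32")          # channels: mGCIPL, pRNFL
y = np.random.uniform(0, 35, size=(200, 52)).astype("float32")  # 24-2 thresholds in dB

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 2)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(52),                                   # one output per test point
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(name="rmse")])
model.fit(X, y, epochs=3, batch_size=16, validation_split=0.2, verbose=2)
```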

Park Keunheung, Kim Jinmi, Lee Jiwoong

2020

General General

Accurate Prediction of Coronary Heart Disease for Patients With Hypertension From Electronic Health Records With Big Data and Machine-Learning Methods: Model Development and Performance Evaluation.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Predictions of cardiovascular disease risks based on health records have long attracted broad research interests. Despite extensive efforts, the prediction accuracy has remained unsatisfactory. This raises the question as to whether the data insufficiency, statistical and machine-learning methods, or intrinsic noise have hindered the performance of previous approaches, and how these issues can be alleviated.

OBJECTIVE : Based on a large population of patients with hypertension in Shenzhen, China, we aimed to establish a high-precision coronary heart disease (CHD) prediction model through big data and machine-learning.

METHODS : Data from a large cohort of 42,676 patients with hypertension, including 20,156 patients with CHD onset, were investigated from electronic health records (EHRs) 1-3 years prior to CHD onset (for CHD-positive cases) or during a disease-free follow-up period of more than 3 years (for CHD-negative cases). The population was divided evenly into independent training and test datasets. Various machine-learning methods were adopted on the training set to achieve high-accuracy prediction models and the results were compared with traditional statistical methods and well-known risk scales. Comparison analyses were performed to investigate the effects of training sample size, factor sets, and modeling approaches on the prediction performance.

RESULTS : An ensemble method, XGBoost, achieved high accuracy in predicting 3-year CHD onset for the independent test dataset with an area under the receiver operating characteristic curve (AUC) value of 0.943. Comparison analysis showed that nonlinear models (K-nearest neighbor AUC 0.908, random forest AUC 0.938) outperform linear models (logistic regression AUC 0.865) on the same datasets, and machine-learning methods significantly surpassed traditional risk scales or fixed models (eg, Framingham cardiovascular disease risk models). Further analyses revealed that using time-dependent features obtained from multiple records, including both statistical variables and changing-trend variables, helped to improve the performance compared to using only static features. Subpopulation analysis showed that the impact of feature design had a more significant effect on model accuracy than the population size. Marginal effect analysis showed that both traditional and EHR factors exhibited highly nonlinear characteristics with respect to the risk scores.

CONCLUSIONS : We demonstrated that accurate risk prediction of CHD from EHRs is possible given a sufficiently large population of training data. Sophisticated machine-learning methods played an important role in tackling the heterogeneity and nonlinear nature of disease prediction. Moreover, accumulated EHR data over multiple time points provided additional features that were valuable for risk prediction. Our study highlights the importance of accumulating big data from EHRs for accurate disease predictions.
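
A minimal sketch of the linear-versus-nonlinear comparison reported above, using synthetic stand-in features; the real cohort, feature engineering and tuning are not reproduced here (the abstract reports logistic regression AUC 0.865 versus XGBoost AUC 0.943 on the actual data).

```python
# Hedged sketch: logistic regression vs XGBoost on a synthetic stand-in for EHR features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=42676, n_features=60, n_informative=25,
                           weights=[0.53], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=2000)),
                    ("XGBoost", XGBClassifier(n_estimators=400, max_depth=5,
                                              learning_rate=0.05, eval_metric="auc"))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```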

Du Zhenzhen, Yang Yujie, Zheng Jing, Li Qi, Lin Denan, Li Ye, Fan Jianping, Cheng Wen, Chen Xie-Hui, Cai Yunpeng

2020-Jul-06

coronary heart disease, electronic health records, hypertension, machine learning, predictive algorithms

Public Health Public Health

Neural Network-Based Clinical Prediction System for Identifying the Clinical Effects of Saffron (Crocus sativus L) Supplement Therapy on Allergic Asthma: Model Evaluation Study.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Asthma is commonly associated with chronic airway inflammation and is the underlying cause of over a million deaths each year. Crocus sativus L, commonly known as saffron, when used in the form of traditional medicines, has demonstrated anti-inflammatory effects which may be beneficial to individuals with asthma.

OBJECTIVE : The objective of this study was to develop a clinical prediction system using an artificial neural network to detect the effects of C sativus L supplements on patients with allergic asthma.

METHODS : A genetic algorithm-modified neural network predictor system was developed to detect the level of effectiveness of C sativus L using features extracted from the clinical, immunologic, hematologic, and demographic information of patients with asthma. The study included data from men (n=40) and women (n=40) with mild or moderate allergic asthma, aged 18 to 65 years. The aim of the model was to estimate and predict the level of effect of C sativus L supplements on each asthma risk factor and to predict the level of alleviation in patients with asthma. A genetic algorithm was used to extract input features for the clinical prediction system to improve its predictive performance. Moreover, an optimization model was developed for the artificial neural network component that classifies the patients with asthma using C sativus L supplement therapy.

RESULTS : The best overall performance of the clinical prediction system was an accuracy greater than 99% for training and testing data. The genetic algorithm-modified neural network predicted the level of effect with high accuracy for anti-heat shock protein (anti-HSP), high sensitivity C-reactive protein (hs-CRP), forced expiratory volume in the first second of expiration (FEV1), forced vital capacity (FVC), the ratio of FEV1/FVC, and forced expiratory flow (FEF25%-75%) for testing data (anti-HSP: 96.5%; hs-CRP: 98.9%; FEV1: 98.1%; FVC: 97.5%; FEV1/FVC ratio: 97%; and FEF25%-75%: 96.7%, respectively).

CONCLUSIONS : The clinical prediction system developed in this study was effective in predicting the effect of C sativus L supplements on patients with allergic asthma. This clinical prediction system may help clinicians to identify early on which clinical factors in asthma will improve over the course of treatment and, in doing so, help clinicians to develop effective treatment plans for patients with asthma.

Hosseini Seyed Ahmad, Jamshidnezhad Amir, Zilaee Marzie, Fouladi Dehaghi Behzad, Mohammadi Abbas, Hosseini Seyed Mohsen

2020-Jul-06

Crocus sativus L, asthma, clinical predictor system, machine learning, neural networks, saffron, supplement therapy

General General

Chemical Class Prediction of Unknown Biomolecules Using Ion Mobility-Mass Spectrometry and Machine Learning: SIFTER.

In Analytical chemistry

This work presents a machine learning algorithm referred to as the Supervised Inference of Feature Taxonomy from Ensemble Randomization (SIFTER), which supports the identification of features derived from untargeted ion mobility-mass spectrometry (IM-MS) experiments. SIFTER utilizes random forest machine learning on three analytical measurements derived from IM-MS (collision cross section (CCS), mass-to-charge (m/z), and mass defect (Δm)) to classify unknown features into a taxonomy of chemical kingdom, super class, class, and subclass. Each of these classifications is assigned a calculated probability as well as alternate classifications with associated probabilities. After optimization, SIFTER was tested against a set of molecules not used in the training set. The average success rate in classifying all four taxonomy categories correctly was found to be >99%. Analysis of molecular features detected from a complex biological matrix and not used in the training set yielded a lower success rate, where all four categories were correctly predicted for ~80% of the compounds. This decline in performance is in part due to incompleteness of the training set across all potential taxonomic categories, but also results from a nearest neighbor bias in the random forest algorithm. Ongoing efforts are focused on improving the class prediction accuracy of SIFTER through expansion of empirical datasets used for training as well as improvements to the core algorithm.
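
The core SIFTER idea, a random forest over CCS, m/z and mass defect that returns ranked class probabilities, can be sketched as follows; the training table is synthetic rather than the authors' reference library, and the class labels are illustrative.

```python
# Hedged sketch: random forest on three IM-MS measurements with ranked class probabilities.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 600
train = pd.DataFrame({
    "ccs":         rng.uniform(120, 400, n),   # collision cross section (A^2)
    "mz":          rng.uniform(100, 1200, n),  # mass-to-charge
    "mass_defect": rng.uniform(-0.5, 0.5, n),
    "superclass":  rng.choice(["lipid", "peptide", "carbohydrate", "nucleotide"], n),
})

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(train[["ccs", "mz", "mass_defect"]], train["superclass"])

# Classify an unknown feature and report the ranked alternate classes.
unknown = pd.DataFrame([{"ccs": 210.5, "mz": 496.34, "mass_defect": 0.34}])
proba = rf.predict_proba(unknown)[0]
for cls, p in sorted(zip(rf.classes_, proba), key=lambda t: -t[1]):
    print(f"{cls}: {p:.2f}")
```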

Picache Jaqueline A, May Jody C, McLean John A

2020-Jul-06

Surgery Surgery

Effect of the surgical approach on survival outcomes in patients undergoing radical hysterectomy for cervical cancer: A real-world multicenter study of a large Chinese cohort from 2006 to 2017.

In Cancer medicine

OBJECTIVE : To compare survival outcomes of minimally invasive surgery (MIS) and laparotomy in early-stage cervical cancer (CC) patients.

METHODS : A multicenter retrospective cohort study was conducted with International Federation of Gynecology and Obstetrics (FIGO, 2009) stage IA1 (lymphovascular invasion)-IIA1 CC patients undergoing MIS or laparotomy at four tertiary hospitals from 2006 to 2017. Propensity score matching and weighting and multivariate Cox regression analyses were performed. Survival was compared in various matched cohorts and subgroups.

RESULTS : Three thousand two hundred and fifty-two patients (2439 MIS and 813 laparotomy) were included after matching. (1) The 2- and 5-year recurrence-free survival (RFS) (2-year, hazard ratio [HR], 1.81; 95% confidence interval [CI], 1.09-3.0; 5-year, HR, 2.17; 95% CI, 1.21-3.89) and overall survival (OS) (2-year, HR, 1.87; 95% CI, 1.03-3.40; 5-year, HR, 2.57; 95% CI, 1.29-5.10) were significantly worse for MIS in patients with stage IB1, but not in the cohort overall (2-year RFS, HR, 1.04; 95% CI, 0.76-1.42; 2-year OS, HR, 0.99; 95% CI, 0.70-1.41; 5-year RFS, HR, 1.12; 95% CI, 0.76-1.65; 5-year OS, HR, 1.20; 95% CI, 0.79-1.83) or in other stages. (2) In a subgroup analysis, MIS exhibited poorer survival in many population subsets, even in patients with fewer risk factors, such as patients with squamous cell carcinoma, negative for parametrial involvement, with negative surgical margins, negative for lymph node metastasis, and deep stromal invasion < 2/3. (3) In the cohorts treated with adjuvant treatment (2172, 54%) or without it (1814, 46%), MIS showed worse RFS than laparotomy in patients treated without adjuvant treatment, whereas no differences in RFS and OS were observed in the adjuvant-treatment cohort. (4) Inadequate surgeon proficiency strongly correlated with poor RFS and OS in patients receiving MIS compared with laparotomy.

CONCLUSIONS : MIS exhibited poorer survival outcomes than laparotomy in many population subsets, even in low-risk subgroups. Therefore, laparotomy should be the recommended approach for CC patients.

Guo Chenyan, Tang Xiaoyan, Meng Yan, Zhang Ying, Zhang Xuyin, Guo Jingjing, Lei Xiaohong, Qiu Junjun, Hua Keqin

2020-Jul-06

cervical cancer, laparotomy, matching, minimally invasive surgery, radical hysterectomy, survival outcome

General General

Supervised Machine Learning for Semi-Quantification of Extracellular DNA in Glomerulonephritis.

In Journal of visualized experiments : JoVE

Glomerular cell death is a pathological feature of myeloperoxidase anti-neutrophil cytoplasmic antibody-associated vasculitis (MPO-AAV). Extracellular deoxyribonucleic acid (ecDNA) is released during different forms of cell death including apoptosis, necrosis, necroptosis, neutrophil extracellular traps (NETs) and pyroptosis. Measurement of this cell death is time consuming, with several different biomarkers required to identify the different biochemical forms of cell death. Measurement of ecDNA is generally conducted in serum and urine as a surrogate for renal damage, not in the actual target organ where the pathological injury occurs. The current difficulty in investigating ecDNA in the kidney is the lack of methods for formalin-fixed paraffin-embedded (FFPE) tissue, both experimentally and in archived human kidney biopsies. This protocol provides a summary of the steps required to stain for ecDNA in FFPE tissue (both human and murine), quench autofluorescence and measure the ecDNA in the resulting images using a machine learning tool from the publicly available open-source ImageJ plugin Trainable Weka Segmentation. Trainable Weka Segmentation is applied to ecDNA within the glomeruli, where the program learns to classify ecDNA. This classifier is applied to subsequently acquired kidney images, reducing the need for manual annotation of each individual image. The adaptability of Trainable Weka Segmentation is demonstrated further in kidney tissue from experimental murine anti-MPO glomerulonephritis (GN), to identify NETs and ecMPO, common pathological contributors to anti-MPO GN. This method provides objective analysis of ecDNA in kidney tissue and demonstrates clearly the efficacy with which the Trainable Weka Segmentation program can distinguish ecDNA between healthy normal kidney tissue and diseased kidney tissue. This protocol can easily be adapted to identify ecDNA, NETs and ecMPO in other organs.

O’Sullivan Kim Maree, Creed Sarah, Gan Poh-Yi, Holdsworth Stephen R

2020-Jun-18

Cardiology Cardiology

A primer on artificial intelligence for the paediatric cardiologist.

In Cardiology in the young

Because paediatric cardiology is both a perceptual and a cognitive subspecialty, it demands a complex decision-making model, which makes artificial intelligence a particularly attractive technology with great potential. The prototypical artificial intelligence system would autonomously impute patient data into a collaborative database that stores, syncs, interprets and ultimately classifies the patient's profile to specific disease phenotypes to compare against a large aggregate of shared peer health data and outcomes, the current medical body of literature and ongoing trials to offer morbidity and mortality prediction, drug therapy options targeted to each patient's genetic profile, tailored surgical plans and recommendations for timing of sequential imaging. The focus of this review paper is to offer a primer on artificial intelligence and paediatric cardiology by briefly discussing the history of artificial intelligence in medicine, modern and future applications in adult and paediatric cardiology across selected concentrations, and current barriers to implementation of these technologies.

Gearhart Addison, Gaffar Sharib, Chang Anthony C

2020-Jun-22

Cardiology, artificial intelligence, paediatrics

Surgery Surgery

Learning and Reasoning with the Graph Structure Representation in Robotic Surgery

ArXiv Preprint

Learning to infer graph representations and performing spatial reasoning in a complex surgical environment can play a vital role in surgical scene understanding in robotic surgery. For this purpose, we develop an approach to generate the scene graph and predict surgical interactions between instruments and the surgical region of interest (ROI) during robot-assisted surgery. We design an attention link function and integrate it with a graph parsing network to recognize the surgical interactions. To embed each node with corresponding neighbouring node features, we further incorporate SageConv into the network. The scene graph generation and active edge classification mostly depend on the embedding or feature extraction of node and edge features from complex image representations. Here, we empirically demonstrate the feature extraction methods by employing a label-smoothing weighted loss. Smoothing the hard label can avoid over-confident predictions by the model and enhances the feature representation learned by the penultimate layer. To obtain the graph scene labels, we annotated bounding boxes and instrument-ROI interactions on the 2018 robotic scene segmentation challenge dataset together with an experienced clinical expert in robotic surgery, and we employ these annotations to evaluate our propositions.
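
A small sketch of the label-smoothing idea mentioned above, implemented as a smoothed cross-entropy in PyTorch; the class count and logits are placeholders, and this is not the authors' full graph parsing network.

```python
# Hedged sketch: cross-entropy against a softened one-hot target (label smoothing).
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, n_classes, eps=0.1):
    """Cross-entropy where the target class gets 1-eps and the rest share eps."""
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (n_classes - 1))
    smooth.scatter_(1, target.unsqueeze(1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=-1).mean()

logits = torch.randn(8, 13, requires_grad=True)   # 8 edges x 13 interaction classes (dummy)
target = torch.randint(0, 13, (8,))
loss = label_smoothing_loss(logits, target, n_classes=13)
loss.backward()
print(float(loss))
```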

Mobarakol Islam, Lalithkumar Seenivasan, Lim Chwee Ming, Hongliang Ren

2020-07-07

General General

Artificial Intelligence and Machine Learning in Computational Nanotoxicology: Unlocking and Empowering Nanomedicine.

In Advanced healthcare materials

Advances in nanomedicine, coupled with novel methods of creating advanced materials at the nanoscale, have opened new perspectives for the development of healthcare and medical products. Special attention must be paid toward safe design approaches for nanomaterial-based products. Recently, artificial intelligence (AI) and machine learning (ML) gifted the computational tool for enhancing and improving the simulation and modeling process for nanotoxicology and nanotherapeutics. In particular, the correlation of in vitro generated pharmacokinetics and pharmacodynamics to in vivo application scenarios is an important step toward the development of safe nanomedicinal products. This review portrays how in vitro and in vivo datasets are used in in silico models to unlock and empower nanomedicine. Physiologically based pharmacokinetic (PBPK) modeling and absorption, distribution, metabolism, and excretion (ADME)-based in silico methods along with dosimetry models as a focus area for nanomedicine are mainly described. The computational OMICS, colloidal particle determination, and algorithms to establish dosimetry for inhalation toxicology, and quantitative structure-activity relationships at nanoscale (nano-QSAR) are revisited. The challenges and opportunities facing the blind spots in nanotoxicology in this computationally dominated era are highlighted as the future to accelerate nanomedicine clinical translation.

Singh Ajay Vikram, Ansari Mohammad Hasan Dad, Rosenkranz Daniel, Maharjan Romi Singh, Kriegel Fabian L, Gandhi Kaustubh, Kanase Anurag, Singh Rishabh, Laux Peter, Luch Andreas

2020-Jul-06

AI, machine learning, nanomedicines, nanotoxicology, physiologically based pharmacokinetic modeling

General General

Differences in substrate use linked to divergent carbon flow during litter decomposition.

In FEMS microbiology ecology

Discovering widespread microbial processes that create variation in soil carbon (C) cycling within ecosystems may improve soil C modeling. Toward this end, we screened 206 soil communities decomposing plant litter in a common garden microcosm environment and examined features linked to divergent patterns of C flow. C flow was measured as carbon dioxide (CO2) and dissolved organic carbon (DOC) from 44-days of litter decomposition. Two large groups of microbial communities representing 'high' and 'low' DOC phenotypes from original soil and 44-day microcosm samples were down-selected for fungal and bacterial profiling. Metatranscriptomes were also sequenced from a smaller subset of communities in each group. The two groups exhibited differences in average rate of CO2 production, demonstrating that the divergent patterns of C flow arose from innate functional constraints on C metabolism, not a time-dependent artefact. To infer functional constraints, we identified features-traits at the organism, pathway, or gene level-linked to the high and low DOC phenotypes using RNA-Seq approaches and machine learning approaches. Substrate use differed across the high and low DOC phenotypes. Additional features suggested that divergent patterns of C flow may be driven in part by differences in organism interactions that affect DOC abundance directly or indirectly by controlling community structure.

Albright Michaeline B N, Thompson Jaron, Kroeger Marie E, Johansen Renee, Ulrich Danielle E M, Gallegos-Graves La Verne, Munsky Brian, Dunbar John

2020-Jul-06

bacteriovores, carbon cycling, carbon dioxide, dissolved organic carbon, effect traits, fungivores, machine learning, metatranscriptome, microbiome, oligotrophs, physiology, soil

General General

Two particle-picking procedures for filamentous proteins: SPHIRE-crYOLO filament mode and SPHIRE-STRIPER.

In Acta crystallographica. Section D, Structural biology

Structure determination of filamentous molecular complexes involves the selection of filaments from cryo-EM micrographs. The automatic selection of helical specimens is particularly difficult, and thus many challenging samples with issues such as contamination or aggregation are still manually picked. Here, two approaches for selecting filamentous complexes are presented: one uses a trained deep neural network to identify the filaments and is integrated in SPHIRE-crYOLO, while the other, called SPHIRE-STRIPER, is based on a classical line-detection approach. The advantage of the crYOLO-based procedure is that it performs accurately on very challenging data sets and selects filaments with high accuracy. Although STRIPER is less precise, the user benefits from less intervention, since in contrast to crYOLO, STRIPER does not require training. The performance of both procedures on Tobacco mosaic virus and filamentous F-actin data sets is described to demonstrate the robustness of each method.
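As a rough illustration of the classical line-detection idea underlying SPHIRE-STRIPER (not the actual STRIPER algorithm), a probabilistic Hough transform over an edge map can propose straight filament segments; the thresholds below are placeholders.

```python
# Generic line-detection sketch (not the actual STRIPER algorithm): probabilistic
# Hough transform on an edge map of a denoised micrograph.
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

def detect_filament_segments(micrograph, min_length=100):
    edges = canny(micrograph, sigma=3)               # edge map of the micrograph
    segments = probabilistic_hough_line(
        edges, threshold=10, line_length=min_length, line_gap=5)
    return segments                                   # list of ((x0, y0), (x1, y1))
```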

Wagner Thorsten, Lusnig Luca, Pospich Sabrina, Stabrin Markus, Schönfeld Fabian, Raunser Stefan

2020-Jul-01

SPHIRE-STRIPER, SPHIRE-crYOLO, cryo-EM, deep learning, filaments, particle picking

General General

Evaluation of a Novel Noninvasive Blood Glucose Monitor Based on Mid-Infrared Quantum Cascade Laser Technology and Photothermal Detection.

In Journal of diabetes science and technology ; h5-index 38.0

BACKGROUND : A prototype of a noninvasive glucometer combining skin excitation by a mid-infrared quantum cascade laser with photothermal detection was evaluated in glucose correlation tests including 100 volunteers (41 people with diabetes and 59 healthy people).

METHODS : Invasive reference measurements using a clinical glucometer and noninvasive measurements at a finger of the volunteer were simultaneously recorded in five-minute intervals starting from fasting glucose values for healthy subjects (low glucose values for diabetes patients) over a two-hour period. A glucose range from >50 to <350 mg/dL was covered. Machine learning algorithms were used to predict glucose values from the photothermal spectra. Data were analyzed for the average percent disagreement of the noninvasive measurements with the clinical reference measurement and visualized in consensus error grids.
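The percent-disagreement summary described here can be computed directly from the paired measurements; a minimal sketch, assuming absolute percent difference relative to the invasive reference:

```python
# Sketch of the percent-disagreement summary between paired glucose measurements.
import numpy as np

def percent_differences(reference, noninvasive):
    ref = np.asarray(reference, dtype=float)
    niv = np.asarray(noninvasive, dtype=float)
    pct = 100.0 * np.abs(niv - ref) / ref
    return pct.mean(), np.median(pct)

# Example: mean_pct, median_pct = percent_differences([100, 150], [110, 160])
```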

RESULTS : 98.8% (full data set) and 99.1% (improved algorithm) of glucose results were within Zones A and B of the grid, indicating the highest accuracy level. Less than 1% of the data were in Zone C, and none in Zone D or E. The mean and median percent differences between the invasive as a reference and the noninvasive method were 12.1% and 6.5%, respectively, for the full data set, and 11.3% and 6.4% with the improved algorithm.

CONCLUSIONS : Our results demonstrate that noninvasive blood glucose analysis combining mid-infrared spectroscopy and photothermal detection is feasible and comparable in accuracy with minimally invasive glucometers and finger pricking devices which use test strips. As a next step, a handheld version of the present device for diabetes patients is being developed.

Lubinski Thorsten, Plotka Bartosz, Janik Sergius, Canini Luca, Mäntele Werner

2020-Jul-05

mid-IR spectroscopy, noninvasive blood glucose analysis, photothermal detection, quantum cascade laser (QCL)

Public Health Public Health

The Correlation of Comorbidities on the Mortality in Patients with COVID-19: an Observational Study Based on the Korean National Health Insurance Big Data.

In Journal of Korean medical science

BACKGROUND : Mortality of coronavirus disease 2019 (COVID-19) is a major concern for quarantine departments in all countries. This is because the mortality of infectious diseases determines the basic policy stance of measures to prevent infectious diseases. Early screening of high-risk groups and taking action are the basics of disease management. This study examined the correlation of comorbidities on the mortality of patients with COVID-19.

METHODS : We constructed an epidemiologic characteristics and medical history database based on the Korean National Health Insurance Service Big Data and linked it to the COVID-19 registry data of the Korea Centers for Disease Control & Prevention (KCDC) for this emergent observational cohort study. A total of 9,148 patients with confirmed COVID-19 were included. Mortalities by sex, age, district, income level and the full range of comorbidities, classified into 298 categories based on the International Classification of Diseases-10, were estimated.

RESULTS : There were 3,556 male confirmed cases, 67 deaths, and a crude death rate (CDR) of 1.88%. There were 5,592 females, 63 deaths, and a CDR of 1.13%. The largest number of confirmed cases, 1,352 patients, was between the ages of 20 and 24, followed by 25 to 29. In a multivariate logistic regression analysis adjusted for epidemiologic factors, the odds ratios for death were 3.88-fold for hemorrhagic conditions and other diseases of blood and blood-forming organs (95% confidence interval [CI], 1.52-9.88), 3.17-fold for heart failure (95% CI, 1.88-5.34), 3.07-fold for renal failure (95% CI, 1.43-6.61), 2.88-fold for prostate malignant neoplasm (95% CI, 1.01-8.22), 2.38-fold for acute myocardial infarction (95% CI, 1.03-5.49), 1.82-fold for diabetes (95% CI, 1.25-2.67), and 1.71-fold for other ischemic heart disease (95% CI, 1.09-2.66).
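Adjusted odds ratios with 95% confidence intervals of this kind are typically obtained by exponentiating logistic regression coefficients; a hedged sketch using statsmodels, with placeholder variable names rather than the study's actual covariates:

```python
# Illustrative multivariate logistic regression with odds ratios and 95% CIs
# (statsmodels); column names are assumptions, not the study's actual variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratios(df, outcome="death", predictors=("age", "sex", "heart_failure")):
    X = sm.add_constant(df[list(predictors)])
    model = sm.Logit(df[outcome], X).fit(disp=0)
    ors = np.exp(model.params)               # odds ratios
    ci = np.exp(model.conf_int())            # 95% CI on the odds-ratio scale
    ci.columns = ["2.5%", "97.5%"]
    return pd.concat([ors.rename("OR"), ci], axis=1)
```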

CONCLUSION : We hope that this study could provide information on high risk groups for preemptive interventions. In the future, if a vaccine for COVID-19 is developed, it is expected that this study will be the basic data for recommending immunization by selecting those with chronic disease that had high risk of death, as recommended target diseases for vaccination.

Kim Dong Wook, Byeon Kyeong Hyang, Kim Jaiyong, Cho Kyu Dong, Lee Nakyoung

2020-Jul-06

COVID-19, Chronic Diseases, Comorbidities, Mortality Risk

Surgery Surgery

Predicting Outcomes of Pelvic Exenteration Using Machine Learning.

In Colorectal disease : the official journal of the Association of Coloproctology of Great Britain and Ireland

AIM : We aim to compare machine learning (ML) with neural network performance in predicting R0 resection (R0), length of stay >14 days (LOS), major complication rates at 30 days post-operatively (COMP) and survival greater than one year (SURV) for patients having pelvic exenteration for locally advanced and recurrent rectal cancer.

METHOD : A deep learning computer was built and a programming environment established. The PelvEx Collaborative database was used, which contains anonymized data on patients who underwent pelvic exenteration for locally advanced or locally recurrent colorectal cancer between 2004 and 2014. Logistic Regression (LR), Support Vector Machine (SVM) and Artificial Neural Network (ANN) models were trained. 20% of the data was used as a test set for calculating prediction accuracy for R0, LOS, COMP and SURV. Model performance was measured by plotting Receiver Operating Characteristic (ROC) curves and calculating the Area Under the ROC curve (AUROC).
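A minimal sketch of the evaluation setup described here (80/20 split, training LR, SVM and a small neural network, scoring by AUROC); the features, outcome encoding and hyperparameters are placeholders, not the study's configuration:

```python
# Minimal sketch: hold out 20%, train the listed model families, compute AUROC.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def evaluate_models(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(probability=True),
        "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000),
    }
    aurocs = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        aurocs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return aurocs
```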

RESULTS : ML models and ANNs were trained on 1,147 cases. The AUROC for all outcome predictions ranged from 0.608 to 0.793 indicating modest to moderate predictive ability. The models performed best at predicting length of stay >14 days with an AUROC of 0.793 using preoperative and operative data. Visualised LR Model weights indicate varying impact of variables on the outcome in question.

CONCLUSION : This paper highlights the potential for predictive modelling of large international databases. Current data allow moderate predictive ability of both complex ANNs and more classic methods.

Dudurych Ivan, Kelly Michael E

2020-Jul-05

Artificial Intelligence (AI), Artificial Neural Network (ANN), Colorectal Surgery, Machine Learning (ML), Pelvic Exenteration

Radiology Radiology

Sex classification using long-range temporal dependence of resting-state functional MRI time series.

In Human brain mapping

A thorough understanding of sex differences that exist in the brains of healthy individuals is crucial for the study of neurological illnesses that exhibit phenotypic differences between males and females. Here we evaluate sex differences in regional temporal dependence of resting-state brain activity in 195 adult male-female pairs strictly matched for total grey matter volume from the Human Connectome Project. We find that males have more persistent temporal dependence in regions within temporal, parietal, and occipital cortices. Machine learning algorithms trained on regional temporal dependence measures achieve sex classification accuracies up to 81%. Regions with the strongest feature importance in the sex classification task included cerebellum, amygdala, and frontal and occipital cortices. Secondarily, we show that even after strict matching of total gray matter volume, significant volumetric sex differences persist; males have larger absolute cerebella, hippocampi, parahippocampi, thalami, caudates, and amygdalae while females have larger absolute cingulates, precunei, and frontal and parietal cortices. Sex classification based on regional volume achieves accuracies up to 85%, highlighting the importance of strict volume-matching when studying brain-based sex differences. Differential patterns in regional temporal dependence between the sexes identifies a potential neurobiological substrate or environmental effect underlying sex differences in functional brain activation patterns.

Dhamala Elvisha, Jamison Keith W, Sabuncu Mert R, Kuceyeski Amy

2020-Jul-06

classification, functional MRI, machine learning, neuroimaging, sex differences, temporal dependence

Cardiology Cardiology

Efficacy assessment of ticagrelor versus clopidogrel in Chinese patients with acute coronary syndrome undergoing percutaneous coronary intervention by data mining and machine-learning decision tree approaches.

In Journal of clinical pharmacy and therapeutics

WHAT IS KNOWN AND OBJECTIVE : Although ticagrelor is well known to improve clinical outcomes in patients undergoing percutaneous coronary intervention (PCI), its effectiveness and safety have not been well evaluated in Chinese patients. This study aimed to evaluate the effectiveness and safety of ticagrelor in Chinese patients. In order to find potential effect modifiers of the drug effects, a decision tree method was used to detect interactions between treatment and patient characteristics in an automatic and systematic manner.

METHODS : This retrospective study included acute coronary syndrome (ACS) patients who underwent PCI and received either ticagrelor (N = 250) or clopidogrel (N = 291) while hospitalized between August 2014 and August 2015. After propensity score matching, Kaplan-Meier analysis was used to study the event-free survival against major adverse cardiovascular events (MACE, primary efficacy outcome, defined as the composite of cardiac death, non-fatal myocardial infarction [MI], stroke, restenosis and target vessel revascularization [TVR]), re-hospitalization, the need for urgent re-PCI (secondary efficacy outcome) and bleeding events (safety outcome) within 12 months of the PCI date. To search for effect modifiers of the two antiplatelet therapies, a machine-learning decision tree algorithm was conducted to predict re-hospitalization status.
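The propensity score matching step can be sketched as 1:1 nearest-neighbour matching on a logistic regression propensity score; the covariates and the absence of a caliper are assumptions, since the abstract does not specify the matching details:

```python
# Sketch of 1:1 nearest-neighbour propensity-score matching without replacement;
# covariate names are placeholders, not the study's actual variables.
from sklearn.linear_model import LogisticRegression

def propensity_match(df, treatment="ticagrelor", covariates=("age", "sex", "diabetes")):
    ps = LogisticRegression(max_iter=1000).fit(
        df[list(covariates)], df[treatment]).predict_proba(df[list(covariates)])[:, 1]
    df = df.assign(ps=ps)
    treated = df[df[treatment] == 1]
    control = df[df[treatment] == 0].copy()
    pairs = []
    for _, row in treated.iterrows():
        j = (control["ps"] - row["ps"]).abs().idxmin()   # closest remaining control
        pairs.append((row.name, j))
        control = control.drop(index=j)                   # match without replacement
        if control.empty:
            break
    return pairs
```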

RESULTS : After propensity score matching (N = 442), ticagrelor and clopidogrel showed no significant difference in MACE, re-hospitalization or bleeding. The decision tree analysis showed that the number of diseased vessels modulated the effect of ticagrelor and clopidogrel on re-hospitalization rates. In single-vessel disease (SVD) patients, ticagrelor was associated with lower hazards than clopidogrel for all efficacy outcomes: MACE (HR = 0.190, 95% CI: 0.042-0.866), re-hospitalization (HR = 0.296, 95% CI: 0.108-0.808) and urgent re-PCI (HR = 0.249, 95% CI: 0.069-0.895), with no significant difference in bleeding (HR = 1.006, 95% CI: 0.063-16.129). However, in multi-vessel disease (MVD) patients, the two treatments did not show a significant difference.

WHAT IS NEW AND CONCLUSION : In the general patient population, there was no significant difference between ticagrelor and clopidogrel on the hazard of MACE. However, ticagrelor achieved a better effectiveness than clopidogrel in patients with SVD. This pilot study provides scientific basis to call for a large-scale prospective study in this population.

Xue Ying, Hu Ziheng, Jing Yankang, Wu Hongyi, Li Xiaoye, Wang Junmei, Seybert Amy, Xie Xiangqun, Lv Qianzhou

2020-Jul-06

Chinese population, clopidogrel, decision tree, percutaneous coronary intervention, ticagrelor

Radiology Radiology

DeepDicomSort: An Automatic Sorting Algorithm for Brain Magnetic Resonance Imaging Data.

In Neuroinformatics

With the increasing size of datasets used in medical imaging research, the need for automated data curation is arising. One important data curation task is the structured organization of a dataset for preserving integrity and ensuring reusability. Therefore, we investigated whether this data organization step can be automated. To this end, we designed a convolutional neural network (CNN) that automatically recognizes eight different brain magnetic resonance imaging (MRI) scan types based on visual appearance. Thus, our method is unaffected by inconsistent or missing scan metadata. It can recognize pre-contrast T1-weighted (T1w), post-contrast T1-weighted (T1wC), T2-weighted (T2w), proton density-weighted (PDw) and derived maps (e.g. apparent diffusion coefficient and cerebral blood flow). In a first experiment, we used scans of subjects with brain tumors: 11065 scans of 719 subjects for training, and 2369 scans of 192 subjects for testing. The CNN achieved an overall accuracy of 98.7%. In a second experiment, we trained the CNN on all 13434 scans from the first experiment and tested it on 7227 scans of 1318 Alzheimer's subjects. Here, the CNN achieved an overall accuracy of 98.5%. In conclusion, our method can accurately predict scan type, and can quickly and automatically sort a brain MRI dataset virtually without the need for manual verification. In this way, our method can assist with properly organizing a dataset, which maximizes the shareability and integrity of the data.
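A toy sketch of the scan-type classification setup (a small 2D CNN producing logits over eight classes); this is not the DeepDicomSort architecture, only an illustration of how such a classifier is wired up in PyTorch:

```python
# Minimal 2D CNN sketch for scan-type classification; not the actual
# DeepDicomSort network, just the general classification setup.
import torch
import torch.nn as nn

class ScanTypeCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):            # x: (batch, 1, H, W) image slices
        h = self.features(x).flatten(1)
        return self.classifier(h)    # raw logits over the eight scan types

# logits = ScanTypeCNN()(torch.randn(4, 1, 128, 128))
```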

van der Voort Sebastian R, Smits Marion, Klein Stefan

2020-Jul-05

BIDS, Brain imaging, DICOM, Data curation, Machine learning, Magnetic resonance imaging

General General

Importance of phase enhancement for machine learning classification of solid renal masses using texture analysis features at multi-phasic CT.

In Abdominal radiology (New York)

OBJECTIVE : To compare machine learning (ML) of texture analysis (TA) features for classification of solid renal masses on non-contrast-enhanced CT (NCCT), corticomedullary (CM) and nephrographic (NG) phase contrast-enhanced (CE) CT.

MATERIALS AND METHODS : With IRB approval, we retrospectively identified 177 consecutive solid renal masses (116 renal cell carcinomas [RCC]: 51 clear cell [cc], 40 papillary and 25 chromophobe; and 61 benign tumors: 49 oncocytomas and 12 fat-poor angiomyolipomas) with renal protocol CT between 2012 and 2017. Tumors were independently segmented by two blinded radiologists. Twenty-five 2-dimensional TA features were extracted from each phase. Diagnostic accuracy for 1) RCC versus benign tumor and 2) cc-RCC versus other tumor was assessed using XGBoost.
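A minimal sketch of the per-phase evaluation (XGBoost on the 25 texture features with cross-validated AUC); the hyperparameters are placeholders and the texture feature extraction itself is not shown:

```python
# Sketch of the per-phase evaluation: XGBoost on texture-analysis features,
# scored by cross-validated AUC.
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def phase_auc(features, labels, n_splits=5):
    """features: (n_masses, 25) texture features from one CT phase;
    labels: 1 = RCC, 0 = benign (or 1 = cc-RCC, 0 = other)."""
    clf = XGBClassifier(n_estimators=200, max_depth=3)
    scores = cross_val_score(clf, features, labels, cv=n_splits, scoring="roc_auc")
    return scores.mean(), scores.std()
```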

RESULTS : ML of texture analysis features on the different phases achieved mean area under the ROC curve (AUC [SD]) and sensitivity/specificity for 1) RCC vs benign of 0.70 (0.19), 96%/32% on CM-CECT and 0.71 (0.14), 83%/58% on NG-CECT; and 2) cc-RCC vs other of 0.77 (0.12), 49%/90% on CM-CECT and 0.71 (0.16), 22%/94% on NG-CECT. There was no difference in AUC comparing CECT to NCCT (p = 0.058-0.54) and no improvement when combining data across all three phases compared to single-phase assessment (p = 0.39-0.68) for either outcome. AUCs decreased when ML models were trained with one phase and tested on a different phase for both outcomes (RCC: p = 0.045-0.106; cc-RCC: p < 0.001).

CONCLUSION : Accuracy of machine learning classification of renal masses using texture analysis features did not depend on phase; however, models trained using one phase performed worse when tested on another phase particularly when associating NCCT and CECT. These findings have implications for large registries which use varying CT protocols to study renal masses.

Schieda Nicola, Nguyen Kathleen, Thornhill Rebecca E, McInnes Matthew D F, Wu Mark, James Nick

2020-Jul-05

Computed tomography, Machine learning, Renal cell carcinoma, Texture analysis

General General

Machine learning for pattern detection in cochlear implant FDA adverse event reports.

In Cochlear implants international ; h5-index 17.0

Importance: Medical device performance and safety databases can be analyzed for patterns and novel opportunities for improving patient safety and/or device design. Objective: The objective of this analysis was to use supervised machine learning to explore patterns in reported adverse events involving cochlear implants. Design: Adverse event reports for the top three CI manufacturers were acquired for the analysis. Four supervised machine learning algorithms were used to predict which adverse event description pattern corresponded with a specific cochlear implant manufacturer and adverse event type. Setting: U.S. government public database. Participants: Adult and pediatric cochlear patients. Exposure: Surgical placement of a cochlear implant. Main Outcome Measure: Classification prediction accuracy (% correct predictions). Results: Most adverse events involved patient injury (n = 16,736), followed by device malfunction (n = 10,760), and death (n = 16). The random forest, linear SVC, naïve Bayes and logistic algorithms were able to predict the specific CI manufacturer based on the adverse event narrative with an average accuracy of 74.8%, 86.0%, 88.5% and 88.6%, respectively. Conclusions & relevance: Using supervised machine learning algorithms, our classification models were able to predict the CI manufacturer and event type with high accuracy based on patterns in adverse event text descriptions.
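The abstract does not state how the narratives were featurized; a common baseline for this kind of text classification is TF-IDF features feeding one of the listed classifiers, sketched below with logistic regression (the featurization and parameters are assumptions):

```python
# Hedged sketch: TF-IDF bag-of-words plus one of the listed classifiers
# (logistic regression); the paper's actual featurization is not stated.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def narrative_classifier_accuracy(narratives, manufacturers):
    pipeline = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=5),
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(pipeline, narratives, manufacturers, cv=5).mean()
```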

Crowson Matthew G, Hamour Amr, Lin Vincent, Chen Joseph M, Chan Timothy C Y

2020-Jul-05

Adverse events, Cochlear implants, Machine learning

General General

Predictive Maintenance for Edge-Based Sensor Networks: A Deep Reinforcement Learning Approach

ArXiv Preprint

Failure of mission-critical equipment interrupts production and results in monetary loss. The risk of unplanned equipment downtime can be minimized through Predictive Maintenance of revenue-generating assets to ensure optimal performance and safe operation of equipment. However, the increased sensorization of equipment generates a data deluge, and existing machine-learning-based predictive models alone become inadequate for timely equipment condition predictions. In this paper, a model-free Deep Reinforcement Learning algorithm is proposed for predictive equipment maintenance from an equipment-based sensor network context. Within each piece of equipment, a sensor device aggregates raw sensor data, and the equipment health status is analyzed for anomalous events. Unlike traditional black-box regression models, the proposed algorithm self-learns an optimal maintenance policy and provides actionable recommendations for each piece of equipment. Our experimental results demonstrate the potential for a broader range of equipment maintenance applications as an automatic learning framework.
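As a greatly simplified illustration of model-free reinforcement learning for maintenance decisions (not the paper's deep RL algorithm), a tabular Q-learning update over discretized health states with a continue-vs-maintain action set:

```python
# Greatly simplified sketch of model-free RL for maintenance: tabular Q-learning
# over discretized equipment health states with actions
# {0: continue operating, 1: perform maintenance}.
import numpy as np

N_STATES, N_ACTIONS = 10, 2
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One temporal-difference update of the action-value table."""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

def act(state, epsilon=0.1):
    """Epsilon-greedy maintenance decision for the current health state."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(Q[state].argmax())
```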

Kevin Shen Hoong Ong, Dusit Niyato, Chau Yuen

2020-07-07

Surgery Surgery

Breath Metabolomics Provides an Accurate and Noninvasive Approach for Screening Cirrhosis, Primary, and Secondary Liver Tumors.

In Hepatology communications

Hepatocellular carcinoma (HCC) and secondary liver tumors, such as colorectal cancer liver metastases, are significant contributors to the overall burden of cancer-related mortality. Current biomarkers, such as alpha-fetoprotein (AFP) for HCC, result in too many false negatives, necessitating noninvasive approaches with improved sensitivity. Volatile organic compounds (VOCs) detected in the breath of patients can provide valuable insight into disease processes and can differentiate patients by disease status. Here, we investigate whether 22 VOCs from the breath of 296 patients can distinguish those with no liver disease (n = 54), cirrhosis (n = 30), HCC (n = 112), pulmonary hypertension (n = 49), or colorectal cancer liver metastases (n = 51). This work extends previous studies by evaluating the ability of VOC signatures to differentiate multiple diseases in a large cohort of patients. Pairwise disease comparisons demonstrated that most of the VOCs tested are present in significantly different relative abundances (false discovery rate P < 0.1), indicating broad impacts on the breath metabolome across diseases. A predictive model developed using random forest machine learning and cross validation classified patients with 85% classification accuracy and 75% balanced accuracy. Importantly, the model detected HCC with 73% sensitivity compared with 53% for AFP in the same cohort. An added value of this approach is that influential VOCs in the predictive model may provide insight into disease etiology. Acetaldehyde and acetone, both of which have roles in tumor promotion, were considered important VOCs for differentiating disease groups in the predictive model and were increased in patients with cirrhosis and HCC compared to patients with no liver disease (false discovery rate P < 0.1). Conclusion: The use of machine learning and breath VOCs shows promise as an approach to develop improved, noninvasive screening tools for chronic liver disease and primary and secondary liver tumors.
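A minimal sketch of the classification step (random forest on the 22 VOC abundances, scored by accuracy and balanced accuracy under cross-validation); the hyperparameters and cross-validation scheme are assumptions:

```python
# Sketch of the multi-class evaluation: random forest on breath-VOC abundances,
# reporting accuracy and balanced accuracy under cross-validation.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

def voc_classification_scores(voc_abundances, disease_labels):
    """voc_abundances: (n_patients, 22) matrix; disease_labels: one of the 5 groups."""
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_validate(clf, voc_abundances, disease_labels, cv=5,
                            scoring=("accuracy", "balanced_accuracy"))
    return scores["test_accuracy"].mean(), scores["test_balanced_accuracy"].mean()
```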

Miller-Atkins Galen, Acevedo-Moreno Lou-Anne, Grove David, Dweik Raed A, Tonelli Adriano R, Brown J Mark, Allende Daniela S, Aucejo Federico, Rotroff Daniel M

2020-Jul

General General

Comparison of Machine Learning Methods and Conventional Logistic Regressions for Predicting Gestational Diabetes Using Routine Clinical Data: A Retrospective Cohort Study.

In Journal of diabetes research ; h5-index 44.0

Background : Gestational diabetes mellitus (GDM) contributes to adverse pregnancy and birth outcomes. In recent decades, extensive research has been devoted to the early prediction of GDM by various methods. Machine learning methods are flexible prediction algorithms with potential advantages over conventional regression.

Objective : The purpose of this study was to use machine learning methods to predict GDM and compare their performance with that of logistic regressions.

Methods : We performed a retrospective, observational study including women who attended their routine first hospital visits during early pregnancy and had Down's syndrome screening at 16-20 gestational weeks in a tertiary maternity hospital in China from January 1, 2013 to December 31, 2017. A total of 22,242 singleton pregnancies were included, and 3182 (14.31%) women developed GDM. Candidate predictors included maternal demographic characteristics and medical history (maternal factors) and laboratory values at early pregnancy. The models were derived from the first 70% of the data and then validated with the next 30%. Variables were trained in different machine learning models and traditional logistic regression models. Eight common machine learning methods (GBDT, AdaBoost, LGB, Logistic, Vote, XGB, Decision Tree, and Random Forest) and two common regressions (stepwise logistic regression and logistic regression with RCS) were implemented to predict the occurrence of GDM. Models were compared on discrimination and calibration metrics.

Results : In the validation dataset, the machine learning and logistic regression models performed moderately (AUC 0.59-0.74). Overall, the GBDT model performed best (AUC 0.74, 95% CI 0.71-0.76) among the machine learning methods, with negligible differences between them. Fasting blood glucose, HbA1c, triglycerides, and BMI strongly contributed to GDM. A cutoff point for the predictive value at 0.3 in the GBDT model had a negative predictive value of 74.1% (95% CI 69.5%-78.2%) and a sensitivity of 90% (95% CI 88.0%-91.7%), and the cutoff point at 0.7 had a positive predictive value of 93.2% (95% CI 88.2%-96.1%) and a specificity of 99% (95% CI 98.2%-99.4%).
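The cutoff analysis reported here reduces to counting the confusion-matrix cells at a chosen risk threshold; a short sketch:

```python
# Sketch of the cutoff analysis: sensitivity, specificity, PPV and NPV of a
# predicted GDM risk at a chosen probability threshold.
import numpy as np

def cutoff_metrics(predicted_risk, outcome, threshold):
    pred = np.asarray(predicted_risk) >= threshold
    y = np.asarray(outcome).astype(bool)
    tp, fp = np.sum(pred & y), np.sum(pred & ~y)
    fn, tn = np.sum(~pred & y), np.sum(~pred & ~y)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```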

Conclusion : In this study, we found that several machine learning methods did not outperform logistic regression in predicting GDM. We developed a model with cutoff points for risk stratification of GDM.

Ye Yunzhen, Xiong Yu, Zhou Qiongjie, Wu Jiangnan, Li Xiaotian, Xiao Xirong

2020

General General

Alternative Polyadenylation Modification Patterns Reveal Essential Posttranscription Regulatory Mechanisms of Tumorigenesis in Multiple Tumor Types.

In BioMed research international ; h5-index 102.0

Among various risk factors for the initiation and progression of cancer, alternative polyadenylation (APA) is a remarkable endogenous contributor that directly triggers the malignant phenotype of cancer cells. APA affects biological processes at the transcriptional level in various ways. As such, APA can be involved in tumorigenesis through gene expression, protein subcellular localization, or transcription splicing patterns. The APA sites and status of different cancer types may have diverse modification patterns and regulatory mechanisms on transcripts. Potential APA sites were screened by applying several machine learning algorithms to a TCGA-APA dataset. First, a powerful feature selection method, minimum redundancy maximum relevance, was applied to the dataset, resulting in a ranked feature list. Then, the feature list was fed into incremental feature selection, which incorporated the support vector machine as the classification algorithm, to extract key APA features and build a classifier. The classifier can classify cancer patients into cancer types with perfect performance. The key APA-modified genes have potential prognostic value because of their significant power in the survival analysis of TCGA pan-cancer data.
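Incremental feature selection of the kind described can be sketched as evaluating SVMs on growing prefixes of the mRMR-ranked feature list; the step size and cross-validation settings below are assumptions:

```python
# Sketch of incremental feature selection (IFS) over a pre-computed mRMR ranking:
# evaluate SVMs on growing prefixes of the ranked feature list and keep the
# prefix with the best cross-validated score.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def incremental_feature_selection(X, y, ranked_features, step=10):
    best_score, best_k = -np.inf, 0
    for k in range(step, len(ranked_features) + 1, step):
        subset = ranked_features[:k]
        score = cross_val_score(SVC(), X[:, subset], y, cv=5).mean()
        if score > best_score:
            best_score, best_k = score, k
    return ranked_features[:best_k], best_score
```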

Li Min, Pan XiaoYong, Zeng Tao, Zhang Yu-Hang, Feng Kaiyan, Chen Lei, Huang Tao, Cai Yu-Dong

2020

General General

Prediction of Protein-Protein Interactions with Local Weight-Sharing Mechanism in Deep Learning.

In BioMed research international ; h5-index 102.0

Protein-protein interactions (PPIs) are important for almost all cellular processes, including metabolic cycles, DNA transcription and replication, and signaling cascades. Experimental methods for identifying PPIs are time-consuming and expensive, so it is important to develop computational approaches for predicting PPIs. In this paper, an improved machine learning model for studying protein-protein interactions is proposed. Considering the factors that affect PPI prediction, a feature extraction and fusion method is proposed to increase the variety of features available to the predictor. To account for the effect of the input order of the two proteins, we propose a "Y-type" Bi-RNN model that is trained in both the forward and backward directions. To contain the extra training cost incurred by this two-direction training, a weight-sharing policy is proposed to reduce the number of parameters to be trained. The experimental results show that the proposed method can achieve an accuracy of 99.57%, recall of 99.36%, sensitivity of 99.76%, precision of 99.74%, MCC of 99.14%, and AUC of 99.56% on the benchmark dataset.
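The exact "Y-type" architecture is not detailed in the abstract; the weight-sharing idea alone can be sketched as a single bidirectional GRU encoding both proteins, with the pairwise prediction symmetrized over input order (a hedged illustration, not the authors' network):

```python
# Hedged sketch of the weight-sharing idea only: one bidirectional GRU encodes
# both proteins, and the pair score is averaged over the two input orders so
# the prediction does not depend on which protein comes first.
import torch
import torch.nn as nn

class SharedBiRNNPPI(nn.Module):
    def __init__(self, feat_dim=20, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def encode(self, seq):                      # seq: (batch, length, feat_dim)
        out, _ = self.encoder(seq)
        return out.mean(dim=1)                  # (batch, 2 * hidden)

    def forward(self, prot_a, prot_b):
        ea, eb = self.encode(prot_a), self.encode(prot_b)
        ab = self.head(torch.cat([ea, eb], dim=-1))
        ba = self.head(torch.cat([eb, ea], dim=-1))
        return torch.sigmoid((ab + ba) / 2)     # order-insensitive interaction score
```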

Yang Lei, Han Yukun, Zhang Huixue, Li Wenlong, Dai Yu

2020

General General

Texture Synthesis Based Thyroid Nodule Detection From Medical Ultrasound Images: Interpreting and Suppressing the Adversarial Effect of In-place Manual Annotation.

In Frontiers in bioengineering and biotechnology

Deep learning methods have been offering promising solutions for medical image processing, but it is often unclear which features of the input image a model captures and whether certain artifacts are mistakenly included in the model, which creates crucial problems for the generalizability of the model. We targeted a common issue of this kind caused by manual annotations that appear in medical images. These annotations are usually made by doctors at the spot of medical interest and have an adversarial effect on many computer vision AI tasks. We developed an inpainting algorithm to remove the annotations and recover the original images. In addition, we applied the variational information bottleneck method in order to filter out unwanted features and enhance the robustness of the model. Our inpainting algorithm is extensively tested on object detection in thyroid ultrasound image data. The mAP (mean average precision, with IoU = 0.3) is 27% without annotation removal. The mAP is 83% if the annotations are manually removed using Photoshop and is enhanced to 90% using our inpainting algorithm. Our work can be utilized in the future development and evaluation of artificial intelligence models based on medical images with defects.
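The authors' inpainting algorithm is their own; as a minimal analogue of annotation removal, classical inpainting can be applied wherever a binary mask marks the in-place annotation pixels (an OpenCV sketch; generating the mask is not shown):

```python
# Minimal analogue of annotation removal (not the paper's algorithm): classical
# inpainting with OpenCV, given a binary mask of the in-place annotation pixels.
import cv2
import numpy as np

def remove_annotations(image_bgr, annotation_mask):
    """image_bgr: uint8 ultrasound frame; annotation_mask: nonzero where
    annotation marks (calipers, arrows, text) should be removed."""
    mask = (annotation_mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```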

Yao Siqiong, Yan Junchi, Wu Mingyu, Yang Xue, Zhang Weituo, Lu Hui, Qian Biyun

2020

deep learning, image inpainting, nodule detection, ultrasound medical image, variational information bottleneck

General General

A systematic review on spatial crime forecasting.

In Crime science

Background : Predictive policing and crime analytics with a spatiotemporal focus get increasing attention among a variety of scientific communities and are already being implemented as effective policing tools. The goal of this paper is to provide an overview and evaluation of the state of the art in spatial crime forecasting focusing on study design and technical aspects.

Methods : We follow the PRISMA guidelines for reporting this systematic literature review and we analyse 32 papers from 2000 to 2018 that were selected from 786 papers that entered the screening phase and a total of 193 papers that went through the eligibility phase. The eligibility phase included several criteria that were grouped into: (a) the publication type, (b) relevance to research scope, and (c) study characteristics.

Results : The most predominant type of forecasting inference is the hotspots (i.e. binary classification) method. Traditional machine learning methods were mostly used, but also kernel density estimation based approaches, and less frequently point process and deep learning approaches. The top measures of evaluation performance are the Prediction Accuracy, followed by the Prediction Accuracy Index, and the F1-Score. Finally, the most common validation approach was the train-test split while other approaches include the cross-validation, the leave one out, and the rolling horizon.
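For reference, the Prediction Accuracy Index mentioned among the top evaluation measures is conventionally the hit rate inside forecast hotspots divided by the share of the study area they cover:

```python
# Prediction Accuracy Index (PAI) under its standard definition: the share of
# crimes falling inside forecast hotspots divided by the share of the study
# area covered by those hotspots.
def prediction_accuracy_index(crimes_in_hotspots, total_crimes,
                              hotspot_area, total_area):
    hit_rate = crimes_in_hotspots / total_crimes
    area_share = hotspot_area / total_area
    return hit_rate / area_share

# Example: capturing 40% of crimes in 5% of the area gives PAI = 8.0
```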

Limitations : Current studies often lack clear reporting of study experiments and feature engineering procedures, and use inconsistent terminology to address similar problems.

Conclusions : There is a remarkable growth in spatial crime forecasting studies as a result of interdisciplinary technical work done by scholars of various backgrounds. These studies address the societal need to understand and combat crime as well as the law enforcement interest in almost real-time prediction.

Implications : Although we identified several opportunities and strengths there are also some weaknesses and threats for which we provide suggestions. Future studies should not neglect the juxtaposition of (existing) algorithms, of which the number is constantly increasing (we enlisted 66). To allow comparison and reproducibility of studies we outline the need for a protocol or standardization of spatial forecasting approaches and suggest the reporting of a study's key data items.

Kounadi Ourania, Ristea Alina, Araujo Adelson, Leitner Michael

2020

Crime, Forecasting, Hotspots, Prediction, Predictive policing, Spatial analysis, Spatiotemporal

General General

Using computer vision on herbarium specimen images to discriminate among closely related horsetails (Equisetum).

In Applications in plant sciences

Premise : Equisetum is a distinctive vascular plant genus with 15 extant species worldwide. Species identification is complicated by morphological plasticity and frequent hybridization events, leading to a disproportionately high number of misidentified specimens. These may be correctly identified by applying appropriate computer vision tools.

Methods : We hypothesize that aerial stem nodes can provide enough information to distinguish among Equisetum hyemale, E. laevigatum, and E. ×ferrissii, the latter being a hybrid between the other two. An object detector was trained to find nodes on a given image and to distinguish E. hyemale nodes from those of E. laevigatum. A classifier then took statistics from the detection results and classified the given image into one of the three taxa. Both detector and classifier were trained and tested on expert manually annotated images.

Results : In our exploratory test set of 30 images, our detector/classifier combination identified all 10 E. laevigatum images correctly, as well as nine out of 10 E. hyemale images, and eight out of 10 E. ×ferrissii images, for a 90% classification accuracy.

Discussion : Our results support the notion that computer vision may help with the identification of herbarium specimens once enough manual annotations become available.

Pryer Kathleen M, Tomasi Carlo, Wang Xiaohan, Meineke Emily K, Windham Michael D

2020-Jun

Equisetales, deep learning, digitized herbarium specimens, ferns, horsetails, machine learning

General General

Maximizing human effort for analyzing scientific images: A case study using digitized herbarium sheets.

In Applications in plant sciences

Premise : Digitization and imaging of herbarium specimens provides essential historical phenotypic and phenological information about plants. However, the full use of these resources requires high-quality human annotations for downstream use. Here we provide guidance on the design and implementation of image annotation projects for botanical research.

Methods and Results : We used a novel gold-standard data set to test the accuracy of human phenological annotations of herbarium specimen images in two settings: structured, in-person sessions and an online, community-science platform. We examined how different factors influenced annotation accuracy and found that botanical expertise, academic career level, and time spent on annotations had little effect on accuracy. Rather, key factors included traits and taxa being scored, the annotation setting, and the individual scorer. In-person annotations were significantly more accurate than online annotations, but both generated relatively high-quality outputs. Gathering multiple, independent annotations for each image improved overall accuracy.

Conclusions : Our results provide a best-practices basis for using human effort to annotate images of plants. We show that scalable community science mechanisms can produce high-quality data, but care must be taken to choose tractable taxa and phenophases and to provide informative training material.

Brenskelle Laura, Guralnick Rob P, Denslow Michael, Stucky Brian J

2020-Jun

citizen science, herbarium specimens, image annotation, machine learning, phenology, specimen images

General General

Applying machine learning to investigate long-term insect-plant interactions preserved on digitized herbarium specimens.

In Applications in plant sciences

Premise : Despite the economic significance of insect damage to plants (i.e., herbivory), long-term data documenting changes in herbivory are limited. Millions of pressed plant specimens are now available online and can be used to collect big data on plant-insect interactions during the Anthropocene.

Methods : We initiated development of machine learning methods to automate extraction of herbivory data from herbarium specimens by training an insect damage detector and a damage type classifier on two distantly related plant species (Quercus bicolor and Onoclea sensibilis). We experimented with (1) classifying six types of herbivory and two control categories of undamaged leaf, and (2) detecting two of the damage categories for which several hundred annotations were available.

Results : Damage detection results were mixed, with a mean average precision of 45% in the simultaneous detection and classification of two types of damage. However, damage classification on hand-drawn boxes identified the correct type of herbivory 81.5% of the time in eight categories. The damage classifier was accurate for categories with 100 or more test samples.

Discussion : These tools are a promising first step for the automation of herbivory data collection. We describe ongoing efforts to increase the accuracy of these models, allowing researchers to extract similar data and apply them to biological hypotheses.

Meineke Emily K, Tomasi Carlo, Yuan Song, Pryer Kathleen M

2020-Jun

Anthropocene, climate change, herbarium, insects, machine learning, species interactions

General General

A new fine-grained method for automated visual analysis of herbarium specimens: A case study for phenological data extraction.

In Applications in plant sciences

Premise : Herbarium specimens represent an outstanding source of material with which to study plant phenological changes in response to climate change. The fine-scale phenological annotation of such specimens is nevertheless highly time consuming and requires substantial human investment and expertise, which are difficult to rapidly mobilize.

Methods : We trained and evaluated new deep learning models to automate the detection, segmentation, and classification of four reproductive structures of Streptanthus tortuosus (flower buds, flowers, immature fruits, and mature fruits). We used a training data set of 21 digitized herbarium sheets for which the position and outlines of 1036 reproductive structures were annotated manually. We adjusted the hyperparameters of a mask R-CNN (regional convolutional neural network) to this specific task and evaluated the resulting trained models for their ability to count reproductive structures and estimate their size.
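Adapting a mask R-CNN to a custom set of classes (here, background plus the four reproductive structures) typically amounts to replacing the box and mask heads; a torchvision sketch, which does not reproduce the authors' specific hyperparameter adjustments:

```python
# Sketch of adapting torchvision's Mask R-CNN to custom classes
# (background + four reproductive structures).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_structure_detector(num_classes=5):   # background + 4 structures
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, num_classes)
    return model
```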

Results : The main outcome of our study is that the performance of detection and segmentation can vary significantly with: (i) the type of annotations used for training, (ii) the type of reproductive structures, and (iii) the size of the reproductive structures. In the case of Streptanthus tortuosus, the method can provide quite accurate estimates (77.9% of cases) of the number of reproductive structures, which is better estimated for flowers than for immature fruits and buds. The size estimation results are also encouraging, showing a difference of only a few millimeters between the predicted and actual sizes of buds and flowers.

Discussion : This method has great potential for automating the analysis of reproductive structures in high-resolution images of herbarium sheets. Deeper investigations regarding the taxonomic scalability of this approach and its potential improvement will be conducted in future work.

Goëau Hervé, Mora-Fallas Adán, Champ Julien, Love Natalie L Rossington, Mazer Susan J, Mata-Montero Erick, Joly Alexis, Bonnet Pierre

2020-Jun

automated regional segmentation, deep learning, herbarium data, natural history collections, phenological stage annotation, phenophase, regional convolutional neural network, visual data classification

General General

Enhancing Gas Solubility in Nanopores: A Combined Study using Classical Density Functional Theory and Machine Learning.

In Langmuir : the ACS journal of surfaces and colloids

Geometrical confinement has a large impact on gas solubilities in nanoscale pores. This phenomenon is closely associated with heterogeneous catalysis, shale gas extraction, phase separation, etc. Whereas several experimental and theoretical studies have been conducted which provide meaningful insights into the over-solubility and under-solubility of different gases in confined solvents, the microscopic mechanism for regulating the gas solubility remains unclear. Here, we report a hybrid theoretical study for unraveling the regulation mechanism by combining classical density functional theory (CDFT) with machine learning (ML). Specifically, CDFT is employed to predict the solubility of argon in various solvents confined in nanopores of different types and pore widths, and these case studies then supply a valid training set to ML for further investigation. Finally, the dominant parameters that affect the gas solubility are identified, and a criterion is obtained to determine whether a confined gas-solvent system is enhance-beneficial or reduce-beneficial. Our findings provide theoretical guidance for predicting and regulating gas solubilities in nanopores. In addition, the hybrid method proposed in this work sets up a feasible platform for investigating complex interfacial systems with multiple controlling parameters.

Qiao Chongzhi, Yu Xiaochen, Song Xianyu, Zhao Teng, Xu Xiaofei, Zhao Shuangliang, Gubbins Keith E

2020-Jul-05

Public Health Public Health

Efficient GAN-based Chest Radiographs (CXR) augmentation to diagnose coronavirus disease pneumonia.

In International journal of medical sciences

Background: As 2019 ended, coronavirus disease started expanding all over the world. It is a highly transmissible disease that can affect the respiratory tract and can lead to organ failure. In 2020 it was declared by the World Health Organization a "public health emergency of international concern". The current situation of Covid-19 and chest-related diseases has already gone through radical change with the advancement of image processing tools. There is no effective method which can accurately identify all chest-related diseases and tackle the multiple-class problem with reliable results. Method: There are many potentially impactful applications of deep learning to fighting Covid-19 from chest X-ray/CT images; however, most are still in their early stages due to a lack of data sharing, which continues to inhibit overall progress in a variety of medical research problems. Based on COVID-19 radiographical changes in CT images, this work aims to detect the possibility of COVID-19 in the patient. This work provides a significant contribution in terms of GAN-based synthetic data and four different types of deep learning-based models which provided state-of-the-art comparable results. Results: A deep neural network model provides a significant contribution in terms of detecting COVID-19 and provides effective analysis of chest-related diseases with respect to age and gender. Our model achieves 89% accuracy using GAN-based synthetic data and the four different types of deep learning-based models, which provided state-of-the-art comparable results. Conclusion: If the gap in identifying all viral pneumonias is not filled with effective automation of chest disease detection, the healthcare industry may have to bear unfavorable circumstances.

Albahli Saleh

2020

Chest diseases, Coronavirus, Deep learning, Inception-V3, ResNet-152, X-ray

General General

Learning Combined Set Covering and Traveling Salesman Problem

ArXiv Preprint

The Traveling Salesman Problem is one of the most intensively studied combinatorial optimization problems due both to its range of real-world applications and its computational complexity. When combined with the Set Covering Problem, it raises even more issues related to tractability and scalability. We study a combined Set Covering and Traveling Salesman problem and provide a mixed integer programming formulation to solve the problem. Motivated by applications where the optimal policy needs to be updated on a regular basis and repetitively solving this via MIP can be computationally expensive, we propose a machine learning approach to effectively deal with this problem by providing an opportunity to learn from historical optimal solutions that are derived from the MIP formulation. We also present a case study using the vaccine distribution chain of the World Health Organization, and provide numerical results with data derived from four countries in sub-Saharan Africa.

Yuwen Yang, Jayant Rajgopal

2020-07-07

General General

LeafMachine: Using machine learning to automate leaf trait extraction from digitized herbarium specimens.

In Applications in plant sciences

Premise : Obtaining phenotypic data from herbarium specimens can provide important insights into plant evolution and ecology but requires significant manual effort and time. Here, we present LeafMachine, an application designed to autonomously measure leaves from digitized herbarium specimens or leaf images using an ensemble of machine learning algorithms.

Methods and Results : We trained LeafMachine on 2685 randomly sampled specimens from 138 herbaria and evaluated its performance on specimens spanning 20 diverse families and varying widely in resolution, quality, and layout. LeafMachine successfully extracted at least one leaf measurement from 82.0% and 60.8% of high- and low-resolution images, respectively. Of the unmeasured specimens, only 0.9% and 2.1% of high- and low-resolution images, respectively, were visually judged to have measurable leaves.

Conclusions : This flexible autonomous tool has the potential to vastly increase available trait information from herbarium specimens, and inform a multitude of evolutionary and ecological studies.

Weaver William N, Ng Julienne, Laport Robert G

2020-Jun

LeafMachine, computer vision, herbarium digitization, leaf morphology, machine learning

General General

An algorithm competition for automatic species identification from herbarium specimens.

In Applications in plant sciences

Premise : Plant biodiversity is threatened, yet many species remain undescribed. It is estimated that >50% of undescribed species have already been collected and are awaiting discovery in herbaria. Robust automatic species identification algorithms using machine learning could accelerate species discovery.

Methods : To encourage the development of an automatic species identification algorithm, we submitted our Herbarium 2019 data set to the Fine-Grained Visual Categorization sub-competition (FGVC6) hosted on the Kaggle platform. We chose to focus on the flowering plant family Melastomataceae because we have a large collection of imaged herbarium specimens (46,469 specimens representing 683 species) and taxonomic expertise in the family. As is common for herbarium collections, some species in this data set are represented by few specimens and others by many.

Results : In less than three months, the FGVC6 Herbarium 2019 Challenge drew 22 teams who entered 254 models for Melastomataceae species identification. The four best algorithms identified species with >88% accuracy.

Discussion : The FGVC competitions provide a unique opportunity for computer vision and machine learning experts to address difficult species-recognition problems. The Herbarium 2019 Challenge brought together a novel combination of collections resources, taxonomic expertise, and collaboration between botanists and computer scientists.

Little Damon P, Tulig Melissa, Tan Kiat Chuan, Liu Yulong, Belongie Serge, Kaeser-Chen Christine, Michelangeli Fabián A, Panesar Kiran, Guha R V, Ambrose Barbara A

2020-Jun

FGVC, Kaggle, Melastomataceae, artificial intelligence, computer vision, herbarium specimen, machine learning

General General

Generating segmentation masks of herbarium specimens and a data set for training segmentation models using deep learning.

In Applications in plant sciences

Premise : Digitized images of herbarium specimens are highly diverse with many potential sources of visual noise and bias. The systematic removal of noise and minimization of bias must be achieved in order to generate biological insights based on the plants rather than the digitization and mounting practices involved. Here, we develop a workflow and data set of high-resolution image masks to segment plant tissues in herbarium specimen images and remove background pixels using deep learning.

Methods and Results : We generated 400 curated, high-resolution masks of ferns using a combination of automatic and manual tools for image manipulation. We used those images to train a U-Net-style deep learning model for image segmentation, achieving a final Sørensen-Dice coefficient of 0.96. The resulting model can automatically, efficiently, and accurately segment massive data sets of digitized herbarium specimens, particularly for ferns.
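The reported Sørensen-Dice coefficient compares a predicted plant-tissue mask with its ground-truth mask; a short helper under the standard definition:

```python
# Sørensen-Dice coefficient between a predicted plant-tissue mask and the
# ground-truth mask (both binary arrays).
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    pred = np.asarray(pred_mask).astype(bool)
    true = np.asarray(true_mask).astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```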

Conclusions : The application of deep learning in herbarium sciences requires transparent and systematic protocols for generating training data so that these labor-intensive resources can be generalized to other deep learning applications. Segmentation ground-truth masks are hard-won data, and we share these data and the model openly in the hopes of furthering model training and transfer learning opportunities for broader herbarium applications.

White Alexander E, Dikow Rebecca B, Baugh Makinnon, Jenkins Abigail, Frandsen Paul B

2020-Jun

U‐Net, deep learning, digitized herbarium specimens, ferns, machine learning, semantic segmentation

General General

GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens.

In Applications in plant sciences

Premise : The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens.

Methods and Results : We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images.

Conclusions : We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features.

Ott Tankred, Palm Christoph, Vogt Robert, Oberprieler Christoph

2020-Jun

TensorFlow, deep learning, herbarium specimens, object detection, visual recognition

General General

Envisioning the expertise of the future.

In EFSA journal. European Food Safety Authority

Envisioning the expertise of the future in the field of food safety is challenging, as society, science and the way we work and live are changing and advancing faster than ever before. Future challenges call for multiple and multidimensional responses, some of which were addressed at EFSA's Third Scientific Conference. The participants indicated that risk assessment bodies involved in food safety, such as EFSA, must recognise that data, methods and expertise (i.e. people) are the three basic elements underlying risk assessments. These elements need constant consideration and adaptation to ensure preparedness for the future. Moreover, it should be recognised that knowledge and expertise are distributed throughout society and are thus not limited to scientists. Aspects considered during the breakout session included: (1) increased complexity, (2) the crowd workforce, (3) citizen science, (4) stakeholder engagement, (5) talent pools and (6) entrepreneurship. To account for future challenges, behavioural, attitudinal and cultural changes must be implemented successfully. At a societal level, people increasingly work hand in hand with robotics and artificial intelligence in sharing expertise and producing outcomes. This requires consideration of ethics and values, both for organisations and individuals. At an organisational level, risk assessment bodies will have to tap into new talent pools and new solutions for a more fluid and ad hoc-based workforce. Future risk assessment bodies will have to actively engage with stakeholders when performing their assessments. It is expected that the impacts of citizen science and involvement of the crowd will become part of risk assessment practices. Consequently, EFSA will have to continue to invest in massive, ongoing skills development programmes. At an individual level, potential recruits will need to be assessed against a whole new set of competencies and capabilities: technical competencies in data science, computational science and artificial intelligence, alongside a large set of soft skills.

Naydenova Svetla, de Luca Lucia, Yamadjako Selomey

2019-Jul

citizen science, crowdsourcing, expertise, food safety, scientific advice, stakeholder engagement

General General

Working with a new kind of team: harnessing the wisdom of the crowd in trial identification.

In EFSA journal. European Food Safety Authority

BACKGROUND : At a time when research output is expanding exponentially, citizen science, the process of engaging willing volunteers in scientific research activities, has an important role to play in helping to manage the information overload. It also creates a model of contribution that enables anyone with an interest in health to contribute meaningfully and in a way that is flexible. Citizen science models have been shown to be extremely effective in other domains such as astronomy and ecology.

METHODS : Cochrane Crowd (crowd.cochrane.org) is a citizen science platform that offers contributors a range of microtasks, designed to help identify and describe health research. The platform enables contributors to dive into needed tasks that capture and describe health evidence. Brief interactive training modules and agreement algorithms help to ensure accurate collective decision making. Contributors can work online or offline; they can view their activity and performance in detail. They can choose to work in topic areas of interest to them, such as dementia or diabetes, and as contributors progress, they unlock milestone rewards and new tasks. Cochrane Crowd was launched in May 2016. It now hosts a range of microtasks which help to identify health evidence and then describe it according to a PICO (Population; Intervention; Comparator; Outcome) ontology. The microtasks are either at 'citation level', in which a contributor is presented with a title and abstract to classify or annotate, or at the full-text level, in which a whole or a portion of a full paper is displayed.

RESULTS : To date (March 2019), the Cochrane Crowd community comprises over 12,000 contributors from more than 180 countries. Almost 3 million individual classifications have been made, and around 70,000 reports of randomised trials have been identified for Cochrane's Central Register of Controlled Trials. Performance evaluations to assess crowd accuracy have shown crowd sensitivity is 99.1%, and crowd specificity is 99%. Main motivations for involvement are that people want to help Cochrane, and people want to learn.

CONCLUSION : This model of contribution is now an established part of Cochrane's effort to manage the deluge of information produced in a way that offers contributors a chance to get involved, learn and play a crucial role in evidence production. Our experience has shown that people want to be involved and that, with little or no prior experience, they can do certain tasks to a very high degree of collective accuracy. Using a citizen science approach effectively has enabled Cochrane to better support its expert community through better use of human effort. It has also generated large, high-quality data sets on a scale not achieved before, which has provided training material for machine learning routines. Citizen science is not an easy option, but, performed well, it brings a wealth of advantages to both the citizen and the organisation.

Noel-Storr Anna

2019-Jul

citizen science, crowdsourcing, meta‐analysis, microtask, randomised controlled trial, systematic review

General General

Predicting toxicity of chemicals: software beats animal testing.

In EFSA journal. European Food Safety Authority

We earlier created a large machine-readable database of 10,000 chemicals and 800,000 associated studies by natural language processing of the public parts of Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) registrations until December 2014. This database was used to assess the reproducibility of the six most frequently used Organisation for Economic Co-operation and Development (OECD) guideline tests. These tests consume 55% of all animals in safety testing in Europe, i.e. about 600,000 animals. For the 350-750 chemicals with multiple results per test, reproducibility (balanced accuracy) was 81%, and 69% of toxic substances were found again in a repeat experiment (sensitivity of 69%). Inspired by the increasingly used read-across approach, we created a new type of QSAR, which is based on similarity of chemicals and not on chemical descriptors. A landscape of the chemical universe was calculated from 10 million structures in which, based on Tanimoto indices, similar chemicals lie close together and dissimilar chemicals far apart. This allows placing any chemical of interest into the map and evaluating the information available for surrounding chemicals. In a data fusion approach, in which 74 different properties were taken into consideration, machine learning (random forest) allowed a fivefold cross-validation for 190,000 (non-) hazard labels of chemicals for which nine hazards were predicted. The balanced accuracy of this approach was 87% with a sensitivity of 89%. Each prediction comes with a certainty measure based on the homogeneity of data and distance of neighbours. Ongoing developments and future opportunities are discussed.
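
The Tanimoto index mentioned above is a set-overlap similarity computed on binary structural fingerprints. A minimal sketch of the computation (the fingerprints here are random placeholders, not the chemicals or descriptors used in the study):

```python
import numpy as np

def tanimoto(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Tanimoto (Jaccard) similarity between two binary fingerprint vectors."""
    a = fp_a.astype(bool)
    b = fp_b.astype(bool)
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

# Two hypothetical 2048-bit fingerprints; in practice these would come from
# a cheminformatics toolkit rather than a random generator.
rng = np.random.default_rng(0)
fp1 = rng.integers(0, 2, 2048)
fp2 = rng.integers(0, 2, 2048)
print(round(tanimoto(fp1, fp2), 3))
```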

Hartung Thomas

2019-Jul

alternatives to animal testing, computational toxicology, read‐across, risk assessment

General General

Fast computation of genome-metagenome interaction effects.

In Algorithms for molecular biology : AMB

Motivation : Association studies have been widely used to search for associations between observations of common genetic variants and a given phenotype. However, it is now generally accepted that genes and environment must be examined jointly when estimating phenotypic variance. In this work we consider two types of biological markers: genotypic markers, which characterize an observation in terms of inherited genetic information, and metagenomic markers, which are related to the environment. Both types of markers are available in their millions and can be used to characterize any observation uniquely.

Objective : Our focus is on detecting interactions between groups of genetic and metagenomic markers in order to gain a better understanding of the complex relationship between environment and genome in the expression of a given phenotype.

Contributions : We propose a novel approach for efficiently detecting interactions between complementary datasets in a high-dimensional setting with a reduced computational cost. The method, named SICOMORE, reduces the dimension of the search space by selecting a subset of supervariables in the two complementary datasets. These supervariables are given by a weighted group structure defined on sets of variables at different scales. A Lasso selection is then applied on each type of supervariable to obtain a subset of potential interactions that will be explored via linear model testing.
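
As a rough illustration of this two-step idea (a Lasso selection on each data type followed by linear-model testing of cross-products), here is a hedged Python sketch; it is not the SICOMORE R package, and all variable names and data are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LassoCV
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
G = rng.normal(size=(n, 50))   # hypothetical genotypic supervariables
M = rng.normal(size=(n, 40))   # hypothetical metagenomic supervariables
y = G[:, 0] + M[:, 3] + 0.8 * G[:, 0] * M[:, 3] + rng.normal(scale=0.5, size=n)

# Step 1: Lasso selection on each complementary data set.
sel_g = np.flatnonzero(LassoCV(cv=5).fit(G, y).coef_)
sel_m = np.flatnonzero(LassoCV(cv=5).fit(M, y).coef_)

# Step 2: test each candidate interaction in a linear model.
for i in sel_g:
    for j in sel_m:
        X = sm.add_constant(np.column_stack([G[:, i], M[:, j], G[:, i] * M[:, j]]))
        p_interaction = sm.OLS(y, X).fit().pvalues[-1]
        if p_interaction < 0.05:
            print(f"genomic {i} x metagenomic {j}: p = {p_interaction:.3g}")
```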

Results : We compare SICOMORE with other approaches in simulations, with varying sample sizes, noise, and numbers of true interactions. SICOMORE exhibits convincing results in terms of recall, as well as competitive performance with respect to running time. The method is also used to detect interactions between genomic markers in Medicago truncatula and metagenomic markers in its rhizosphere bacterial community.

Software availability : An R package is available [4], along with its documentation and associated scripts, allowing the reader to reproduce the results presented in the paper.

Guinot Florent, Szafranski Marie, Chiquet Julien, Zancarini Anouk, Le Signor Christine, Mougel Christophe, Ambroise Christophe

2020

Dimensionality reduction, GWAS, Gene-environment interactions, Genetic and metagenomic markers, Statistical machine learning, Variable selection

General General

Artificial Intelligence and Machine Learning Applied at the Point of Care.

In Frontiers in pharmacology

Introduction : The increasing availability of healthcare data and rapid development of big data analytic methods have opened new avenues for use of Artificial Intelligence (AI)- and Machine Learning (ML)-based technology in medical practice. However, applications at the point of care are still scarce.

Objective : Review and discuss case studies to understand current capabilities for applying AI/ML in the healthcare setting, and regulatory requirements in the US, Europe and China.

Methods : A targeted narrative literature review of AI/ML based digital tools was performed. Scientific publications (identified in PubMed) and grey literature (identified on the websites of regulatory agencies) were reviewed and analyzed.

Results : From the regulatory perspective, AI/ML-based solutions can be considered medical devices (i.e., Software as Medical Device, SaMD). A case series of SaMD is presented. First, tools for monitoring and remote management of chronic diseases are presented. Second, imaging applications for diagnostic support are discussed. Finally, clinical decision support tools to facilitate the choice of treatment and precision dosing are reviewed. While tested and validated algorithms for precision dosing exist, their implementation at the point of care is limited, and their regulatory and commercialization pathway is not clear. Regulatory requirements depend on the level of risk associated with the use of the device in medical practice, and can be classified into administrative (manufacturing and quality control), software-related (design, specification, hazard analysis, architecture, traceability, software risk analysis, cybersecurity, etc.), clinical evidence (including patient perspectives in some cases), non-clinical evidence (dosing validation and biocompatibility/toxicology) and other requirements, such as benefit-to-risk determination, risk assessment and mitigation. Requirements in the US and Europe are generally aligned. China additionally requires that the clinical evidence is applicable to the Chinese population and recommends that a third-party central laboratory evaluates the clinical trial results.

Conclusions : The number of promising AI/ML-based technologies is increasing, but few have been implemented widely at the point of care. The need for external validation, implementation logistics, and data exchange and privacy remain the main obstacles.

Angehrn Zuzanna, Haldna Liina, Zandvliet Anthe S, Gil Berglund Eva, Zeeuw Joost, Amzal Billy, Cheung S Y Amy, Polasek Thomas M, Pfister Marc, Kerbusch Thomas, Heckman Niedre M

2020

Artificial Intelligence and Machine Learning in medical practice, chronic disease management, clinical decision support tools, model-informed precision dosing, precision dosing, real-world evidence, software as a medical device

Radiology Radiology

A Deep Learning-Based Model for Classification of Different Subtypes of Subcortical Vascular Cognitive Impairment With FLAIR.

In Frontiers in neuroscience ; h5-index 72.0

Deep learning methods have shown their great capability of extracting high-level features from images and have recently been used for effective medical imaging classification. However, training samples of medical images are restricted by the number of patients as well as medical ethics issues, making it hard to train the neural networks. In this paper, we propose a novel end-to-end three-dimensional (3D) attention-based residual neural network (ResNet) architecture to classify different subtypes of subcortical vascular cognitive impairment (SVCI) with single-shot T2-weighted fluid-attenuated inversion recovery (FLAIR) sequence. Our aim is to develop a convolutional neural network to provide a convenient and effective way to assist doctors in the diagnosis and early treatment of the different subtypes of SVCI. The experimental data in this paper were collected from 242 patients from the Neurology Department of Renji Hospital, including 78 with amnestic mild cognitive impairment (a-MCI), 70 with nonamnestic MCI (na-MCI), and 94 with no cognitive impairment (NCI). The accuracy of our proposed model has reached 98.6% on a training set and 97.3% on a validation set. The accuracy on a held-out test set reaches 93.8%, demonstrating robustness. Our proposed method can thus provide a convenient and effective way to assist doctors in the diagnosis and early treatment of SVCI.
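
For context, the residual building block underlying such architectures can be sketched in a few lines of PyTorch; the attention modules and exact configuration of the paper are not reproduced here, so this is a generic 3D residual block only:

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """Plain 3D residual block; the paper's attention mechanism is omitted."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity skip connection

# One volume in (batch, channels, depth, height, width) layout.
x = torch.randn(1, 16, 32, 64, 64)
print(ResidualBlock3D(16)(x).shape)  # torch.Size([1, 16, 32, 64, 64])
```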

Chen Qi, Wang Yao, Qiu Yage, Wu Xiaowei, Zhou Yan, Zhai Guangtao

2020

cognitive impairment, convolutional neural network, deep learning, magnetic resonance imaging, subcortical ischemic vascular disease

General General

Enhancing droplet-based single-nucleus RNA-seq resolution using the semi-supervised machine learning classifier DIEM.

In Scientific reports ; h5-index 158.0

Single-nucleus RNA sequencing (snRNA-seq) measures gene expression in individual nuclei instead of cells, allowing for unbiased cell type characterization in solid tissues. We observe that snRNA-seq is commonly subject to contamination by high amounts of ambient RNA, which can lead to biased downstream analyses, such as identification of spurious cell types if overlooked. We present a novel approach to quantify contamination and filter droplets in snRNA-seq experiments, called Debris Identification using Expectation Maximization (DIEM). Our likelihood-based approach models the gene expression distribution of debris and cell types, which are estimated using EM. We evaluated DIEM using three snRNA-seq data sets: (1) human differentiating preadipocytes in vitro, (2) fresh mouse brain tissue, and (3) human frozen adipose tissue (AT) from six individuals. All three data sets showed evidence of extranuclear RNA contamination, and we observed that existing methods fail to account for contaminated droplets and lead to spurious cell types. When compared to filtering using these state-of-the-art methods, DIEM better removed droplets containing high levels of extranuclear RNA and led to higher quality clusters. Although DIEM was designed for snRNA-seq, our clustering strategy also successfully filtered single-cell RNA-seq data. To conclude, our novel method DIEM removes debris-contaminated droplets from single-cell-based data quickly and effectively, leading to cleaner downstream analysis. Our code is freely available for use at https://github.com/marcalva/diem.
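
The core idea of modelling droplets as a mixture of a debris profile and cell-type profiles fitted with EM can be illustrated with a toy two-component multinomial mixture; this is a simplified sketch, not the DIEM implementation:

```python
import numpy as np

def em_debris_filter(X, n_iter=50, seed=0):
    """X: droplets x genes count matrix. Returns P(debris) per droplet."""
    rng = np.random.default_rng(seed)
    n, g = X.shape
    # Initialize two multinomial expression profiles and mixing weights.
    profiles = rng.dirichlet(np.ones(g), size=2)   # shape (2, g)
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-droplet log-likelihood under each component.
        log_lik = X @ np.log(profiles.T + 1e-12) + np.log(weights)  # (n, 2)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and expression profiles.
        weights = resp.mean(axis=0)
        profiles = (resp.T @ X) + 1e-12
        profiles /= profiles.sum(axis=1, keepdims=True)
    # Heuristic: call the component with the fewer total counts "debris".
    debris = int(np.argmin(resp.T @ X.sum(axis=1)))
    return resp[:, debris]

X = np.random.default_rng(1).poisson(1.0, size=(500, 200))  # toy count matrix
keep = em_debris_filter(X) < 0.5   # retain droplets unlikely to be debris
print(keep.sum(), "droplets retained")
```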

Alvarez Marcus, Rahmani Elior, Jew Brandon, Garske Kristina M, Miao Zong, Benhammou Jihane N, Ye Chun Jimmie, Pisegna Joseph R, Pietiläinen Kirsi H, Halperin Eran, Pajukanta Päivi

2020-Jul-03

General General

Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions.

In Nature communications ; h5-index 260.0

Global climate models represent small-scale processes such as convection using subgrid models known as parameterizations, and these parameterizations contribute substantially to uncertainty in climate projections. Machine learning of new parameterizations from high-resolution model output is a promising approach, but such parameterizations have been prone to issues of instability and climate drift, and their performance for different grid spacings has not yet been investigated. Here we use a random forest to learn a parameterization from coarse-grained output of a three-dimensional high-resolution idealized atmospheric model. The parameterization leads to stable simulations at coarse resolution that replicate the climate of the high-resolution simulation. Retraining for different coarse-graining factors shows the parameterization performs best at smaller horizontal grid spacings. Our results yield insights into parameterization performance across length scales, and they also demonstrate the potential for learning parameterizations from global high-resolution simulations that are now emerging.
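
Schematically, the learning setup amounts to regressing subgrid tendencies on the coarse-grained model state; a hedged scikit-learn sketch with placeholder data (not the authors' variables or model output):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Placeholder data: each row is the coarse-grained state of one grid column
# (e.g., temperature and humidity profiles); each target row is the subgrid
# tendency that the parameterization must supply to the coarse model.
coarse_state = rng.normal(size=(5000, 60))
subgrid_tendency = coarse_state[:, :30] * 0.1 + rng.normal(scale=0.01, size=(5000, 30))

model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(coarse_state[:4000], subgrid_tendency[:4000])

# At run time, the coarse-resolution model would query the fitted forest
# for the subgrid tendency of each column at each time step.
print(model.predict(coarse_state[4000:4001]).shape)  # (1, 30)
```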

Yuval Janni, O’Gorman Paul A

2020-Jul-03

Public Health Public Health

Dual RNA-seq of Orientia tsutsugamushi informs on host-pathogen interactions for this neglected intracellular human pathogen.

In Nature communications ; h5-index 260.0

Studying emerging or neglected pathogens is often challenging due to insufficient information and absence of genetic tools. Dual RNA-seq provides insights into host-pathogen interactions, and is particularly informative for intracellular organisms. Here we apply dual RNA-seq to Orientia tsutsugamushi (Ot), an obligate intracellular bacterium that causes the vector-borne human disease scrub typhus. Half the Ot genome is composed of repetitive DNA, and there is minimal collinearity in gene order between strains. Integrating RNA-seq, comparative genomics, proteomics, and machine learning to study the transcriptional architecture of Ot, we find evidence for wide-spread post-transcriptional antisense regulation. Comparing the host response to two clinical isolates, we identify distinct immune response networks for each strain, leading to predictions of relative virulence that are validated in a mouse infection model. Thus, dual RNA-seq can provide insight into the biology and host-pathogen interactions of a poorly characterized and genetically intractable organism such as Ot.

Mika-Gospodorz Bozena, Giengkam Suparat, Westermann Alexander J, Wongsantichon Jantana, Kion-Crosby Willow, Chuenklin Suthida, Wang Loo Chien, Sunyakumthorn Piyanate, Sobota Radoslaw M, Subbian Selvakumar, Vogel Jörg, Barquist Lars, Salje Jeanne

2020-Jul-03

Ophthalmology Ophthalmology

Classification of pachychoroid disease on ultrawide-field indocyanine green angiography using auto-machine learning platform.

In The British journal of ophthalmology

AIMS : Automatic identification of pachychoroid may be used as an adjunctive method to confirm the condition and to help guide treatment of macular diseases. This study investigated the feasibility of classifying pachychoroid disease on ultra-widefield indocyanine green angiography (UWF ICGA) images using an automated machine-learning platform.

METHODS : Two models were trained with a set including 783 UWF ICGA images of patients with pachychoroid (n=376) and non-pachychoroid (n=349) diseases using the AutoML Vision (Google). Pachychoroid was confirmed using quantitative and qualitative choroidal morphology on multimodal imaging by two retina specialists. Model 1 used the original images, and Model 2 used left-eye images horizontally flipped to the orientation of the right eye to increase accuracy by equalising the mirror-image relationship between the right and left eyes. The performances were compared with those of human experts.
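
The horizontal flip used for Model 2 is a simple pre-processing step; a minimal sketch, with hypothetical file paths and laterality labels:

```python
from pathlib import Path
from PIL import Image, ImageOps

# Hypothetical layout: ICGA images stored per eye; left-eye images are
# mirrored so that all images share the right-eye orientation.
src = Path("icga/left_eye")
dst = Path("icga/left_eye_flipped")
dst.mkdir(parents=True, exist_ok=True)

for img_path in src.glob("*.png"):
    img = Image.open(img_path)
    ImageOps.mirror(img).save(dst / img_path.name)  # horizontal flip
```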

RESULTS : In total, 284, 279 and 220 images of central serous chorioretinopathy, polypoidal choroidal vasculopathy and neovascular age-related maculopathy were included. The precision and recall were 87.84% and 87.84% for Model 1 and 89.19% and 89.19% for Model 2, which were comparable to the results of the retinal specialists (90.91% and 95.24%) and superior to those of ophthalmic residents (68.18% and 92.50%).

CONCLUSIONS : An automated machine-learning platform can be used for the classification of pachychoroid on UWF ICGA images after careful consideration of the pachychoroid definition and of the platform's limitations, including unstable performance on medical images.

Kim In Ki, Lee Kook, Park Jae Hyun, Baek Jiwon, Lee Won Ki

2020-Jul-03

Retina

Pathology Pathology

Digital pathology and artificial intelligence will be key to supporting clinical and academic cellular pathology through COVID-19 and future crises: the PathLAKE consortium perspective.

In Journal of clinical pathology

The measures to control the COVID-19 outbreak will likely remain a feature of our working lives until a suitable vaccine or treatment is found. The pandemic has had a substantial impact on clinical services, including cancer pathways. Pathologists are working remotely in many circumstances to protect themselves, colleagues, family members and the delivery of clinical services. The effects of COVID-19 on research and clinical trials have also been significant with changes to protocols, suspensions of studies and redeployment of resources to COVID-19. In this article, we explore the specific impact of COVID-19 on clinical and academic pathology and explore how digital pathology and artificial intelligence can play a key role to safeguarding clinical services and pathology-based research in the current climate and in the future.

Browning Lisa, Colling Richard, Rakha Emad, Rajpoot Nasir, Rittscher Jens, James Jacqueline A, Salto-Tellez Manuel, Snead David R J, Verrill Clare

2020-Jul-03

computer systems, image processing, computer-assisted, pathology, surgical

General General

Perceived age and perceived health among a Chinese cohort: does it mean the same thing?

In International journal of cosmetic science

BACKGROUND & AIMS : Previous investigations have examined parameters affecting age perception across several ethnicities. Perceived health has been a more recent focus for Caucasian skin, yet little is known about the skin features used to estimate the health status of Chinese women, and we aimed to investigate whether these cues are the same as those used for age perception.

METHODS : Age and health appearance of 276 Chinese female volunteers were estimated from their photographs by 1025 naïve female Chinese graders aged 20-69 years. Models were built to predict perceived age and health from measured topographic, colour and biophysical variables, in two subsets of the studied volunteers: below and above 50 years. Machine learning-based predictive models for age and health perception were built on the collected data, and the interpretability of the models was established by measuring feature importance.

RESULTS : Age perception was mostly driven by topographic features, particularly eye bags and eye lid sagging in the group below 50 years old. Wrinkles, notably from the lower part of the face and oval of the lower face, were found to be more relevant in the group above 50 years. Health appearance was primarily signaled by skin imperfections and global pigmentation in the subset below 50 years, while colour related parameters and skin hydration acted as health cues for the subset above 50 years.

CONCLUSION : Distinct skin features acted as cues for age perception and/or health perception and varied by age subset. Their contribution should be borne in mind when designing products for "younger looking skin" and "healthier looking skin".

Messaraa Cyril, Richard Thibaud J C, Walsh Melissa, Doyle Leah, O’Connor Carla, Robertson Nicola, Mansfield Anna, Hurley Sarah, Mavon Alain, Grenz Annika

2020-Jul-04

China, perceived age, perceived health, skin ageing, skin health

Radiology Radiology

Ischemia and outcome prediction by cardiac CT based machine learning.

In The international journal of cardiovascular imaging

Cardiac CT using non-enhanced coronary artery calcium scoring (CACS) and coronary CT angiography (cCTA) has been proven to provide excellent evaluation of coronary artery disease (CAD), combining anatomical and morphological assessment for cardiovascular risk stratification and therapeutic decision-making, in addition to providing prognostic value for the occurrence of adverse cardiac outcomes. In recent years, artificial intelligence (AI) and, in particular, the application of machine learning (ML) algorithms, have been promoted in cardiovascular CT imaging for improved decision pathways, risk stratification, and outcome prediction in a more objective, reproducible, and rational manner. AI is rooted in computer science and mathematics and relies on big data, high-performance computational infrastructure, and applied algorithms. The application of ML in daily routine clinical practice may hold potential to improve imaging workflow and to promote better outcome prediction and more effective decision-making in patient management. Moreover, cardiac CT, including CACS and cCTA, represents a field wherein ML may be particularly useful. Thus, the purpose of this review is to give a short overview of the contemporary state of ML-based algorithms in cardiac CT, as well as to provide clinicians with currently available scientific data on clinical validation and implementation of these algorithms for the prediction of ischemia-specific CAD and cardiovascular outcome.

Brandt Verena, Emrich Tilman, Schoepf U Joseph, Dargis Danielle M, Bayer Richard R, De Cecco Carlo N, Tesche Christian

2020-Jul-04

Coronary CT angiography, Coronary artery disease, Machine learning, Outcome prediction

General General

Machine Learning for Work Disability Prevention: Introduction to the Special Series.

In Journal of occupational rehabilitation

Rapid development in computer technology has led to sophisticated methods of analyzing large datasets with the aim of improving human decision making. Artificial Intelligence and Machine Learning (ML) approaches hold tremendous potential for solving complex real-world problems such as those faced by stakeholders attempting to prevent work disability. These techniques are especially appealing in work disability contexts that collect large amounts of data such as workers' compensation settings, insurance companies, large corporations, and health care organizations, among others. However, the approaches require thorough evaluation to determine if they add value to traditional statistical approaches. In this special series of articles, we examine the role and value of ML in the field of work disability prevention and occupational rehabilitation.

Gross Douglas P, Steenstra Ivan A, Harrell Frank E, Bellinger Colin, Zaïane Osmar

2020-Jul-04

Artificial Intelligence, Classification, Compensation and redress, Prediction, Rehabilitation

Surgery Surgery

Adherence Tracking With Smart Watches for Shoulder Physiotherapy in Rotator Cuff Pathology: Protocol for a Longitudinal Cohort Study.

In JMIR research protocols ; h5-index 26.0

BACKGROUND : Physiotherapy is essential for the successful rehabilitation of common shoulder injuries and following shoulder surgery. Patients may receive some training and supervision for shoulder physiotherapy through private pay or private insurance, but they are typically responsible for performing most of their physiotherapy independently at home. It is unknown how often patients perform their home exercises and if these exercises are performed correctly without supervision. There are no established tools for measuring this. It is, therefore, unclear if the full benefit of shoulder physiotherapy treatments is being realized.

OBJECTIVE : The proposed research will (1) validate a smartwatch and machine learning (ML) approach for evaluating adherence to shoulder exercise participation and technique in a clinical patient population with rotator cuff pathology; (2) quantify the rate of home physiotherapy adherence, determine the effects of adherence on recovery, and identify barriers to successful adherence; and (3) develop and pilot test an ethically conscious adherence-driven rehabilitation program that individualizes patient care based on their capacity to effectively participate in their home physiotherapy.

METHODS : This research will be conducted in 2 phases. The first phase is a prospective longitudinal cohort study, involving 120 patients undergoing physiotherapy for rotator cuff pathology. Patients will be issued a smartwatch that will record 9-axis inertial sensor data while they perform physiotherapy exercises both in the clinic and in the home setting. The data collected in the clinic under supervision will be used to train and validate our ML algorithms that classify shoulder physiotherapy exercise. The validated algorithms will then be used to assess home physiotherapy adherence from the inertial data collected at home. Validated outcome measures, including the Disabilities of the Arm, Shoulder, and Hand questionnaire; Numeric Pain Rating Scale; range of motion; shoulder strength; and work status, will be collected pretreatment, monthly through treatment, and at a final follow-up of 12 months. We will then relate improvement in patient outcomes to measured physiotherapy adherence and patient baseline variables in univariate and multivariate analyses. The second phase of this research will involve the evaluation of a novel rehabilitation program in a cohort of 20 patients. The program will promote patient physiotherapy engagement via the developed technology and support adherence-driven care decisions.

RESULTS : As of December 2019, 71 patients were screened for enrollment in the noninterventional validation phase of this study; 65 patients met the inclusion and exclusion criteria. Of these, 46 patients consented and 19 declined to participate in the study. Only 2 patients de-enrolled from the study and data collection is ongoing for the remaining 44.

CONCLUSIONS : This study will provide new and important insights into shoulder physiotherapy adherence, the relationship between adherence and recovery, barriers to better adherence, and methods for addressing them.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) : DERR1-10.2196/17841.

Burns David, Razmjou Helen, Shaw James, Richards Robin, McLachlin Stewart, Hardisty Michael, Henry Patrick, Whyne Cari

2020-Jul-05

machine learning, rehabilitation, rotator cuff, treatment adherence and compliance, wearable electronic devices

General General

Use of theory to guide development and application of sensor technologies in Nursing.

In Nursing outlook ; h5-index 33.0

Sensor technologies for health care, research, and consumers have expanded and evolved rapidly. Many technologies developed in commercial or engineering spaces lack theoretical grounding and scientific evidence to support their need, safety, and efficacy. Theory is a mechanism for synthesizing and guiding knowledge generation for the discipline of nursing, including the design, implementation, and evaluation of sensors and related technologies such as artificial intelligence and machine learning. In this paper, three nurse scientists summarize their presentations at the Council for the Advancement of Nursing Science 2019 Advanced Methods Conference on Expanding Science of Sensor Technology in Research, discussing the theoretical underpinnings of sensor technology development and use in nursing research and practice. Multiple theories with diverse epistemological roots guide decision-making about whether or not to apply sensors to a given use; development of, components of, and mechanisms by which sensor technologies are expected to work; and possible outcomes.

Gance-Cleveland Bonnie, McDonald Catherine C, Walker Rachel K

2020-Jun-30

Sensor technologies, Theory guided decision-making in sensor technologies, Theory guided technology development

General General

Machine learning algorithms to predict early pregnancy loss after in vitro fertilization-embryo transfer with fetal heart rate as a strong predictor.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE : According to previous studies, after in vitro fertilization-embryo transfer (IVF-ET) there is a high early pregnancy loss (EPL) rate. The objectives of this study were to construct a prediction model of embryonic development by using machine learning algorithms based on historical case data, so that doctors can make more accurate recommendations on the number of patient follow-ups, and to provide decision support for doctors who are relatively inexperienced in clinical practice.

METHODS : We analyzed the significance of the same type of features between ongoing pregnancy samples and EPL samples. At the same time, by analyzing the correlation between days after embryo transfer (ETD) and fetal heart rate (FHR) of those normal embryo samples, a regression model between the two was established to obtain an FHR model of normal development, and residual analysis was used to further clarify the importance of FHR in predicting pregnancy outcome. Finally, we applied six representative machine learning algorithms including Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), Back Propagation Neural Network (BNN), XGBoost and Random Forest (RF) to build prediction models. Sensitivity was selected to evaluate the prediction results, and the accuracy of each algorithm under conditions with and without FHR was compared as well.
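
The comparison of classifiers with and without the FHR feature can be sketched with scikit-learn; the features and labels below are synthetic placeholders, not the study data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
fhr = rng.normal(150, 15, n)            # placeholder fetal heart rate values
other = rng.normal(size=(n, 5))         # placeholder clinical features
y = (fhr + other[:, 0] * 5 + rng.normal(0, 10, n) < 140).astype(int)  # 1 = EPL

X_with = np.column_stack([fhr, other])
X_without = other

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    for label, X in [("with FHR", X_with), ("without FHR", X_without)]:
        score = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
        print(f"{name:3s} {label:11s} recall = {score:.2f}")
```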

RESULTS : There were statistically significant differences in the same type of features between ongoing pregnancy samples and EPL samples, which could serve as predictors. FHR, whose normal development showed a strong correlation with ETD, had great predictive value for embryonic development. Among the six predictive models, Random Forest achieved the highest accuracy, with recall and F1 reaching 97% and AUC reaching 0.97 when FHR was taken into account as a feature. In addition, Random Forest had a higher prediction accuracy for samples with longer ETD: its accuracy could reach 99% when predicting those at 10 weeks after embryo transfer.

CONCLUSION : In this study, we established and compared six classification models to accurately predict EPL after the appearance of embryonic cardiac activity in patients undergoing IVF-ET. Finally, the Random Forest model outperformed the others. Implementing the Random Forest model in a clinical environment can assist doctors in making clinical decisions.

Liu Lijue, Jiao Yongxia, Li Xihong, Ouyang Yan, Shi Danni

2020-Jun-25

Fetal heart rate, In vitro fertilization-embryo transfer, Machine learning, Random forest

Public Health Public Health

Characterizing vaping posts on instagram by using unsupervised machine learning.

In International journal of medical informatics ; h5-index 49.0

Electronic cigarette (e-cigarette) usage has surged substantially across the globe, particularly among adolescents and young adults. The ever-increasing prevalence of social media makes it highly convenient to access and engage with content on numerous substances, including e-cigarettes. A comprehensive dataset of 560,414 image posts with a mention of #vaping (shared from 1 June 2019 to 31 October 2019) was retrieved by using the Instagram application-programming interface. Deep neural networks were used to extract image features on which unsupervised machine-learning methods were leveraged to cluster and subsequently categorize the images. Descriptive analysis of associated metadata was further conducted to assess the influence of different entities and the use of hashtags within different categories. Seven distinct categories of vaping-related images were identified. The largest share of the images (40.4 %) depicted e-liquids, followed by e-cigarettes (15.4 %). Around one-tenth (9.9 %) of the dataset consisted of photos with person(s). Considering the number of likes and comments, images portraying person(s) gained the highest engagement. In almost every category, business accounts shared more posts on average compared to individual accounts. The findings illustrate the high degree of e-cigarette promotion on a social platform prevalent among youth. Regulatory authorities should enforce policies to restrict product promotion in youth-targeted social media, as well as require measures to prevent underage users' access to this content. Furthermore, a stronger presence of anti-tobacco portrayals on Instagram by public health agencies and anti-tobacco campaigners is needed.
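
A minimal sketch of the feature-extraction-plus-clustering pipeline (pretrained CNN embeddings followed by k-means); the backbone, number of clusters and folder name are assumptions, not the study's exact setup:

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path
from sklearn.cluster import KMeans

# Pretrained ResNet-50 with the classification head removed -> 2048-d embeddings.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

features = []
with torch.no_grad():
    for path in Path("vaping_images").glob("*.jpg"):   # hypothetical image folder
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        features.append(backbone(x).squeeze(0).numpy())

labels = KMeans(n_clusters=7, random_state=0).fit_predict(np.stack(features))
print(np.bincount(labels))  # images per cluster, to be inspected and categorized
```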

Ketonen Vili, Malik Aqdas

2020-Jun-20

Adolescents, Electronic cigarettes, Instagram, Machine-learning, Photos, Social media, Young adults, e-Cigarettes

General General

The novel approaches to classify cyclist accident injury-severity: Hybrid fuzzy decision mechanisms.

In Accident; analysis and prevention

In this study, two novel fuzzy decision approaches, where the fuzzy logic (FL) model was revised with the C4.5 decision tree (DT) algorithm, were applied to the classification of cyclist injury-severity in bicycle-vehicle accidents. The study aims to evaluate two main research topics. The first one is investigation of the effect of road infrastructure, road geometry, street, accident, atmospheric and cyclist-related parameters on the classification of cyclist injury-severity, similarly to other studies in the literature. The second one is examination of the performance of the new fuzzy decision approaches described in detail in this study for the classification of cyclist injury-severity. For this purpose, the data set containing bicycle-vehicle accidents in 2013-2017 was analyzed with the classic C4.5 algorithm and two different hybrid fuzzy decision mechanisms, namely DT-based converted FL (DT-CFL) and novel DT-based revised FL (DT-RFL). The model performances were compared according to their accuracy, precision, recall, and F-measure values. The results indicated that the parameters that have the greatest effect on the injury-severity in bicycle-vehicle accidents are gender, vehicle damage extent and road type, along with other highly influential parameters such as pavement type, accident type, and vehicle movement. The most successful classification performance among the three models was achieved by the DT-RFL model with a 72.0 % F-measure and 69.96 % accuracy. With 59.22 % accuracy and a 57.5 % F-measure, the DT-CFL model, whose rules were created according to the splitting criteria of the C4.5 algorithm, gave worse results in the classification of the injury-severity in bicycle-vehicle accidents than the classical C4.5 algorithm. In light of these results, the use of fuzzy decision mechanism models presented in this study on more comprehensive datasets is recommended for further studies.

Katanalp Burak Yiğit, Eren Ezgi

2020-Jul-02

Cyclist safety, Decision tree, Fuzzy logic, Injury-severity, Machine learning

Cardiology Cardiology

Deep neural networks for ECG-free cardiac phase and end-diastolic frame detection on coronary angiographies.

In Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society

Invasive coronary angiography (ICA) is the gold standard in Coronary Artery Disease (CAD) imaging. Detection of the end-diastolic frame (EDF) and, in general, cardiac phase detection on each temporal frame of a coronary angiography acquisition is of significant importance for the anatomical and non-invasive functional assessment of CAD. This task is generally performed via manual frame selection or semi-automated selection based on simultaneously acquired ECG signals - thus introducing the requirement of simultaneous ECG recordings. In this paper, we evaluate the performance of a purely image-based workflow relying on deep neural networks for fully automated cardiac phase and EDF detection on coronary angiographies. A first deep neural network (DNN), trained to detect coronary arteries, is employed to preselect a subset of frames in which coronary arteries are well visible. A second DNN predicts cardiac phase labels for each frame. ECG signals are used to provide ground-truth labels for each angiographic frame only in the training and evaluation phases of the second DNN. The networks were trained on 56,655 coronary angiographies from 6820 patients and evaluated on 20,780 coronary angiographies from 6261 patients. No exclusion criteria related to patient state (stable or acute CAD), previous interventions (PCI or CABG), or pathology were formulated. Cardiac phase detection had an accuracy of 98.8 %, a sensitivity of 99.3 % and a specificity of 97.6 % on the evaluation set. EDF prediction had a precision of 98.4 % and a recall of 97.9 %. Several sub-group analyses were performed, indicating that the cardiac phase detection performance is largely independent of acquisition angles, the heart rate of the patient, and the angiographic view (LCA / RCA). The execution time of cardiac phase detection for one angiographic series was on average less than five seconds on a standard workstation. We conclude that the proposed image-based workflow potentially obviates the need for manual frame selection and ECG acquisition, representing a relevant step towards automated CAD assessment.

Ciusdel Costin, Turcea Alexandru, Puiu Andrei, Itu Lucian, Calmac Lucian, Weiss Emma, Margineanu Cornelia, Badila Elisabeta, Berger Martin, Redel Thomas, Passerini Tiziano, Gulsun Mehmet, Sharma Puneet

2020-Jun-25

Cardiac phase, Coronary angiography, Coronary artery disease, Deep learning, End-diastolic frame

Radiology Radiology

Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis.

In Medical image analysis

Supervised training of deep learning models requires large labeled datasets. There is a growing interest in obtaining such datasets for medical image analysis applications. However, the impact of label noise has not received sufficient attention. Recent studies have shown that label noise can significantly impact the performance of deep learning models in many machine learning and computer vision applications. This is especially concerning for medical applications, where datasets are typically small, labeling requires domain expertise and suffers from high inter- and intra-observer variability, and erroneous predictions may influence decisions that directly impact human health. In this paper, we first review the state-of-the-art in handling label noise in deep learning. Then, we review studies that have dealt with label noise in deep learning for medical image analysis. Our review shows that recent progress on handling label noise in deep learning has gone largely unnoticed by the medical image analysis community. To help achieve a better understanding of the extent of the problem and its potential remedies, we conducted experiments with three medical imaging datasets with different types of label noise, where we investigated several existing strategies and developed new methods to combat the negative effect of label noise. Based on the results of these experiments and our review of the literature, we have made recommendations on methods that can be used to alleviate the effects of different types of label noise on deep models trained for medical image analysis. We hope that this article helps the medical image analysis researchers and developers in choosing and devising new techniques that effectively handle label noise in deep learning.

Karimi Davood, Dou Haoran, Warfield Simon K, Gholipour Ali

2020-Jun-20

Big data, Deep learning, Label noise, Machine learning, Medical image annotation

General General

Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation.

In Medical image analysis

Although they have achieved great success in medical image segmentation, deep learning-based approaches usually require large amounts of well-annotated data, which can be extremely expensive in the field of medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses these two tasks for volumetric medical image segmentation. Our framework is capable of efficiently utilizing unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimation of each view is utilized to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon Datasets. Additionally, we show that our UMCT-DA model can even effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.

Xia Yingda, Yang Dong, Yu Zhiding, Liu Fengze, Cai Jinzheng, Yu Lequan, Zhu Zhuotun, Xu Daguang, Yuille Alan, Roth Holger

2020-Jun-27

Domain adaptation, Segmentation, Semi-supervised learning, Uncertainty estimation

Pathology Pathology

Yottixel - An Image Search Engine for Large Archives of Histopathology Whole Slide Images.

In Medical image analysis

With the emergence of digital pathology, searching for similar images in large archives has gained considerable attention. Image retrieval can provide pathologists with unprecedented access to the evidence embodied in already diagnosed and treated cases from the past. This paper proposes a search engine specialized for digital pathology, called Yottixel, a portmanteau for "one yotta pixel," alluding to the big-data nature of histopathology images. The most impressive characteristic of Yottixel is its ability to represent whole slide images (WSIs) in a compact manner. Yottixel can perform millions of searches in real-time with a high search accuracy and low storage profile. Yottixel uses an intelligent indexing algorithm capable of representing WSIs with a mosaic of patches which are then converted into barcodes, called "Bunch of Barcodes" (BoB), the most prominent performance enabler of Yottixel. The performance of the prototype platform is qualitatively tested using 300 WSIs from the University of Pittsburgh Medical Center (UPMC) and 2,020 WSIs from The Cancer Genome Atlas Program (TCGA) provided by the National Cancer Institute. Both datasets amount to more than 4,000,000 patches of 1000 × 1000 pixels. We report three sets of experiments that show that Yottixel can accurately retrieve organs and malignancies, and its semantic ordering shows good agreement with the subjective evaluation of human observers.
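
As a conceptual illustration of patch barcoding and barcode search, one simple way to binarize deep patch features and compare them by Hamming distance is sketched below; this is not necessarily Yottixel's exact encoding scheme:

```python
import numpy as np

def to_barcode(feature: np.ndarray) -> np.ndarray:
    """Binarize a patch feature vector by the sign of consecutive differences."""
    return (np.diff(feature) > 0).astype(np.uint8)

rng = np.random.default_rng(0)
archive = rng.normal(size=(10000, 1024))   # hypothetical patch features in the archive
query = archive[42] + rng.normal(scale=0.05, size=1024)  # slightly perturbed query patch

archive_codes = np.stack([to_barcode(f) for f in archive])
query_code = to_barcode(query)

# Search: compare compact binary barcodes (Hamming distance) instead of
# full floating-point feature vectors.
distances = np.count_nonzero(archive_codes != query_code, axis=1)
print("best match:", int(np.argmin(distances)))  # expected: 42
```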

Kalra Shivam, Tizhoosh H R, Choi Charles, Shah Sultaan, Diamandis Phedias, Campbell Clinton J V, Pantanowitz Liron

2020-Jun-24

Deep Learning, Digital Pathology, Image Search

General General

Local rotation invariance in 3D CNNs.

In Medical image analysis

Locally Rotation Invariant (LRI) image analysis was shown to be fundamental in many applications and in particular in medical imaging where local structures of tissues occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNN) were recently proposed, LRI has been little investigated in the context of deep learning. LRI designs allow learning filters accounting for all orientations, which enables a drastic reduction of trainable parameters and training data when compared to standard 3D CNNs. In this paper, we propose and compare several methods to obtain LRI CNNs with directional sensitivity. Two methods use orientation channels (responses to rotated kernels), either by explicitly rotating the kernels or using steerable filters. These orientation channels constitute a locally rotation equivariant representation of the data. Local pooling across orientations yields LRI image analysis. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations as well as a reduction of trainable parameters and operations, thanks to a parametric representation involving solid Spherical Harmonics (SH), which are products of SH with associated learned radial profiles. Finally, we investigate a third strategy to obtain LRI based on rotational invariants calculated from responses to a learned set of solid SHs. The proposed methods are evaluated and compared to standard CNNs on 3D datasets including synthetic textured volumes composed of rotated patterns, and pulmonary nodule classification in CT. The results show the importance of LRI image analysis while resulting in a drastic reduction of trainable parameters, outperforming standard 3D CNNs trained with rotational data augmentation.

Andrearczyk Vincent, Fageot Julien, Oreiller Valentin, Montet Xavier, Depeursinge Adrien

2020-Jun-20

3D Texture, Convolutional neural network, Local rotation invariance, Steerable filters

Radiology Radiology

Deep learning evaluation of pelvic radiographs for position, hardware presence, and fracture detection.

In European journal of radiology ; h5-index 47.0

PURPOSE : Recent papers have shown the utility of deep learning in detecting hip fractures with pelvic radiographs, but there is a paucity of research utilizing deep learning to detect pelvic and acetabular fractures. Creating deep learning models also requires appropriately labeling x-ray positions and hardware presence. Our purpose is to train and test deep learning models to detect pelvic radiograph position, hardware presence, and pelvic and acetabular fractures in addition to hip fractures.

MATERIAL AND METHODS : Data was retrospectively acquired between 8/2009-6/2019. A subset of the data was split into 4 position labels and 2 hardware labels to create position labeling and hardware detecting models. The remaining data was parsed with these trained models, labeled based on 6 "separate" fracture patterns, and various fracture detecting models were created. A receiver operator characteristic (ROC) curve, area under the curve (AUC), and other output metrics were evaluated.

RESULTS : The position and hardware models performed well with AUC of 0.99-1.00. The AUC for proximal femoral fracture detection was as high as 0.95, which was in line with previously published research. Pelvic and acetabular fracture detection performance was as low as 0.70 for the posterior pelvis category and as high as 0.85 for the acetabular category with the "separate" fracture model.

CONCLUSION : We successfully created deep learning models that can detect pelvic imaging position, hardware presence, and pelvic and acetabular fractures with AUC loss of only 0.03 for proximal femoral fracture.

Kitamura Gene

2020-Jun-21

Artificial intelligence, Deep learning, Fracture, Machine learning, Radiographs

General General

Coherence of achromatic, primary and basic classes of colour categories.

In Vision research ; h5-index 38.0

A range of explanations have been advanced for the systems of colour names found in different languages. Some explanations give special, fundamental status to a subset of colour categories. We argue that a subset of colour categories, if fundamental, will be coherent - meaning that a non-trivial criterion distinguishes them from the other colour categories. We test the coherence of subsets of achromatic (white, black and grey), primary (white, black, red, green, yellow, blue) and basic (primaries plus brown, orange, purple, pink and grey) colour categories in English. Criteria for defining colour categories were expressed in terms of behavioural, linguistic and geometric features derived from colour naming and linguistic usage data, and were discovered using machine learning methods. We find that achromatic and basic colour categories are coherent subsets, but primary categories are not. These results support claims that the basic colour categories have special status, and undermine claims about the fundamental role of primaries in colour naming systems.

Mylonas Dimitris, Griffin Lewis D

2020-Jul-02

Achromatic, Basic, Colour cognition, Colour vision, Crowdsourcing, Primary

Public Health Public Health

Single-cell ATAC-seq signal extraction and enhancement with SCATE.

In Genome biology ; h5-index 114.0

Single-cell sequencing assay for transposase-accessible chromatin (scATAC-seq) is the state-of-the-art technology for analyzing genome-wide regulatory landscapes in single cells. Single-cell ATAC-seq data are sparse and noisy, and analyzing such data is challenging. Existing computational methods cannot accurately reconstruct activities of individual cis-regulatory elements (CREs) in individual cells or rare cell subpopulations. We present a new statistical framework, SCATE, that adaptively integrates information from co-activated CREs, similar cells, and publicly available regulome data to substantially increase the accuracy for estimating activities of individual CREs. We demonstrate that SCATE can be used to better reconstruct the regulatory landscape of a heterogeneous sample.

Ji Zhicheng, Zhou Weiqiang, Hou Wenpin, Ji Hongkai

2020-Jul-03

Bioinformatics, Chromatin, DNase-seq, Gene regulation, Genomics, Machine learning, Single cell, Software, Statistical modeling, scATAC-seq

Radiology Radiology

Social media's role in the perception of radiologists and artificial intelligence.

In Clinical imaging

Social media are impacting all industries and changing the way daily interactions take place. This has been notable in health care as it provides a mechanism to connect patients directly to physicians, advocacy groups, and health care information. Recently, the development of artificial intelligence (AI) applications in radiology has drawn media attention. This has generated a conversation on social media about the expendable role of a radiologist. Often, articles in the lay press have little medical expertise informing opinions about artificial intelligence in radiology. We propose solutions for radiologists to take the lead in the social media narrative about AI in radiology, to better inform and shape public perception of its role.

Gupta Sonia, Kattapuram Taj M, Patel Tirath Y

2020-Jun-15

Artificial intelligence, Physician's role, Physician-patient relations, Radiologists, Radiology, Social media

Radiology Radiology

Stable biomarker identification for predicting schizophrenia in the human connectome.

In NeuroImage. Clinical

Schizophrenia, as a psychiatric disorder, has recognized brain alterations both at the structural and at the functional magnetic resonance imaging level. The developing field of connectomics has attracted much attention as it allows researchers to take advantage of powerful tools of network analysis in order to study structural and functional connectivity abnormalities in schizophrenia. Many methods have been proposed to identify biomarkers in schizophrenia, focusing mainly on improving the classification performance or performing statistical comparisons between groups. However, the stability of biomarker selection has long been overlooked in the connectomics field. In this study, we follow a machine learning approach where the identification of biomarkers is addressed as a feature selection problem for a classification task. We apply a recursive feature elimination and support vector machine (RFE-SVM) approach to identify the most meaningful biomarkers from the structural, functional, and multi-modal connectomes of healthy controls and patients. Furthermore, the stability of the retrieved biomarkers is assessed across different subsamplings of the dataset, allowing us to identify the affected core of the pathology. Taken as a whole, our technique demonstrates a principled way to obtain both accurate and stable biomarkers while highlighting the importance of multi-modal approaches to brain pathology as they tend to reveal complementary information.
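
A hedged sketch of the RFE-SVM-with-subsampling idea (feature ranking with a linear SVM repeated over subsamples, with selection frequency as a stability measure); the connectome features here are synthetic placeholders:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, p = 120, 300                       # subjects x connectome edge features (placeholder)
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 0).astype(int)

selection_counts = np.zeros(p)
n_repeats = 50
for seed in range(n_repeats):
    # Subsample 80% of subjects and rerun recursive feature elimination.
    idx = np.random.default_rng(seed).choice(n, size=int(0.8 * n), replace=False)
    rfe = RFE(LinearSVC(C=1.0, max_iter=5000, dual=False), n_features_to_select=10)
    rfe.fit(X[idx], y[idx])
    selection_counts += rfe.support_

stability = selection_counts / n_repeats
print("features selected in >80% of subsamples:", np.flatnonzero(stability > 0.8))
```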

Gutiérrez-Gómez Leonardo, Vohryzek Jakub, Chiêm Benjamin, Baumann Philipp S, Conus Philippe, Cuenod Kim Do, Hagmann Patric, Delvenne Jean-Charles

2020-Jun-19
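
A minimal sketch of the RFE-SVM biomarker-selection idea described above, including a simple stability check across random subsamples of the data. This is not the authors' code; the arrays, subsample fraction, and feature counts are placeholders.

```python
# Sketch: recursive feature elimination with a linear SVM (RFE-SVM) plus a
# simple stability check across random subsamples of the subjects.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))      # hypothetical connectome edge features
y = rng.integers(0, 2, size=100)     # hypothetical labels: patient vs control

selection_counts = np.zeros(X.shape[1])
n_rounds, n_keep = 20, 30
for _ in range(n_rounds):
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)  # subsample subjects
    rfe = RFE(LinearSVC(C=1.0, max_iter=10000), n_features_to_select=n_keep, step=50)
    rfe.fit(X[idx], y[idx])
    selection_counts += rfe.support_  # count how often each feature survives elimination

stability = selection_counts / n_rounds  # features selected in most rounds are "stable biomarkers"
print(np.argsort(stability)[::-1][:10])
```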

General General

GCN-BMP: Investigating Graph Representation Learning for DDI Prediction Task.

In Methods (San Diego, Calif.)

The pharmacological activity of one drug may change unexpectedly owing to the concurrent administration of another drug, giving rise to drug-drug interactions (DDIs). Several machine learning approaches have been proposed to predict the occurrence of DDIs. However, existing approaches depend heavily on various drug-related features, which may introduce a noisy inductive bias. To alleviate this problem, we investigate end-to-end graph representation learning for the DDI prediction task. We establish a novel DDI prediction method named GCN-BMP (Graph Convolutional Network with Bond-aware Message Propagation) for accurate DDI prediction. Our experiments on two real-world datasets demonstrate that GCN-BMP achieves higher performance than various baseline approaches. Moreover, thanks to the self-contained attention mechanism in GCN-BMP, we can identify the most influential local atoms, which conform to domain knowledge and lend the model a degree of interpretability.

Chen Xin, Liu Xien, Wu Ji

2020-Jul-02

DDI, Graph Representation Learning, Interpretability, Robustness, Scalability
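
A minimal sketch of a single graph-convolution update over a molecular graph, the building block that GCN-BMP refines with bond-aware message propagation. This is a generic GCN step, not the paper's exact propagation rule, and all arrays are toy data.

```python
# Sketch: one generic graph-convolution update over a toy molecular graph.
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

A = np.array([[0, 1, 0],                           # toy 3-atom molecule (adjacency matrix)
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                                      # one-hot atom features
W = np.random.default_rng(0).normal(size=(3, 4))   # learnable weights (random here)
print(gcn_layer(A, H, W).shape)                    # (3, 4) updated atom embeddings
```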

Public Health Public Health

Social Media based Surveillance Systems for Healthcare using Machine Learning: A Systematic Review.

In Journal of biomedical informatics ; h5-index 55.0

BACKGROUND : Real-time surveillance in the field of health informatics has emerged as a growing domain of interest among worldwide researchers. Evolution in this field has helped in the introduction of various initiatives related to public health informatics. Surveillance systems in health informatics that utilize social media information have been developed for early prediction of disease outbreaks and to monitor diseases. In the past few years, the availability of social media data, particularly Twitter data, has enabled real-time syndromic surveillance that provides immediate analysis and instant feedback to those who are charged with follow-up and investigation of potential outbreaks. In this paper, we review the recent work, trends, and machine learning (ML) text classification approaches used by surveillance systems that draw on social media data in the healthcare domain. We also highlight limitations and challenges, followed by possible future directions for this domain.

METHODS : To study the landscape of research in health informatics performing surveillance of the various health-related data posted on social media or web-based platforms, we present a bibliometric analysis of the 1240 publications indexed in multiple scientific databases (IEEE, ACM Digital Library, ScienceDirect, PubMed) from 2010 to 2018. The papers were further reviewed based on the machine learning algorithms used for analyzing health-related text posted on social media platforms.

FINDINGS : Based on the corpus of 148 selected articles, the study identifies the types of social media or web-based platforms used for surveillance in the healthcare domain, along with the health topic(s) they study. Within this corpus, 26 articles used machine learning, and these were examined to determine the most commonly used ML techniques. The largest share of studies (24%) focused on surveillance of flu or influenza-like illness (ILI). Twitter (64%) is the most popular data source for surveillance research using social media text, and Support Vector Machine (SVM) (33%) is the most frequently used ML algorithm for text classification.

CONCLUSIONS : The inclusion of online data in surveillance systems has improved the disease prediction ability over traditional syndromic surveillance systems. However, social media based surveillance systems have many limitations and challenges, including noise, demographic bias, privacy issues, etc. Our paper mentions future directions, which can be useful for researchers working in the area. Researchers can use this paper as a library for social media based surveillance systems in the healthcare domain and can expand such systems by incorporating the future works discussed in our paper.

Gupta Aakansha, Katarya Rahul

2020-Jul-02

Health Informatics, Machine Learning, Outbreak Detection, Social Media, Surveillance Systems
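
A minimal sketch of the TF-IDF plus linear-SVM text classifier that the review identifies as the most common choice for classifying health-related posts; the example tweets and labels are invented for illustration.

```python
# Sketch: TF-IDF features + linear SVM for classifying health-related tweets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical labeled tweets: 1 = reports influenza-like illness, 0 = does not.
tweets = ["down with the flu, fever and chills all night",
          "flu shots available at the pharmacy today",
          "coughing and aching, pretty sure this is influenza",
          "reading a great paper about influenza surveillance"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(tweets, labels)
print(clf.predict(["high fever and body aches, staying home sick"]))
```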

Radiology Radiology

Pneumonia Detection in Chest X-Ray Dose-Equivalent CT: Impact of Dose Reduction on Detectability by Artificial Intelligence.

In Academic radiology

RATIONALE AND OBJECTIVES : There has been a significant increase of immunocompromised patients in recent years due to new treatment modalities for previously fatal diseases. This comes at the cost of an elevated risk for infectious diseases, most notably pathogens affecting the respiratory tract. Because early diagnosis and treatment of pneumonia can help reduce morbidity and mortality, we assessed the performance of a deep neural network in the detection of pulmonary infection in chest X-ray dose-equivalent computed tomography (CT).

MATERIALS AND METHODS : The 100 patients included in this retrospective study were referred to our department for suspicion of pulmonary infection and/or follow-up of known pulmonary nodules. Every patient was scanned with a standard dose (1.43 ± 0.54 mSv) and a 20 times dose-reduced (0.07 ± 0.03 mSv) CT protocol. We trained a deep neural network to perform binary classification (pulmonary consolidation present or not) and assessed diagnostic performance on both standard dose and reduced dose CT images.

RESULTS : The area under the curve of the deep learning algorithm for the standard dose CT was 0.923 (95% confidence interval [CI]: 0.905-0.941), significantly higher than that of the reduced dose CT (0.881, 95% CI: 0.859-0.903; p = 0.001). Sensitivity and specificity were 82.9% and 93.8% for the standard dose CT, and 71.0% and 93.3% for the reduced dose CT.

CONCLUSION : Pneumonia detection with X-ray dose-equivalent CT using artificial intelligence is feasible and may contribute to a more robust and reproducible diagnostic performance. Dose reduction lowered the performance of the deep neural network, which calls for optimization and adaption of CT protocols when using AI algorithms at reduced doses.

Schwyzer Moritz, Martini Katharina, Skawran Stephan, Messerli Michael, Frauenfelder Thomas

2020-Jul-01

Artificial intelligence, Deep learning, Pneumonia, Reduced dose CT

oncology Oncology

Preoperative Pathological Grading of Hepatocellular Carcinoma Using Ultrasomics of Contrast-Enhanced Ultrasound.

In Academic radiology

RATIONALE AND OBJECTIVES : To develop an ultrasomics model for preoperative pathological grading of hepatocellular carcinoma (HCC) using contrast-enhanced ultrasound (CEUS).

MATERIAL AND METHODS : A total of 235 HCCs were retrospectively enrolled, including 65 high-grade and 170 low-grade HCCs. Representative images of four-phase CEUS were selected from the baseline sonography, arterial, portal venous, and delayed phase images. Tumor ultrasomics features were automatically extracted using Ultrasomics-Platform software. Models were built with a support vector machine classifier, including an ultrasomics model using the ultrasomics features, a clinical model using the clinical factors, and a combined model using both. Model performance was tested in the independent validation cohort in terms of efficiency and clinical usefulness.

RESULTS : A total of 1502 features were extracted from each image. After the reproducibility test and dimensionality reduction, 25 ultrasomics features and 3 clinical factors were selected to build the models. In the validation cohort, the combined model showed the best predictive power, with an area under the curve value of 0.785 (95% confidence interval [CI] 0.662-0.909), compared to the ultrasomics model of 0.720 (95% CI 0.576-0.864) and the clinical model of 0.665 (95% CI 0.537-0.793). Decision curve analysis suggested that the combined model was clinically useful, with a corresponding net benefit of 0.760 compared to the other two models.

CONCLUSION : We presented an ultrasomics-clinical model based on multiphase CEUS imaging and clinical factors, which showed potential value for the preoperative discrimination of HCC pathological grades.

Wang Wei, Wu Shan-Shan, Zhang Jian-Chao, Xian Meng-Fei, Huang Hui, Li Wei, Zhou Zhuo-Ming, Zhang Chu-Qing, Wu Ting-Fan, Li Xin, Xu Ming, Xie Xiao-Yan, Kuang Ming, Lu Ming-De, Hu Hang-Tong

2020-Jul-01

Contrast-enhanced ultrasound, Hepatocellular carcinoma, Pathological grade, Ultrasomics

Radiology Radiology

Histological Subtypes Classification of Lung Cancers on CT Images Using 3D Deep Learning and Radiomics.

In Academic radiology

RATIONALE AND OBJECTIVES : Histological subtypes of lung cancer are critical for clinical treatment decisions. In this study, we use 3D deep learning and radiomics methods to automatically distinguish lung adenocarcinomas (ADC), squamous cell carcinomas (SCC), and small cell lung cancers (SCLC) on computed tomography images, and then compare the performance of the two approaches.

MATERIALS AND METHODS : A total of 920 patients with lung cancer (mean age 61.2 years; range 17-87; 340 female and 580 male), including 554 with ADC, 175 with SCC, and 191 with SCLC, were included in this retrospective study from January 2013 to August 2018. Histopathologic analysis was available for every patient. Classification models based on 3D deep learning (named ProNet) and radiomics (named com_radNet) were designed to classify lung cancers into the three types mentioned above according to histopathologic results. The training, validation, and testing cohorts comprised 70%, 15%, and 15% of the whole dataset, respectively.

RESULTS : The ProNet model achieved F1-scores of 90.0%, 72.4%, and 83.7% for SCC, ADC, and SCLC, respectively, with a weighted average F1-score of 73.2%. For com_radNet, the F1-scores were 83.1%, 75.4%, and 85.1% for SCC, ADC, and SCLC, with a weighted average F1-score of 72.2%. The areas under the receiver operating characteristic curve of ProNet and com_radNet were 0.840 and 0.789, and the accuracies were 71.6% and 74.7%, respectively.

CONCLUSION : The ProNet and com_radNet models we developed achieve high performance in distinguishing ADC, SCC, and SCLC and may be promising approaches for non-invasive prediction of the histological subtypes of lung cancer.

Guo Yixian, Song Qiong, Jiang Mengmeng, Guo Yinglong, Xu Peng, Zhang Yiqian, Fu Chi-Cheng, Fang Qu, Zeng Mengsu, Yao Xiuzhong

2020-Jul-01

Computed tomography, Deep learning, Lung cancer, Radiomics, Subtype classification

Radiology Radiology

Image Processing Pipeline for Liver Fibrosis Classification Using Ultrasound Shear Wave Elastography.

In Ultrasound in medicine & biology ; h5-index 42.0

The purpose of this study was to develop an automated method for classifying liver fibrosis stage ≥F2 based on ultrasound shear wave elastography (SWE) and to assess the system's performance in comparison with a reference manual approach. The reference approach consists of manually selecting a region of interest from each of eight or more SWE images, computing the mean tissue stiffness within each of the regions of interest and computing a resulting stiffness value as the median of the means. The 527-subject database consisted of 5526 SWE images and pathologist-scored biopsies, with data collected from a single system at a single site. The automated method integrates three modules that assess SWE image quality, select a region of interest from each SWE measurement and perform machine learning-based, multi-image SWE classification for fibrosis stage ≥F2. Several classification methods were developed and tested using fivefold cross-validation with training, validation and test sets partitioned by subject. Performance metrics were area under receiver operating characteristic curve (AUROC), specificity at 95% sensitivity and number of SWE images required. The final automated method yielded an AUROC of 0.93 (95% confidence interval: 0.90-0.94) versus 0.69 (95% confidence interval: 0.65-0.72) for the reference method, 71% specificity with 95% sensitivity versus 5% and four images per decision versus eight or more. In conclusion, the automated method reported in this study significantly improved the accuracy for ≥F2 classification of SWE measurements as well as reduced the number of measurements needed, which has the potential to reduce clinical workflow.

Brattain Laura J, Ozturk Arinc, Telfer Brian A, Dhyani Manish, Grajo Joseph R, Samir Anthony E

2020-Jul-02

Liver fibrosis, Machine learning, Multi-image classification, Shear wave elastography, Single-image classification
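
A minimal sketch of the manual reference method described above: mean stiffness per region of interest from each of eight or more SWE images, then the median of those means as the subject-level value. The kPa values and the decision threshold are illustrative, not from the study.

```python
# Sketch: the median-of-means reference measurement for SWE liver stiffness.
import numpy as np

def reference_stiffness(roi_stiffness_maps):
    """roi_stiffness_maps: list of 2D arrays of kPa values, one ROI per SWE image."""
    per_image_means = [float(np.mean(roi)) for roi in roi_stiffness_maps]
    return float(np.median(per_image_means))

rng = np.random.default_rng(0)
rois = [rng.normal(loc=8.0, scale=1.5, size=(20, 20)) for _ in range(8)]  # toy kPa maps
stiffness = reference_stiffness(rois)
print(stiffness, "kPa ->", ">=F2" if stiffness > 7.1 else "<F2")  # threshold is illustrative only
```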

General General

Technical note: Calving prediction in dairy cattle based on continuous measurements of ventral tail base skin temperature using supervised machine learning.

In Journal of dairy science

In this study, we developed a calving prediction model based on continuous measurements of ventral tail base skin temperature (ST) with supervised machine learning and evaluated the predictive ability of the model in 2 dairy farms with distinct cattle management practices. The ST data were collected at 2- or 10-min intervals from 105 and 33 pregnant cattle (mean ± standard deviation: 2.2 ± 1.8 parities) reared in farms A (freestall barn, in a temperate climate) and B (tiestall barn, in a subarctic climate), respectively. After extracting maximum hourly ST, the change in values was expressed as residual ST (rST = actual hourly ST - mean ST for the same hour on the previous 3 d) and analyzed. In both farms, rST decreased in a biphasic manner before calving. Briefly, an ambient temperature-independent gradual decrease occurred from around 36 to 16 h before calving, and an ambient temperature-dependent sharp decrease occurred from around 6 h before until calving. To make a universal calving prediction model, training data were prepared from pregnant cattle under different ambient temperatures (10 data sets were randomly selected from each of the 3 ambient temperature groups: <15°C, ≥15°C to <25°C, and ≥25°C in farm A). An hourly calving prediction model was then constructed with the training data by support vector machine based on 15 features extracted from sensing data (indicative of pre-calving rST changes) and 1 feature from non-sensor-based data (days to expected calving date). When the prediction model was applied to the data that were not part of the training process, calving within the next 24 h was predicted with sensitivities and precisions of 85.3% and 71.9% in farm A (n = 75), and 81.8% and 67.5% in farm B (n = 33), respectively. No differences were observed in means and variances of intervals from the calving alerts to actual calving between farms (12.7 ± 5.8 and 13.0 ± 5.6 h in farms A and B, respectively). Above all, a calving prediction model based on continuous measurement of ST with supervised machine learning has the potential to achieve effective calving prediction, irrespective of the rearing condition in dairy cattle.

Higaki Shogo, Koyama Keisuke, Sasaki Yosuke, Abe Kodai, Honkawa Kazuyuki, Horii Yoichiro, Minamino Tomoya, Mikurino Yoko, Okada Hironao, Miwakeichi Fumikazu, Darhan Hongyu, Yoshioka Koji

2020-Jul-01

body surface temperature, parturition prediction, precision dairy farming, wearable sensor
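
A minimal sketch, assuming a pandas implementation, of the residual skin-temperature feature defined above (rST = hourly maximum ST minus the mean ST at the same hour over the previous 3 days); the temperature series is synthetic.

```python
# Sketch: computing the residual ventral tail base skin temperature (rST) feature.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=24 * 6, freq="H")        # 6 days of hourly timestamps
st = pd.Series(30 + rng.normal(0, 0.3, size=len(idx)), index=idx)  # toy hourly maximum ST (deg C)

# Mean ST at the same hour on the previous 3 days (shift by 1-3 days, then average).
same_hour_mean = sum(st.shift(24 * d) for d in (1, 2, 3)) / 3
rst = st - same_hour_mean                                           # residual ST
print(rst.dropna().tail())
```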

Pathology Pathology

Accuracy and efficiency of an artificial intelligence tool when counting breast mitoses.

In Diagnostic pathology ; h5-index 35.0

BACKGROUND : The mitotic count in breast carcinoma is an important prognostic marker. Unfortunately, substantial inter- and intra-laboratory variation exists when pathologists manually count mitotic figures. Artificial intelligence (AI) coupled with whole slide imaging offers a potential solution to this problem. The aim of this study was therefore to critically evaluate an AI tool developed to quantify mitotic figures in whole slide images of invasive breast ductal carcinoma.

METHODS : A representative H&E slide from 320 breast invasive ductal carcinoma cases was scanned at 40x magnification. Ten expert pathologists from two academic medical centers labeled mitotic figures in whole slide images to train and validate an AI algorithm to detect and count mitoses. Thereafter, 24 readers of varying expertise were asked to count mitotic figures with and without AI support in 140 high-power fields derived from a separate dataset. Their accuracy and efficiency of performing these tasks were calculated and statistical comparisons performed.

RESULTS : At each experience level, the accuracy, precision, and sensitivity of mitosis counting improved with AI support. Twenty-one readers (87.5%) identified more mitoses with AI support, and 13 (54.2%) flagged fewer false mitoses. Most participants spent more time on the task when not provided with AI support; AI assistance resulted in an overall time saving of 27.8%.

CONCLUSIONS : This study demonstrates that pathology end-users were more accurate and efficient at quantifying mitotic figures in digital images of invasive breast carcinoma with the aid of AI. Higher inter-pathologist agreement with AI assistance suggests that such algorithms can also help standardize practice. Not surprisingly, there is much enthusiasm in pathology regarding the prospect of using AI in routine practice to perform mundane tasks such as counting mitoses.

Pantanowitz Liron, Hartman Douglas, Qi Yan, Cho Eun Yoon, Suh Beomseok, Paeng Kyunghyun, Dhir Rajiv, Michelow Pamela, Hazelhurst Scott, Song Sang Yong, Cho Soo Youn

2020-Jul-04

Artificial intelligence, Breast, Carcinoma, Counting, Digital pathology, Informatics, Mitosis, Tumor grade, Whole slide imaging

oncology Oncology

Constructing an automatic diagnosis and severity-classification model for acromegaly using facial photographs by deep learning.

In Journal of hematology & oncology ; h5-index 60.0

Due to acromegaly's insidious onset and slow progression, its diagnosis is usually delayed, causing severe complications and treatment difficulty. A convenient screening method is imperative. Based on our previous work, we herein developed a new automatic diagnosis and severity-classification model for acromegaly using facial photographs, trained by deep learning on 2148 photographs at different severity levels. Each photograph was given a score reflecting its severity (range 1-3). The model achieved a prediction accuracy of 90.7% on the internal test dataset, outperforming ten junior internal medicine physicians (89.0%). The prospect of applying this model in real clinical practice is promising given its potential health economic benefits.

Kong Yanguo, Kong Xiangyi, He Cheng, Liu Changsong, Wang Liting, Su Lijuan, Gao Jun, Guo Qi, Cheng Ran

2020-Jul-03

Acromegaly, Deep learning, Facial photographs, Severity-classification model

Public Health Public Health

Shape-based Machine Learning Models for the potential Novel COVID-19 protease inhibitors assisted by Molecular Dynamics Simulation.

In Current topics in medicinal chemistry ; h5-index 40.0

BACKGROUND : The vast geographical spread of the novel coronavirus and the increasing number of COVID-19 cases have overwhelmed health and public health services. AI and ML algorithms have played a major role in tracking disease patterns and in identifying possible treatments.

OBJECTIVE : To identify potential COVID-19 protease inhibitors through shape-based machine learning assisted by molecular docking and molecular dynamics simulation.

METHODS : Thirty-one repurposed compounds were selected targeting the coronavirus protease (6LU7), and a machine learning approach was employed to generate shape-based molecules from the 3D shape and pharmacophoric features of the seed compound. Ligand-receptor docking was performed with the Optimized Potentials for Liquid Simulations (OPLS3) algorithm to identify high-affinity compounds among the selected candidates for 6LU7. These compounds were subjected to molecular dynamics simulations followed by ADMET studies and other analyses.

RESULTS : Shape-based machine learning identified Remdesivir, Valrubicin, Aprepitant, Fulvestrant, and a novel therapeutic compound as the best therapeutic agents with the highest affinity for the target protein. Among the best shape-based compounds, the novel theoretical compound was not indexed in any chemical database (PubChem, ZINC, or ChEMBL) and was therefore named 'nCorv-EMBS'. Further, toxicity analysis showed nCorv-EMBS to be efficacious and qualified it as a 6LU7 inhibitor in COVID-19.

CONCLUSION : Effective ACE-II, GAK, AAK1, and 3C protease blockers could serve as a novel therapeutic approach to block the binding and attachment of the COVID-19 protease (PDB ID: 6LU7) to the host cell and thus inhibit infection of AT2 lung cells. The novel theoretical compound nCorv-EMBS proposed herein stands as a promising inhibitor that could be advanced into clinical trials for COVID-19 treatment.

Khandelwal Ravina, Nayarisseri Anuraj, Madhavi Maddala, Selvaraj Chandrabose, Panwar Umesh, Sharma Khushboo, Hussain Tajamul, Singh Sanjeev Kumar

2020-Jul-04

COVID-19, COVID-19 protease inhibitors, Machine Learning, Molecular Dynamics Simulation, Molecular Docking, Remdesivir, Shape-based ML, nCorv-EMBS

General General

Deep white matter analysis (DeepWMA): Fast and consistent tractography segmentation.

In Medical image analysis

White matter tract segmentation, i.e. identifying tractography fibers (streamline trajectories) belonging to anatomically meaningful fiber tracts, is an essential step to enable tract quantification and visualization. In this study, we present a deep learning tractography segmentation method (DeepWMA) that allows fast and consistent identification of 54 major deep white matter fiber tracts from the whole brain. We create a large-scale training tractography dataset of 1 million labeled fiber samples, and we propose a novel 2D multi-channel feature descriptor (FiberMap) that encodes spatial coordinates of points along each fiber. We learn a convolutional neural network (CNN) fiber classification model based on FiberMap and obtain a high fiber classification accuracy of 90.99% on the training tractography data with ground truth fiber labels. Then, the method is evaluated on a test dataset of 597 diffusion MRI scans from six independently acquired populations across genders, the lifespan (1 day - 82 years), and different health conditions (healthy control, neuropsychiatric disorders, and brain tumor patients). We perform comparisons with two state-of-the-art tract segmentation methods. Experimental results show that our method obtains a highly consistent tract segmentation result, where on average over 99% of the fiber tracts are successfully identified across all subjects under study, most importantly, including neonates and patients with space-occupying brain tumors. We also demonstrate good generalization of the method to tractography data from multiple different fiber tracking methods. The proposed method leverages deep learning techniques and provides a fast and efficient tool for brain white matter segmentation in large diffusion MRI tractography datasets.

Zhang Fan, Cetin Karayumak Suheyla, Hoffmann Nico, Rathi Yogesh, Golby Alexandra J, O’Donnell Lauren J

2020-Jun-24

oncology Oncology

A comprehensive overview of promising biomarkers in stage II colorectal cancer.

In Cancer treatment reviews

Colon cancer (CC) has the highest incidence rate among gastrointestinal cancers and ranks third in mortality among all cancers, contributing to the current CC burden and constituting a major public health issue. While therapeutic strategies for stage I, III, and IV CC are standardized, those for stage II CC remain debated. The choice of adjuvant chemotherapy for patients with stage II CC depends on the stage (pT4) and grade (high) of the disease, the presence of venous, perineural, and/or lymphatic emboli, and the need for suboptimal surgery (tumors with initial occlusion or perforation requiring emergency surgery, <12 lymph nodes harvested). Several prognostic factors validated in retrospective studies can potentially define populations of CC patients at low and high risk of recurrence. The role of biomarkers is becoming increasingly important for future personalized treatment options. We conducted a systematic overview of potential prognostic biomarkers with possible clinical implications in stage II CC.

Parent Pauline, Cohen Romain, Rassy Elie, Svrcek Magali, Taieb Julien, André Thierry, Turpin Anthony

2020-Jun-23

Adjuvant chemotherapy, Artificial intelligence, Carcinoembryonic antigen, Circulating Tumor DNA, Colorectal cancer, Immunoscore

General General

Modeling the ecological status response of rivers to multiple stressors using machine learning: A comparison of environmental DNA metabarcoding and morphological data.

In Water research

Understanding the ecological status response of rivers to multiple stressors is a precondition for river restoration and management. However, this requires the collection of appropriate data, including environmental variables and the status of aquatic organisms, and analysis via a suitable model that captures the nonlinear relationships between ecological status and various stressors. The morphological approach has been the standard data collection method employed for establishing the status of aquatic organisms. However, this approach is very laborious and restricted to a specific set of organisms. Recently, an environmental DNA (eDNA) metabarcoding approach has been developed that is far more efficient than the morphological approach and potentially applicable to an unlimited set of organisms. However, it remains unclear how well eDNA metabarcoding data reflect the impacts of environmental stressors on aquatic ecosystems compared with morphological data, which is essential for clarifying the potential applications of eDNA metabarcoding data in the ecological monitoring and management of rivers. The present work addresses this issue by modeling organism diversity, based on three indices, with respect to multiple environmental variables at both the catchment and reach scales. This is done with support vector machine (SVM) models constructed from eDNA metabarcoding and morphological data at 24 sampling locations in the Taizi River basin, China. According to the mean absolute percent error (MAPE) between the measured diversity index values and the index values predicted by the SVM models, the SVM models constructed from eDNA metabarcoding data (MAPE = 3.87) provide more accurate predictions than the SVM models constructed from morphological data (MAPE = 28.36), revealing that the eDNA metabarcoding data better reflect environmental conditions. In addition, the sensitivity of SVM model predictions of the ecological indices to both catchment-scale and reach-scale stressors is evaluated, and the stressors having the greatest impact on the ecological status of rivers are identified. The results demonstrate that the ecological status of rivers is more sensitive to environmental stressors at the reach scale than to stressors at the catchment scale. Therefore, our study is helpful in exploring the potential applications of eDNA metabarcoding data and SVM modeling in the ecological monitoring and management of rivers.

Fan Juntao, Wang Shuping, Li Hong, Yan Zhenguang, Zhang Yizhang, Zheng Xin, Wang Pengyuan

2020-Jun-15

Biomonitoring, Environmental DNA, Freshwater ecosystem, Machine learning, Modeling
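
A minimal sketch of the modeling and evaluation idea described above: a support vector machine regression of a diversity index on stressor variables, scored with the mean absolute percent error (MAPE). The data are synthetic and the feature count is a placeholder.

```python
# Sketch: SVM regression of a diversity index on environmental stressors, scored by MAPE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 6))                                 # 24 sites x 6 catchment/reach stressors
y = 2.0 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, 24)   # synthetic diversity index values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
pred = model.predict(X_te)

mape = 100 * np.mean(np.abs((y_te - pred) / y_te))           # mean absolute percent error
print(f"MAPE = {mape:.1f}%")
```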

Internal Medicine Internal Medicine

MED-TMA: A clinical decision support tool for differential diagnosis of TMA with enhanced accuracy using an ensemble method.

In Thrombosis research ; h5-index 46.0

Considering the difficulties of on-site ADAMTS13 testing and the performance instability of the PLASMIC score across ethnicities, we developed a prediction tool, MED-TMA (a machine learning (ML) method for differential diagnosis (DDx) of thrombotic microangiopathy (TMA)), to support clinical decisions. Data from 319 patients clinically diagnosed with primary TMA at 31 hospitals in Korea were randomly split 2:1 into a development dataset (D-set, n = 212) and a validation dataset (V-set, n = 107). Feature elimination was conducted to select optimal clinical predictors. We developed the model with the selected features using ML methods and verified it on the V-set. After feature elimination over 19 clinical variables, five variables with high importance values were selected. Among nine ML methods, four were chosen based on area under the curve (AUC) values and the correlation between methods on the D-set. We developed MED-TMA as an optimized ensemble of the four selected ML methods, resulting in AUC values of 0.945 and 0.924 on the D-set and V-set, respectively. In addition to the binary outcome, MED-TMA provides a probability for the DDx of TMA. The ensemble-driven MED-TMA offered accurate and intuitive decision support for the DDx of TMA, comparable to existing models based on a single ML method. We provide a web-based nomogram for the appropriate use of effective but costly therapeutics to treat TMA patients (http://hematology.snu.ac.kr/medtma/).

Yoon Jeesun, Lee Sungyoung, Sun Choong-Hyun, Kim Daeyoon, Kim Inho, Yoon Sung-Soo, Oh Doyeun, Yun Hongseok, Koh Youngil

2020-Jun-27

Atypical hemolytic uremic syndrome, Ensemble, Machine learning, Thrombotic microangiopathy, Thrombotic thrombocytopenic purpura
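
A minimal sketch of the general idea of an optimized multi-method ensemble that outputs a probability in addition to a binary call. It is not MED-TMA itself; the base learners, features, and labels below are stand-ins.

```python
# Sketch: a soft-voting ensemble of several classifiers that returns class probabilities.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(212, 5))         # toy development set: 5 selected clinical features
y = rng.integers(0, 2, size=212)      # toy binary labels (e.g., TTP vs other TMA)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft",                    # average predicted probabilities across base models
)
ensemble.fit(X, y)

new_patient = rng.normal(size=(1, 5))
print(ensemble.predict_proba(new_patient))  # probability output in addition to the binary call
```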

Pathology Pathology

How to do things with (thousands of) words: Computational approaches to discourse analysis in Alzheimer's disease.

In Cortex; a journal devoted to the study of the nervous system and behavior

Natural Language Processing (NLP) is an ever-growing field of computational science that aims to model natural human language. Combined with advances in machine learning, which learns patterns in data, it offers practical capabilities including automated language analysis. These approaches have garnered interest from clinical researchers seeking to understand the breakdown of language due to pathological changes in the brain, offering fast, replicable and objective methods. The study of Alzheimer's disease (AD), and preclinical Mild Cognitive Impairment (MCI), suggests that changes in discourse (connected speech or writing) may be key to early detection of disease. There is currently no disease-modifying treatment for AD, the leading cause of dementia in people over the age of 65, but detection of those at risk of developing the disease could help with the identification and testing of medications which can take effect before the underlying pathology has irreversibly spread. We outline important components of natural language, as well as NLP tools and approaches with which they can be extracted, analysed and used for disease identification and risk prediction. We review literature using these tools to model discourse across the spectrum of AD, including the contribution of machine learning approaches and Automatic Speech Recognition (ASR). We conclude that NLP and machine learning techniques are starting to greatly enhance research in the field, with measurable and quantifiable language components showing promise for early detection of disease, but there remain research and practical challenges for clinical implementation of these approaches. Challenges discussed include the availability of large and diverse datasets, ethics of data collection and sharing, diagnostic specificity and clinical acceptability.

Clarke Natasha, Foltz Peter, Garrard Peter

2020-May-19

Alzheimer's disease, Discourse, Machine learning, Mild Cognitive Impairment, Natural Language Processing

oncology Oncology

Using Deep Learning to Predict Beam-Tunable Pareto Optimal Dose Distribution for Intensity Modulated Radiation Therapy.

In Medical physics ; h5-index 59.0

PURPOSE : Many researchers have developed deep learning models for predicting clinical dose distributions and Pareto optimal dose distributions. Models for predicting Pareto optimal dose distributions have generated optimal plans in real time using anatomical structures and static beam orientations. However, Pareto optimal dose prediction for Intensity Modulated Radiation Therapy (IMRT) prostate planning with variable beam numbers and orientations has not yet been investigated. We propose to develop a deep learning model that can predict Pareto optimal dose distributions by using any given set of beam angles, along with patient anatomy, as input to train the deep neural networks. We implement and compare two deep learning networks that predict with two different beam configuration modalities.

METHODS : We generated Pareto optimal plans for 70 patients with prostate cancer. We used fluence map optimization to generate 500 IMRT plans that sampled the Pareto surface for each patient, for a total of 35,000 plans. We studied and compared two different models, Model I and Model II. Although both used the same anatomical structures, including the planning target volume (PTV), organs at risk (OARs), and body, the models were designed with two different methods for representing beam angles. Model I directly uses beam angles as a second input to the network as a binary vector. Model II converts the beam angles into beam doses that are conformal to the PTV. We divided the 70 patients into 54 training, 6 validation, and 10 testing patients, thus yielding 27,000 training, 3,000 validation, and 5,000 testing plans. Mean squared error (MSE) was used as the loss function. We used the Adam optimizer with a default learning rate of 0.01 to optimize the network's performance. We evaluated the models' performance by comparing their predicted dose distributions with the ground truth (Pareto optimal) dose distribution, in terms of DVH plots and evaluation metrics such as PTV D98, D95, D50, D2, Dmax, Dmean, Paddick conformation number, R50, and homogeneity index.

RESULTS : Our deep learning models predicted voxel-level dose distributions that precisely matched the ground truth dose distributions. The DVHs generated also precisely matched the ground truth. Evaluation metrics such as PTV statistics, dose conformity, dose spillage (R50), and homogeneity index also confirmed the accuracy of PTV curves on the DVH. Quantitatively, Model I's prediction errors of 0.043 (conformation), 0.043 (homogeneity), 0.327 (R50), 2.80% (D95), 3.90% (D98), 0.6% (D50), and 1.10% (D2) were lower than those of Model II, which obtained 0.076 (conformation), 0.058 (homogeneity), 0.626 (R50), 7.10% (D95), 6.50% (D98), 8.40% (D50), and 6.30% (D2). Model I also outperformed Model II in terms of the mean dose error and the max dose error on the PTV, bladder, rectum, left femoral head, and right femoral head.

CONCLUSIONS : Treatment planners who use our models will be able to use deep learning to control the tradeoffs between the PTV and OAR weights, as well as the beam number and configurations in real time. Our dose prediction methods provide a stepping stone to building automatic IMRT treatment planning.

Bohara Gyanendra, Sadeghnejad Barkousaraie Azar, Jiang Steve, Nguyen Dan

2020-Jul-04
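
A minimal sketch, under heavy simplification, of the Model I idea described above: beam angles enter the network as a binary vector alongside anatomy-derived features, and training uses MSE loss with the Adam optimizer at the stated learning rate. The layer sizes, feature encodings, and voxel count are invented; the real model operates on volumetric images.

```python
# Sketch: a beam-angle-aware dose regressor trained with MSE loss and Adam.
import torch
import torch.nn as nn

class BeamAwareDosePredictor(nn.Module):
    def __init__(self, n_anatomy_feats=256, n_beam_angles=180, n_voxels=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_anatomy_feats + n_beam_angles, 512), nn.ReLU(),
            nn.Linear(512, n_voxels),       # predicted dose per voxel (flattened)
        )

    def forward(self, anatomy, beam_vector):
        # Beam configuration enters as a binary vector concatenated to anatomy features.
        return self.net(torch.cat([anatomy, beam_vector], dim=1))

model = BeamAwareDosePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # learning rate from the abstract
loss_fn = nn.MSELoss()

anatomy = torch.randn(8, 256)              # toy anatomy encodings for a batch of 8 plans
beams = torch.zeros(8, 180)
beams[:, ::36] = 1.0                       # e.g., 5 equispaced active beam angles
target_dose = torch.rand(8, 1024)          # toy Pareto-optimal dose targets

optimizer.zero_grad()
loss = loss_fn(model(anatomy, beams), target_dose)
loss.backward()
optimizer.step()
print(float(loss))
```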

Ophthalmology Ophthalmology

Optic disc classification by deep learning versus expert neuro-ophthalmologists.

In Annals of neurology ; h5-index 85.0

OBJECTIVE : To compare the diagnostic performance of an artificial intelligence deep learning system with that of expert neuro-ophthalmologists in classifying optic disc appearance.

METHODS : The deep learning system was previously trained and validated on 14,341 ocular fundus photographs from 19 international centers. The performance of the system was evaluated on 800 new fundus photographs (400 normal optic discs, 201 papilledema [disc edema from elevated intracranial pressure], 199 other optic disc abnormalities) and compared with that of two expert neuro-ophthalmologists who independently reviewed the same randomly-presented images without clinical information. Area-under-the-receiver-operating-characteristic-curve, accuracy, sensitivity and specificity were calculated.

RESULTS : The system correctly classified 678/800 (84.7%) photographs, compared with 675/800 (84.4%) for Expert 1 and 641/800 (80.1%) for Expert 2. The system yielded area-under-the-receiver-operating-characteristic-curves of 0.97 (CI 95%, 0.96 - 0.98), 0.96 (CI 95%, 0.94 - 0.97) and 0.89 (CI 95%, 0.87 - 0.92) for the detection of normal discs, papilledema and other disc abnormalities, respectively. The accuracy, sensitivity and specificity of the system's classification of optic discs were similar to or better than those of the two experts. Inter-grader agreement at the eye level was 0.71 (CI 95%, 0.67-0.76) between Expert 1 and Expert 2, 0.72 (CI 95%, 0.68-0.76) between the system and Expert 1, and 0.65 (CI 95%, 0.61-0.70) between the system and Expert 2.

INTERPRETATION : The performance of this deep learning system at classifying optic disc abnormalities was at least as good as that of two expert neuro-ophthalmologists. Future prospective studies are needed to validate this system as a diagnostic aid in relevant clinical settings.

Biousse Valérie, Newman Nancy J, Najjar Raymond P, Vasseneix Caroline, Xu Xinxing, Ting Daniel S, Milea Léonard B, Hwang Jeong-Min, Kim Dong Hyun, Yang Hee Kyung, Hamann Steffen, Chen John J, Liu Yong, Wong Tien Yin, Milea Dan

2020-Jul-03

Radiology Radiology

Automated detection of pulmonary embolism in CT pulmonary angiograms using an AI-powered algorithm.

In European radiology ; h5-index 62.0

OBJECTIVES : To evaluate the performance of an AI-powered algorithm for the automatic detection of pulmonary embolism (PE) on chest computed tomography pulmonary angiograms (CTPAs) on a large dataset.

METHODS : We retrospectively identified all CTPAs conducted at our institution in 2017 (n = 1499). Exams with clinical questions other than PE were excluded from the analysis (n = 34). The remaining exams were classified into positive (n = 232) and negative (n = 1233) for PE based on the final written reports, which defined the reference standard. The fully anonymized 1-mm series in soft tissue reconstruction served as input for the PE detection prototype algorithm that was based on a deep convolutional neural network comprising a Resnet architecture. It was trained and validated on 28,000 CTPAs acquired at other institutions. The result series were reviewed using a web-based feedback platform. Measures of diagnostic performance were calculated on a per patient and a per finding level.

RESULTS : The algorithm correctly identified 215 of 232 exams positive for pulmonary embolism (sensitivity 92.7%; 95% confidence interval [CI] 88.3-95.5%) and 1178 of 1233 exams negative for pulmonary embolism (specificity 95.5%; 95% CI 94.2-96.6%). On a per finding level, 1174 of 1352 findings marked as embolus by the algorithm were true emboli. Most of the false positive findings were due to contrast agent-related flow artifacts, pulmonary veins, and lymph nodes.

CONCLUSION : The AI prototype algorithm we tested has a high degree of diagnostic accuracy for the detection of PE on CTPAs. Sensitivity and specificity are balanced, which is a prerequisite for its clinical usefulness.

KEY POINTS : • An AI-based prototype algorithm showed a high degree of diagnostic accuracy for the detection of pulmonary embolism on CTPAs. • It can therefore help clinicians to automatically prioritize exams with a high suspicion of pulmonary embolism and serve as a secondary reading tool. • By complementing traditional ways of worklist prioritization in radiology departments, this can speed up the diagnostic and therapeutic workup of patients with pulmonary embolism and help to avoid false negative calls.

Weikert Thomas, Winkel David J, Bremerich Jens, Stieltjes Bram, Parmar Victor, Sauter Alexander W, Sommer Gregor

2020-Jul-03

Artificial intelligence, Computed tomography angiography, Computer-assisted image processing, Pulmonary embolism

General General

The MULTICOM Protein Structure Prediction Server Empowered by Deep Learning and Contact Distance Prediction.

In Methods in molecular biology (Clifton, N.J.)

Prediction of the three-dimensional (3D) structure of a protein from its sequence is important for studying its biological function. With the advancement in deep learning contact distance prediction and residue-residue coevolutionary analysis, significant progress has been made in both template-based and template-free protein structure prediction in the last several years. Here, we provide a practical guide for our latest MULTICOM protein structure prediction system built on top of the latest advances, which was rigorously tested in the 2018 CASP13 experiment. Its specific functionalities include: (1) prediction of 1D structural features (secondary structure, solvent accessibility, disordered regions) and 2D interresidue contacts; (2) domain boundary prediction; (3) template-based (or homology) 3D structure modeling; (4) contact distance-driven ab initio 3D structure modeling; and (5) large-scale protein quality assessment enhanced by deep learning and predicted contacts. The MULTICOM web server ( http://sysbio.rnet.missouri.edu/multicom_cluster/ ) presents all the 1D, 2D, and 3D prediction results and quality assessment to users via user-friendly web interfaces and e-mails. The source code of the MULTICOM package is also available at https://github.com/multicom-toolbox/multicom .

Hou Jie, Wu Tianqi, Guo Zhiye, Quadir Farhan, Cheng Jianlin

2020

Deep learning, Fold recognition, Protein contact prediction, Protein distance prediction, Protein domain, Protein quality assessment, Protein structure prediction

General General

Machine learning prediction of stone-free success in patients with urinary stone after treatment of shock wave lithotripsy.

In BMC urology

BACKGROUND : The aims of this study were to determine the predictive value of decision support analysis for the shock wave lithotripsy (SWL) success rate and to analyze the data obtained from patients who underwent SWL to assess the factors influencing the outcome by using machine learning methods.

METHODS : We retrospectively reviewed the medical records of 358 patients who underwent SWL for urinary stones (kidney and upper-ureter stones) between 2015 and 2018 and evaluated possible prognostic features, including patient characteristics and urinary stone characteristics on non-contrast computed tomography images. We used an 80% training set and a 20% test set for the prediction of success and mainly used decision tree-based machine learning algorithms, such as random forest (RF), extreme gradient boosting trees (XGBoost), and light gradient boosting method (LightGBM).

RESULTS : In the machine learning analysis, the prediction accuracies for stone-free status were 86.0%, 87.5%, and 87.9%, and those for one-session success were 78.0%, 77.4%, and 77.0%, using RF, XGBoost, and LightGBM, respectively. LightGBM yielded the best accuracy for predicting stone-free status, and RF the best for one-session success. The sensitivity and specificity values were 0.74 to 0.78 and 0.92 to 0.93 for stone-free status, and 0.79 to 0.81 and 0.74 to 0.75 for one-session success, respectively. The area under the curve (AUC) values, averaged over the methods, were 0.84 to 0.85 for stone-free status and 0.77 to 0.78 for one-session success, with 95% confidence intervals (CIs) of 0.730 to 0.933 and 0.673 to 0.866, respectively.

CONCLUSIONS : We applied selected machine learning analyses to predict outcomes after SWL treatment of urinary stones. A machine learning-based predictive model with approximately 88% accuracy was obtained. Feature importance from the machine learning algorithms offers insights, consistent with domain knowledge, into the factors that influence SWL success.

Yang Seung Woo, Hyon Yun Kyong, Na Hyun Seok, Jin Long, Lee Jae Geun, Park Jong Mok, Lee Ji Yong, Shin Ju Hyun, Lim Jae Sung, Na Yong Gil, Jeon Kiwan, Ha Taeyoung, Kim Jinbum, Song Ki Hak

2020-Jul-03

Artificial intelligence, Lithotripsy, Machine learning
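
A minimal sketch of the evaluation pipeline described above: an 80/20 split, a decision-tree-based classifier, and accuracy, sensitivity, specificity, and AUC as reported metrics. The patient features and labels are synthetic, and a single random forest stands in for the three tree-based methods compared in the study.

```python
# Sketch: 80/20 split, random forest, and the metrics reported in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(358, 8))                                           # toy patient/stone features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 358) > 0).astype(int)   # toy stone-free labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:", accuracy_score(y_te, pred))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_te, prob))
```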

Pathology Pathology

Digital pathology and artificial intelligence will be key to supporting clinical and academic cellular pathology through COVID-19 and future crises: the PathLAKE consortium perspective.

In Journal of clinical pathology

The measures to control the COVID-19 outbreak will likely remain a feature of our working lives until a suitable vaccine or treatment is found. The pandemic has had a substantial impact on clinical services, including cancer pathways. Pathologists are working remotely in many circumstances to protect themselves, colleagues, family members and the delivery of clinical services. The effects of COVID-19 on research and clinical trials have also been significant with changes to protocols, suspensions of studies and redeployment of resources to COVID-19. In this article, we explore the specific impact of COVID-19 on clinical and academic pathology and explore how digital pathology and artificial intelligence can play a key role to safeguarding clinical services and pathology-based research in the current climate and in the future.

Browning Lisa, Colling Richard, Rakha Emad, Rajpoot Nasir, Rittscher Jens, James Jacqueline A, Salto-Tellez Manuel, Snead David R J, Verrill Clare

2020-Jul-03

computer systems, image processing, computer-assisted, pathology, surgical

General General

Automatic semantic segmentation for prediction of tuberculosis using lens-free microscopy images

ArXiv Preprint

Tuberculosis (TB), caused by the bacterium Mycobacterium tuberculosis, is one of the most serious public health problems in Peru and the world. This project seeks to facilitate and automate the diagnosis of tuberculosis by the MODS method using lens-free microscopy, because such microscopes are easier to calibrate and easier to use (by untrained personnel) than lens-based microscopes. We employ a U-Net network on our collected dataset to perform automatic segmentation of the TB cords in order to predict tuberculosis. Our initial results show promising evidence for automatic segmentation of TB cords.

Dennis Núñez-Fernández, Lamberto Ballan, Gabriel Jiménez-Avalos, Jorge Coronel, Mirko Zimic

2020-07-06

General General

Structural Modeling and Ligand-Binding Prediction for Analysis of Structure-Unknown and Function-Unknown Proteins Using FORTE Alignment and PoSSuM Pocket Search.

In Methods in molecular biology (Clifton, N.J.)

Structural data of biomolecules, such as proteins and nucleic acids, provide much information for estimating their functions. For structure-unknown proteins, structural information can be obtained by modeling their structures based on sequence similarity to proteins of known structure. Moreover, information on ligands or ligand-binding sites is necessary to elucidate protein functions, because ligand binding can not only activate or inactivate proteins but also modify their functions. This chapter presents methods that use our profile-profile alignment server FORTE and the PoSSuM ligand-binding site database to predict the structure and potential ligand-binding sites of structure-unknown and function-unknown proteins, with the aim of protein function prediction.

Tsuchiya Yuko, Tomii Kentaro

2020

Function prediction, Homology/comparative modeling, Pocket detection, Potential ligand-binding site prediction, Profile–profile alignment

Surgery Surgery

Current applications of artificial intelligence for intraoperative decision support in surgery.

In Frontiers of medicine

Research into medical artificial intelligence (AI) has made significant advances in recent years, including surgical applications. This scoping review investigated AI-based decision support systems targeted at the intraoperative phase of surgery and found a wide range of technological approaches applied across several surgical specialties. Within the twenty-one (n = 21) included papers, three main categories of motivations were identified for developing such technologies: (1) augmenting the information available to surgeons, (2) accelerating intraoperative pathology, and (3) recommending surgical steps. While many of the proposals hold promise for improving patient outcomes, important methodological shortcomings were observed in most of the reviewed papers that made it difficult to assess the clinical significance of the reported performance statistics. Despite limitations, the current state of this field suggests that a number of opportunities exist for future researchers and clinicians to work on AI for surgical decision support with exciting implications for improving surgical care.

Navarrete-Welton Allison J, Hashimoto Daniel A

2020-Jul-03

artificial intelligence, clinical decision support systems, computer vision, decision support, deep learning, intraoperative, machine learning, surgery

General General

An ensemble learning based hybrid model and framework for air pollution forecasting.

In Environmental science and pollution research international

As the economy and industry have advanced, the impact of air pollution has gradually gained attention. To predict air quality, many studies have exploited various machine learning techniques to build predictive models for pollutant concentration or air quality. However, improving prediction performance remains a common challenge for existing studies. Machine learning and deep learning methods such as GBTR (gradient boosted tree regression), SVR (support vector machine-based regression), and LSTM (long short-term memory) are among the most promising approaches to this problem. Previous research has also shown that ensemble learning can improve predictive performance in other domains. To improve forecasting accuracy, in this paper we propose a hybrid model and framework for air pollution forecasting. We exploit a stacking-based ensemble learning scheme, using the Pearson correlation coefficient to measure the correlation between different machine learning models and integrate the forecasting models, and we construct a framework based on Spark+Hadoop machine learning and the TensorFlow deep learning framework to physically integrate these models and demonstrate air pollution forecasting for the next 1 to 8 h. We also conduct experiments and compare the results with GBTR, SVR, LSTM, and LSTM2 (version 2) models to demonstrate the proposed hybrid model's predictive performance. The experimental results show that the hybrid model is superior to the existing models used for predicting air pollution.

Chang Yue-Shan, Abimannan Satheesh, Chiao Hsin-Ta, Lin Chi-Yeh, Huang Yo-Ping

2020-Jul-03

Air pollution forecasting, Ensemble learning, GBTR, LSTM, PM2.5, Pearson correlation coefficient, SVR
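
A minimal sketch of the stacking idea described above: check the Pearson correlation between base-model predictions, then combine the base models with a meta-learner. The data are synthetic, the LSTM base learner is omitted for brevity, and the Spark/Hadoop/TensorFlow integration is out of scope here.

```python
# Sketch: Pearson-correlation check between base models, then a stacking ensemble.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                              # toy hourly weather/pollutant features
y = X @ np.array([1.0, 0.5, 0, 0, -0.8, 0.2]) + rng.normal(0, 0.3, 500)    # toy PM2.5 one hour ahead

gbt = GradientBoostingRegressor(random_state=0).fit(X, y)
svr = SVR(C=10.0).fit(X, y)
r, _ = pearsonr(gbt.predict(X), svr.predict(X))
print(f"Pearson r between base-model predictions: {r:.2f}")                # lower r -> more complementary models

stack = StackingRegressor(
    estimators=[("gbt", GradientBoostingRegressor(random_state=0)), ("svr", SVR(C=10.0))],
    final_estimator=Ridge(),                                               # meta-learner combines base predictions
)
stack.fit(X, y)
print(stack.predict(X[:3]))
```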

General General

A mapping study of ensemble classification methods in lung cancer decision support systems.

In Medical & biological engineering & computing ; h5-index 32.0

Achieving a high level of classification accuracy in medical datasets is essential for researchers seeking to provide effective decision systems that assist doctors in their work. In many domains of artificial intelligence, ensemble classification methods are able to improve the performance of single classifiers. This paper reports the state of the art of ensemble classification methods in lung cancer detection. We performed a systematic mapping study to identify the most relevant papers on this topic. A total of 65 papers published between 2000 and 2018 were selected after an automatic search in four digital libraries and a careful selection process. We observed that diagnosis was the task most commonly studied; homogeneous ensembles and decision trees were the most frequently adopted for constructing ensembles; and majority voting was the predominant combination rule. Few studies considered parameter tuning of the techniques used. These findings open several perspectives for researchers to enhance lung cancer research by addressing the identified gaps, such as investigating different classification methods, proposing other heterogeneous ensemble methods, and using new combination rules. Graphical abstract: Main features of the mapping study performed on ensemble classification methods applied to lung cancer decision support systems.

Hosni Mohamed, García-Mateos Ginés, Carrillo-de-Gea Juan M, Idri Ali, Fernández-Alemán José Luis

2020-Jul-03

Classification, Decision support systems, Ensemble methods, Lung cancer, Machine learning

General General

Demystifying artificial intelligence in pharmacy.

In American journal of health-system pharmacy : AJHP : official journal of the American Society of Health-System Pharmacists

PURPOSE : To provide pharmacists and other clinicians with a basic understanding of the underlying principles and practical applications of artificial intelligence (AI) in the medication-use process.

SUMMARY : "Artificial intelligence" is a general term used to describe the theory and development of computer systems to perform tasks that normally would require human cognition, such as perception, language understanding, reasoning, learning, planning, and problem solving. Following the fundamental theorem of informatics, a better term for AI would be "augmented intelligence," or leveraging the strengths of computers and the strengths of clinicians together to obtain improved outcomes for patients. Understanding the vocabulary of and methods used in AI will help clinicians productively communicate with data scientists to collaborate on developing models that augment patient care. This primer includes discussion of approaches to identifying problems in practice that could benefit from application of AI and those that would not, as well as methods of training, validating, implementing, evaluating, and maintaining AI models. Some key limitations of AI related to the medication-use process are also discussed.

CONCLUSION : As medication-use domain experts, pharmacists play a key role in developing and evaluating AI in healthcare. An understanding of the core concepts of AI is necessary to engage in collaboration with data scientists and to critically evaluate its place in patient care, especially as clinical practice continues to evolve and develop.

Nelson Scott D, Walsh Colin G, Olsen Casey A, McLaughlin Andrew J, LeGrand Joseph R, Schutz Nick, Lasko Thomas A

2020-Jul-04

artificial intelligence, machine learning, medical decision making, medication systems, neural networks, prediction

General General

A new method to control error rates in automated species identification with deep learning algorithms.

In Scientific reports ; h5-index 158.0

Processing data from surveys using photos or videos remains a major bottleneck in ecology. Deep Learning Algorithms (DLAs) have been increasingly used to automatically identify organisms in images. However, despite recent advances, it remains difficult to control the error rate of such methods. Here, we proposed a new framework to control the error rate of DLAs. More precisely, for each species, a confidence threshold was automatically computed using a training dataset independent from the one used to train the DLAs. These species-specific thresholds were then used to post-process the outputs of the DLAs, assigning classification scores to each class for a given image, including a new class called "unsure". We applied this framework to a case study identifying 20 fish species from 13,232 underwater images on coral reefs. The overall rate of species misclassification decreased from 22% with the raw DLAs to 2.98% after post-processing using the thresholds defined to minimize the risk of misclassification. This new framework has the potential to unclog the bottleneck of information extraction from massive digital data while ensuring a high level of accuracy in biodiversity assessment.
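
A minimal sketch of the threshold post-processing step, assuming per-class softmax scores and hypothetical species names and threshold values:

    import numpy as np

    def apply_species_thresholds(probs, thresholds, labels, unsure="unsure"):
        """Keep a prediction only if the top-class score clears that species'
        threshold (tuned on an independent dataset); otherwise return 'unsure'."""
        out = []
        for p in probs:
            k = int(np.argmax(p))
            out.append(labels[k] if p[k] >= thresholds[labels[k]] else unsure)
        return out

    # Hypothetical softmax outputs for three images over two species
    probs = np.array([[0.97, 0.03], [0.55, 0.45], [0.20, 0.80]])
    thresholds = {"species_A": 0.90, "species_B": 0.75}
    print(apply_species_thresholds(probs, thresholds, ["species_A", "species_B"]))
    # -> ['species_A', 'unsure', 'species_B']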

Villon Sébastien, Mouillot David, Chaumont Marc, Subsol Gérard, Claverie Thomas, Villéger Sébastien

2020-Jul-03

General General

Automated design of a convolutional neural network with multi-scale filters for cost-efficient seismic data classification.

In Nature communications ; h5-index 260.0

Geoscientists mainly identify subsurface geologic features using exploration-derived seismic data. Classification or segmentation of 2D/3D seismic images commonly relies on conventional deep learning methods for image recognition. However, complex reflections of seismic waves tend to form high-dimensional and multi-scale signals, making traditional convolutional neural networks (CNNs) computationally costly. Here we propose a highly efficient and resource-saving CNN architecture (SeismicPatchNet) with topological modules and multi-scale-feature fusion units for classifying seismic data, which was discovered by an automated data-driven search strategy. The storage volume of the architecture parameters (0.73 M) is only ~2.7 MB, ~0.5% of the well-known VGG-16 architecture. SeismicPatchNet predicts nearly 18 times faster than ResNet-50 and shows an overwhelming advantage in identifying Bottom Simulating Reflection (BSR), an indicator of marine gas-hydrate resources. Saliency mapping demonstrated that our architecture captured key features well. These results suggest the prospect of end-to-end interpretation of multiple seismic datasets at extremely low computational cost.

Geng Zhi, Wang Yanfei

2020-Jul-03

General General

Structure-based machine-guided mapping of amyloid sequence space reveals uncharted sequence clusters with higher solubilities.

In Nature communications ; h5-index 260.0

The amyloid conformation can be adopted by a variety of sequences, but the precise boundaries of amyloid sequence space are still unclear. The currently charted amyloid sequence space is strongly biased towards hydrophobic, beta-sheet-prone sequences that form the core of globular proteins and towards Q/N/Y-rich yeast prions. Here, we took advantage of the increasing amount of high-resolution structural information on amyloid cores currently available in the protein databank to implement a machine learning approach, named Cordax (https://cordax.switchlab.org), that explores amyloid sequence space beyond its current boundaries. Clustering by t-Distributed Stochastic Neighbour Embedding (t-SNE) shows how our approach resulted in an expansion away from hydrophobic amyloid sequences towards clusters of lower aliphatic content and higher charge, or regions of helical and disordered propensities. These clusters uncouple amyloid propensity from solubility, representing sequence flavours compatible with surface-exposed patches in globular proteins, functional amyloids, or sequences associated with liquid-liquid phase transitions.
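
For readers unfamiliar with the clustering step, the sketch below runs t-SNE on hypothetical per-sequence feature vectors with scikit-learn; the features and cluster structure are made up and do not come from Cordax.

    import numpy as np
    from sklearn.manifold import TSNE

    # Hypothetical per-sequence feature vectors (e.g. hydrophobicity, charge,
    # secondary-structure propensities); two synthetic clusters stand in for
    # the hydrophobic and the charged/polar sequence flavours
    rng = np.random.default_rng(0)
    features = np.vstack([rng.normal(0, 1, (200, 10)),
                          rng.normal(3, 1, (200, 10))])

    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
    print(embedding.shape)   # (400, 2) coordinates ready for a cluster plot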

Louros Nikolaos, Orlando Gabriele, De Vleeschouwer Matthias, Rousseau Frederic, Schymkowitz Joost

2020-Jul-03

Pathology Pathology

PARGT: a software tool for predicting antimicrobial resistance in bacteria.

In Scientific reports ; h5-index 158.0

With the ever-increasing availability of whole-genome sequences, machine-learning approaches can be used as an alternative to traditional alignment-based methods for identifying new antimicrobial-resistance genes. Such approaches are especially helpful when pathogens cannot be cultured in the lab. In previous work, we proposed a game-theory-based feature evaluation algorithm. When using the protein characteristics identified by this algorithm, called 'features' in machine learning, our model accurately identified antimicrobial resistance (AMR) genes in Gram-negative bacteria. Here we extend our study to Gram-positive bacteria showing that coupling game-theory-identified features with machine learning achieved classification accuracies between 87% and 90% for genes encoding resistance to the antibiotics bacitracin and vancomycin. Importantly, we present a standalone software tool that implements the game-theory algorithm and machine-learning model used in these studies.

Chowdhury Abu Sayed, Call Douglas R, Broschat Shira L

2020-Jul-03

General General

Machine learning-based prediction of acute severity in infants hospitalized for bronchiolitis: a multicenter prospective study.

In Scientific reports ; h5-index 158.0

We aimed to develop machine learning models to accurately predict bronchiolitis severity, and to compare their predictive performance with a conventional scoring (reference) model. In a 17-center prospective study of infants (aged < 1 year) hospitalized for bronchiolitis, by using routinely-available pre-hospitalization data as predictors, we developed four machine learning models: Lasso regression, elastic net regression, random forest, and gradient boosted decision tree. We compared their predictive performance (e.g., area under the curve (AUC), sensitivity, specificity, and net benefit (decision curves)), using a cross-validation method, with that of the reference model. The outcomes were positive pressure ventilation use and intensive treatment (admission to intensive care unit and/or positive pressure ventilation use). Of 1,016 infants, 5.4% underwent positive pressure ventilation and 16.0% had intensive treatment. For the positive pressure ventilation outcome, the machine learning models outperformed the reference model (e.g., AUC 0.88 [95% CI 0.84-0.93] for the gradient boosted decision tree vs 0.62 [95% CI 0.53-0.70] for the reference model), with higher sensitivity (0.89 [95% CI 0.80-0.96] vs. 0.62 [95% CI 0.49-0.75]) and specificity (0.77 [95% CI 0.75-0.80] vs. 0.57 [95% CI 0.54-0.60]). The machine learning models also achieved a greater net benefit over ranges of clinical thresholds. Machine learning models consistently demonstrated a superior ability to predict acute severity and achieved greater net benefit.
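
The comparison below is a minimal scikit-learn sketch of cross-validated AUC for four model families of the kinds named above, on synthetic data with a rare outcome; the penalized logistic regressions stand in for the paper's Lasso and elastic net models, and none of the settings are taken from the study.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic cohort with a rare outcome, loosely mimicking positive pressure ventilation use
    X, y = make_classification(n_samples=1000, n_features=20, weights=[0.95], random_state=0)

    models = {
        "lasso":         LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
        "elastic_net":   LogisticRegression(penalty="elasticnet", solver="saga",
                                            l1_ratio=0.5, C=0.5, max_iter=5000),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
        "gbdt":          GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name:13s} cross-validated AUC = {auc.mean():.2f} +/- {auc.std():.2f}")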

Raita Yoshihiko, Camargo Carlos A, Macias Charles G, Mansbach Jonathan M, Piedra Pedro A, Porter Stephen C, Teach Stephen J, Hasegawa Kohei

2020-Jul-03

Ophthalmology Ophthalmology

An artificial intelligent platform for live cell identification and the detection of cross-contamination.

In Annals of translational medicine

Background : About 30% of cell lines have been cross-contaminated or misidentified, which can result in invalidated experimental results and unusable therapeutic products. Cell morphology is routinely observed under the microscope, and DNA sequencing analysis is performed periodically to verify cell line identity, but sequencing analysis is costly, time-consuming, and labor intensive. The purpose of this study was to construct a novel artificial intelligence (AI) technology for "cell face" recognition, which can predict DNA-level identification labels using only cell images.

Methods : Seven commonly used cell lines were cultured and co-cultured in pairs (8 categories in total) to simulate the situation of pure and cross-contaminated cells. Microscopy images were obtained and labeled with cell types according to the results of short tandem repeat profiling. About 2 million patch images were used for model training and testing. AlexNet was used to demonstrate the effectiveness of convolutional neural networks (CNNs) in cell classification. To further improve the feasibility of detecting cross-contamination, a bilinear network for fine-grained identification was constructed. The specificity, sensitivity, and accuracy of the model were tested separately by external validation. Finally, cell semantic segmentation was conducted with DilatedNet.
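
As a rough illustration of the bilinear pooling at the heart of a B-CNN, the PyTorch sketch below combines two feature maps by an outer product pooled over spatial locations; the feature sizes are arbitrary and this is not the authors' implementation.

    import torch

    def bilinear_pool(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a: (N, C1, H, W) and feat_b: (N, C2, H, W) from two CNN streams
        n, c1, h, w = feat_a.shape
        c2 = feat_b.shape[1]
        a = feat_a.reshape(n, c1, h * w)
        b = feat_b.reshape(n, c2, h * w)
        x = torch.bmm(a, b.transpose(1, 2)) / (h * w)           # outer product pooled over space
        x = x.reshape(n, c1 * c2)
        x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)    # signed square root
        return torch.nn.functional.normalize(x, dim=1)          # L2 normalisation

    a = torch.randn(2, 64, 7, 7)
    b = torch.randn(2, 64, 7, 7)
    print(bilinear_pool(a, b).shape)   # torch.Size([2, 4096])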

Results : Cell texture and density were the influencing factors that were better recognized by the bilinear convolutional neural network (BCNN) compared with AlexNet. The BCNN achieved 99.5% accuracy in identifying seven pure cell lines and 86.3% accuracy in detecting cross-contamination (mixtures of two of the seven cell lines). DilatedNet was applied to semantic segmentation for single-cell-level analysis and achieved an accuracy of 98.2%.

Conclusions : The deep CNN model proposed in this study has the ability to recognize small differences in cell morphology, and achieved high classification accuracy.

Wang Ruixin, Wang Dongni, Kang Dekai, Guo Xusen, Guo Chong, Dongye Meimei, Zhu Yi, Chen Chuan, Zhang Xiayin, Long Erping, Wu Xiaohang, Liu Zhenzhen, Lin Duoru, Wang Jinghui, Huang Kai, Lin Haotian

2020-Jun

Cell authentification, biomedical optical imaging, image classification, neural networks

General General

Domain Transform Network for Photoacoustic Tomography from Limited-view and Sparsely Sampled Data.

In Photoacoustics

Medical image reconstruction methods based on deep learning have recently demonstrated powerful performance in photoacoustic tomography (PAT) from limited-view and sparse data. However, because most of these methods must utilize conventional linear reconstruction methods to implement signal-to-image transformations, their performance is restricted. In this paper, we propose a novel deep learning reconstruction approach that integrates appropriate data pre-processing and training strategies. The Feature Projection Network (FPnet) presented herein is designed to learn this signal-to-image transformation through data-driven learning rather than through direct use of linear reconstruction. To further improve reconstruction results, our method integrates an image post-processing network (U-net). Experiments show that the proposed method can achieve high reconstruction quality from limited-view data with sparse measurements. When employing GPU acceleration, this method can achieve a reconstruction speed of 15 frames per second.

Tong Tong, Huang Wenhui, Wang Kun, He Zicong, Yin Lin, Yang Xin, Zhang Shuixing, Tian Jie

2020-Sep

Deep learning, Domain transformation, Medical image reconstruction, Photoacoustic tomography

Oncology Oncology

A priori prediction of tumour response to neoadjuvant chemotherapy in breast cancer patients using quantitative CT and machine learning.

In Scientific reports ; h5-index 158.0

Response to Neoadjuvant chemotherapy (NAC) has demonstrated a high correlation to survival in locally advanced breast cancer (LABC) patients. An early prediction of responsiveness to NAC could facilitate treatment adjustments on an individual patient basis that would be expected to improve treatment outcomes and patient survival. This study investigated, for the first time, the efficacy of quantitative computed tomography (qCT) parametric imaging to characterize intra-tumour heterogeneity and its application in predicting tumour response to NAC in LABC patients. Textural analyses were performed on CT images acquired from 72 patients before the start of chemotherapy to determine quantitative features of intra-tumour heterogeneity. The best feature subset for response prediction was selected through sequential feature selection with the bootstrap 0.632+ area under the receiver operating characteristic (ROC) curve as the performance criterion. Several classifiers were evaluated for response prediction using the selected feature subset. Amongst the applied classifiers, an AdaBoost decision tree provided the best results, with cross-validated 0.632+ AUC, accuracy, sensitivity and specificity of 0.89, 84%, 80% and 88%, respectively. The promising results obtained in this study demonstrate the potential of the proposed biomarkers to be used as predictors of LABC tumour response to NAC prior to the start of treatment.
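
A simplified sketch of the feature-selection step, using scikit-learn's SequentialFeatureSelector with an AdaBoost decision-tree classifier and plain cross-validated AUC in place of the paper's bootstrap 0.632+ criterion; the synthetic features are illustrative only.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for 72 patients with CT texture features
    X, y = make_classification(n_samples=72, n_features=30, n_informative=5, random_state=0)

    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    sfs = SequentialFeatureSelector(clf, n_features_to_select=5,
                                    scoring="roc_auc", cv=5).fit(X, y)
    X_sel = sfs.transform(X)
    auc = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc")
    print("selected feature indices:", sfs.get_support(indices=True))
    print(f"cross-validated AUC on selected features: {auc.mean():.2f}")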

Moghadas-Dastjerdi Hadi, Sha-E-Tallat Hira Rahman, Sannachi Lakshmanan, Sadeghi-Naini Ali, Czarnota Gregory J

2020-Jul-02

General General

HAPPENN is a novel tool for hemolytic activity prediction for therapeutic peptides which employs neural networks.

In Scientific reports ; h5-index 158.0

The growing prevalence of resistance to antibiotics motivates the search for new antibacterial agents. Antimicrobial peptides are a diverse class of well-studied membrane-active peptides which function as part of the innate host defence system, and form a promising avenue in antibiotic drug research. Some antimicrobial peptides exhibit toxicity against eukaryotic membranes, typically characterised by hemolytic activity assays, but currently, the understanding of what differentiates hemolytic and non-hemolytic peptides is limited. This study leverages advances in machine learning research to produce a novel artificial neural network classifier for the prediction of hemolytic activity from a peptide's primary sequence. The classifier achieves best-in-class performance, with cross-validated accuracy of [Formula: see text] and Matthews correlation coefficient of 0.71. This innovative classifier is available as a web server at https://research.timmons.eu/happenn , allowing the research community to utilise it for in silico screening of peptide drug candidates for high therapeutic efficacies.

Timmons Patrick Brendan, Hewage Chandralal M

2020-Jul-02

General General

Systems biology comprehensive analysis on breast cancer for identification of key gene modules and genes associated with TNM-based clinical stages.

In Scientific reports ; h5-index 158.0

Breast cancer (BC), as one of the leading causes of death among women, comprises several subtypes with controversial and poor prognosis. Considering the TNM (tumor, lymph node, metastasis) based classification for staging of breast cancer, it is essential to diagnose the disease at early stages. The present study aims to take advantage of the systems biology approach on genome-wide gene expression profiling datasets to identify the potential biomarkers involved at stage I, stage II, stage III, and stage IV as well as in the integrated group. Three HER2-negative breast cancer microarray datasets were retrieved from the GEO database, including normal, stage I, stage II, stage III, and stage IV samples. Additionally, one dataset was also extracted to test the developed predictive models trained on the three datasets. The analysis of gene expression profiles to identify differentially expressed genes (DEGs) was performed after preprocessing and normalization of data. Then, statistically significant prioritized DEGs were used to construct protein-protein interaction networks for the stages for module analysis and biomarker identification. Furthermore, the prioritized DEGs were used to determine the involved GO enrichment and KEGG signaling pathways at various stages of breast cancer. The recurrence survival rate analysis of the identified gene biomarkers was conducted based on Kaplan-Meier methodology. Furthermore, the identified genes were validated not only by using several classification models but also through screening the experimental literature reports on the target genes. Fourteen (21 genes), nine (17 genes), eight (10 genes), four (7 genes), and six (8 genes) gene modules (a total of 53 unique genes out of 63, including those with the same connectivity degree) were identified for stage I, stage II, stage III, stage IV, and the integrated group, respectively. Moreover, SMC4, FN1, FOS, JUN, and KIF11 and RACGAP1, the genes with the highest connectivity degrees, were in module 1 for the abovementioned stages, respectively. The biological processes, cellular components, and molecular functions were reported as outcomes of the GO analysis and KEGG pathway assessment. Additionally, the Kaplan-Meier analysis revealed that 33 genes were significant when the recurrence-free survival rate was considered as an alternative to the overall survival rate. Furthermore, the machine learning classification models show good performance on the determined biomarkers. Moreover, the literature reports have confirmed all of the identified gene biomarkers for breast cancer. According to the literature evidence, the identified hub genes are highly correlated with HER2-negative breast cancer. The 53-mRNA signature might be a potential gene set for TNM-based stages as well as possible therapeutics, with potentially good performance in predicting and managing recurrence-free survival rates at stages I, II, III, and IV as well as in the integrated group. Moreover, the identified genes for the TNM-based stages can also be used as mRNA profile signatures to determine the current stage of breast cancer.
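
For the survival step, a minimal sketch of a Kaplan-Meier comparison with the lifelines package on made-up recurrence data, grouped by hypothetical high versus low expression of one hub gene:

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Made-up recurrence-free survival data grouped by expression of one hub gene
    df = pd.DataFrame({
        "months":          [12, 30, 45, 60, 18, 25, 50, 70, 15, 40],
        "recurred":        [1, 0, 1, 0, 1, 1, 0, 0, 1, 0],
        "high_expression": [1, 1, 0, 0, 1, 1, 0, 0, 1, 0],
    })
    high, low = df[df.high_expression == 1], df[df.high_expression == 0]

    kmf = KaplanMeierFitter()
    kmf.fit(high.months, event_observed=high.recurred, label="high expression")
    print(kmf.survival_function_.tail(1))

    res = logrank_test(high.months, low.months,
                       event_observed_A=high.recurred, event_observed_B=low.recurred)
    print(f"log-rank p-value: {res.p_value:.3f}")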

Amjad Elham, Asnaashari Solmaz, Sokouti Babak, Dastmalchi Siavoush

2020-Jul-02

General General

Classification of the COVID-19 infected patients using DenseNet201 based deep transfer learning.

In Journal of biomolecular structure & dynamics

Deep learning models are widely used in the automatic analysis of radiological images. These techniques can train the weights of networks on large datasets as well as fine-tune the weights of pre-trained networks on small datasets. Because of the small COVID-19 dataset available, pre-trained neural networks can be used for the diagnosis of coronavirus. However, the application of these techniques to chest CT images has been very limited so far. Hence, the main aim of this paper is to use pre-trained deep learning architectures as an automated tool for the detection and diagnosis of COVID-19 in chest CT. A DenseNet201-based deep transfer learning (DTL) model is proposed to classify patients as COVID infected or not, i.e., COVID-19 (+) or COVID-19 (-). The proposed model extracts features using its own weights learned on the ImageNet dataset along with a convolutional neural structure. Extensive experiments are performed to evaluate the performance of the proposed DTL model on COVID-19 chest CT scan images. Comparative analyses reveal that the proposed DTL-based COVID-19 classification model outperforms the competitive approaches. Communicated by Ramaswamy H. Sarma.
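
A minimal transfer-learning sketch with an ImageNet-pretrained DenseNet201 feature extractor in TensorFlow/Keras; the input size, classifier head, and training settings are assumptions and not taken from the paper.

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import DenseNet201

    # ImageNet-pretrained DenseNet201 used as a frozen feature extractor for CT slices
    base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),   # COVID-19 (+) vs COVID-19 (-)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy", "AUC"])
    model.summary()
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # with a tf.data pipeline of CT images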

Jaiswal Aayush, Gianchandani Neha, Singh Dilbag, Kumar Vijay, Kaur Manjit

2020-Jul-03

COVID-19, classification, deep learning, deep transfer learning

General General

Adaptive boundary control of a vibrating cantilever nanobeam considering small scale effects.

In ISA transactions

This paper presents vibration control analysis for a cantilever nanobeam system. The dynamics of the system are obtained from the non-local elastic relationship, which characterizes the small-scale effects. The boundary conditions and governing equation are expressed, respectively, by several ordinary differential equations (ODEs) and a partial differential equation (PDE) with the help of Hamilton's principle. Model-based control and adaptive control are both designed at the free end to regulate the vibration in the control section. By employing the Lyapunov stability approach, the system state can be proven to converge to a small neighbourhood of zero with appropriate parameters. Simulation results illustrate that the designed control is feasible for the nanobeam system.

Yue Xinling, Song Yuhua, Zou Jianxiao, He We

2020-Jun-16

Adaptive control, Cantilever nanobeam, Nonlocal elastic theory, Partial differential equation, Vibration control

Surgery Surgery

Using Machine Learning to Estimate Unobserved COVID-19 Infections in North America.

In The Journal of bone and joint surgery. American volume

BACKGROUND : The detection of coronavirus disease 2019 (COVID-19) cases remains a huge challenge. As of April 22, 2020, the COVID-19 pandemic continues to take its toll, with >2.6 million confirmed infections and >183,000 deaths. Dire projections are surfacing almost every day, and policymakers worldwide are using projections for critical decisions. Given this background, we modeled unobserved infections to examine the extent to which we might be grossly underestimating COVID-19 infections in North America.

METHODS : We developed a machine-learning model to uncover hidden patterns based on reported cases and to predict potential infections. First, our model relied on dimensionality reduction to identify parameters that were key to uncovering hidden patterns. Next, our predictive analysis used an unbiased hierarchical Bayesian estimator approach to infer past infections from current fatalities.
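
A much-simplified back-calculation in the same spirit, not the authors' hierarchical Bayesian estimator: shift cumulative deaths by an assumed lag, divide by an assumed infection-fatality rate (IFR), and subtract confirmed cases. The confirmed-case count is taken from the abstract; the lagged death count and the IFR values are illustrative assumptions only.

    def undetected_infections(cum_deaths_lagged, ifr, confirmed_cases):
        """Infections implied by deaths observed 'lag' days later, minus confirmed cases."""
        implied_infections = cum_deaths_lagged / ifr
        return max(implied_infections - confirmed_cases, 0.0)

    confirmed = 840_476        # confirmed US cases reported on April 22, 2020 (from the abstract)
    deaths_lagged = 46_500     # hypothetical cumulative deaths matched to the assumed lag
    for ifr in (0.005, 0.01, 0.02):
        est = undetected_infections(deaths_lagged, ifr, confirmed)
        print(f"assumed IFR {ifr:.1%}: ~{est:,.0f} undetected infections")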

RESULTS : Our analysis indicates that, when we assumed a 13-day lag time from infection to death, the United States, as of April 22, 2020, likely had at least 1.3 million undetected infections. With a longer lag time-for example, 23 days-there could have been at least 1.7 million undetected infections. Given these assumptions, the number of undetected infections in Canada could have ranged from 60,000 to 80,000. Duarte's elegant unbiased estimator approach suggested that, as of April 22, 2020, the United States had up to >1.6 million undetected infections and Canada had at least 60,000 to 86,000 undetected infections. However, the Johns Hopkins University Center for Systems Science and Engineering data feed on April 22, 2020, reported only 840,476 and 41,650 confirmed cases for the United States and Canada, respectively.

CONCLUSIONS : We have identified 2 key findings: (1) as of April 22, 2020, the United States may have had 1.5 to 2.029 times the number of reported infections and Canada may have had 1.44 to 2.06 times the number of reported infections and (2) even if we assume that the fatality and growth rates in the unobservable population (undetected infections) are similar to those in the observable population (confirmed infections), the number of undetected infections may be within ranges similar to those described above. In summary, 2 different approaches indicated similar ranges of undetected infections in North America.

LEVEL OF EVIDENCE : Prognostic Level V. See Instructions for Authors for a complete description of levels of evidence.

Vaid Shashank, Cakan Caglar, Bhandari Mohit

2020-Jul-01

General General

Machine learning for automatic identification of thoracoabdominal asynchrony in children.

In Pediatric research ; h5-index 47.0

BACKGROUND : The current methods for assessment of thoracoabdominal asynchrony (TAA) require offline analysis on the part of physicians (respiratory inductance plethysmography (RIP)) or require experts for interpretation of the data (sleep apnea detection).

METHODS : To assess synchrony between the thorax and abdomen, the movements of the two compartments during quiet breathing were measured using pneuRIP. Fifty-one recordings were obtained: 20 were used to train a machine-learning (ML) model with elastic-net regularization, and 31 were used to test the model's performance. Two feature sets were explored: (1) phase difference (ɸ) between the thoracic and abdominal signals and (2) inverse cumulative percentage (ICP), which is an alternate measure of data distribution. To compute accuracy of training, the model outcomes were compared with five experts' assessments.
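
The phase-difference (ɸ) feature can be approximated from thoracic and abdominal excursion signals via the analytic signal, as in the sketch below on synthetic waveforms; the sampling rate and the simulated 45-degree lag are assumptions, and the ICP feature and pneuRIP device are not modelled.

    import numpy as np
    from scipy.signal import hilbert

    # Synthetic thoracic and abdominal excursion signals at 50 Hz; the abdomen
    # is simulated to lag the thorax by 45 degrees to mimic asynchrony
    fs, f_breath = 50, 0.4
    t = np.arange(0, 30, 1 / fs)
    thorax = np.sin(2 * np.pi * f_breath * t)
    abdomen = np.sin(2 * np.pi * f_breath * t - np.pi / 4)

    # Instantaneous phase from the analytic signal, then the mean phase difference
    phase_diff = np.angle(hilbert(thorax)) - np.angle(hilbert(abdomen))
    phase_diff = np.angle(np.exp(1j * phase_diff))    # wrap to [-pi, pi]
    print(f"mean phase difference: {np.degrees(phase_diff.mean()):.1f} degrees")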

RESULTS : Accuracies of 61.3% and 90.3% were obtained using ɸ and ICP features, respectively. The inter-rater reliability (i.r.r.) of the assessments of experts was 0.402 and 0.684 when they used ɸ and ICP to identify TAA, respectively.

CONCLUSIONS : With this pilot study, we show the efficacy of the ICP feature and ML in developing an accurate automated approach to identifying TAA that reduces time and effort for diagnosis. ICP also helped improve consensus among experts.

IMPACT : Our article presents an automated approach to identifying thoracoabdominal asynchrony using machine learning and the pneuRIP device. It also shows how a modified statistical measure of cumulative frequency can be used to visualize the progression of pulmonary functionality over time. The pulmonary testing method we developed gives patients and doctors an approach that is noninvasive and easy to administer and use for diagnosis. It can be administered remotely, and alerts can be transmitted to the physician. Further, the test can also be used to monitor and assess pulmonary function continuously for prolonged periods, if needed.

Ratnagiri Madhavi V, Ryan Lauren, Strang Abigail, Heinle Robert, Rahman Tariq, Shaffer Thomas H

2020-Jul-03

General General

Advances in the computational understanding of mental illness.

In Neuropsychopharmacology : official publication of the American College of Neuropsychopharmacology

Computational psychiatry is a rapidly growing field attempting to translate advances in computational neuroscience and machine learning into improved outcomes for patients suffering from mental illness. It encompasses both data-driven and theory-driven efforts. Here, recent advances in theory-driven work are reviewed. We argue that the brain is a computational organ. As such, an understanding of the illnesses arising from it will require a computational framework. The review divides work up into three theoretical approaches that have deep mathematical connections: dynamical systems, Bayesian inference and reinforcement learning. We discuss both general and specific challenges for the field, and suggest ways forward.

Huys Quentin J M, Browning Michael, Paulus Martin, Frank Michael J

2020-Jul-03

Surgery Surgery

Analyzing Risk Factors for Enterostomy Infection and Neuropsychology of Patients by Computer Information Data Regression under Endoscopic Image Guidance.

In Neuroscience letters

This study aimed to investigate the possible risk factors for stoma prolapsing infection and neuropsychological problems after colostomy based on the artificial intelligence DiracNet network. A total of 380 patients who underwent colostomy were selected as research subjects. The clinical data of these patients were analyzed, and postoperative follow-ups were performed. Gender, age, stoma type, stoma location, stoma size, previous medical history, and postoperative chemotherapy were recorded for each patient. The Chi-square test was used to analyze the risk factors associated with stoma prolapsing infection. Computer linear regression analysis was used to analyze the risk factors causing stoma prolapsing infection and the neuropsychological problems of patients. The artificial intelligence DiracNet network was used to extract and analyze features of patients' intestinal stoma prolapsing infection images. Results: Twenty-six patients had stoma prolapsing infection; the Chi-square test showed that age, stoma type, and stoma size were strongly correlated with stoma prolapsing infection (P < 0.05), while gender, stoma location, previous medical history, and postoperative chemotherapy hardly caused prolapsing infection (P > 0.05). The results of the computer linear regression analysis showed that age, stoma type, and stoma size were three independent risk factors that increased the rate of stoma prolapsing infection (P < 0.05). Patients with stoma prolapsing infection were prone to neuropsychological problems; the Pittsburgh Sleep Quality Index (PSQI), Hamilton Anxiety Scale (HAMA), and Hamilton Depression Scale (HAMD) scores of patients with stoma prolapsing infection differed significantly from those of the normal group (P < 0.05). In conclusion, the artificial intelligence DiracNet network could obtain a clear image of the patient's intestinal stoma prolapsing infection and clearly show the fluid leakage and ulceration of the infected part of the patient's intestinal stoma.
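
A minimal sketch of the Chi-square association test used above, with a made-up contingency table of infection status by stoma type (the counts are not from the study, and the DiracNet image analysis is not shown):

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical 2x3 table: rows = infection (yes/no), columns = three stoma types
    table = np.array([[10, 12, 4],
                      [110, 160, 84]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")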

Li Jing, Liu Xiaoyu, Chen Jun

2020-Jun-30

artificial intelligence DiracNet network, computer regression analysis, neuropsychology, stoma prolapsing infection

General General

Bioinformatics analysis of the genes involved in the extension of prostate cancer to adjacent lymph nodes by supervised and unsupervised machine learning methods: The role of SPAG1 and PLEKHF2.

In Genomics

The present study aimed to identify the genes associated with the involvement of adjacent lymph nodes in patients with prostate cancer (PCa) and to provide valuable information for the identification of potential diagnostic biomarkers and pathological genes in PCa metastasis. The most important candidate genes were identified through several machine learning approaches including K-means clustering, neural networks, Naïve Bayesian classification, and PCA with or without downsampling. In total, 21 genes associated with lymph node involvement were identified. Among them, nine genes have been identified in metastatic prostate cancer, six have been found in other metastatic cancers, and four in other local cancers. The amplification of the candidate genes was evaluated in other PCa datasets. In addition, we identified a validated set of genes involved in PCa metastasis. The amplification of the SPAG1 and PLEKHF2 genes was associated with decreased survival in patients with PCa.

Shamsara Elham, Shamsara Jamal

2020-Jun-30

Gene expression analysis, Machine learning, Metastasis, Prostate cancer

Public Health Public Health

Coronavirus disease 2019 (COVID-19): an evidence map of medical literature.

In BMC medical research methodology

BACKGROUND : Since the beginning of the COVID-19 outbreak in December 2019, a substantial body of COVID-19 medical literature has been generated. As of June 2020, gaps and longitudinal trends in the COVID-19 medical literature remain unidentified, despite potential benefits for research prioritisation and policy setting in both the COVID-19 pandemic and future large-scale public health crises.

METHODS : In this paper, we searched PubMed and Embase for medical literature on COVID-19 between 1 January and 24 March 2020. We characterised the growth of the early COVID-19 medical literature using evidence maps and bibliometric analyses to elicit cross-sectional and longitudinal trends and systematically identify gaps.

RESULTS : The early COVID-19 medical literature originated primarily from Asia and focused mainly on clinical features and diagnosis of the disease. Many areas of potential research remain underexplored, such as mental health, the use of novel technologies and artificial intelligence, pathophysiology of COVID-19 within different body systems, and indirect effects of COVID-19 on the care of non-COVID-19 patients. Few articles involved research collaboration at the international level (24.7%). The median submission-to-publication duration was 8 days (interquartile range: 4-16).

CONCLUSIONS : Although in its early phase, COVID-19 research has generated a large volume of publications. However, there are still knowledge gaps yet to be filled and areas for improvement for the global research community. Our analysis of early COVID-19 research may be valuable in informing research prioritisation and policy planning both in the current COVID-19 pandemic and similar global health crises.

Liu Nan, Chee Marcel Lucas, Niu Chenglin, Pek Pin Pin, Siddiqui Fahad Javaid, Ansah John Pastor, Matchar David Bruce, Lam Sean Shao Wei, Abdullah Hairil Rizal, Chan Angelique, Malhotra Rahul, Graves Nicholas, Koh Mariko Siyue, Yoon Sungwon, Ho Andrew Fu Wah, Ting Daniel Shu Wei, Low Jenny Guek Hong, Ong Marcus Eng Hock

2020-Jul-02

COVID-19, Coronavirus, Evidence gap map, Review, SARS-CoV-2

General General

A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images.

In European radiology ; h5-index 62.0

OBJECTIVES : To utilize a deep learning model for automatic detection of abnormalities in chest CT images from COVID-19 patients and compare its quantitative determination performance with radiological residents.

METHODS : A deep learning algorithm consisted of lesion detection, segmentation, and location was trained and validated in 14,435 participants with chest CT images and definite pathogen diagnosis. The algorithm was tested in a non-overlapping dataset of 96 confirmed COVID-19 patients in three hospitals across China during the outbreak. Quantitative detection performance of the model was compared with three radiological residents with two experienced radiologists' reading reports as reference standard by assessing the accuracy, sensitivity, specificity, and F1 score.

RESULTS : Of 96 patients, 88 had pneumonia lesions on CT images and 8 had no abnormalities on CT images. On a per-patient basis, the algorithm showed a superior sensitivity of 1.00 (95% confidence interval (CI) 0.95, 1.00) and an F1 score of 0.97 in detecting lesions from CT images of COVID-19 pneumonia patients. On a per-lung-lobe basis, the algorithm achieved a sensitivity of 0.96 (95% CI 0.94, 0.98) and a slightly inferior F1 score of 0.86. The median volume of lesions calculated by the algorithm was 40.10 cm3. An average running speed of 20.3 s ± 5.8 per case demonstrated that the algorithm was much faster than the residents in assessing CT images (all p < 0.017). The deep learning algorithm can also assist radiologists in making quicker diagnoses (all p < 0.0001) with superior diagnostic performance.

CONCLUSIONS : The algorithm showed excellent performance in detecting COVID-19 pneumonia on chest CT images compared with resident radiologists.

KEY POINTS : • The deep learning model showed higher sensitivity than radiological residents in detecting COVID-19 pneumonia on a per-lobe and per-patient basis. • The deep learning model improves diagnostic efficiency by shortening processing time. • The deep learning model can automatically calculate the volume of the lesions and of the whole lung.

Ni Qianqian, Sun Zhi Yuan, Qi Li, Chen Wen, Yang Yi, Wang Li, Zhang Xinyuan, Yang Liu, Fang Yi, Xing Zijian, Zhou Zhen, Yu Yizhou, Lu Guang Ming, Zhang Long Jiang

2020-Jul-02

COVID-19, Deep learning, Diagnosis, Multidetector computed tomography, Pneumonia

General General

Arrhythmic Gut Microbiome Signatures Predict Risk of Type 2 Diabetes.

In Cell host & microbe ; h5-index 102.0

Lifestyle, obesity, and the gut microbiome are important risk factors for metabolic disorders. We demonstrate in 1,976 subjects of a German population cohort (KORA) that specific microbiota members show 24-h oscillations in their relative abundance and identified 13 taxa with disrupted rhythmicity in type 2 diabetes (T2D). Cross-validated prediction models based on this signature similarly classified T2D. In an independent cohort (FoCus), disruption of microbial oscillation and the model for T2D classification was confirmed in 1,363 subjects. This arrhythmic risk signature was able to predict T2D in 699 KORA subjects 5 years after initial sampling, being most effective in combination with BMI. Shotgun metagenomic analysis functionally linked 26 metabolic pathways to the diurnal oscillation of gut bacteria. Thus, a cohort-specific risk pattern of arrhythmic taxa enables classification and prediction of T2D, suggesting a functional link between circadian rhythms and the microbiome in metabolic diseases.

Reitmeier Sandra, Kiessling Silke, Clavel Thomas, List Markus, Almeida Eduardo L, Ghosh Tarini S, Neuhaus Klaus, Grallert Harald, Linseisen Jakob, Skurk Thomas, Brandl Beate, Breuninger Taylor A, Troll Martina, Rathmann Wolfgang, Linkohr Birgit, Hauner Hans, Laudes Matthias, Franke Andre, Le Roy Caroline I, Bell Jordana T, Spector Tim, Baumbach Jan, O’Toole Paul W, Peters Annette, Haller Dirk

2020-Jun-29

amplicon sequencing, circadian rhythms, diurnal oscillations, human intestinal microbiota, machine learning, metagenomics, obesity, population-based cohorts, prediction, type 2 diabetes

General General

Understanding the relationship between patient language and outcomes in internet-enabled cognitive behavioural therapy: A deep learning approach to automatic coding of session transcripts.

In Psychotherapy research : journal of the Society for Psychotherapy Research

Objective: Understanding patient responses to psychotherapy is important in developing effective interventions. However, coding patient language is a resource-intensive exercise and difficult to perform at scale. Our aim was to develop a deep learning model to automatically identify patient utterances during text-based internet-enabled Cognitive Behavioural Therapy and to determine the association between utterances and clinical outcomes. Method: Using 340 manually annotated transcripts we trained a deep learning model to categorize patient utterances into one or more of five categories. The model was used to automatically code patient utterances from our entire data set of transcripts (∼34,000 patients), and logistic regression analyses used to determine the association between both reliable improvement and engagement, and patient responses. Results: Our model reached human-level agreement on three of the five patient categories. Regression analyses revealed that increased counter change-talk (movement away from change) was associated with lower odds of both reliable improvement and engagement, while increased change-talk (movement towards change or self-exploration) was associated with increased odds of improvement and engagement. Conclusions: Deep learning provides an effective means of automatically coding patient utterances at scale. This approach enables the development of a data-driven understanding of the relationship between therapist and patient during therapy.
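
A minimal sketch of the outcome-association step, regressing a binary improvement indicator on per-patient utterance-category proportions with statsmodels; the simulated data and coefficients are illustrative only, and the deep learning utterance coder is not included.

    import numpy as np
    import statsmodels.api as sm

    # Simulated per-patient proportions of change-talk and counter change-talk
    # utterances, and a binary reliable-improvement outcome
    rng = np.random.default_rng(0)
    n = 500
    change_talk = rng.beta(2, 5, n)
    counter_change = rng.beta(1.5, 6, n)
    logit = 1.5 * change_talk - 2.0 * counter_change - 0.2
    improved = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([change_talk, counter_change]))
    fit = sm.Logit(improved, X).fit(disp=0)
    print(fit.summary(xname=["const", "change_talk", "counter_change_talk"]))
    print(np.exp(fit.params))   # odds ratios per unit increase in each proportion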

Ewbank M P, Cummins R, Tablan V, Catarino A, Buchholz S, Blackwell A D

2020-Jul-03

cognitive behaviour therapy, outcome research, technology in psychotherapy research & training

General General

Machine learning methods in organ transplantation.

In Current opinion in organ transplantation ; h5-index 32.0

PURPOSE OF REVIEW : Machine learning techniques play an important role in organ transplantation. Analysing the main tasks for which they are being applied, together with the advantages and disadvantages of their use, can be of crucial interest for clinical practitioners.

RECENT FINDINGS : In the last 10 years, there has been an explosion of interest in the application of machine-learning techniques to organ transplantation. Several approaches have been proposed in the literature that aim to find universal models by considering multicenter cohorts or cohorts from different countries. Moreover, deep learning has recently been applied, demonstrating a notable ability to deal with vast amounts of information.

SUMMARY : Organ transplantation can benefit from machine learning to improve current procedures for donor-recipient matching or to improve standard scores. However, correct preprocessing is needed to provide consistent, high-quality databases for machine-learning algorithms, aiming at robust and fair approaches to support expert decision-making systems.

Guijo-Rubio David, Gutiérrez Pedro Antonio, Hervás-Martínez César

2020-Jun-30

Surgery Surgery

Machine Learning Applied to Registry Data: Development of a Patient-Specific Prediction Model for Blood Transfusion Requirements During Craniofacial Surgery Using the Pediatric Craniofacial Perioperative Registry Dataset.

In Anesthesia and analgesia

BACKGROUND : Craniosynostosis is the premature fusion of ≥1 cranial sutures and often requires surgical intervention. Surgery may involve extensive osteotomies, which can lead to substantial blood loss. Currently, there are no consensus recommendations for guiding blood conservation or transfusion in this patient population. The aim of this study is to develop a machine-learning model to predict blood product transfusion requirements for individual pediatric patients undergoing craniofacial surgery.

METHODS : Using data from 2143 patients in the Pediatric Craniofacial Surgery Perioperative Registry, we assessed 6 machine-learning classification and regression models based on random forest, adaptive boosting (AdaBoost), neural network, gradient boosting machine (GBM), support vector machine, and elastic net methods with inputs from 22 demographic and preoperative features. We developed classification models to predict an individual's overall need for transfusion and regression models to predict the number of blood product units to be ordered preoperatively. The study is reported according to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist for prediction model development.
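
A minimal scikit-learn sketch of the two prediction tasks, gradient boosting for the transfusion classification and for the units-to-order regression, on synthetic 22-feature data; none of the settings come from the registry model.

    from sklearn.datasets import make_classification, make_regression
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
    from sklearn.metrics import mean_squared_error, roc_auc_score
    from sklearn.model_selection import train_test_split

    # Classification: does this patient need any transfusion? (synthetic 22-feature data)
    Xc, yc = make_classification(n_samples=2000, n_features=22, random_state=0)
    Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(Xc_tr, yc_tr)
    print("AUROC:", round(roc_auc_score(yc_te, clf.predict_proba(Xc_te)[:, 1]), 2))

    # Regression: how many units of blood product to order preoperatively?
    Xr, yr = make_regression(n_samples=2000, n_features=22, noise=5.0, random_state=0)
    Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
    reg = GradientBoostingRegressor(random_state=0).fit(Xr_tr, yr_tr)
    print("RMSE:", round(mean_squared_error(yr_te, reg.predict(Xr_te)) ** 0.5, 2))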

RESULTS : The GBM performed best in both domains, with an area under receiver operating characteristic curve of 0.87 ± 0.03 (95% confidence interval) and F-score of 0.91 ± 0.04 for classification, and a mean squared error of 1.15 ± 0.12, R-squared (R2) of 0.73 ± 0.02, and root mean squared error of 1.05 ± 0.06 for regression. GBM feature ranking determined that the following variables held the most information for prediction: platelet count, weight, preoperative hematocrit, surgical volume per institution, age, and preoperative hemoglobin. We then produced a calculator to show the number of units of blood that should be ordered preoperatively for an individual patient.

CONCLUSIONS : Anesthesiologists and surgeons can use this continually evolving predictive model to improve clinical care of patients presenting for craniosynostosis surgery.

Jalali Ali, Lonsdale Hannah, Zamora Lillian V, Ahumada Luis, Nguyen Anh Thy H, Rehman Mohamed, Fackler James, Stricker Paul A, Fernandez Allison M

2020-Jun-30

General General

Machine learning for the prediction of antimicrobial stewardship intervention in hospitalized patients receiving broad-spectrum agents.

In Infection control and hospital epidemiology ; h5-index 48.0

OBJECTIVE : A significant proportion of inpatient antimicrobial prescriptions are inappropriate. Post-prescription review with feedback has been shown to be an effective means of reducing inappropriate antimicrobial use. However, implementation is resource intensive. Our aim was to evaluate the performance of traditional statistical models and machine-learning models designed to predict which patients receiving broad-spectrum antibiotics require a stewardship intervention.

METHODS : We performed a single-center retrospective cohort study of inpatients who received an antimicrobial tracked by the antimicrobial stewardship program. Data were extracted from the electronic medical record and were used to develop logistic regression and boosted-tree models to predict whether antibiotic therapy required stewardship intervention on any given day as compared to the criterion standard of note left by the antimicrobial stewardship team in the patient's chart. We measured the performance of these models using area under the receiver operating characteristic curves (AUROC), and we evaluated it using a hold-out validation cohort.

RESULTS : Both the logistic regression and boosted-tree models demonstrated fair discriminatory power with AUROCs of 0.73 (95% confidence interval [CI], 0.69-0.77) and 0.75 (95% CI, 0.72-0.79), respectively (P = .07). Both models demonstrated good calibration. The number of patients that would need to be reviewed to identify 1 patient who required stewardship intervention was high for both models (41.7-45.5 for models tuned to a sensitivity of 85%).
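
A small sketch of how the review burden reported above can be computed: pick the score threshold that reaches a target sensitivity (85% here) and take the reciprocal of the positive predictive value as the number of patients reviewed per intervention found. The scores and prevalence below are simulated, not the study's data.

    import numpy as np
    from sklearn.metrics import roc_curve

    def review_burden_at_sensitivity(y_true, y_score, target_sens=0.85):
        """Threshold the score to reach the target sensitivity and return the
        number of flagged patients reviewed per true intervention found (1/PPV)."""
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        thr = thresholds[np.argmax(tpr >= target_sens)]   # first threshold meeting target
        flagged = y_score >= thr
        ppv = y_true[flagged].mean()
        return thr, 1.0 / ppv

    # Simulated scores where ~2% of patient-days truly need a stewardship intervention
    rng = np.random.default_rng(0)
    y = rng.binomial(1, 0.02, 20_000)
    scores = np.clip(0.4 * y + rng.normal(0.1, 0.15, y.size), 0, 1)
    thr, nnr = review_burden_at_sensitivity(y, scores)
    print(f"threshold {thr:.2f}: review ~{nnr:.1f} patients per intervention found")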

CONCLUSIONS : Complex models can be developed to predict which patients require a stewardship intervention. However, further work is required to develop models with adequate discriminatory power to be applicable to real-world antimicrobial stewardship practice.

Bystritsky Rachel J, Beltran Alex, Young Albert T, Wong Andrew, Hu Xiao, Doernberg Sarah B

2020-Jun-18

Radiology Radiology

Performance of deep learning object detection technology in the detection and diagnosis of maxillary sinus lesions on panoramic radiographs.

In Dento maxillo facial radiology

OBJECTIVE : The first aim of this study was to determine the performance of a deep learning object detection technique in the detection of maxillary sinuses on panoramic radiographs. The second aim was to clarify the performance in the classification of maxillary sinus lesions compared with healthy maxillary sinuses.

METHODS : The imaging data for healthy maxillary sinuses (587 sinuses, Class 0), inflamed maxillary sinuses (416 sinuses, Class 1), and cysts of maxillary sinus regions (171 sinuses, Class 2) were assigned to training, testing 1, and testing 2 datasets. A learning process of 1000 epochs with the training images and labels was performed using DetectNet, and a learning model was created. The testing 1 and testing 2 images were applied to the model, and the detection sensitivities and false-positive rates per image were calculated. The accuracies, sensitivities, and specificities were determined for distinguishing the inflammation group (Class 1) and cyst group (Class 2) with respect to the healthy group (Class 0).

RESULTS : Detection sensitivities of healthy (Class 0) and inflamed (Class 1) maxillary sinuses were 100% for both the testing 1 and testing 2 datasets, whereas they were 98% and 89% for cysts of the maxillary sinus regions (Class 2). False-positive rates per image were nearly 0.00. Accuracies, sensitivities, and specificities for diagnosing maxillary sinusitis were 90-91%, 88-85%, and 91-96%, respectively; for cysts of the maxillary sinus regions, these values were 97-100%, 80-100%, and 100-100%, respectively.

CONCLUSION : Deep learning could reliably detect the maxillary sinuses and identify maxillary sinusitis and cysts of the maxillary sinus regions.

ADVANCES IN KNOWLEDGE : This study using a deep learning object detection technique indicated that the detection sensitivities of maxillary sinuses were high and the performance of maxillary sinus lesion identification was ≧80%. In particular, the performance of sinusitis identification was ≧90%.

Kuwana Ryosuke, Ariji Yoshiko, Fukuda Motoki, Kise Yoshitaka, Nozawa Michihito, Kuwada Chiaki, Muramatsu Chisako, Katsumata Akitoshi, Fujita Hiroshi, Ariji Eiichiro

2020-Jul-01

artificial intelligence, deep learning, maxillary sinus, object detection, panoramic radiography

Cardiology Cardiology

Advances in accelerometry for cardiovascular patients: a systematic review with practical recommendations.

In ESC heart failure

AIMS : Accelerometers are becoming increasingly commonplace for assessing physical activity; however, their use in patients with cardiovascular diseases is relatively substandard. We aimed to systematically review the methods used for collecting and processing accelerometer data in cardiology, using the example of heart failure, and to provide practical recommendations on how to improve objective physical activity assessment in patients with cardiovascular diseases by using accelerometers.

METHODS AND RESULTS : Four electronic databases were searched up to September 2019 for observational, interventional, and validation studies using accelerometers to assess physical activity in patients with heart failure. Study and population characteristics, details of accelerometry data collection and processing, and description of physical activity metrics were extracted from the eligible studies and synthesized. To assess the quality and completeness of accelerometer reporting, the studies were scored using 12 items on data collection and processing, such as the placement of accelerometer, days of data collected, and criteria for non-wear of the accelerometer. In 60 eligible studies with 3500 patients (of those, 536 were heart failure with preserved ejection fraction patients), a wide variety of accelerometer brands (n = 27) and models (n = 46) were used, with Actigraph being the most frequent (n = 12), followed by Fitbit (n = 5). The accelerometer was usually worn on the hip (n = 32), and the most prevalent wear period was 7 days (n = 22). The median wear time required for a valid day was 600 min, and between two and five valid days was required for a patient to be included in the analysis. The most common measures of physical activity were steps (n = 20), activity counts (n = 15), and time spent in moderate-to-vigorous physical activity (n = 14). Only three studies validated accelerometers in a heart failure population, showing that their accuracy deteriorates at slower speeds. Studies failed to report between one and six (median 4) of the 12 scored items, with non-wear time criteria and valid day definition being the most underreported items.

CONCLUSIONS : The use of accelerometers in cardiology lacks consistency and reporting on data collection, and processing methods need to be improved. Furthermore, calculating metrics based on raw acceleration and machine learning techniques is lacking, opening the opportunity for future exploration. Therefore, we encourage researchers and clinicians to improve the quality and transparency of data collection and processing by following our proposed practical recommendations for using accelerometers in patients with cardiovascular diseases, which are outlined in the article.

Vetrovsky Tomas, Clark Cain C T, Bisi Maria Cristina, Siranec Michal, Linhart Ales, Tufano James J, Duncan Michael J, Belohlavek Jan

2020-Jul-03

Counts, Cut points, Heart failure, Physical activity, Raw acceleration, Steps

Pathology Pathology

Visual histological assessment of morphological features reflects the underlying molecular profile in invasive breast cancer: a morpho-molecular study.

In Histopathology ; h5-index 43.0

BACKGROUND : Tumour genotype and phenotype are related and can predict outcome. In this study, we hypothesised that the visual assessment of breast cancer (BC) morphological features can provide valuable insight into underlying molecular profiles.

METHODS : The Cancer Genome Atlas (TCGA) BC cohort was used (n=743) and morphological features including Nottingham grade and its components and nucleolar prominence were assessed utilising whole slide images (WSIs). Two independent scores were assigned, and discordant cases were utilised to represent cases with intermediate morphological features. Differentially expressed genes (DEGs) were identified for each feature, compared among concordant/discordant cases and tested for specific pathways.

RESULTS : Concordant grading was observed in 467/743 (63%) of cases. Among concordant case groups, 8 common DEGs (UGT8, DDC, RGR, RLBP1, SPRR1B, CXorf49B, PSAPL1, and SPRR2G) were associated with overall tumour grade and its components. These genes are related mainly to cellular proliferation, differentiation and metabolism. The number of DEGs in cases with discordant grading was larger than those identified in concordant cases. The largest number of DEGs was observed in discordant grade 1:3 cases (n=1185). DEGs were identified for each discordant component. Some DEGs were uniquely associated with well-defined specific morphological features, whereas expression/co-expression of other genes was identified across multiple features and underlined intermediate morphological features.

CONCLUSION : Morphological features are likely related to distinct underlying molecular profiles that drive both morphology and behaviour. This study provides further evidence to support the use of image-based analysis of WSIs, including artificial intelligence algorithms, to predict tumour molecular profiles and outcome.

Rakha Emad A, Alsaleem Mansour, ElSharawy Khloud A, Toss Michael S, Raafat Sara, Mihai Raluca, Minhas Fayyaz A, Green Andrew R, Rajpoot Nasir, Dalton Les W, Mongan Nigel P

2020-Jul-02

Breast, digital pathology, grade, molecular profiles, morphology

Public Health Public Health

Directed acyclic graphs and causal thinking in clinical risk prediction modeling.

In BMC medical research methodology

BACKGROUND : In epidemiology, causal inference and prediction modeling methodologies have been historically distinct. Directed Acyclic Graphs (DAGs) are used to model a priori causal assumptions and inform variable selection strategies for causal questions. Although tools originally designed for prediction are finding applications in causal inference, the reverse direction has remained largely unexplored. The aim of this theoretical and simulation-based study is to assess the potential benefit of using DAGs in clinical risk prediction modeling.

METHODS : We explore how incorporating knowledge about the underlying causal structure can provide insights about the transportability of diagnostic clinical risk prediction models to different settings. We further probe whether causal knowledge can be used to improve predictor selection in clinical risk prediction models.

RESULTS : A single-predictor model in the causal direction is likely to have better transportability than one in the anticausal direction in some scenarios. We empirically show that the Markov Blanket, the set of variables including the parents, children, and parents of the children of the outcome node in a DAG, is the optimal set of predictors for that outcome.
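
As an illustration of the Markov blanket definition given above (parents, children, and parents of the children of the outcome node), the following sketch extracts that variable set from a DAG using networkx; the graph and variable names are invented for illustration and are not from the study.

```python
# Minimal sketch of the Markov blanket: parents, children, and parents of the
# children ("spouses") of the outcome node in a DAG. The example graph is hypothetical.
import networkx as nx

def markov_blanket(dag: nx.DiGraph, outcome: str) -> set:
    parents = set(dag.predecessors(outcome))
    children = set(dag.successors(outcome))
    spouses = {p for child in children for p in dag.predecessors(child)} - {outcome}
    return parents | children | spouses

if __name__ == "__main__":
    g = nx.DiGraph([("smoking", "disease"), ("age", "disease"),
                    ("disease", "biomarker"), ("assay_batch", "biomarker"),
                    ("disease", "symptom")])
    # Optimal predictor set for "disease" under the Markov blanket argument:
    print(markov_blanket(g, "disease"))
    # -> {'smoking', 'age', 'biomarker', 'symptom', 'assay_batch'}
```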

CONCLUSIONS : Our findings provide a theoretical basis for the intuition that a diagnostic clinical risk prediction model including causes as predictors is likely to be more transportable. Furthermore, using DAGs to identify Markov Blanket variables may be a useful, efficient strategy to select predictors in clinical risk prediction models if strong knowledge of the underlying causal structure exists or can be learned.

Piccininni Marco, Konigorski Stefan, Rohmann Jessica L, Kurth Tobias

2020-Jul-02

Causality, Clinical risk prediction, Directed acyclic graph, Markov blanket, Prediction models, Predictor selection, Transportability

Public Health Public Health

COVID-19: A master stroke of Nature.

In AIMS public health

This article presents the status of countries affected by COVID-19 (as of mid-May 2020) and their preparedness to combat the after-effects of the pandemic. The report also provides an analysis of how human behavior may have triggered such a global pandemic and why humans need to consider living sustainably to make our future world livable for all. COVID-19 originated in the city of Wuhan, China in December 2019. As of mid-May, it has spread to 213 countries and territories worldwide. The World Health Organization has declared COVID-19 a global pandemic, with a death toll of over 300,000 to date. The U.S. is currently the most impacted country. Collaborative efforts of scientists and politicians across the world will be needed to better plan and utilize global health resources to combat this global pandemic. Machine learning-based prediction models could also help by identifying potential COVID-19-prone areas and individuals. The cause of the emergence of COVID-19 is still a matter of research; however, one consistent theme is humanity's unsustainable behavior. By sustainably interacting with nature, humans may have avoided this pandemic. If unsustainable human practices are not controlled through education, awareness, behavioral change, as well as sustainable policy creation and enforcement, there could be several such pandemics in our future.

Singh Sushant K

2020

COVID-19, Nature, coronavirus, pandemic, public health, sustainability

General General

Machine learning in prediction of genetic risk of nonsyndromic oral clefts in the Brazilian population.

In Clinical oral investigations ; h5-index 46.0

OBJECTIVES : Genetic variants in multiple genes and loci have been associated with the risk of nonsyndromic cleft lip with or without cleft palate (NSCL ± P). However, risk estimation remains challenging because most of these variants are population specific, rendering identification of the underlying genetic risk difficult. Herein we examined the use of machine learning networks on previously reported single nucleotide polymorphisms (SNPs) to predict the risk of NSCL ± P in the Brazilian population.

MATERIALS AND METHODS : Random forest and neural network methods were applied to 72 SNPs in a case-control sample composed of 722 NSCL ± P patients and 866 controls to discriminate NSCL ± P risk. SNP-SNP interactions and functional annotation of the biological processes associated with the identified NSCL ± P risk genes were verified.
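
As a rough, hedged sketch of the supervised random-forest step described above (not the authors' code), the snippet below fits a forest on 0/1/2-coded genotype dosages and ranks SNPs by impurity-based importance; the genotype matrix is simulated and the selected column indices are placeholders.

```python
# Illustrative sketch: a random forest over 0/1/2-coded SNP genotypes with
# impurity-based feature importances, mirroring the supervised random-forest step
# described above. Data here are simulated placeholders, not the study cohort.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_snps = 1588, 72            # 722 cases + 866 controls, 72 SNPs
X = rng.integers(0, 3, size=(n_samples, n_snps))   # genotype dosages 0/1/2
y = np.concatenate([np.ones(722, dtype=int), np.zeros(866, dtype=int)])

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X, y)

# Rank SNP columns by importance; in the study, 13 SNPs dominated this ranking.
ranking = np.argsort(forest.feature_importances_)[::-1]
print("top SNP columns:", ranking[:13])
```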

RESULTS : Supervised random forest decision trees revealed high scores of importance for the SNPs rs11717284 and rs1875735 in FGF12, rs41268753 in GRHL3, rs2236225 in MTHFD1, rs2274976 in MTHFR, rs2235371 and rs642961 in IRF6, rs17085106 in RHPN2, rs28372960 in TCOF1, rs7078160 in VAX1, rs10762573 and rs2131960 in VCL, and rs227731 in 17q22, with an accuracy of 99% and an error rate of approximately 3% to predict the risk of NSCL ± P. Those same 13 SNPs were considered the most important for the neural network to effectively predict NSCL ± P risk, with an overall accuracy of 94%. The multivariate regression model revealed significant interactions among all SNPs, with the exception of those in FGF12 and MTHFD1. The most significant biological processes for the selected genes were those involved in tissue and epithelium development; neural tube closure; and the metabolism of methionine, folate, and homocysteine.

CONCLUSIONS : Our results provide novel clues for genetic mechanism studies of NSCL ± P and point to a machine learning model composed of 13 SNPs that is capable of predicting NSCL ± P risk.

CLINICAL RELEVANCE : Although validation is necessary, this genetic panel can be useful in the near future to assist in NSCL ± P genetic counseling.

Machado Renato Assis, de Oliveira Silva Carolina, Martelli-Junior Hercílio, das Neves Lucimara Teixeira, Coletta Ricardo D

2020-Jul-02

Brazilian population, Genetic counseling, Machine learning, Nonsyndromic oral cleft, Single nucleotide polymorphism

Cardiology Cardiology

From CT to artificial intelligence for complex assessment of plaque-associated risk.

In The international journal of cardiovascular imaging

The recent technological developments in the field of cardiac imaging have established coronary computed tomography angiography (CCTA) as a first-line diagnostic tool in patients with suspected coronary artery disease (CAD). CCTA offers robust information on the overall coronary circulation and luminal stenosis, also providing the ability to assess the composition, morphology, and vulnerability of atherosclerotic plaques. In addition, the perivascular adipose tissue (PVAT) has recently emerged as a marker of increased cardiovascular risk. The addition of PVAT quantification to standard CCTA imaging may provide the ability to extract information on local inflammation, for an individualized approach in coronary risk stratification. The development of image post-processing tools over the past several years allowed CCTA to provide a significant amount of data that can be incorporated into machine learning (ML) applications. ML algorithms that use radiomic features extracted from CCTA are still at an early stage. However, the recent development of artificial intelligence will probably bring major changes in the way we integrate clinical, biological, and imaging information, for a complex risk stratification and individualized therapeutic decision making in patients with CAD. This review aims to present the current evidence on the complex role of CCTA in the detection and quantification of vulnerable plaques and the associated coronary inflammation, also describing the most recent developments in the radiomics-based machine learning approach for complex assessment of plaque-associated risk.

Opincariu Diana, Benedek Theodora, Chițu Monica, Raț Nora, Benedek Imre

2020-Jul-02

CCTA, Machine learning, Radiomics, Risk stratification, Vulnerable plaques

General General

A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images.

In European radiology ; h5-index 62.0

OBJECTIVES : To utilize a deep learning model for automatic detection of abnormalities in chest CT images from COVID-19 patients and compare its quantitative determination performance with radiological residents.

METHODS : A deep learning algorithm consisting of lesion detection, segmentation, and localization was trained and validated on 14,435 participants with chest CT images and a definite pathogen diagnosis. The algorithm was tested on a non-overlapping dataset of 96 confirmed COVID-19 patients in three hospitals across China during the outbreak. The quantitative detection performance of the model was compared with that of three radiological residents, using two experienced radiologists' reading reports as the reference standard, by assessing the accuracy, sensitivity, specificity, and F1 score.

RESULTS : Of the 96 patients, 88 had pneumonia lesions on CT images and 8 had no abnormalities. On a per-patient basis, the algorithm showed a superior sensitivity of 1.00 (95% confidence interval (CI) 0.95, 1.00) and an F1 score of 0.97 in detecting lesions from CT images of COVID-19 pneumonia patients. On a per-lung-lobe basis, the algorithm achieved a sensitivity of 0.96 (95% CI 0.94, 0.98) and a slightly lower F1 score of 0.86. The median volume of lesions calculated by the algorithm was 40.10 cm3. An average running time of 20.3 ± 5.8 s per case showed that the algorithm was much faster than the residents in assessing CT images (all p < 0.017). The deep learning algorithm can also help radiologists make a quicker diagnosis (all p < 0.0001) with superior diagnostic performance.
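
For context, the per-patient metrics quoted above (sensitivity and F1 score) can be computed from binary patient-level labels as in the minimal sketch below; the prediction vector is an invented placeholder, not the study data.

```python
# Hedged sketch of per-patient sensitivity and F1 computed from binary labels.
# The label and prediction vectors below are invented placeholders.
from sklearn.metrics import f1_score, recall_score

y_true = [1] * 88 + [0] * 8            # 88 patients with lesions, 8 without
y_pred = [1] * 88 + [0] * 5 + [1] * 3  # hypothetical predictions

sensitivity = recall_score(y_true, y_pred)   # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)
print(f"sensitivity={sensitivity:.2f}, F1={f1:.2f}")
```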

CONCLUSIONS : The algorithm showed excellent performance in detecting COVID-19 pneumonia on chest CT images compared with resident radiologists.

KEY POINTS : • The deep learning model showed higher sensitivity than radiological residents in detecting COVID-19 pneumonia on both a per-lobe and a per-patient basis. • The deep learning model improves diagnostic efficiency by shortening processing time. • The deep learning model can automatically calculate the volume of the lesions and of the whole lung.

Ni Qianqian, Sun Zhi Yuan, Qi Li, Chen Wen, Yang Yi, Wang Li, Zhang Xinyuan, Yang Liu, Fang Yi, Xing Zijian, Zhou Zhen, Yu Yizhou, Lu Guang Ming, Zhang Long Jiang

2020-Jul-02

COVID-19, Deep learning, Diagnosis, Multidetector computed tomography, Pneumonia

Oncology Oncology

Prediction of Nephrotoxicity Associated With Cisplatin-Based Chemotherapy in Testicular Cancer Patients.

In JNCI cancer spectrum

Background : Cisplatin-based chemotherapy may induce nephrotoxicity. This study presents a random forest predictive model that identifies testicular cancer patients at risk of nephrotoxicity before treatment.

Methods : Clinical data and DNA from saliva samples were collected for 433 patients. These were genotyped on the Illumina HumanOmniExpressExome-8 v1.2 (964,193 markers). Clinical and genomics-based random forest models generated a risk score for each individual to develop nephrotoxicity, defined as a 20% drop in isotopic glomerular filtration rate during chemotherapy. The area under the receiver operating characteristic curve was the primary measure used to evaluate the models. Sensitivity, specificity, and positive and negative predictive values were used to discuss the clinical utility of the models.
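
A hedged sketch of the evaluation workflow described above, comparing a clinical-only model with a clinical-plus-genomic model by area under the ROC curve on a held-out split; the feature matrices and labels are simulated placeholders, so only the workflow, not the numbers, mirrors the study.

```python
# Illustrative workflow: compare clinical-only vs clinical+genomic random forests
# by ROC AUC on a held-out set. All data below are simulated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 433
clinical = rng.normal(size=(n, 5))                 # placeholder clinical features
genomic = rng.integers(0, 3, size=(n, 20))         # placeholder genotype dosages
y = rng.integers(0, 2, size=n)                     # placeholder nephrotoxicity labels

for name, X in [("clinical", clinical),
                ("clinical+genomic", np.hstack([clinical, genomic]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.3f}")
```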

Results : Of 433 patients assessed in this study, 26.8% developed nephrotoxicity after bleomycin-etoposide-cisplatin treatment. Genomic markers found to be associated with nephrotoxicity were located at NAT1, NAT2, and the intergenic region of CNTN6 and CNTN4. These, in addition to previously associated markers located at ERCC1, ERCC2, and SLC22A2, were found to improve predictions in a clinical feature-trained random forest model. Using only clinical data for training the model, an area under the receiver operating characteristic curve of 0.635 (95% confidence interval [CI] = 0.629 to 0.640) was obtained. Retraining the classifier by adding genomics markers increased performance to 0.731 (95% CI = 0.726 to 0.736) and 0.692 (95% CI = 0.688 to 0.696) on the holdout set.

Conclusions : A clinical and genomics-based machine learning algorithm improved the ability to identify patients at risk of nephrotoxicity compared with using clinical variables alone. Novel genetics associations with cisplatin-induced nephrotoxicity were found for NAT1, NAT2, CNTN6, and CNTN4 that require replication in larger studies before application to clinical practice.

Garcia Sara L, Lauritsen Jakob, Zhang Zeyu, Bandak Mikkel, Dalgaard Marlene D, Nielsen Rikke L, Daugaard Gedske, Gupta Ramneek

2020-Jun

Ophthalmology Ophthalmology

Application of artificial intelligence in anterior segment ophthalmic diseases: diversity and standardization.

In Annals of translational medicine

Artificial intelligence (AI) based on machine learning (ML) and deep learning (DL) techniques has gained tremendous global interest in this era. Recent studies have demonstrated the potential of AI systems to provide improved capability in various tasks, especially in the field of image recognition. As an image-centric subspecialty, ophthalmology has become one of the frontiers of AI research. Trained on optical coherence tomography, slit-lamp images and even ordinary eye images, AI can achieve robust performance in the detection of glaucoma, corneal arcus and cataracts. Moreover, AI models based on other forms of data also performed satisfactorily. Nevertheless, several challenges with AI application in ophthalmology have also arisen, including standardization of data sets, validation and applicability of AI models, and ethical issues. In this review, we provide a summary of state-of-the-art AI applications in anterior segment ophthalmic diseases, the potential challenges in clinical implementation, and our prospects.

Wu Xiaohang, Liu Lixue, Zhao Lanqin, Guo Chong, Li Ruiyang, Wang Ting, Yang Xiaonan, Xie Peichen, Liu Yizhi, Lin Haotian

2020-Jun

Artificial intelligence (AI), anterior eye segment, computer-assisted diagnosis, machine learning (ML)

General General

A review of the application of deep learning in medical image classification and segmentation.

In Annals of translational medicine

Big medical data mainly include electronic health record data, medical image data, gene information data, etc. Among them, medical image data account for the vast majority of medical data at this stage. How to apply big medical data to clinical practice is an issue of great concern to medical and computer researchers, and intelligent imaging and deep learning provide a good answer. This review introduces the application of intelligent imaging and deep learning to big data analysis and the early diagnosis of diseases, combining the latest research progress in big data analysis of medical images with our team's work in this field, especially the classification and segmentation of medical images.

Cai Lei, Gao Jingyang, Zhao Di

2020-Jun

Big medical data, classification, deep learning, object detection, segmentation

Ophthalmology Ophthalmology

The combination of brain-computer interfaces and artificial intelligence: applications and challenges.

In Annals of translational medicine

Brain-computer interfaces (BCIs) have shown great prospects as real-time bidirectional links between living brains and actuators. Artificial intelligence (AI), which can advance the analysis and decoding of neural activity, has turbocharged the field of BCIs. Over the past decade, a wide range of BCI applications with AI assistance have emerged. These "smart" BCIs including motor and sensory BCIs have shown notable clinical success, improved the quality of paralyzed patients' lives, expanded the athletic ability of common people and accelerated the evolution of robots and neurophysiological discoveries. However, despite technological improvements, challenges remain with regard to the long training periods, real-time feedback, and monitoring of BCIs. In this article, the authors review the current state of AI as applied to BCIs and describe advances in BCI applications, their challenges and where they could be headed in the future.

Zhang Xiayin, Ma Ziyue, Zheng Huaijin, Li Tongkeng, Chen Kexin, Wang Xun, Liu Chenting, Xu Linxi, Wu Xiaohang, Lin Duoru, Lin Haotian

2020-Jun

Brain-computer interface (BCI), artificial intelligence (AI), computational neuroscience, encoding and decoding, machine learning, prosthesis

Ophthalmology Ophthalmology

Using artificial intelligence to improve medical services in China.

In Annals of translational medicine

Artificial intelligence (AI) is a hotspot of research in the field of modern medical technology. Medical AI has been applied to various areas and has two main branches: virtual and physical. Recently, the Chinese State Council issued a guideline on developing AI and indicated that the widespread application of AI will improve the level of precision in medical services and help achieve intelligent medical care. Medical resources, especially high-quality resources, are deficient across the entire health service system in China. AI technologies, such as virtual AI and telemedicine, are expected to overcome the current limitations of the distribution of medical resources and relieve the pressure associated with obtaining high-quality health care.

Li Ruiyang, Yang Yahan, Wu Shaolong, Huang Kai, Chen Weirong, Liu Yizhi, Lin Haotian

2020-Jun

Artificial intelligence (AI), China, health care, medical resource

Ophthalmology Ophthalmology

Differentiate cavernous hemangioma from schwannoma with artificial intelligence (AI).

In Annals of translational medicine

Background : Cavernous hemangioma and schwannoma are tumors that both occur in the orbit. Because the treatment strategies of these two tumors are different, it is necessary to distinguish them at treatment initiation. Magnetic resonance imaging (MRI) is typically used to differentiate these two tumor types; however, they present similar features in MRI images which increases the difficulty of differential diagnosis. This study aims to devise and develop an artificial intelligence framework to improve the accuracy of clinicians' diagnoses and enable more effective treatment decisions by automatically distinguishing cavernous hemangioma from schwannoma.

Methods : Material: We chose MRI images representing patients from diverse areas in China who had been referred to our center from more than 45 different hospitals. All images were initially acquired on film, which we scanned into digital versions and recut. Finally, 11,489 images of cavernous hemangioma (from 33 different hospitals) and 3,478 images of schwannoma (from 16 different hospitals) were collected. Labeling: All images were labeled using standard anatomical knowledge and pathological diagnosis. Training: Three types of models were trained in sequence (a total of 96 models), with each model including a specific improvement. The first two model groups were eye- and tumor-positioning models designed to reduce the identification scope, while the third model group consisted of classification models trained to make the final diagnosis.

Results : First, internal four-fold cross-validation processes were conducted for all the models. During the validation of the first group, the 32 eye-positioning models were able to localize the position of the eyes with an average precision of 100%. In the second group, the 28 tumor-positioning models were able to reach an average precision above 90%. Subsequently, using the third group, the accuracy of all 32 tumor classification models reached nearly 90%. Next, external validation processes of 32 tumor classification models were conducted. The results showed that the accuracy of the transverse T1-weighted contrast-enhanced sequence reached 91.13%; the accuracy of the remaining models was significantly lower compared with the ground truth.

Conclusions : The findings of this retrospective study show that an artificial intelligence framework can achieve high accuracy, sensitivity, and specificity in automated differential diagnosis between cavernous hemangioma and schwannoma in a real-world setting, which can help doctors determine appropriate treatments.

Bi Shaowei, Chen Rongxin, Zhang Kai, Xiang Yifan, Wang Ruixin, Lin Haotian, Yang Huasheng

2020-Jun

Artificial intelligence (AI), differential diagnosis, multicenter

General General

A cell-level quality control workflow for high-throughput image analysis.

In BMC bioinformatics

BACKGROUND : Image-based high throughput (HT) screening provides a rich source of information on dynamic cellular response to external perturbations. The large quantity of data generated necessitates computer-aided quality control (QC) methodologies to flag imaging and staining artifacts. Existing image- or patch-level QC methods require separate thresholds to be simultaneously tuned for each image quality metric used, and also struggle to distinguish between artifacts and valid cellular phenotypes. As a result, extensive time and effort must be spent on per-assay QC feature thresholding, and valid images and phenotypes may be discarded while image- and cell-level artifacts go undetected.

RESULTS : We present a novel cell-level QC workflow built on machine learning approaches for classifying artifacts from HT image data. First, a phenotype sampler based on unlabeled clustering collects a comprehensive subset of cellular phenotypes, requiring only the inspection of a handful of images per phenotype for validity. A set of one-class support vector machines is then trained, one on each biologically valid image phenotype, and used to classify individual objects in each image as valid cells or artifacts. We apply this workflow to two real-world large-scale HT image datasets and observe that the ratio of artifact to total object area (ARcell) provides a single robust assessment of image quality regardless of the underlying causes of quality issues. Gating on this single intuitive metric, partially contaminated images can be salvaged and highly contaminated images can be excluded before image-level phenotype summary, enabling a more reliable characterization of cellular response dynamics.
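
A minimal, hedged sketch of this idea: a one-class SVM trained on objects from a valid phenotype flags outlier objects as artifacts, and ARcell is computed as the artifact-to-total object area ratio for an image. The features, areas, and hyperparameters below are invented placeholders rather than the paper's settings.

```python
# Hedged sketch of cell-level QC: one-class SVM outlier detection per phenotype,
# then ARcell = artifact area / total object area per image. Data are simulated.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
valid_features = rng.normal(0, 1, size=(500, 10))        # objects from a valid phenotype
test_features = np.vstack([rng.normal(0, 1, size=(80, 10)),
                           rng.normal(6, 1, size=(20, 10))])  # last 20 mimic artifacts
test_areas = rng.uniform(50, 200, size=100)               # per-object areas in pixels

clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(valid_features)
is_artifact = clf.predict(test_features) == -1             # -1 = outlier

ar_cell = test_areas[is_artifact].sum() / test_areas.sum()
print(f"ARcell = {ar_cell:.2f}")   # gate images on this single readout
```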

CONCLUSIONS : Our cell-level QC workflow enables identification of artificial cells created not only by staining or imaging artifacts but also by the limitations of image segmentation algorithms. The single readout ARcell, which summarizes the ratio of artifacts contained in each image, can be used to reliably rank images by quality and more accurately determine QC cutoff thresholds. Machine learning-based cellular phenotype clustering and sampling reduces the amount of manual work required for training example collection. Our QC workflow automatically handles assay-specific phenotypic variations and generalizes to different HT image assays.

Qiu Minhua, Zhou Bin, Lo Frederick, Cook Steven, Chyba Jason, Quackenbush Doug, Matzen Jason, Li Zhizhong, Mak Puiying Annie, Chen Kaisheng, Zhou Yingyao

2020-Jul-02

Cell-level quality control, CellProfiler, High throughput image analysis, Image quality measurement, Machine learning

Public Health Public Health

Coronavirus disease 2019 (COVID-19): an evidence map of medical literature.

In BMC medical research methodology

BACKGROUND : Since the beginning of the COVID-19 outbreak in December 2019, a substantial body of COVID-19 medical literature has been generated. As of June 2020, gaps and longitudinal trends in the COVID-19 medical literature remain unidentified, despite potential benefits for research prioritisation and policy setting in both the COVID-19 pandemic and future large-scale public health crises.

METHODS : In this paper, we searched PubMed and Embase for medical literature on COVID-19 between 1 January and 24 March 2020. We characterised the growth of the early COVID-19 medical literature using evidence maps and bibliometric analyses to elicit cross-sectional and longitudinal trends and systematically identify gaps.

RESULTS : The early COVID-19 medical literature originated primarily from Asia and focused mainly on clinical features and diagnosis of the disease. Many areas of potential research remain underexplored, such as mental health, the use of novel technologies and artificial intelligence, pathophysiology of COVID-19 within different body systems, and indirect effects of COVID-19 on the care of non-COVID-19 patients. Few articles involved research collaboration at the international level (24.7%). The median submission-to-publication duration was 8 days (interquartile range: 4-16).

CONCLUSIONS : Although still in its early phase, COVID-19 research has generated a large volume of publications; however, there remain knowledge gaps to be filled and areas for improvement for the global research community. Our analysis of early COVID-19 research may be valuable in informing research prioritisation and policy planning both in the current COVID-19 pandemic and in similar global health crises.

Liu Nan, Chee Marcel Lucas, Niu Chenglin, Pek Pin Pin, Siddiqui Fahad Javaid, Ansah John Pastor, Matchar David Bruce, Lam Sean Shao Wei, Abdullah Hairil Rizal, Chan Angelique, Malhotra Rahul, Graves Nicholas, Koh Mariko Siyue, Yoon Sungwon, Ho Andrew Fu Wah, Ting Daniel Shu Wei, Low Jenny Guek Hong, Ong Marcus Eng Hock

2020-Jul-02

COVID-19, Coronavirus, Evidence gap map, Review, SARS-CoV-2

Ophthalmology Ophthalmology

Quantitative analysis of functional filtering bleb size using Mask R-CNN.

In Annals of translational medicine

Background : Deep learning has had a large effect on medical fields, including ophthalmology. The goal of this study was to quantitatively analyze the functional filtering bleb size with Mask R-CNN.

Methods : This observational study employed 83 images of post-trabeculectomy functional filtering blebs. The images were divided into training and test groups and scored according to the Indiana Bleb Appearance Grading Scale (IBAGS) system. Then, 70 images from the training group were used to train an automatic detection system based on Mask R-CNN and perform a quantitative analysis of the functional filtering bleb size. Thirteen images from the test group were used to evaluate the model. During the training process, left and right image-flipping algorithms were used for data augmentation. Finally, the correlation between the functional filtering bleb area and the intraocular pressure (IOP) was analyzed.

Results : The 83 functional filtering blebs had similar morphological features. According to IBAGS, the functional filtering blebs had a high incidence of E1/E2, H1/H2, and V0/V1. Our Mask R-CNN-based model using the selected parameters achieved good results on the training group after a 200-epoch training process. All Intersection over Union (IoU) scores exceeded 93% on the test group. The Spearman correlation coefficient between the area of functional filtering blebs and the IOP value was -0.757 (P<0.05).
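
The final association reported above is a plain Spearman rank correlation between segmented bleb area and IOP; a minimal sketch with invented example values is shown below.

```python
# Minimal sketch: Spearman's rank correlation between filtering-bleb area
# (e.g., from the Mask R-CNN segmentation) and IOP. Values are invented examples.
from scipy.stats import spearmanr

bleb_area_mm2 = [12.4, 18.9, 9.7, 22.3, 15.1, 7.8]
iop_mmhg = [16.0, 11.5, 19.0, 10.0, 13.5, 21.0]

rho, p_value = spearmanr(bleb_area_mm2, iop_mmhg)
print(f"Spearman rho={rho:.3f}, p={p_value:.3f}")  # negative rho: larger bleb, lower IOP
```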

Conclusions : Deep learning is a powerful tool for quantitatively analyzing the functional filtering bleb size. This technique is suitable for use in monitoring post-trabeculectomy filtering blebs in the future.

Wang Tao, Zhong Lei, Yuan Jing, Wang Ting, Yin Shiyi, Sun Yi, Liu Xing, Liu Xun, Ling Shiqi

2020-Jun

Glaucoma, Mask R-CNN, deep learning, filtering bleb, trabeculectomy

Ophthalmology Ophthalmology

Attitudes towards medical artificial intelligence talent cultivation: an online survey study.

In Annals of translational medicine

Background : To investigate the attitude and formal suggestions on talent cultivation in the field of medical artificial intelligence (AI).

Methods : An electronic questionnaire was sent to both medical-related and non-medical field populations using the WenJuanXing web application via social media. The questionnaire was designed to collect: (I) demographic information; (II) perception of medical AI; (III) willingness to participate in medical AI-related teaching activities; (IV) teaching content of medical AI; (V) the role of medical AI teaching; (VI) future career planning. Respondents' anonymity was ensured.

Results : A total of 710 respondents provided valid answers to the questionnaire (57.75% from medical-related fields, 42.25% from non-medical fields). About 73.8% of respondents acquired related information from the Internet and social platforms. More than half of the respondents had a basic perception of medical AI application scenarios and specialties and were willing to participate in related general science activities (conferences and lectures). Respondents from medical or healthcare-related fields, those with higher academic qualifications, and male respondents demonstrated significantly better understanding and stronger willingness (P<0.05). The majority agreed that medical AI courses should be set as a major elective (42.82%) during the undergraduate stage (89.58%), involving medical and computer science content. An overwhelming majority of respondents (>80%) acknowledged the potential roles of medical AI teaching. Surgeon, ophthalmologist, physician, and researcher were the top-tier considerations for an ideal career regardless of AI influence. Radiology and clinical laboratory subjects were more preferred considering the development of medical AI (P>0.05).

Conclusions : The potential role of medical AI talent cultivation is widely acknowledged by the public. Medical-related professionals demonstrated a higher level of perception and a stronger willingness to attend medical AI educational events. Merging subjects such as radiology and clinical laboratory medicine are preferred, with broad talent demands and bright prospects.

Yun Dongyuan, Xiang Yifan, Liu Zhenzhen, Lin Duoru, Zhao Lanqin, Guo Chong, Xie Peichen, Lin Haotian, Liu Yizhi, Zou Yuxian, Wu Xiaohang

2020-Jun

Medical artificial intelligence (Medical AI), survey study, talent cultivation

General General

Fast screening for children's developmental language disorders via comprehensive speech ability evaluation-using a novel deep learning framework.

In Annals of translational medicine

Background : Developmental language disorders (DLDs) are the most common developmental disorders in children. For screening DLDs, speech ability (SA) is one of the most important indicators.

Methods : In this paper, we propose a solution for the fast screening of children's DLDs based on a comprehensive SA evaluation and a deep framework of machine learning. Fast screening is crucial for promoting the prevalence and practicality of DLD screening which in turn is important for the treatment of DLDs and related social and behavioral abnormalities (e.g., dyslexia and autism). Our solution is focused on addressing the drawbacks existing in the previous DLD screening methods which include test failure due to text-based inducing material design and illiteracy of most young children, incomplete language evaluation indicators, and professional-reliant evaluation procedures. First, to avoid test failure, a novel comprehensive inducing procedure (CIP) with non-text (i.e., audio-visual) stimulus materials was designed that could cover a large range of modalities to adequately explore the comprehensive SA of the subjects. Second, to address incomplete language evaluation, a set of comprehensive evaluation indicators with full consideration of the characteristics of the children's language acquisition is proposed; furthermore, to break the professional-reliant limitation, we specifically designed a deep framework for fast and accurate screening.

Results : Experimental results showed that the proposed deep framework is effective and professional with a 92.6% accuracy on DLD screening. Additionally, to provide a benchmark for the novel problem, we provide a CIP dataset with about 2,200 responses from over 200 children, which may also be useful for further DLD studies and insightful for the fast screening design of other behavioral abnormalities.

Conclusions : Fast screening of children's DLDs can be achieved with an accuracy of up to 92.6% by our proposed deep learning framework. For successful fast screening, an elaborate CIP with corresponding comprehensive evaluation indicators needs to be designed for children suspected of having DLDs.

Zhang Xing, Qin Feng, Chen Zelin, Gao Leyan, Qiu Guoxin, Lu Shuo

2020-Jun

Developmental language disorders (DLDs), developmental language disorder indicators (DLD indicators), fast screening

Ophthalmology Ophthalmology

Deep learning-based automated diagnosis of fungal keratitis with in vivo confocal microscopy images.

In Annals of translational medicine

Background : The aim of this study was to develop an intelligent system based on a deep learning algorithm for automatically diagnosing fungal keratitis (FK) in in vivo confocal microscopy (IVCM) images.

Methods : A total of 2,088 IVCM images were included in the training dataset. The positive group consisted of 688 images with fungal hyphae, and the negative group included 1,400 images without fungal hyphae. A total of 535 images in the testing dataset were not included in the training dataset. Deep Residual Learning for Image Recognition (ResNet) was used to build the intelligent system for diagnosing FK automatically. The system was verified by external validation in the testing dataset using the area under the receiver operating characteristic curve (AUC), accuracy, specificity and sensitivity.

Results : In the testing dataset, 515 images were diagnosed correctly and 20 were misdiagnosed (including 6 with fungal hyphae and 14 without). The system achieved an AUC of 0.9875 with an accuracy of 0.9626 in detecting fungal hyphae. The sensitivity of the system was 0.9186, with a specificity of 0.9834. When 349 diabetic patients were included in the training dataset, 501 images were diagnosed correctly and 34 were misdiagnosed (including 4 with fungal hyphae and 30 without). The AUC of the system was 0.9769. The accuracy, specificity and sensitivity were 0.9364, 0.9889 and 0.8256, respectively.

Conclusions : The intelligent system based on a deep learning algorithm exhibited satisfactory diagnostic performance and effectively classified FK in various IVCM images. The context of this deep learning automated diagnostic system can be extended to other types of keratitis.

Lv Jian, Zhang Kai, Chen Qing, Chen Qi, Huang Wei, Cui Ling, Li Min, Li Jianyin, Chen Lifei, Shen Chaolan, Yang Zhao, Bei Yixuan, Li Lanjian, Wu Xiaohang, Zeng Siming, Xu Fan, Lin Haotian

2020-Jun

In vivo confocal microscopy (IVCM), convolutional neural network, deep learning algorithm, fungal keratitis (FK)

Ophthalmology Ophthalmology

Automatic identification of myopia based on ocular appearance images using deep learning.

In Annals of translational medicine

Background : Myopia is the leading cause of visual impairment and affects millions of children worldwide. Timely and annual manual optometric screenings of the entire at-risk population improve outcomes, but screening is challenging due to the lack of availability and training of assessors and the economic burden imposed by the screenings. Recently, deep learning and computer vision have shown powerful potential for disease screening. However, these techniques have not been applied to large-scale myopia screening using ocular appearance images.

Methods : We trained a deep learning system (DLS) for myopia detection using 2,350 ocular appearance images (processed from 7,050 pictures) from children aged 6 to 18. Myopia was defined as a spherical equivalent refraction (SER) [the algebraic sum in diopters (D), sphere + 1/2 cylinder] of ≤ -0.5 diopters. Saliency maps and gradient class activation maps (grad-CAM) were used to highlight the regions recognized by VGG-Face. In a prospective clinical trial, 100 ocular appearance images were used to assess the performance of the DLS.
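
The myopia criterion above is straightforward to compute; the worked example below simply encodes SER = sphere + cylinder / 2 and the ≤ -0.5 D threshold.

```python
# Worked example of the myopia definition used above:
# SER = sphere + cylinder / 2, with myopia defined as SER <= -0.5 D.
def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
    return sphere_d + cylinder_d / 2.0

def is_myopic(sphere_d: float, cylinder_d: float, threshold_d: float = -0.5) -> bool:
    return spherical_equivalent(sphere_d, cylinder_d) <= threshold_d

# e.g., sphere -0.25 D, cylinder -0.75 D -> SER = -0.625 D -> myopic
print(spherical_equivalent(-0.25, -0.75), is_myopic(-0.25, -0.75))
```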

Results : The area under the curve (AUC), sensitivity, and specificity of the DLS were 0.9270 (95% CI, 0.8580-0.9610), 81.13% (95% CI, 76.86-85.39%), and 86.42% (95% CI, 82.30-90.54%), respectively. Based on the saliency maps and grad-CAMs, the DLS mainly focused on eyes, especially the temporal sclera, rather than the background or other parts of the face. In the prospective clinical trial, the DLS achieved better diagnostic performance than the ophthalmologists in terms of sensitivity [DLS: 84.00% (95% CI, 73.50-94.50%) versus ophthalmologists: 64.00% (95% CI, 48.00-72.00%)] and specificity [DLS: 74.00% (95% CI, 61.40-86.60%) versus ophthalmologists: 53.33% (95% CI, 30.00-66.00%)]. We also computed AUC subgroups stratified by sex and age. The DLS achieved comparable AUCs for children of different sexes and ages.

Conclusions : This study for the first time applied deep learning to myopia screening using ocular images and achieved high screening accuracy, enabling the remote monitoring of the refractive status in children with myopia. The application of our DLS will directly benefit public health and relieve the substantial burden imposed by myopia-associated visual impairment or blindness.

Yang Yahan, Li Ruiyang, Lin Duoru, Zhang Xiayin, Li Wangting, Wang Jinghui, Guo Chong, Li Jianyin, Chen Chuan, Zhu Yi, Zhao Lanqin, Lin Haotian

2020-Jun

Deep learning, myopia

General General

An artificial intelligence model for the simulation of visual effects in patients with visual field defects.

In Annals of translational medicine

Background : This study aimed to simulate the visual field (VF) effects of patients with VF defects using deep learning and computer vision technology.

Methods : We collected 3,660 Humphrey visual fields (HVFs) as data samples, including 3,263 reliable 24-2 HVFs. The convolutional neural network (CNN) analyzed and converted the grayscale map of reliable samples into structured data. The artificial intelligence (AI) simulations were developed using computer vision technology. In the statistical analyses, the pilot study determined 687 reliable samples for the clinical trials, and two independent-sample t-tests were used to calculate the difference in the cumulative gray values. Three volunteers evaluated the degree to which the shape and position of the grayscale map and the AI simulation matched, graded from 0 to 100. Based on the average ranking, the proportion of good and excellent grades was determined, and thus the reliability of the AI simulations was assessed.

Results : The reliable samples in the experimental data consisted of 1,334 normal samples and 1,929 abnormal samples. Based on an existing mature CNN model, a fully connected layer was integrated to analyze the VF damage parameters of the input images, and the prediction accuracy for the damage type of the VF defects reached 89%. By mapping the area and damage information from the VF damage parameter quintuple dataset into a real scene image and adjusting the darkening effect according to the damage parameters, the visual effects experienced by patients were simulated in the real scene image. In the clinical validation, there was no statistically significant difference in the cumulative gray value (P>0.05). The proportion of good and excellent average scores reached 96.0%, thus confirming the accuracy of the AI model.

Conclusions : An AI model with high accuracy was established to simulate the visual effects in patients with VF defects.

Zhou Zhan, Li Bingbing, Su Jinyu, Fan Xianming, Chen Liang, Tang Song, Zheng Jianqing, Zhang Tong, Meng Zhiyong, Chen Zhimeng, Deng Hongwei, Hu Jianmin, Zhao Jun

2020-Jun

Computer vision technology, artificial intelligence (AI), visual field defects, visual simulation

Ophthalmology Ophthalmology

Application of neural network model in assisting device fitting for low vision patients.

In Annals of translational medicine

Background : To explore the application of neural network models in artificial intelligence (AI)-aided device fitting for low vision patients.

Methods : The data of 836 visually impaired people were collected in southwestern Fujian from May 2014 to May 2017. After a full eye examination, 629 low vision patients were selected from this group. Based on the visual functions, rehabilitation needs, and living quality scores of the selected patients, the professionals chose assistive devices that were the best fit for the patients. The data for these three factors were then subjected to quantitative analysis, and the results were digitized and labeled. The final datasets were used to train a fully connected deep neural network to obtain an AI-aided model for assistive device fitting.

Results : In this study, the main causes of low vision in southwestern Fujian were congenital diseases, among which congenital cataract was the most common. During AI-aided device fitting for low vision, we found that the intermediate distance magnifier was suitable for the largest number of patients. Through quantitative analysis of the results, it was found that AI-aided device fitting was closely related to visual function, rehabilitation needs, and quality of life. If this complex relationship can be mapped into a neural network model, AI-aided device fitting can be realized. We built a fully connected neural network model for AI-aided device fitting. The input of the model was the characteristic data of low vision patients, and the output was the predicted suitable device. When the threshold of the model was 0.4, the accuracy was about 80% and the F1 value was about 0.31. This threshold can be used as the classification decision threshold of the model.
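
The 0.4 decision threshold mentioned above is simply applied to the network's output probability; the small sketch below shows thresholding followed by accuracy and F1 scoring, with invented probabilities and labels as placeholders.

```python
# Small sketch: threshold model output probabilities at 0.4 to decide whether a
# device is recommended, then score accuracy and F1. Data are invented placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

probabilities = np.array([0.12, 0.45, 0.38, 0.71, 0.05, 0.55, 0.42, 0.30])
true_labels = np.array([0, 1, 0, 1, 0, 0, 1, 0])

predicted = (probabilities >= 0.4).astype(int)
print("accuracy:", accuracy_score(true_labels, predicted))
print("F1:", f1_score(true_labels, predicted))
```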

Conclusions : Low vision AI-aided device fitting is closely related to visual function, rehabilitation needs, and quality of life scores. The fully connected neural network model can achieve high accuracy in AI-aided device fitting and has strong potential for clinical application.

Dai Bingfa, Yu Yang, Huang Lijuan, Meng Zhiyong, Chen Liang, Luo Hongxia, Chen Ting, Chen Xuelan, Ye Wenwen, Yan Yuyuan, Cai Chi, Zheng Jianqing, Zhao Jun, Dong Liquan, Hu Jianmin

2020-Jun

Low vision, artificial intelligence-aided assistive device fitting (AI-aided assistive device fitting), neural network model

Radiology Radiology

Deep learning LI-RADS grading system based on contrast enhanced multiphase MRI for differentiation between LR-3 and LR-4/LR-5 liver tumors.

In Annals of translational medicine

Background : To develop a deep learning (DL) method based on multiphase, contrast-enhanced (CE) magnetic resonance imaging (MRI) to distinguish Liver Imaging Reporting and Data System (LI-RADS) grade 3 (LR-3) liver tumors from combined higher-grades 4 and 5 (LR-4/LR-5) tumors for hepatocellular carcinoma (HCC) diagnosis.

Methods : A total of 89 untreated LI-RADS-graded liver tumors (35 LR-3, 14 LR-4, and 40 LR-5) were identified based on the radiology MRI interpretation reports. Multiphase 3D T1-weighted gradient echo imaging was acquired at six time points: pre-contrast, four phases immediately post-contrast, and one hepatobiliary phase after intravenous injection of gadoxetate disodium. Image co-registration was performed across all phases on the center tumor slice to correct for motion. A rectangular tumor box centered on the tumor area was drawn to extract subset tumor images for each imaging phase, which were used as the inputs to a convolutional neural network (CNN). The pre-trained AlexNet CNN model underwent transfer learning using liver MRI data for LI-RADS tumor grade classification. An output probability closer to 1 or 0 indicated a higher likelihood of a combined LR-4/LR-5 tumor or an LR-3 tumor, respectively. Five-fold cross-validation was used for the training (60% of the dataset), validation (20%), and testing (20%) processes.
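
A hedged sketch of the transfer-learning step described above: a pretrained AlexNet with its final fully connected layer replaced by a single-output head whose sigmoid probability is read as LR-4/LR-5 (near 1) versus LR-3 (near 0). The data loading, co-registration, and exact multiphase channel handling used in the study are not reproduced; the tensors below are random placeholders.

```python
# Hedged sketch (not the authors' code) of AlexNet transfer learning for a binary
# LI-RADS head: probability near 1 -> LR-4/LR-5, near 0 -> LR-3.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 1)      # replace 1000-class head with 1 output

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random batch of 3-channel 224x224 tumor boxes
# (e.g., pre-contrast / arterial / washout phases stacked as channels).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

probability = torch.sigmoid(model(images))    # read as confidence in LR-4/LR-5 vs LR-3
```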

Results : The DL CNN model for LI-RADS grading using inputs of multiphase liver MRI data acquired at three time points (pre-contrast, arterial, and washout phase) achieved a high accuracy of 0.90, sensitivity of 1.0, precision of 0.835, and AUC of 0.95 with reference to the expert human radiologist report. The CNN output probability provided radiologists with a confidence level for the model's grading of each liver lesion.

Conclusions : An AlexNet CNN model for LI-RADS grading of liver lesions provided diagnostic performance comparable to radiologists and offered valuable clinical guidance for differentiating intermediate LR-3 liver lesions from more-likely malignant LR-4/LR-5 lesions in HCC diagnosis.

Wu Yunan, White Gregory M, Cornelius Tyler, Gowdar Indraneel, Ansari Mohammad H, Supanich Mark P, Deng Jie

2020-Jun

Deep learning (DL), LI-RADS, MRI, convolutional neural network (CNN), hepatocellular carcinoma (HCC)

Ophthalmology Ophthalmology

Artificial intelligence-tutoring problem-based learning in ophthalmology clerkship.

In Annals of translational medicine

Background : Artificial intelligence (AI) is an increasingly popular tool in medical investigations. However, AI's potential for aiding medical teaching has not been explored. This study aimed to evaluate the effectiveness of AI-tutoring problem-based learning (PBL) in an ophthalmology clerkship and to assess the student evaluations of this module.

Methods : Thirty-eight grade-two students in the ophthalmology clerkship at Sun Yat-Sen University were randomly assigned to two groups. In Group A, students learned congenital cataracts through an AI-tutoring PBL module by exploring and operating an AI diagnosis platform. In Group B, students learned congenital cataracts through a traditional lecture given by the same faculty. The improvement in student performance was evaluated by comparing the pre- and post-lecture scores of a specifically designed test using paired t-tests. Student evaluations of AI-tutoring PBL were measured with a 17-item questionnaire.

Results : The post-lecture scores were significantly higher than the pre-lecture scores in both groups (Group A: P<0.0001, Group B: P<0.0001). The improvement of Group A on the sign and diagnosis part of the test (Part I) was more significant than that of Group B (P=0.016). However, there was no difference between the two groups in the improvement on the treatment plan part (Part II) (P=0.556). Overall, all respondents were satisfied and agreed that AI-tutoring PBL was helpful, effective, motivating, and beneficial for developing critical and creative thinking.

Conclusions : The application of AI-tutoring PBL in the ophthalmology clerkship improved students' performance and satisfaction. AI-tutoring PBL teaching showed an advantage in promoting students' understanding of disease signs. Instructors play an indispensable role in the AI-tutoring PBL curriculum.

Wu Dongxuan, Xiang Yifan, Wu Xiaohang, Yu Tongyong, Huang Xiucheng, Zou Yuxian, Liu Zhenzhen, Lin Haotian

2020-Jun

Artificial-intelligence, ophthalmology clerkship, problem-based learning

Dermatology Dermatology

Web-based study on Chinese dermatologists' attitudes towards artificial intelligence.

In Annals of translational medicine

Background : Artificial intelligence (AI) has become a powerful tool and is attracting more attention in the field of medicine. There are a number of AI studies focusing on skin diseases, and there are many AI products that have been applied in dermatology. However, the attitudes of dermatologists, specifically those from China, towards AI are not clear, as few, if any, studies have focused on this issue.

Methods : A web-based questionnaire was designed by experts from the Chinese Skin Image Database (CSID) and published on the UMER Doctor platform (an online learning platform for dermatologists developed by the Shanghai Wheat Color Intelligent Technology Company, China). A total of 1,228 Chinese dermatologists were recruited and provided answers to the questionnaire online. The differences in dermatologists' attitudes towards AI among the different groups (stratified by age, gender, hospital level, education degree, professional title, and hospital ownership) were compared using the Mann-Whitney U test and the Kruskal-Wallis H test. The correlations between the stratification factors and dermatologists' attitudes towards AI were calculated using Spearman's rank correlation test. SPSS (version 22.0) was utilized for all analyses. A two-sided P value <0.05 was considered statistically significant in all analyses.
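
The statistical comparisons named above (Mann-Whitney U for two groups, Kruskal-Wallis H for more than two, Spearman's rank correlation) are standard nonparametric tests; a minimal sketch with invented ordinal attitude scores is shown below for orientation only.

```python
# Minimal sketch of the three nonparametric tests named in the methods.
# The attitude scores below are invented ordinal placeholders, not survey data.
from scipy.stats import mannwhitneyu, kruskal, spearmanr

attention_male = [3, 4, 2, 4, 3, 4]
attention_female = [2, 3, 3, 2, 4, 3]
print(mannwhitneyu(attention_male, attention_female, alternative="two-sided"))

attention_by_hospital_level = ([2, 3, 3], [3, 4, 3, 4], [4, 4, 3, 4, 4])
print(kruskal(*attention_by_hospital_level))

education_rank = [1, 2, 2, 3, 3, 4]
attention_rank = [2, 2, 3, 3, 4, 4]
print(spearmanr(education_rank, attention_rank))
```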

Results : A total of 1,228 Chinese dermatologists from 30 provinces, autonomous regions, municipalities, and other regions (including Hong Kong, Macau, and Taiwan) participated in this survey. The participating dermatologists acquired AI-related information mainly through the Internet, meetings, or forums, and 70.51% of them acquired AI-related information through two or more approaches. In total, 99.51% of participating dermatologists pay attention (general, passive-active, or active attention) to information pertaining to AI. Stratified analyses revealed statistically significant differences in attention levels (unconcerned, general, passive-active, and active attention) to AI-related information by gender, hospital level, education degree, and professional title (P values ≤1.79E-02). In total, 95.36% of participating dermatologists thought the role of AI was "assisting the daily diagnosis and treatment activities of dermatologists". Stratified analyses of opinions on the role of AI (unconcerned, useless, assist, and replace) showed no statistically significant differences except by hospital level (P value =4.09E-03). The correlations between the stratification factors and both attention levels and opinions on AI roles were extremely weak. Furthermore, 64.17% of participating dermatologists thought secondary hospitals in China are most in need of AI applications, and 91.78% thought the priority implementation of AI should be in skin tumors.

Conclusions : The majority of Chinese dermatologists are interested in AI and acquire information about it through a variety of approaches. Nearly all dermatologists are attentive to information on AI and think the role of AI is in "assisting the daily diagnosis and treatment activities for dermatologists". Future AI implementation should be primarily focused on skin tumors and utilized in secondary hospitals.

Shen Changbing, Li Chengxu, Xu Feng, Wang Ziyi, Shen Xue, Gao Jing, Ko Randy, Jing Yan, Tang Xiaofeng, Yu Ruixing, Guo Junhu, Xu Feng, Meng Rusong, Cui Yong

2020-Jun

Chinese dermatologists, artificial intelligence (AI), attitudes

Surgery Surgery

Public Perceptions of Artificial Intelligence and Robotics in Medicine.

In Journal of endourology

OBJECTIVE : To better understand the public perception and comprehension of medical technologies such as artificial intelligence and robotic surgery, and to identify sensitivities to their use in order to ensure acceptability and quality of counseling.

SUBJECTS AND METHODS : A survey was conducted on a convenience sample of visitors to the Minnesota State Fair (n=264). Participants were randomized to receive one of two similar surveys. In the first, a diagnosis was made by a physician and, in the second, by an AI application, in order to compare confidence in human and computer-based diagnosis.

RESULTS : The median age of participants was 45 (IQR 28-59); 58% were female (n=154) vs. 42% male (n=110); 69% had completed at least a bachelor's degree; 88% were Caucasian (n=233) vs. 12% ethnic minorities (n=31); and participants came from 12 states, most from the Upper Midwest. Participants had nearly equal trust in AI vs. physician diagnoses; however, they were significantly more likely to trust an AI diagnosis of cancer over a doctor's diagnosis when responding to the version of the survey that suggested an AI could make medical diagnoses (p = 9.32e-06). Though 55% of respondents (n=145) reported they were uncomfortable with automated robotic surgery, the majority of the individuals surveyed (88%) mistakenly believed that partially autonomous surgery was already happening. Almost all (94%, n=249) stated they would be willing to pay for a review of medical imaging by an AI if available.

CONCLUSION : Most participants expressed confidence in AI providing medical diagnoses, sometimes even over human physicians. Participants generally expressed concern about surgical AI but mistakenly believed it is already being performed. As AI applications increase in medical practice, health care providers should be cognizant of the potential amount of misinformation and the sensitivity that patients have to how such technology is represented.

Stai Bethany, Heller Nick, McSweeney Sean, Rickman Jack, Blake Paul, Vasdev Ranveer, Edgerton Zach, Tejpaul Resha, Peterson Matt, Rosenberg Joel, Kalapara Arveen, Regmi Subodh, Papanikolopoulos Nikolaos, Weight Christopher J

2020-Jul-01

General General

A talker-independent deep learning algorithm to increase intelligibility for hearing-impaired listeners in reverberant competing talker conditions.

In The Journal of the Acoustical Society of America

Deep learning based speech separation or noise reduction needs to generalize to voices not encountered during training and to operate under multiple corruptions. The current study provides such a demonstration for hearing-impaired (HI) listeners. Sentence intelligibility was assessed under conditions of a single interfering talker and substantial amounts of room reverberation. A talker-independent deep computational auditory scene analysis (CASA) algorithm was employed, in which talkers were separated and dereverberated in each time frame (simultaneous grouping stage), then the separated frames were organized to form two streams (sequential grouping stage). The deep neural networks consisted of specialized convolutional neural networks, one based on U-Net and the other a temporal convolutional network. It was found that every HI (and normal-hearing, NH) listener received algorithm benefit in every condition. Benefit averaged across all conditions ranged from 52 to 76 percentage points for individual HI listeners and averaged 65 points. Further, processed HI intelligibility significantly exceeded unprocessed NH intelligibility. Although the current utterance-based model was not implemented as a real-time system, a perspective on this important issue is provided. It is concluded that deep CASA represents a powerful framework capable of producing large increases in HI intelligibility for potentially any two voices.

Healy Eric W, Johnson Eric M, Delfarah Masood, Wang DeLiang

2020-Jun

General General

A wide dataset of ear shapes and pinna-related transfer functions generated by random ear drawings.

In The Journal of the Acoustical Society of America

Head-related transfer function individualization is a key matter in binaural synthesis. However, currently available databases are limited in size compared to the high dimensionality of the data. In this paper, the process of generating a synthetic dataset of 1000 ear shapes and matching sets of pinna-related transfer functions (PRTFs), named WiDESPREaD (wide dataset of ear shapes and pinna-related transfer functions obtained by random ear drawings), is presented and made freely available to other researchers. Contributions in this article are threefold. First, from a proprietary dataset of 119 three-dimensional left-ear scans, a matching dataset of PRTFs was built by performing fast-multipole boundary element method (FM-BEM) calculations. Second, the underlying geometry of each type of high-dimensional data was investigated using principal component analysis. It was found that this linear machine-learning technique performs better at modeling and reducing data dimensionality on ear shapes than on matching PRTF sets. Third, based on these findings, a method was devised to generate an arbitrarily large synthetic database of PRTF sets that relies on the random drawing of ear shapes and subsequent FM-BEM computations.

Guezenoc Corentin, Séguier Renaud

2020-Jun

General General

Predicting ultrasound tongue image from lip images using sequence to sequence learning.

In The Journal of the Acoustical Society of America

Understanding the dynamic system that produces speech is essential to advancing speech science, and several simultaneous sensory streams can be leveraged to describe the process. As the functional deformation of the tongue correlates with the speaker's lip shapes, this paper aims to explore the association between them. The problem is formulated as a sequence-to-sequence learning task, and a deep neural network is trained using unlabeled lip videos to predict an upcoming ultrasound tongue image sequence. Experimental results show that the machine learning model can predict the tongue's motion with satisfactory performance, which demonstrates that the learned neural network can build the association between the two imaging modalities.

Xu Kele, Zhao Jianqiao, Zhu Boqing, Zhao Chaojie

2020-Jun

Radiology Radiology

Differentiation Between Anteroposterior and Posteroanterior Chest X-Ray View Position With Convolutional Neural Networks.

In RoFo : Fortschritte auf dem Gebiete der Rontgenstrahlen und der Nuklearmedizin

PURPOSE :  Detection and validation of the chest X-ray view position with use of convolutional neural networks to improve meta-information for data cleaning within a hospital data infrastructure.

MATERIAL AND METHODS :  Within this paper we developed a convolutional neural network which automatically detects the anteroposterior and posteroanterior view position of a chest radiograph. We trained two different network architectures (VGG variant and ResNet-34) with data published by the RSNA (26 684 radiographs, class distribution 46 % AP, 54 % PA) and validated these on a self-compiled dataset with data from the University Hospital Essen (4507 radiographs, class distribution 55 % PA, 45 % AP) labeled by a human reader. For visualization and better understanding of the network predictions, a Grad-CAM was generated for each network decision. The network results were evaluated based on the accuracy, the area under the curve (AUC), and the F1-score against the human reader labels. Finally, a performance comparison between model predictions and DICOM labels was performed.

RESULTS :  The ensemble models reached accuracy and F1-scores greater than 95 %. The AUC reaches more than 0.99 for the ensemble models. The Grad-CAMs provide insight as to which anatomical structures contributed to a decision by the networks which are comparable with the ones a radiologist would use. Furthermore, the trained models were able to generalize over mislabeled examples, which was found by comparing the human reader labels to the predicted labels as well as the DICOM labels.

CONCLUSION :  The results show that certain incorrectly entered meta-information of radiological images can be effectively corrected by deep learning in order to increase data quality in clinical application as well as in research.

KEY POINTS :   · The predictions for both view positions are accurate with respect to external validation data. · The networks based their decisions on anatomical structures and key points that were in line with prior knowledge and human understanding. · Final models were able to detect labeling errors within the test dataset.

CITATION FORMAT : · Hosch R, Kroll L, Nensa F et al. Differentiation Between Anteroposterior and Posteroanterior Chest X-Ray View Position With Convolutional Neural Networks. Fortschr Röntgenstr 2020; DOI: 10.1055/a-1183-5227.

Hosch René, Kroll Lennard, Nensa Felix, Koitka Sven

2020-Jul-02

Public Health Public Health

Artificial Intelligence and Hypertension: Recent Advances and Future Outlook.

In American journal of hypertension ; h5-index 46.0

Prevention and treatment of hypertension (HTN) is a challenging public health problem. Recent evidence suggests that artificial intelligence (AI) has potential to be a promising tool for reducing the global burden of HTN, and furthering precision medicine related to cardiovascular (CV) diseases including HTN. Since AI can simulate human thought processes and learning with complex algorithms and advanced computational power, AI can be applied to multimodal and big data, including genetics, epigenetics, proteomics, metabolomics, CV imaging, socioeconomic, behavioral and environmental factors. AI demonstrates the ability to identify risk factors and phenotypes of HTN, predict the risk of incident HTN, diagnose HTN, estimate blood pressure (BP), develop novel cuffless methods for BP measurement, and comprehensively identify factors associated with treatment adherence and success. Moreover, AI has also been used to analyze data from major randomized controlled trials exploring different BP targets to uncover previously undescribed factors associated with cardiovascular outcomes. Therefore, AI-integrated HTN care has the potential to transform clinical practice by incorporating personalized prevention and treatment approaches, such as determining optimal and patient-specific BP goals, identifying the most effective antihypertensive medication regimen for an individual, and developing interventions targeting modifiable risk factors. Although the role of AI in HTN has been increasingly recognized over the past decade, it remains in its infancy, and future studies with big data analysis and N-of-1 study design are needed to further demonstrate the applicability of AI in HTN prevention and treatment.

Chaikijurajai Thanat, Laffin Luke J, Tang W H Wilson

2020-Jul-02

Artificial Intelligence, Blood Pressure Measurement, Deep Learning, Hypertension, Machine Learning

Radiology Radiology

Deep learning-based pulmonary nodule detection: Effect of slab thickness in maximum intensity projections at the nodule candidate detection stage.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE : To investigate the effect of the slab thickness in maximum intensity projections (MIPs) on the candidate detection performance of a deep learning-based computer-aided detection (DL-CAD) system for pulmonary nodule detection in CT scans.

METHODS : The public LUNA16 dataset includes 888 CT scans with 1186 nodules annotated by four radiologists. From those scans, MIP images were reconstructed with slab thicknesses of 5 to 50 mm (at 5 mm intervals) and 3 to 13 mm (at 2 mm intervals). The architecture in the nodule candidate detection part of the DL-CAD system was trained separately using MIP images with various slab thicknesses. Based on ten-fold cross-validation, the sensitivity and the F2 score were determined to evaluate the performance of using each slab thickness at the nodule candidate detection stage. The free-response receiver operating characteristic (FROC) curve was used to assess the performance of the whole DL-CAD system, which combined the results from 16 MIP slab thickness settings.
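To illustrate the MIP reconstruction step described above, the following sketch shows how axial maximum intensity projections at a chosen slab thickness could be generated from a CT volume with NumPy. This is a minimal illustration, not the authors' implementation; the function name, the dummy volume, and the spacing values are hypothetical.

    import numpy as np

    def axial_mips(volume, spacing_mm, slab_mm, step_mm=1.0):
        """Generate axial maximum intensity projections (MIPs) from a CT volume.

        volume     : 3-D array indexed as (z, y, x)
        spacing_mm : slice spacing along z in millimetres
        slab_mm    : MIP slab thickness in millimetres
        step_mm    : distance between consecutive slab starts
        """
        n_slices = max(1, int(round(slab_mm / spacing_mm)))
        step = max(1, int(round(step_mm / spacing_mm)))
        mips = []
        for start in range(0, volume.shape[0] - n_slices + 1, step):
            slab = volume[start:start + n_slices]
            mips.append(slab.max(axis=0))  # maximum intensity along z
        return np.stack(mips)

    # Example: 10 mm slabs from a dummy scan with 1 mm slice spacing.
    ct = np.random.randint(-1000, 400, size=(120, 64, 64)).astype(np.int16)
    mip_10mm = axial_mips(ct, spacing_mm=1.0, slab_mm=10.0)
    print(mip_10mm.shape)  # (111, 64, 64)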

RESULTS : At the nodule candidate detection stage, the combination of results from 16 MIP slab thickness settings showed a high sensitivity of 98.0% with 46 false positives (FPs) per scan. For a single MIP slab thickness of 10 mm, the highest sensitivity of 90.0% with 8 FPs/scan was reached before false positive reduction. The sensitivity increased (82.8% to 90.0%) for slab thicknesses of 1 to 10 mm and decreased (88.7% to 76.6%) for slab thicknesses of 15-50 mm. The number of FPs decreased with increasing slab thickness and stabilized at 5 FPs/scan for slab thicknesses of 30 mm or more. After false positive reduction, the DL-CAD system, utilizing 16 MIP slab thickness settings, achieved a sensitivity of 94.4% with 1 FP/scan.

CONCLUSIONS : The utilization of multi-MIP images could improve the performance at the nodule candidate detection stage and, in turn, of the whole DL-CAD system. For a single slab thickness of 10 mm, the highest sensitivity for pulmonary nodule detection was reached at the nodule candidate detection stage, similar to the slab thickness usually applied by radiologists.

Zheng Sunyi, Cui Xiaonan, Vonder Marleen, Veldhuis Raymond N J, Ye Zhaoxiang, Vliegenthart Rozemarijn, Oudkerk Matthijs, van Ooijen Peter M A

2020-Jun-20

Artificial intelligence, Computer-assisted, Diagnosis, Maximum intensity projection, Pulmonary Nodules, Tomography, X-ray computed

Radiology Radiology

Development of a self-constrained 3D DenseNet model in automatic detection and segmentation of nasopharyngeal carcinoma using magnetic resonance images.

In Oral oncology

OBJECTIVES : We aimed to develop a dual-task model to detect and segment nasopharyngeal carcinoma (NPC) automatically in magnetic resonance images (MRI) based on a deep learning method, since the differential diagnosis of NPC and atypical benign hyperplasia is difficult and the radiotherapy target contouring of NPC is labor-intensive.

MATERIALS AND METHODS : A self-constrained 3D DenseNet (SC-DenseNet) architecture was improved using separated training and validation sets. A total of 4100 individuals were finally enrolled and split into the training, validation and test sets at an approximate ratio of 8:1:1 using simple randomization. The diagnostic metrics of the established model were compared against those of experienced radiologists in the test set. The dice similarity coefficient (DSC) between manual and model-defined tumor regions was used to evaluate the efficacy of segmentation.

RESULTS : In total, 3142 nasopharyngeal carcinoma (NPC) and 958 benign hyperplasia cases were included. The SC-DenseNet model showed encouraging performance in detecting NPC, attaining a higher overall accuracy, sensitivity and specificity than the experienced radiologists (97.77% vs 95.87%, 99.68% vs 99.24% and 91.67% vs 85.21%, respectively). Moreover, the model also exhibited promising performance in automatic segmentation of the tumor region in NPC, with an average DSC of 0.77 ± 0.07 in the test set.

CONCLUSIONS : The SC-DenseNet model showed competence in automatic detection and segmentation of NPC in MRI, indicating promising application value as an assistive tool in clinical practice, especially in screening programs.

Ke Liangru, Deng Yishu, Xia Weixiong, Qiang Mengyun, Chen Xi, Liu Kuiyuan, Jing Bingzhong, He Caisheng, Xie Chuanmiao, Guo Xiang, Lv Xing, Li Chaofeng

2020-Jun-29

Automatic segmentation, Deep learning, Detection, Magnetic resonance images, Nasopharyngeal carcinoma

General General

Estimation of a priori signal-to-noise ratio using neurograms for speech enhancement.

In The Journal of the Acoustical Society of America

In statistical-based speech enhancement algorithms, the a priori signal-to-noise ratio (SNR) must be estimated to calculate the required spectral gain function. This paper proposes a method to improve this estimation using features derived from the neural responses of the auditory-nerve (AN) system. The neural responses, interpreted as a neurogram (NG), are simulated for noisy speech using a computational model of the AN system with a range of characteristic frequencies (CFs). Two machine learning algorithms were explored to train the estimation model based on NG features: support vector regression and a convolutional neural network. The proposed estimator was placed in a common speech enhancement system, and three conventional spectral gain functions were employed to estimate the enhanced signal. The proposed method was tested using the NOIZEUS database at different SNR levels, and various speech quality and intelligibility measures were employed for performance evaluation. The a priori SNR estimated from NG features achieved better quality and intelligibility scores than those of recent estimators, especially for highly distorted speech and low SNR values.
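As background to the role of the a priori SNR, the sketch below applies the classical Wiener spectral gain, G(k) = xi(k) / (1 + xi(k)), to a noisy STFT frame. The neurogram-based SNR estimator proposed in the paper is not reproduced here; the frame and the SNR estimate are placeholder arrays.

    import numpy as np

    def wiener_gain(xi):
        """Wiener spectral gain G = xi / (1 + xi), with xi the a priori SNR per bin."""
        return xi / (1.0 + xi)

    def enhance_frame(noisy_spectrum, xi_est):
        """Apply the gain derived from an estimated a priori SNR to one STFT frame."""
        return wiener_gain(xi_est) * noisy_spectrum

    # Toy example: a random "noisy" frame and a flat placeholder SNR estimate of 5 dB.
    rng = np.random.default_rng(0)
    noisy = rng.standard_normal(257) + 1j * rng.standard_normal(257)
    xi_hat = np.full(257, 10 ** (5 / 10))
    enhanced = enhance_frame(noisy, xi_hat)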

Jassim Wissam A, Harte Naomi

2020-Jun

General General

Monitoring of soluble pectin content in orange juice by means of MIR and TD-NMR spectroscopy combined with machine learning.

In Food chemistry

This study presents a rapid and non-destructive approach based on mid-infrared (MIR) spectroscopy, time domain nuclear magnetic resonance (TD-NMR), and machine learning classification models (ML) for monitoring soluble pectin content (SPC) changes in orange juice. Current reference methods for SPC in orange juice are laborious, requiring several extractions with successive adjustments, hindering rapid process intervention. A total of 109 fresh orange juice samples, representing different harvests, were analysed using MIR, TD-NMR and the reference method. Unsupervised algorithms were applied for natural clustering of the MIR and TD-NMR data into two groups. Analyses of variance of the two MIR and TD-NMR datasets show that only the MIR groups differed at 95% confidence for SPC average values. This approach allowed building classification models based on MIR data, achieving 85% and 89% accuracy. Results demonstrate that MIR/ML can be a suitable strategy for the quick assessment of SPC trends in orange juices.

Bizzani Marilia, William Menezes Flores Douglas, Alberto Colnago Luiz, David Ferreira Marcos

2020-Jun-23

Data science, MIR, Machine learning, Orange juice, Soluble pectin content (SPC), TD-NMR

Surgery Surgery

Novel Technology to Capture Objective Data from Patients' Recovery from Laparoscopic Endometriosis Surgery.

In Journal of minimally invasive gynecology ; h5-index 40.0

STUDY OBJECTIVE : To assess the feasibility of a non-contact radio sensor as an objective measurement tool to study postoperative recovery from endometriosis surgery.

DESIGN : Prospective cohort pilot study.

SETTING : Center for minimally-invasive gynecologic surgery at an academically-affiliated community hospital in conjunction with in-home monitoring.

PATIENTS : Patients over 18 years old who sleep independently and are scheduled to have laparoscopy for the diagnosis and treatment of suspected endometriosis.

INTERVENTIONS : A wireless, non-contact sensor, Emerald, was installed in the subjects' home and used to capture physiological signals without body contact. The device captured objective data about the patients' movement and sleep in their home for 5 weeks prior to surgery and approximately 5 weeks postoperatively. Subjects were concurrently asked to complete a daily pain assessment using a Numerical Rating System (NRS) and a free-text survey about their daily symptoms.

MEASUREMENTS AND MAIN RESULTS : Three women, aged 23-39, with mild to moderate endometriosis participated in the study. Emerald-derived sleep and wake times were contextualized and corroborated by select participant comments from retrospective surveys. Additionally, self-reported pain levels and one sleep variable, sleep onset to deep sleep time, showed a significant (p<0.01) positive correlation with next-day pain scores in all three subjects: r=0.45, 0.50, and 0.55. In other words, the longer it took the subject to go from sleep onset to deep sleep, the higher their pain score the following day.

CONCLUSION : A patient's experience with pain is challenging to meaningfully quantify. This study highlights Emerald's unique ability to capture objective data in both pre-operative functioning and post-operative recovery in an endometriosis population. The utility of this uniquely objective data for the clinician-patient relationship is just beginning to be explored.

Loring Megan, Kabelac Zachary, Munir Usman, Yue Shichao, Ephraim Hannah Y, Rahul Hariharan, Isaacson Keith B, Griffith Linda G, Katabi Dina

2020-Jun-29

Digital, Machine Learning, Pain, Remote Sensing, Sleep

General General

Personalized surveillance for hepatocellular carcinoma in cirrhosis - using machine learning adapted to HCV status.

In Journal of hepatology ; h5-index 119.0

BACKGROUND AND AIMS : To develop algorithms based on machine learning predictive approaches to refine individualized predictions of hepatocellular carcinoma (HCC) risk according to HCV eradication in patients with cirrhosis included in the French ANRS CO12 CirVir cohort.

METHODS : Patients with compensated biopsy-proven HCV-cirrhosis were included in 35 centers and followed a semi-annual HCC surveillance program. Three prognostic models for HCC occurrence were built, using (1) Fine-Gray regression as a benchmark, (2) single decision tree (DT), and (3) random survival forest for competing risks survival (RSF). Model performance was evaluated from C-indexes validated externally in the ANRS CO22 Hepather cohort (N=668 enrolled between 08/2012-01/2014).
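For the external validation step, a concordance index can be computed between model risk scores and observed outcomes. The sketch below uses a plain Harrell C-index from the lifelines package on hypothetical data; it does not reproduce the Fine-Gray or competing-risks machinery used in the study.

    import numpy as np
    from lifelines.utils import concordance_index

    # Hypothetical external-validation data: follow-up time (months), HCC event
    # indicator, and a model-derived risk score for each patient.
    rng = np.random.default_rng(1)
    followup_months = rng.uniform(6, 84, size=200)
    hcc_event = rng.integers(0, 2, size=200)
    risk_score = rng.uniform(0, 1, size=200)

    # concordance_index expects scores that increase with survival time,
    # so a risk score (higher = earlier event) is negated.
    c_index = concordance_index(followup_months, -risk_score, hcc_event)
    print(f"External C-index: {c_index:.2f}")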

RESULTS : 836 patients were analyzed, among whom 156 (19%) developed HCC and 434 (52%) achieved sustained virological response (SVR) (median follow-up: 63 months). Fine-Gray regression models identified six independent predictors of HCC occurrence in patients before SVR: past excessive alcohol intake, genotype 1, elevated alpha-fetoprotein and GGT, low platelet count and albuminemia; and three in patients after SVR: elevated AST and low platelet count and PT. DT analysis confirmed these associations but revealed more complex interactions, yielding eight patient groups with differentiated cancer risks and varying predictors involved depending on SVR achievement. RSF ranked platelet count, GGT, AFP and albuminemia as the most important predictors of HCC in non-SVR patients, and prothrombin time, ALT, age and platelet count after SVR achievement. Externally validated C-indexes before/after SVR were 0.64/0.64 [Fine-Gray], 0.60/0.62 [DT] and 0.71/0.70 [RSF].

CONCLUSIONS : Risk factors for hepatocarcinogenesis differ according to SVR status. Machine learning algorithms can prove useful for individually assessing HCC risk by revealing complex interactions between cancer predictors. Such approaches could help develop more cost-effective tailored surveillance programs.

Audureau Etienne, Carrat Fabrice, Layese Richard, Cagnot Carole, Asselah Tarik, Guyader Dominique, Larrey Dominique, De Lédinghen Victor, Ouzan Denis, Zoulim Fabien, Roulot Dominique, Tran Albert, Bronowicki Jean-Pierre, Zarski Jean-Pierre, Riachi Ghassan, Calès Paul, Péron Jean-Marie, Alric Laurent, Bourlière Marc, Mathurin Philippe, Blanc Jean-Frédéric, Abergel Armand, Chazouillères Olivier, Mallat Ariane, Grangé Jean-Didier, Attali Pierre, d’Alteroche Louis, Wartelle Claire, Dao Thông, Thabut Dominique, Pilette Christophe, Silvain Christine, Christidis Christos, Nguyen-Khac Eric, Bernard-Chabert Brigitte, Zucman David, Di Martino Vincent, Sutton Angela, Pol Stanislas, Nahon Pierre, Nahon Pierre, Marcellin Patrick, Guyader Dominique, Pol Stanislas, Fontaine Hélène, Larrey Dominique, De Lédinghen Victor, Ouzan Denis, Zoulim Fabien, Roulot Dominique, Tran Albert, Bronowicki Jean-Pierre, Zarski Jean-Pierre, Leroy Vincent, Riachi Ghassan, Calès Paul, Péron Jean-Marie, Alric Laurent, Bourlière Marc, Mathurin Philippe, Dharancy Sebastien, Blanc Jean-Frédéric, Abergel Armand, Chazouillères Olivier, Mallat Ariane, Grangé Jean-Didier, Attali Pierre, d’Alteroche Louis, Wartelle Claire, Dao Thông, Thabut Dominique, Pilette Christophe, Silvain Christine, Christidis Christos, Nguyen-Khac Eric, Bernard-Chabert Brigitte, Zucman David, Di Martino Vincent

2020-Jun-29

HCV clearance, cirrhosis, liver cancer, machine learning, screening

General General

Multi-Task Learning Models for Predicting Active Compounds.

In Journal of biomedical informatics ; h5-index 55.0

Computational drug discovery methods can find potential drug-target interactions more efficiently and have been widely studied over the past few decades. Such methods explore the relationship between the structural properties of compounds and their biological activity with the assumption that similar compounds tend to share similar biological targets and vice versa. However, traditional Quantitative Structure-Activity Relationship (QSAR) methods often do not achieve the desired accuracy due to insufficient compound activity data. In this paper, we focus on building Multi-Task Learning (MTL)-based QSAR models by considering multiple similar biological targets together and allowing shared information to transfer from one task to another, thereby improving not only the learning efficiency, but also the prediction accuracy. This paper selects 6 assay groups with similar biological targets from PubChem and builds their QSAR models with MTL simultaneously. According to the experimental results, our MTL-based QSAR models perform better than traditional prominent machine learning algorithms, and the improvements are even more obvious when the baseline models have low accuracy. The superiority of our models is also confirmed by Student's t-test at the 5% level of significance. Moreover, this paper also explores three different assumptions on the underlying pattern in the dataset and finds that the joint-feature MTL models further improve the performance of the QSAR models and are more suitable for building QSAR models for multiple similar biological targets.
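A common way to realize this kind of multi-task QSAR model is hard parameter sharing: a shared trunk learns a joint representation of the compounds while each assay group gets its own output head. The PyTorch sketch below is only an illustration of that idea on random descriptors; the layer sizes, number of tasks, and training data are hypothetical and do not reproduce the paper's models.

    import torch
    import torch.nn as nn

    class MultiTaskQSAR(nn.Module):
        """Hard parameter sharing: one shared trunk plus one output head per assay group."""

        def __init__(self, n_features, n_tasks, hidden=128):
            super().__init__()
            self.shared = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

        def forward(self, x):
            h = self.shared(x)
            return [head(h) for head in self.heads]  # one activity logit per task

    # Toy training step on random fingerprints for 6 hypothetical assay groups.
    model = MultiTaskQSAR(n_features=1024, n_tasks=6)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    x = torch.randn(32, 1024)                 # compound descriptors
    y = torch.randint(0, 2, (32, 6)).float()  # active/inactive per task
    outputs = model(x)
    loss = sum(loss_fn(out.squeeze(1), y[:, t]) for t, out in enumerate(outputs))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()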

Zhao Zhili, Qin Jian, Gou Zhuoyue, Zhang Yanan, Yang Yi

2020-Jun-29

Drug Discovery, Machine Learning, Multi-task Learning, QSAR, Transfer Learning

General General

mycoCSM: using graph-based signatures to identify safe potent hits against Mycobacteria.

In Journal of chemical information and modeling

Development of new potent, safe drugs to treat Mycobacteria has proven to be challenging, with limited hit rates of initial screens restricting subsequent development efforts. Despite significant efforts and the evolution of Quantitative Structure-Activity Relationship (QSAR) and machine learning-based models for computationally predicting molecule bioactivity, there is an unmet need for efficient and reliable methods for identifying biologically active compounds against Mycobacteria that are also safe for humans. Here we have developed mycoCSM, a graph-based signature approach to rapidly identify compounds likely to be active against bacteria from the genus Mycobacterium, or against specific Mycobacteria species. mycoCSM was trained and validated on eight organism-specific data sets and, for the first time, a general Mycobacteria data set, achieving correlation coefficients of up to 0.89 on cross-validation and 0.88 on independent blind tests when predicting bioactivity in terms of Minimum Inhibitory Concentration (MIC). In addition, we also developed a predictor to identify those compounds likely to penetrate necrotic tuberculosis foci, which achieved a correlation coefficient of 0.75. Together with a built-in estimator of the Maximum Tolerated Dose in humans, we believe this method will provide a valuable resource to enrich screening libraries with potent, safe molecules. To provide simple guidance in the selection of libraries with favourable anti-Mycobacteria properties, we have made mycoCSM freely available at: https://biosig.unimelb.edu.au/myco_csm.

Pires Douglas, Ascher David B

2020-Jul-02

General General

Single cell analyses and machine learning define hematopoietic progenitor and HSC-like cells derived from human PSCs.

In Blood ; h5-index 152.0

Haematopoietic stem and progenitor cells (HSPCs) develop through distinct waves at various anatomical sites during embryonic development. The in vitro differentiation of human pluripotent stem cells (hPSCs) is able to recapitulate some of these processes; however, it has proven difficult to generate functional haematopoietic stem cells (HSCs). To define the dynamics and heterogeneity of HSPCs that can be generated in vitro from hPSCs, we exploited single cell RNA sequencing (scRNAseq) in combination with single cell protein expression analysis. Bioinformatics analyses and functional validation defined the transcriptomes of naïve progenitors as well as erythroid, megakaryocyte and leukocyte-committed progenitors, and we identified CD44, CD326, ICAM2/CD9 and CD18 as markers of these progenitors, respectively. Using an artificial neural network (ANN) that we trained on a scRNAseq dataset derived from human fetal liver, we were able to identify a wide range of hPSC-derived HSPC phenotypes, including a small group classified as HSCs. This transient HSC-like population decreased as differentiation proceeded and was completely missing in the dataset that had been generated using cells selected on the basis of CD43 expression. By comparing the single cell transcriptome of in vitro-generated HSC-like cells with those generated within the fetal liver, we identified transcription factors and molecular pathways that can be exploited in the future to improve the in vitro production of HSCs.

Fidanza Antonella, Stumpf Patrick Simon, Ramachandran Prakash, Tamagno Sara, Babtie Ann, Lopez-Yrigoyen Martha, Taylor Alice Helen, Easterbrook Jennifer, Henderson Beth, Axton Richard, Henderson Neil Cowan, Medvinsky Alexander, Ottersbach Katrin, Romanò Nicola, Forrester Lesley M

2020-Jul-02

General General

Detecting rare diseases in electronic health records using machine learning and knowledge engineering: Case study of acute hepatic porphyria.

In PloS one ; h5-index 176.0

BACKGROUND : With the growing adoption of the electronic health record (EHR) worldwide over the last decade, new opportunities exist for leveraging EHR data for detection of rare diseases. Rare diseases often go undiagnosed, or their diagnosis is delayed, because clinicians encounter them infrequently. One such rare disease that may be amenable to EHR-based detection is acute hepatic porphyria (AHP). AHP consists of a family of rare, metabolic diseases characterized by potentially life-threatening acute attacks and chronic debilitating symptoms. The goal of this study was to apply machine learning and knowledge engineering to a large extract of EHR data to determine whether they could be effective in identifying patients not previously tested for AHP who should receive a proper diagnostic workup for AHP.

METHODS AND FINDINGS : We used an extract of the complete EHR data of 200,000 patients from an academic medical center and enriched it with records from an additional 5,571 patients containing any mention of porphyria in the record. After manually reviewing the records of all 47 unique patients with the ICD-10-CM code E80.21 (Acute intermittent [hepatic] porphyria), we identified 30 patients who were positive cases for our machine learning models, with the rest of the patients used as negative cases. We parsed the record into features, which were scored by frequency of appearance and filtered using univariate feature analysis. We manually chose features not directly tied to provider attributes or suspicion of the patient having AHP. We trained on the full dataset, with the best cross-validation performance coming from a support vector machine (SVM) algorithm using a radial basis function (RBF) kernel. The trained model was applied back to the full data set and patients were ranked by margin distance. The top 100 ranked negative cases were manually reviewed for symptom complexes similar to AHP, finding four patients for whom AHP diagnostic testing was likely indicated and 18 patients for whom AHP diagnostic testing was possibly indicated. From the top 100 ranked cases of patients with mention of porphyria in their record, we identified four patients for whom AHP diagnostic testing was possibly indicated and had not been previously performed. Based solely on the reported prevalence of AHP, we would have expected only 0.002 cases out of the 200 patients manually reviewed.
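The core of the screening pipeline described above (an RBF-kernel SVM whose decision-function margin is used to rank unlabeled patients for chart review) can be sketched as follows. The feature matrix, class balance, and review cut-off are hypothetical placeholders, not the study's actual data or code.

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical feature matrix: one row per patient, columns are record-derived
    # features; y marks the confirmed AHP cases as 1 and all other patients as 0.
    rng = np.random.default_rng(42)
    X = rng.standard_normal((2000, 50))
    y = np.zeros(2000, dtype=int)
    y[:30] = 1

    clf = SVC(kernel="rbf", class_weight="balanced")  # RBF kernel, as in the study
    clf.fit(X, y)

    # Score every patient by signed distance from the decision boundary, then
    # surface the highest-ranked negative cases for manual chart review.
    margins = clf.decision_function(X)
    negatives = np.where(y == 0)[0]
    top_for_review = negatives[np.argsort(margins[negatives])[::-1][:100]]
    print(top_for_review[:10])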

CONCLUSIONS : The application of machine learning and knowledge engineering to EHR data may facilitate the diagnosis of rare diseases such as AHP. Further work will recommend clinical investigation to identified patients' clinicians, evaluate more patients, assess additional feature selection and machine learning algorithms, and apply this methodology to other rare diseases. This work provides strong evidence that population-level informatics can be applied to rare diseases, greatly improving our ability to identify undiagnosed patients, and in the future improve the care of these patients and our ability to study these diseases. The next step is to learn how best to apply these EHR-based machine learning approaches to benefit individual patients with a clinical study that provides diagnostic testing and clinical follow-up for those identified as possibly having undiagnosed AHP.

Cohen Aaron M, Chamberlin Steven, Deloughery Thomas, Nguyen Michelle, Bedrick Steven, Meninger Stephen, Ko John J, Amin Jigar J, Wei Alex J, Hersh William

2020

General General

Benchmarking machine learning models on multi-centre eICU critical care dataset.

In PloS one ; h5-index 176.0

Progress of machine learning in critical care has been difficult to track, in part due to the absence of public benchmarks. Other fields of research (such as computer vision and natural language processing) have established various competitions and public benchmarks. The recent availability of large clinical datasets has enabled the possibility of establishing public benchmarks. Taking advantage of this opportunity, we propose a public benchmark suite to address four areas of critical care, namely mortality prediction, estimation of length of stay, patient phenotyping and risk of decompensation. We define each task and compare the performance of clinical models as well as baseline and deep learning models using the eICU critical care dataset of around 73,000 patients. This is the first public benchmark on a multi-centre critical care dataset, comparing the performance of the clinical gold standard with our predictive models. We also investigate the impact of numerical variables as well as the handling of categorical variables on each of the defined tasks. The source code, detailing our methods and experiments, is publicly available so that anyone can replicate our results and build upon our work.

Sheikhalishahi Seyedmostafa, Balaraman Vevake, Osmani Venet

2020

General General

Inferring transportation mode from smartphone sensors: Evaluating the potential of Wi-Fi and Bluetooth.

In PloS one ; h5-index 176.0

Understanding which transportation modes people use is critical for smart cities and planners to better serve their citizens. We show that using information from pervasive Wi-Fi access points and Bluetooth devices can enhance GPS and geographic information to improve transportation detection on smartphones. Wi-Fi information also improves the identification of transportation mode and helps conserve battery since it is already collected by most mobile phones. Our approach uses machine learning to determine the mode from preprocessed data. This approach yields an overall accuracy of 89% and an average F1 score of 83% for inferring the three grouped modes of self-powered, car-based, and public transportation. When broken out by individual modes, Wi-Fi features improve detection accuracy of bus trips, train travel, and driving compared to GPS features alone and can substitute for GIS features without decreasing performance. Our results suggest that Wi-Fi and Bluetooth can be useful in urban transportation research, for example by improving mobile travel surveys and urban sensing applications.

Bjerre-Nielsen Andreas, Minor Kelton, Sapieżyński Piotr, Lehmann Sune, Lassen David Dreyer

2020

Surgery Surgery

EHAI: Enhanced Human Microbe-Disease Association Identification.

In Current protein & peptide science

Recently, an increasing number of biological and clinical reports have demonstrated that imbalance of the microbial community can play important roles in several complex diseases affecting human health. Discovering potential microbe-disease relationships provides a better understanding of issues such as disease pathology and can further boost disease diagnostics and prognostics. Nevertheless, few computational approaches can meet the need for large-scale microbe-disease association discovery. In this work, we propose the EHAI model (Enhanced Human microbe-disease Association Identification). EHAI employs known microbe-disease associations, and Gaussian interaction profile kernel similarity is then utilized to enhance the basic microbe-disease associations. In practice, only some microbe-disease associations are known, and a large number of associations remain unavailable in the datasets; 'super-microbe' and 'super-disease' classes were therefore employed to enhance the model. Computational results demonstrate that such super-classes improve the performance of EHAI. Therefore, it is anticipated that EHAI can serve as an important biological tool in this field.

Fan Ruizhi, Dong Chenhua, Song Hu, Xu Yixin, Shi Linsen, Xu Teng, Cao Meng, Jiang Tao, Song Jun

2020-Jul-02

Enhanced Similarity-based association prediction, disease, machine learning

General General

Influence of acoustic habitat variation on Indo-Pacific humpback dolphin (Sousa chinensis) in shallow waters of Hainan Island, China.

In The Journal of the Acoustical Society of America

The Indo-Pacific humpback dolphin (IPHD, Sousa chinensis) is a coastal species inhabiting tropical and warm-temperate waters. The presence of this vulnerable dolphin was recently discovered in shallow waters southwest of Hainan Island, China. The influence of the acoustic habitat on the distribution and behavior of IPHD was investigated using an array of passive acoustic platforms (n = 6) that spanned more than 100 km of coastline during a 75-day monitoring period. Its presence was assessed within 19 215 five-min recordings by classifying echolocation clicks using machine learning techniques. Spectrogram analysis was applied to further investigate the acoustic behavior of IPHD and to identify other prominent sound sources. The variation in the ambient noise levels was also measured to describe the spatiotemporal patterns of the acoustic habitat among the different sampling sites. Social and feeding sounds of IPHD (whistles and click-series of pulsed sounds) were identified together with other biological sources (finless porpoise, soniferous fishes, and snapping shrimps) and anthropogenic activities (ship noise, explosions, and sonars). Distribution, acoustic behavior, and habitat use of this nearshore dolphin species were strongly influenced by the abundance of soniferous fishes, and under similar conditions, the species was more acoustically active in locations with lower noise levels.

Caruso Francesco, Dong Lijun, Lin Mingli, Liu Mingming, Xu Wanxue, Li Songhai

2020-Jun

General General

A Survey of Network Embedding for Drug Analysis and Prediction.

In Current protein & peptide science

Traditional network-based computational methods have shown good results in drug analysis and prediction. However, these methods are time-consuming and lack universality, and it is difficult for them to exploit the auxiliary information of nodes and edges. Network embedding provides a promising way to alleviate these problems by transforming the network into a low-dimensional space while preserving network structure and auxiliary information. This facilitates the application of machine learning algorithms for subsequent processing. Network embedding has been introduced into drug analysis and prediction in the last few years and has shown superior performance over traditional methods. However, there is no systematic review of this issue. This article offers a comprehensive survey of the primary network embedding methods and their applications in drug analysis and prediction. The network embedding technologies applied in homogeneous and heterogeneous networks are investigated and compared, including matrix decomposition, random walk, and deep learning. In particular, graph neural network (GNN) methods in deep learning are highlighted. Further, the applications of network embedding in drug similarity estimation, drug-target interaction prediction, adverse drug reaction prediction, and protein function and therapeutic peptide prediction are discussed. Several potential future research directions are also discussed.

Liu Zhixian, Chen Qingfeng, Lan Wei, Liang Jiahai, Chen Yiping Pheobe, Chen Baoshan

2020-Jul-02

Network embedding, adverse drug reactions prediction, drug discovery, drug similarity estimation, drug-target prediction, protein function and therapeutic peptides prediction

General General

Using machine learning to quantify structural MRI neurodegeneration patterns of Alzheimer's disease into dementia score: Independent validation on 8,834 images from ADNI, AIBL, OASIS, and MIRIAD databases.

In Human brain mapping

Biomarkers for dementia of Alzheimer's type (DAT) are sought to facilitate accurate prediction of the disease onset, ideally predating the onset of cognitive deterioration. T1-weighted magnetic resonance imaging (MRI) is a commonly used neuroimaging modality for measuring brain structure in vivo, potentially providing information enabling the design of biomarkers for DAT. We propose a novel biomarker using structural MRI volume-based features to compute a similarity score for the individual's structural patterns relative to those observed in the DAT group. We employed an ensemble-learning framework that combines structural features in the most discriminative ROIs to create an aggregate measure of neurodegeneration in the brain. This classifier is trained on 423 stable normal control (NC) and 330 DAT subjects, where clinical diagnosis is likely to have the highest certainty. Independent validation on 8,834 unseen images from the ADNI, AIBL, OASIS, and MIRIAD Alzheimer's disease (AD) databases showed promising potential to predict the development of DAT depending on the time-to-conversion (TTC). Classification performance on stable versus progressive mild cognitive impairment (MCI) groups achieved an AUC of 0.81 for TTC of 6 months and 0.73 for TTC of up to 7 years, achieving state-of-the-art results. The output score, indicating similarity to patterns seen in DAT, provides an intuitive measure of how closely the individual's brain features resemble the DAT group. This score can be used for assessing the presence of AD structural atrophy patterns in normal aging and MCI stages, as well as monitoring the progression of the individual's brain along the disease course.

Popuri Karteek, Ma Da, Wang Lei, Beg Mirza Faisal

2020-Jul-02

Alzheimer's disease, cross-database independent validation, dementia of Alzheimer's type, dementia score, disease progression, ensemble learning, longitudinal diagnostic stratification, magnetic resonance imaging, probabilistic classifier, prognosis prediction

General General

Machine learning and natural language processing in psychotherapy research: Alliance as example use case.

In Journal of counseling psychology

Artificial intelligence generally and machine learning specifically have become deeply woven into the lives and technologies of modern life. Machine learning is dramatically changing scientific research and industry and may also hold promise for addressing limitations encountered in mental health care and psychotherapy. The current paper introduces machine learning and natural language processing as related methodologies that may prove valuable for automating the assessment of meaningful aspects of treatment. Prediction of therapeutic alliance from session recordings is used as a case in point. Recordings from 1,235 sessions of 386 clients seen by 40 therapists at a university counseling center were processed using automatic speech recognition software. Machine learning algorithms learned associations between client ratings of therapeutic alliance exclusively from session linguistic content. Using a portion of the data to train the model, machine learning algorithms modestly predicted alliance ratings from session content in an independent test set (Spearman's ρ = .15, p < .001). These results highlight the potential to harness natural language processing and machine learning to predict a key psychotherapy process variable that is relatively distal from linguistic content. Six practical suggestions for conducting psychotherapy research using machine learning are presented along with several directions for future research. Questions of dissemination and implementation may be particularly important to explore as machine learning improves in its ability to automate assessment of psychotherapy process and outcome. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
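A stripped-down version of the text-to-rating pipeline (bag-of-words features from session transcripts, a linear regressor, and a Spearman correlation on held-out sessions) might look like the sketch below. The transcripts, ratings, and model choice are illustrative assumptions only and do not correspond to the models or data used in the study.

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Hypothetical data: one transcript string and one alliance rating per session.
    rng = np.random.default_rng(0)
    transcripts = (["client reports feeling heard and supported this week"] * 50
                   + ["frequent silences and abrupt topic changes in session"] * 50)
    ratings = np.r_[rng.normal(6.0, 0.5, 50), rng.normal(4.5, 0.5, 50)]

    X_tr, X_te, y_tr, y_te = train_test_split(transcripts, ratings, random_state=0)
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
    model.fit(X_tr, y_tr)
    rho, p = spearmanr(y_te, model.predict(X_te))
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")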

Goldberg Simon B, Flemotomos Nikolaos, Martinez Victor R, Tanana Michael J, Kuo Patty B, Pace Brian T, Villatte Jennifer L, Georgiou Panayiotis G, Van Epps Jake, Imel Zac E, Narayanan Shrikanth S, Atkins David C

2020-Jul

General General

The self-congruity effect of music.

In Journal of personality and social psychology ; h5-index 80.0

Music is a universal phenomenon that has existed in every known culture around the world. It plays a prominent role in society by shaping sociocultural interactions between groups and individuals, and by influencing their emotional and intellectual life. Here, we provide evidence for a new theory on musical preferences. Across three studies we show that people prefer the music of artists who have publicly observable personalities ("personas") similar to their own personality traits (the "self-congruity effect of music"). Study 1 (N = 6,279) and Study 2 (N = 75,296) show that the public personality of artists correlates with the personality of their listeners. Study 3 (N = 4,995) builds on this by showing that the fit between the personality of the listener and the artist predicts musical preferences incremental to the fit for gender, age, and even the audio features of music. Our findings are largely consistent across two methodological approaches to operationalizing an artist's public personality: (a) the public personality as reported by the artist's fans, and (b) the public personality as predicted by machine learning on the basis of the artist's lyrics. We discuss the importance of the self-congruity effect of music in the context of group-level process theories and adaptionist accounts of music. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Greenberg David M, Matz Sandra C, Schwartz H Andrew, Fricke Kai R

2020-Jul-02

General General

Psychometric and machine learning approaches for diagnostic assessment and tests of individual classification.

In Psychological methods

Assessments are commonly used to make a decision about an individual, such as grade placement, treatment assignment, job selection, or to inform a diagnosis. A psychometric approach to classify respondents based on the assessment would aggregate items into a score, and then each respondent's score is compared to a cut score. In contrast, a machine learning approach to classify respondents would build a model to predict the probability of belonging to a specific class from assessment items, and then respondents are classified based on their predicted probability of belonging to that class. It remains unclear whether psychometric and machine learning methods have comparable classification accuracy or if 1 method is preferable in all or some situations. In the context of diagnostic assessment, this study used Monte Carlo simulation methods to compare the classification accuracy of psychometric and machine learning methods as a function of the diagnosis-test correlation, prevalence, sample size, and the structure of the diagnostic assessment. Results suggest that machine learning models using logistic regression or random forest could have comparable classification accuracy to the psychometric methods using estimated item response theory scores. Therefore, machine learning models could provide a viable alternative for classification when psychometric methods are not feasible. Methods are illustrated with an empirical example predicting an oppositional defiant disorder diagnosis from a behavior disorders scale in children of age seven. Strengths and limitations for each of the methods are examined, and the overlap between the field of machine learning and psychometrics is discussed. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
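The contrast between the two decision rules can be made concrete with simulated item responses: the psychometric-style rule aggregates items into a score and applies a cut score (a simple sum score stands in for the IRT-estimated scores used in the article), while the machine learning rule predicts class membership directly from the items. Everything in the sketch (items, cut score, prevalence) is simulated for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Simulated binary item responses (20 items) and a diagnosis label.
    rng = np.random.default_rng(0)
    items = rng.integers(0, 2, size=(1000, 20))
    diagnosis = (items.sum(axis=1) + rng.normal(0, 2, 1000) > 12).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(items, diagnosis, random_state=0)

    # Psychometric-style rule: aggregate items into a score, compare to a cut score.
    cut_score = 12
    psychometric_pred = (X_te.sum(axis=1) >= cut_score).astype(int)

    # Machine learning rule: predict class membership directly from the items.
    ml_pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

    print("cut-score accuracy:", accuracy_score(y_te, psychometric_pred))
    print("logistic regression accuracy:", accuracy_score(y_te, ml_pred))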

Gonzalez Oscar

2020-Jul-02

General General

Machine Learning-Based Optoacoustic Tissue Classification Method for Laser Osteotomes Using an Air-Coupled Transducer.

In Lasers in surgery and medicine

BACKGROUND AND OBJECTIVES : Using lasers instead of mechanical tools for bone cutting holds many advantages, including functional cuts, contactless interaction, and faster wound healing. To fully exploit the benefits of lasers over conventional mechanical tools, a real-time feedback to classify tissue is proposed.

STUDY DESIGN/MATERIALS AND METHODS : In this paper, we simultaneously classified five tissue types (hard and soft bone, muscle, fat, and skin) from five proximal and distal fresh porcine femurs, based on the laser-induced acoustic shock waves (ASWs) generated. For laser ablation, a nanosecond frequency-doubled Nd:YAG laser source at 532 nm and a microsecond Er:YAG laser source at 2940 nm were used to create 10 craters on the surface of each proximal and distal femur. Depending on the application, the Nd:YAG or Er:YAG can be used for bone cutting. For ASW recording, an air-coupled transducer was placed 5 cm away from the ablated spot. For tissue classification, we analyzed the measured acoustics by looking at the amplitude-frequency bands of 0.11-0.27 and 0.27-0.53 MHz, which provided the lowest average classification error for the Er:YAG and Nd:YAG lasers, respectively. For data reduction, we used the amplitude-frequency band as an input to principal component analysis (PCA). On the basis of the PCA scores, we compared the performance of an artificial neural network (ANN) and quadratic- and Gaussian-support vector machines (SVMs) in classifying tissue types. A set of 14,400 data points, measured from 10 craters in four proximal and distal femurs, was used as training data, while a set of 3,600 data points from 10 craters in the remaining proximal and distal femur was used as testing data, for each laser.
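The feature-reduction and classification chain described above (band-limited spectral amplitudes, PCA for data reduction, then a Gaussian-kernel SVM) can be sketched with scikit-learn as follows. The spectra, labels, and train/test split are synthetic placeholders rather than the measured ASW data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic band-limited spectral amplitudes (one row per ablation shot) and
    # tissue labels: 0 = hard bone, 1 = soft bone, 2 = muscle, 3 = fat, 4 = skin.
    rng = np.random.default_rng(3)
    spectra = rng.random((1800, 200))
    labels = rng.integers(0, 5, size=1800)

    # PCA for data reduction followed by a Gaussian (RBF-kernel) SVM classifier.
    pipeline = make_pipeline(StandardScaler(), PCA(n_components=10),
                             SVC(kernel="rbf", gamma="scale"))
    pipeline.fit(spectra[:1440], labels[:1440])  # train on four of five femurs
    error = 1.0 - pipeline.score(spectra[1440:], labels[1440:])
    print(f"classification error: {error:.2%}")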

RESULTS : The ANN performed best for both lasers, with an average classification error for all tissues of 5.01 ± 5.06% and 9.12 ± 3.39% using the Nd:YAG and Er:YAG lasers, respectively. The Gaussian-SVM performed better than the quadratic-SVM during cutting with both lasers, yielding average classification errors of 15.17 ± 13.12% and 16.85 ± 7.59% using the Nd:YAG and Er:YAG lasers, respectively. The worst performance was achieved with the quadratic-SVM, with classification errors of 50.34 ± 35.04% and 69.96 ± 25.49% using the Nd:YAG and Er:YAG lasers, respectively.

CONCLUSION : We foresee using the ANN to differentiate tissues in real-time during laser osteotomy. Lasers Surg. Med. © 2020 Wiley Periodicals LLC.

Nguendon Kenhagho Hervé, Canbaz Ferda, Gomez Alvarez-Arenas Tomas E, Guzman Raphael, Cattin Philippe, Zam Azhar

2020-Jul-02

acoustic shock signal, artificial network machine, laser ablation, principal component analysis, support vector machine, tissue classification

General General

Above and beyond "Above and beyond the concrete".

In The Behavioral and brain sciences

The commentaries address our view of abstraction, our ontology of abstract entities, and our account of predictive cognition as relying on relatively concrete simulation or relatively abstract theory-based inference. These responses revisit classic questions concerning mental representation and abstraction in the context of current models of predictive cognition. The counter arguments to our article echo: constructivist theories of knowledge, "neat" approaches in artificial intelligence and decision theory, neo-empiricist models of concepts, and externalist views of cognition. We offer several empirical predictions that address points of contention and that highlight the generative potential of our model.

Gilead Michael, Trope Yaacov, Liberman Nira

2020-Jun-19

General General

A Transfer Learning Study of Gas Adsorption in Metal-Organic Frameworks.

In ACS applied materials & interfaces ; h5-index 147.0

Metal-organic frameworks (MOFs) are a class of materials promising for gas adsorption due to their highly tunable nano-porous structures and host-guest interactions. While machine learning (ML) has been leveraged to aid the design or screening of MOFs for different purposes, the needs of big data are not always met, limiting the applicability of ML models trained against small data sets. In this work, we introduce an inductive transfer learning technique to improve the accuracy and applicability of ML models trained with small amounts of MOF adsorption data. This technique leverages potentially shareable knowledge from a source task to improve the models on the target tasks. As a demonstration, a deep neural network (DNN) trained on H2 adsorption data for 13,506 MOF structures at 100 bar and 243 K is used as the source task. When transferring knowledge from the source task to H2 adsorption at 100 bar and 130 K (one target task), the predictive accuracy on the target task was improved from 0.960 (direct training) to 0.991 (transfer learning). We also tested transfer learning across different gas species (i.e., from H2 to CH4), with the predictive accuracy of CH4 adsorption being improved from 0.935 (direct training) to 0.980 (transfer learning). More importantly, transfer learning is shown to effectively improve the models on target tasks with low accuracy from direct training. However, when transferring the knowledge from the source task to Xe/Kr adsorption, transfer learning does not improve the predictive accuracy, which is attributed to the lack of common descriptors that are key to the underlying knowledge.
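An inductive transfer-learning setup of the kind described (train a DNN on the large source task, then reuse its hidden layers, frozen, as the starting point for the small target task) could be sketched with Keras as below. The descriptors, dataset sizes, and network dimensions are placeholders; the original work's architecture and features are not reproduced.

    import numpy as np
    from tensorflow import keras

    # Hypothetical MOF descriptors: a large source task and a small target task.
    rng = np.random.default_rng(0)
    X_src, y_src = rng.random((13506, 64)), rng.random(13506)
    X_tgt, y_tgt = rng.random((500, 64)), rng.random(500)

    # Source model: a plain DNN regressor trained on the large source dataset.
    inputs = keras.Input(shape=(64,))
    shared1 = keras.layers.Dense(128, activation="relu")
    shared2 = keras.layers.Dense(128, activation="relu")
    h = shared2(shared1(inputs))
    source = keras.Model(inputs, keras.layers.Dense(1)(h))
    source.compile(optimizer="adam", loss="mse")
    source.fit(X_src, y_src, epochs=5, batch_size=256, verbose=0)

    # Transfer: freeze the shared hidden layers and train a fresh output head on
    # the small target dataset (the trunk may be unfrozen later for fine-tuning).
    shared1.trainable = False
    shared2.trainable = False
    target = keras.Model(inputs, keras.layers.Dense(1)(h))
    target.compile(optimizer="adam", loss="mse")
    target.fit(X_tgt, y_tgt, epochs=20, batch_size=32, verbose=0)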

Ma Ruimin, Colon Yamil J, Luo Tengfei

2020-Jul-02

Radiology Radiology

Automated labeling of the airway tree in terms of lobes based on deep learning of bifurcation point detection.

In Medical & biological engineering & computing ; h5-index 32.0

This paper presents an automatic lobe-based airway tree labeling method, which detects bifurcation points for reconstructing and labeling the airway tree from a computed tomography image. A deep learning-based network structure is designed to identify the four key bifurcation points. Then, based on the detected bifurcation points, the entire airway tree is reconstructed by a new region-growing method. Finally, with basic airway tree anatomy and topology knowledge, individual branches of the airway tree are classified into different categories in terms of pulmonary lobes. Our method has several advantages: the detection of the bifurcation points does not depend on segmentation of the airway tree, and only four bifurcation points need to be manually labeled for each sample to prepare the training dataset. The segmentation of the airway tree is guided by the detected points, which overcomes the difficulty of manual seed selection in the conventional region-growing algorithm. In addition, the bifurcation points can help analyze the tree structure, which provides a basis for effective airway tree labeling. Experimental results show that our method is fast and stable, and its accuracy is 97.85%, which is higher than that of the traditional skeleton-based method. Graphical Abstract The pipeline of our proposed lobe-based airway tree labeling method. Given a raw CT volume, a neural network structure is designed to predict major bifurcation points of the airway tree. Based on the detected points, the airway tree is reconstructed and labeled in terms of lobes.

Wang Manyang, Jin Renchao, Jiang Nanchuan, Liu Hong, Jiang Shan, Li Kang, Zhou XueXin

2020-Jul-02

Airway tree, Automated labeling, Bifurcation points, Deep learning–based network

General General

Global characteristics and trends of research on construction dust: based on bibliometric and visualized analysis.

In Environmental science and pollution research international

The booming construction industry has led to many environmental and occupational health and safety problems. Construction dust causes irreversible damage to the health of frontline workers and pollutes the surrounding air environment, which has attracted the attention of researchers and practitioners. In this study, to systematically sort and analyze the distribution of construction dust (CD) research, its hot areas, and the evolution of its fronts, papers with "construction dust" as the subject term in the Web of Science Core Collection Database since 2010 are visually analyzed using CiteSpace. The characteristics of these papers, including the quantity trend, quality, author group, affiliated institution type, and journal type, are summarized, and keyword co-occurrence and paper co-citation knowledge maps are produced. The results show that (1) China is the backbone of CD research, and its research results account for a considerable proportion of the total. (2) Respiratory dust and atmospheric aerosols, marble dust, PM2.5, and other hot issues have continuously attracted international attention, and exposure assessment and spatial distribution have been the main focuses of CD research. (3) CD research will move in a more refined and intelligent direction in the future, for example, monitoring and control equipment supported by big data, machine learning, and face recognition. By combining bibliometrics with a systematic review, we aim to analyze the research foci and future development directions in depth, providing scholars with a comprehensive view of the field.

Guo Ping, Tian Wei, Li Huimin, Zhang Guangmin, Li Jianhui

2020-Jul-01

Bibliometrics, CiteSpace, Construction dust (CD), Knowledge mapping

General General

Forecasting of extreme flood events using different satellite precipitation products and wavelet-based machine learning methods.

In Chaos (Woodbury, N.Y.)

An accurate and timely forecast of extreme events can mitigate negative impacts and enhance preparedness. Real-time forecasting of extreme flood events with longer lead times is difficult for regions with sparse rain gauges, and in such situations, satellite precipitation could be a better alternative. Machine learning methods have shown promising results for flood forecasting with a minimal set of variables representing the underlying nonlinear, complex hydrologic system. The integration of machine learning methods into extreme event forecasting motivates us to develop reliable flood forecasting models that are simple, accurate, and applicable in data-scarce regions. In this study, we develop a forecasting method using a satellite precipitation product and wavelet-based machine learning models. We test the proposed approach in the flood-prone Vamsadhara river basin, India. The validation results show that the proposed method is promising and has the potential to forecast extreme flood events with longer lead times in comparison with the other benchmark models.
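One simple way to combine a wavelet transform with a machine learning regressor for lead-time forecasting is to decompose a sliding window of the input series into DWT coefficients and feed them to the model, as sketched below with PyWavelets and a random forest. The rainfall and discharge series are synthetic, and the window length, lead time, and wavelet choice are illustrative assumptions rather than the paper's configuration.

    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic daily satellite-precipitation and discharge series.
    rng = np.random.default_rng(7)
    rain = rng.gamma(2.0, 3.0, size=2000)
    flow = np.convolve(rain, np.ones(5) / 5, mode="same") + rng.normal(0, 0.5, 2000)

    def wavelet_features(window, wavelet="db4", level=2):
        """Flatten the multi-level DWT coefficients of one input window."""
        return np.concatenate(pywt.wavedec(window, wavelet, level=level))

    lead, win = 3, 32  # 3-step-ahead forecast from a 32-step rainfall window
    X, y = [], []
    for t in range(win, len(flow) - lead):
        X.append(wavelet_features(rain[t - win:t]))
        y.append(flow[t + lead])
    X, y = np.array(X), np.array(y)

    split = int(0.8 * len(X))
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:split], y[:split])
    print("test R^2:", model.score(X[split:], y[split:]))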

Yeditha Pavan Kumar, Kasi Venkatesh, Rathinasamy Maheswaran, Agarwal Ankit

2020-Jun

Public Health Public Health

Application of machine learning algorithm and modified high resolution DNA melting curve analysis for molecular subtyping of Salmonella isolates from various epidemiological backgrounds in northern Thailand.

In World journal of microbiology & biotechnology

Food poisoning from consumption of food contaminated with non-typhoidal Salmonella spp. is a global problem. A modified high resolution DNA melting curve analysis (m-HRMa) was introduced to provide effective discrimination among closely related HRM curves of amplicons generated from selected Salmonella genome sequences, enabling Salmonella spp. to be classified into discrete clusters. Combining m-HRMa with serogroup identification (ms-HRMa) helped improve the assignment of Salmonella spp. into clusters. In addition, a machine learning (dynamic time warping, DTW) algorithm was employed to provide a simple and rapid protocol for clustering analysis as well as to create a phylogeny tree of Salmonella strains (n = 40) collected from homes, farms and slaughterhouses in northern Thailand. Application of DTW and ms-HRMa clustering analyses was capable of generating molecular signatures of the Salmonella isolates, resulting in 25 ms-HRM and 28 DTW clusters compared to 14 clusters from a standard HRM analysis, and the combination of both analyses permitted molecular subtyping of each Salmonella isolate. Results from DTW and ms-HRMa cluster analyses were in good agreement with those obtained from enterobacterial repetitive intergenic consensus sequence PCR clustering. While conventional serotyping of Clusters 1 and 2 revealed six different Salmonella serotypes, the majority being S. Weltevreden, the new Salmonella subtyping protocol identified five S. Weltevreden subtypes, with S. Weltevreden subtype DTW4-M1 being predominant. Based on knowledge of the sources of Salmonella subtypes, transmission of S. Weltevreden in northern Thailand was likely to be farm-to-farm through contaminated chicken stool. In conclusion, the rapid, robust and specific Salmonella subtyping protocol developed in this study can be performed in a local setting, enabling swift control and preventive measures to be initiated against potential epidemics of salmonellosis.
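The dynamic time warping step at the heart of the DTW clustering can be illustrated with a small NumPy implementation: pairwise DTW distances between normalized melting curves are fed to hierarchical clustering. The curves and the number of clusters below are synthetic placeholders, not the study's HRM data or its exact clustering procedure.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def dtw_distance(a, b):
        """Classic dynamic time warping distance between two 1-D curves."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Synthetic normalized melting curves (fluorescence vs. temperature), 40 isolates.
    rng = np.random.default_rng(5)
    curves = [np.sort(rng.random(60))[::-1] + rng.normal(0, 0.01, 60) for _ in range(40)]

    # Pairwise DTW distances feed an average-linkage hierarchical clustering.
    n = len(curves)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw_distance(curves[i], curves[j])
    clusters = fcluster(linkage(squareform(dist), method="average"),
                        t=28, criterion="maxclust")
    print(np.bincount(clusters))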

Wisittipanit Nuttachat, Pulsrikarn Chaiwat, Wutthiosot Saranya, Pinmongkhonkul Sitthisak, Poonchareon Kritchai

2020-Jul-02

High resolution DNA melting curve analysis, Machine learning (dynamic time warping) algorithm, Molecular epidemiological screening, Molecular subtyping, Salmonella

General General

Thermal Imaging - An Emerging Modality for Breast Cancer Detection: A Comprehensive Review.

In Journal of medical systems ; h5-index 48.0

Breast cancer is not preventable. To reduce the death rate and improve the survival chances of breast cancer patients, early and accurate detection is the only panacea. Delay in diagnosis of this disease causes 60% of deaths. Thermal imaging is a low-risk modality for early breast cancer decision making that does not inject any form of energy into the human body. Thermography as a screening tool was first introduced and well accepted in 1956. However, a study in 1977 found that it lagged behind other screening tools and was subjective, and its use was soon discontinued. This review discusses the various screening tools used to detect breast cancer, with a focus on thermography, along with their advantages and shortcomings. With the maturation of thermography equipment and technological advances, this technique has re-emerged and become the focus of many biomedical researchers across the globe in the past decade. This study provides an exhaustive review of the work related to the interpretation of breast thermal variations and describes the disciplines, frameworks, and methodologies used by different authors to diagnose breast cancer. Different performance metrics such as accuracy, specificity, and sensitivity have also been examined. This paper outlines the most pressing research gaps for future work to improve the accuracy of breast abnormality diagnosis using image processing tools, mathematical modelling and artificial intelligence. However, supplementary research is needed to affirm the potential of this technology for predicting breast cancer risk effectively. Altogether, our findings indicate that this is a promising research problem and a potential solution for early detection of breast cancer in younger women.

Hakim Aayesha, Awale R N

2020-Jul-01

Breast cancer, Breast thermogram, Computer-Assisted image processing, Infrared imaging, Thermal imaging, Thermograph

Radiology Radiology

Stratification of cystic renal masses into benign and potentially malignant: applying machine learning to the bosniak classification.

In Abdominal radiology (New York)

PURPOSE : To create a CT texture-based machine learning algorithm that distinguishes benign from potentially malignant cystic renal masses as defined by the Bosniak Classification version 2019.

METHODS : In this IRB-approved, HIPAA-compliant study, 4,454 adult patients underwent renal mass protocol CT or CT urography from January 2011 to June 2018. Of these, 257 cystic renal masses were included in the final study cohort. Each mass was independently classified using Bosniak version 2019 by three radiologists, resulting in 185 benign (Bosniak I or II) and 72 potentially malignant (Bosniak IIF, III or IV) masses. Six texture features (mean, standard deviation, mean of positive pixels, entropy, skewness, and kurtosis) were extracted using the commercial software TexRAD (Feedback PLC, Cambridge, UK). Random forest (RF), logistic regression (LR), and support vector machine (SVM) machine learning algorithms were implemented to classify cystic renal masses into the two groups and tested with tenfold cross-validation.
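A minimal sketch of the modeling step described above, assuming a six-column texture-feature matrix and binary Bosniak labels; the data below are synthetic stand-ins, not the study's cohort.

```python
# Illustrative sketch: classify cystic renal masses as benign vs. potentially
# malignant from six first-order texture features using RF, LR, and SVM with
# tenfold cross-validation (AUC reported per model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(257, 6))        # mean, SD, MPP, entropy, skewness, kurtosis (synthetic)
y = rng.integers(0, 2, size=257)     # 0 = Bosniak I/II, 1 = Bosniak IIF/III/IV (synthetic)

models = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```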

RESULTS : Higher mean, standard deviation, mean of positive pixels, entropy, and skewness were statistically associated with the potentially malignant group (P ≤ 0.0015 each). The sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve of the RF model were 0.67, 0.91, 0.75, 0.88, and 0.88; of the LR model 0.63, 0.93, 0.78, 0.86, and 0.90; and of the SVM model 0.56, 0.91, 0.71, 0.84, and 0.89, respectively.

CONCLUSION : Three CT texture-based machine learning algorithms demonstrated high discriminatory capability in distinguishing benign from potentially malignant cystic renal masses as defined by the Bosniak Classification version 2019. If validated, CT texture-based machine learning algorithms may help reduce interreader variability when applying the Bosniak classification.

Miskin Nityanand, Qin Lei, Matalon Shanna A, Tirumani Sree H, Alessandrino Francesco, Silverman Stuart G, Shinagare Atul B

2020-Jul-01

Bosniak classification v2019, Cystic renal mass, Machine learning, Texture analysis

Surgery Surgery

Artificial intelligence in gastric cancer: a systematic review.

In Journal of cancer research and clinical oncology

OBJECTIVE : This study aims to systematically review the application of artificial intelligence (AI) techniques in gastric cancer and to discuss the potential limitations and future directions of AI in gastric cancer.

METHODS : A systematic review was performed that follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Pubmed, EMBASE, the Web of Science, and the Cochrane Library were used to search for gastric cancer publications with an emphasis on AI that were published up to June 2020. The terms "artificial intelligence" and "gastric cancer" were used to search for the publications.

RESULTS : A total of 64 articles were included in this review. In gastric cancer, AI is mainly used for molecular bio-information analysis; endoscopic detection of Helicobacter pylori infection, chronic atrophic gastritis, early gastric cancer, and invasion depth; and pathology recognition. AI may also be used to establish predictive models for evaluating lymph node metastasis, response to drug treatments, and prognosis. In addition, AI can be used for surgical training, skill assessment, and surgery guidance.

CONCLUSIONS : In the foreseeable future, AI applications can play an important role in gastric cancer management in the era of precision medicine.

Jin Peng, Ji Xiaoyan, Kang Wenzhe, Li Yang, Liu Hao, Ma Fuhai, Ma Shuai, Hu Haitao, Li Weikun, Tian Yantao

2020-Jul-01

Artificial intelligence, Cancer management, Diagnosis, Gastric cancer, Treatment

General General

An adaptive design approach for defects distribution modeling in materials from first-principle calculations.

In Journal of molecular modeling

Designing and understanding the mechanism of non-stoichiometric materials with enhanced properties is challenging, both experimentally and computationally, due to the large number of chemical spaces and their distributions through the material. In the current work, we propose a Machine Learning approach coupled with the Efficient Global Optimization (EGO) method, an Adaptive Design (AD), to model local defects in materials from first-principle calculations. Our method uses the smallest possible sample set, relating the material defect structure to target properties to yield new insights. As an example, the AD framework allows us to study the stability and structure of modified goethite (Fe0.875Al0.125OOH) with a proper defect distribution, from first-principle calculations. The chemical space search for the modified goethite was evaluated starting from different sample sizes and configurations as well as different surrogate models (artificial neural networks, ANN, and Gaussian processes, GP), acquisition functions, and descriptors. Our results show that the same local solution of several defect arrangements in Fe0.875Al0.125OOH is found regardless of the initial sample and regression model, which indicates the efficiency of our search method. We also discuss the role of the descriptors in the accelerated global search for defects in material modeling. We conclude that the AD method applied to material defects is a successful approach for automating the search within huge chemical spaces from first-principle calculations using small samples. This method can be applied to the mechanistic elucidation of non-stoichiometric materials, solid solutions, alloys, and Schottky and Frenkel defects, which is essential for material design and discovery. Graphical abstract.
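A minimal sketch of one adaptive-design loop in the EGO style mentioned above: a Gaussian-process surrogate is fitted to a small sample, an expected-improvement acquisition picks the next configuration, and the loop repeats. The toy objective stands in for a first-principles (e.g., DFT) property of a candidate defect configuration; everything here is an assumed simplification, not the authors' framework.

```python
# Illustrative sketch of Efficient Global Optimization with a GP surrogate and
# an expected-improvement acquisition function (minimization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                         # placeholder for a DFT-computed property
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(6, 1))        # small initial sample of configurations
y = objective(X).ravel()

for _ in range(10):                       # adaptive-design loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = np.linspace(0, 5, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)     # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best configuration found:", X[np.argmin(y)], "value:", y.min())
```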

Lourenço Maicon Pierre, Dos Santos Anastácio Alexandre, Rosa Andreia L, Frauenheim Thomas, da Silva Maurício Chagas

2020-Jul-01

Adaptive Design, DFT, Efficient Global Optimization, Machine Learning, Material defect modeling

Radiology Radiology

Feasibility of new fat suppression for breast MRI using pix2pix.

In Japanese journal of radiology

PURPOSE : To generate and evaluate fat-saturated T1-weighted (FST1W) image synthesis of breast magnetic resonance imaging (MRI) using pix2pix.

MATERIALS AND METHODS : We collected pairs of noncontrast-enhanced T1-weighted and FST1W images of breast MRI for training data (2112 pairs from 15 patients), validation data (428 pairs from three patients), and test data (90 pairs from 30 patients). From the original images, 90 synthetic images were generated with 50, 100, and 200 epochs using pix2pix. Two breast radiologists evaluated the synthetic images (from 1 = excellent to 5 = very poor) for quality of fat suppression, anatomic structures, artifacts, etc. The average score was analyzed for each epoch and breast density.

RESULTS : The synthetic images were scored from 2.95 to 3.60; the best score was for artifact reduction at 100 epochs. The average overall quality scores for fat suppression were 3.63 at 50 epochs, 3.24 at 100 epochs, and 3.12 at 200 epochs. In the analysis by breast density, each score was significantly better for nondense breasts than for dense breasts; the average score was 2.88-3.18 for nondense breasts and 3.03-3.42 for dense breasts (P = 0.000-0.042).

CONCLUSION : Pix2pix had the potential to generate FST1W synthesis for breast MRI.

Mori Mio, Fujioka Tomoyuki, Katsuta Leona, Kikuchi Yuka, Oda Goshi, Nakagawa Tsuyoshi, Kitazume Yoshio, Kubota Kazunori, Tateishi Ukihide

2020-Jul-01

Breast imaging, Deep learning, Generative adversarial networks, Magnetic resonance imaging, Pix2pix

General General

Using deep neural networks and biological subwords to detect protein S-sulfenylation sites.

In Briefings in bioinformatics

Protein S-sulfenylation is a crucial kind of post-translational modification (PTM) in which the hydroxyl group covalently binds to the thiol of cysteine. Some recent studies have shown that this modification plays an important role in signaling transduction, transcriptional regulation and apoptosis. To date, the dynamics of sulfenic acids in proteins remain unclear because of their fleeting nature. Identifying S-sulfenylation sites, therefore, could be the key to deciphering its mysterious structures and functions, which are important in cell biology and diseases. However, due to the lack of effective methods, scientists in this field tend to be limited to a handful of wet-lab techniques that are time-consuming and not cost-effective. This motivated us to develop an in silico model for detecting S-sulfenylation sites from protein sequence information alone. In this study, protein sequences were treated as natural-language sentences comprising biological subwords. A deep neural network was then employed to perform classification. The performance statistics on the independent dataset, including sensitivity, specificity, accuracy, Matthews correlation coefficient and area under the curve, reached 85.71%, 69.47%, 77.09%, 0.5554 and 0.833, respectively. Our results suggest that the proposed method (fastSulf-DNN) achieves excellent performance in predicting S-sulfenylation sites compared to other well-known tools on a benchmark dataset.
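A minimal sketch of the "sequence as a sentence of subwords" idea: a residue window is tokenized into overlapping k-mers and fed to a small neural classifier. The real study learns its subwords (e.g., with a byte-pair-encoding-style procedure) rather than fixed k-mers, and its architecture differs; everything below, including the window length and labels, is an assumed stand-in.

```python
# Illustrative sketch: k-mer "subwords" of a protein window classified by a small
# Keras network for S-sulfenylation site prediction.
import numpy as np
import tensorflow as tf

AA = "ACDEFGHIKLMNPQRSTVWY"
def kmers(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

vocab = {}                                        # map each observed k-mer to an integer id
def encode(seq):
    return [vocab.setdefault(km, len(vocab) + 1) for km in kmers(seq)]

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list(AA), 21)) for _ in range(200)]   # 21-residue windows (synthetic)
labels = rng.integers(0, 2, size=200)                            # synthetic site labels
X = np.array([encode(s) for s in seqs])                          # (200, 19) token ids

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(vocab) + 2, output_dim=32),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, labels, epochs=3, batch_size=32, verbose=0)
```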

Do Duyen Thi, Le Thanh Quynh Trang, Le Nguyen Quoc Khanh

2020-Jul-02

deep learning, post-translational modification, protein function prediction, sulfenylation reaction, word embedding

General General

Hybrid machine learning architecture for automated detection and grading of retinal images for diabetic retinopathy.

In Journal of medical imaging (Bellingham, Wash.)

Purpose: Diabetic retinopathy is the leading cause of blindness, affecting over 93 million people. An automated clinical retinal screening process would be highly beneficial and provide a valuable second opinion for doctors worldwide. A computer-aided system to detect and grade the retinal images would enhance the workflow of endocrinologists. Approach: For this research, we make use of a publicly available dataset comprising 3662 images. We present a hybrid machine learning architecture to detect and grade the level of diabetic retinopathy (DR) severity. We also present and compare simple transfer learning-based approaches using established networks such as AlexNet, VGG16, ResNet, Inception-v3, NASNet, DenseNet, and GoogLeNet for DR detection. For the grading stage (mild, moderate, proliferative, or severe), we present an approach of combining various convolutional neural networks with principal component analysis for dimensionality reduction and a support vector machine classifier. We study the performance of these networks under different preprocessing conditions. Results: We compare these results with various existing state-of-the-art approaches, which include single-stage architectures. We demonstrate that this architecture is more robust to limited training data and class imbalance. We achieve an accuracy of 98.4% for DR detection and an accuracy of 96.3% for distinguishing severity of DR, thereby setting a benchmark for future research efforts using a limited set of training images. Conclusions: Results obtained using the proposed approach serve as a benchmark for future research efforts. We demonstrate as a proof of concept that an automated detection and grading system can be developed with a limited set of images and labels. This type of independent architecture for detection and grading could be used, as needed, in areas with a scarcity of trained clinicians.
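A minimal sketch of the grading stage described above: deep features from a pretrained CNN are reduced with PCA and classified with an SVM. The choice of backbone (ResNet50), the placeholder images, and the label set are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative sketch: CNN features -> PCA -> SVM for diabetic retinopathy grading.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pretrained backbone used purely as a fixed feature extractor (downloads ImageNet weights).
backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                          weights="imagenet")

rng = np.random.default_rng(0)
images = rng.random((64, 224, 224, 3)).astype("float32")   # placeholder fundus images
grades = rng.integers(0, 4, size=64)                       # mild/moderate/severe/proliferative

features = backbone.predict(
    tf.keras.applications.resnet50.preprocess_input(images * 255), verbose=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=32), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, features, grades, cv=5).mean())
```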

Narayanan Barath Narayanan, Hardie Russell C, De Silva Manawaduge Supun, Kueterman Nathaniel K

2020-May

computer-aided detection, convolutional neural networks, diabetic retinopathy, endocrinology, principal component analysis, support vector machine

Oncology Oncology

A Radiosensitivity Gene Signature and XPO1 Predict Clinical Outcomes for Glioma Patients.

In Frontiers in oncology

Objective: Glioma is the most common and fatal primary brain tumor in adults and has a high risk of recurrence. Identification of predictive biomarkers is necessary to optimize therapeutic strategies. This study investigated the predictive efficacy of a previously identified radiosensitivity signature as well as Exportin 1 (XPO1) expression levels. Methods: A total of 1,552 patients diagnosed with glioma were analyzed using the Chinese Glioma Genome Atlas and The Cancer Genome Atlas databases. Radiosensitive and radioresistant groups were identified based on a radiosensitivity signature. Patients were also stratified into XPO1-high and XPO1-low groups based on XPO1 mRNA expression levels. Overall survival rates were compared across patient groups. Differential gene expression was detected and analyzed through pathway enrichment and Gene Set Enrichment Analysis (GSEA). To predict 1-, 3-, and 5-year survival rates for glioma patients, a nomogram was established combining the radiosensitivity gene signature, XPO1 status, and clinical characteristics. An artificial intelligence clustering system and a survival prediction system for glioma were developed to predict individual risk. Results: The proposed classification based on the radiosensitivity gene signature and XPO1 expression levels provides an independent prognostic factor for glioma. The RR-XPO1-high group shows a poor prognosis and may benefit most from radiotherapy combined with anti-XPO1 treatment. The nomogram based on the radiosensitivity gene signature, XPO1 expression, and clinical characteristics outperforms the WHO classification and IDH status in predicting survival rates for glioma patients. The online clustering and prediction systems make it possible to predict risk and optimize treatment for an individual patient. The cell cycle, p53, and focal adhesion pathways are associated with more invasive glioma cases. Conclusion: Combining the radiosensitivity signature and XPO1 expression is a favorable approach to predict outcomes and determine optimal therapeutic strategies for glioma patients.

Wu Shan, Qiao Qiao, Li Guang

2020

XPO1, glioma, nomogram, prognosis, radiosensitivity

General General

Y-Net: Hybrid deep learning image reconstruction for photoacoustic tomography in vivo.

In Photoacoustics

Conventional reconstruction algorithms (e.g., delay-and-sum) used in photoacoustic imaging (PAI) provide a fast solution, but many artifacts remain, especially for the ill-posed limited-view problem. In this paper, we propose a new convolutional neural network (CNN) framework, Y-Net: a CNN architecture that reconstructs the initial PA pressure distribution by jointly exploiting both raw data and beamformed images. The network combines two encoders with one decoder path, which makes fuller use of the information in the raw data and the beamformed image. We compared our result with ablation studies, and the results on the test set show better performance than conventional reconstruction algorithms and another deep learning method (U-Net). Both in-vitro and in-vivo experiments were used to validate our method, which again performs better than other existing methods. The proposed Y-Net architecture also has high potential in medical image reconstruction for other imaging modalities beyond PAI.
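A minimal sketch of a Y-shaped network in the spirit described above: two encoders, one for raw channel data and one for a beamformed image, merge into a single decoder that outputs the reconstructed pressure map. For simplicity the raw data is assumed to be resampled onto the image grid; layer counts, sizes, and losses are arbitrary assumptions, not the paper's architecture.

```python
# Illustrative sketch: dual-encoder, single-decoder "Y" network in Keras.
import tensorflow as tf
from tensorflow.keras import layers

def encoder(x):
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    return x                                   # 4x downsampled feature map

raw_in = tf.keras.Input(shape=(128, 128, 1), name="raw_channel_data")
bf_in = tf.keras.Input(shape=(128, 128, 1), name="beamformed_image")

merged = layers.Concatenate()([encoder(raw_in), encoder(bf_in)])

x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(merged)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
out = layers.Conv2D(1, 1, activation="linear", name="reconstruction")(x)

y_net = tf.keras.Model([raw_in, bf_in], out)
y_net.compile(optimizer="adam", loss="mse")
y_net.summary()
```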

Lan Hengrong, Jiang Daohuai, Yang Changchun, Gao Feng, Gao Fei

2020-Dec

Deep learning, Image reconstruction, Photoacoustic imaging

General General

Distributed event-triggered adaptive partial diffusion strategy under dynamic network topology.

In Chaos (Woodbury, N.Y.)

In wireless sensor networks, the dynamic network topology and the limitation of communication resources may lead to degradation of the estimation performance of distributed algorithms. To solve this problem, we propose an event-triggered adaptive partial diffusion least mean-square algorithm (ET-APDLMS). On the one hand, the adaptive partial diffusion strategy adapts to the dynamic topology of the network while ensuring the estimation performance. On the other hand, the event-triggered mechanism can effectively reduce the data redundancy and save the communication resources of the network. The communication cost analysis of the ET-APDLMS algorithm is given in the performance analysis. The theoretical results prove that the algorithm is asymptotically unbiased, and it converges in the mean sense and the mean-square sense. In the simulation, we compare the mean-square deviation performance of the ET-APDLMS algorithm and other different diffusion algorithms. The simulation results are consistent with the performance analysis, which verifies the effectiveness of the proposed algorithm.
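A minimal numerical sketch of the general event-triggered diffusion idea discussed above: each node runs a local LMS update and only broadcasts its estimate to neighbours when it has changed by more than a threshold, and neighbours combine the latest received estimates. The trigger rule, topology, and parameters are simplified assumptions, not the ET-APDLMS algorithm itself.

```python
# Illustrative sketch: event-triggered diffusion LMS over a random network.
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 10, 4, 2000                    # nodes, filter length, iterations
w_true = rng.normal(size=M)              # parameter of interest
A = (rng.random((N, N)) < 0.3) | np.eye(N, dtype=bool)
A = A | A.T                              # symmetric neighbourhood graph with self-loops

w = np.zeros((N, M))                     # local estimates
last_sent = np.zeros((N, M))             # last broadcast estimate per node
mu, threshold = 0.01, 0.05
transmissions = 0

for t in range(T):
    # Adaptation: local LMS update at every node from its own measurement.
    for k in range(N):
        u = rng.normal(size=M)
        d = u @ w_true + 0.05 * rng.normal()
        w[k] += mu * (d - u @ w[k]) * u
    # Event trigger: broadcast only if the estimate moved enough since last broadcast.
    for k in range(N):
        if np.linalg.norm(w[k] - last_sent[k]) > threshold:
            last_sent[k] = w[k].copy()
            transmissions += 1
    # Combination: average the latest received estimates over the neighbourhood.
    w = np.array([last_sent[A[k]].mean(axis=0) for k in range(N)])

msd = np.mean(np.sum((w - w_true) ** 2, axis=1))
print(f"MSD = {msd:.4e}, transmissions = {transmissions} (max {N * T})")
```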

Feng Minyu, Deng Shuwei, Chen Feng, Kurths Jürgen

2020-Jun

Radiology Radiology

Chest CT evaluation of 11 persistent asymptomatic patients with SARS-CoV-2 infection.

In Japanese journal of infectious diseases

Eleven asymptomatic carriers who received nasal or throat swab tests for SARS-CoV-2 after close contact with patients who developed symptomatic 2019 coronavirus disease (COVID-19) were enrolled in this study. The chest CT images of the enrolled patients were analyzed qualitatively and quantitatively. Three (27.3%) patients had a normal first chest CT, two of whom were under 15 years of age. Lesions in 2 (18.2%) patients involved one lobe with a unifocal presence. Subpleural lesions were seen in 7 (63.6%) patients. Ground glass opacity (GGO) was the most common sign, observed in 7 (63.6%) patients. Crazy-paving pattern and consolidation were detected in 2 (18.2%) and 4 (36.4%) cases, respectively. Based on deep learning quantitative analysis, the volume of intrapulmonary lesions on first CT scans was 85.73±84.46 cm3. In patients with positive findings on CT images, the average interval between a positive real-time reverse transcriptase polymerase chain reaction assay and peak volume on CT images was 5.1±3.1 days. In conclusion, typical CT findings can be detected in over 70% of asymptomatic SARS-CoV-2 carriers. Disease mainly starts as GGO along subpleural regions and bronchi and is absorbed in about 5 days.

Yan Shuo, Chen Hui, Xie Ru-Ming, Guan Chun-Shuang, Xue Ming, Lv Zhi-Bin, Wei Lian-Gui, Bai Yan, Chen Bu-Dong

2020-Jun-30

asymptomatic, coronavirus, multidetector computed tomography, pneumonia

Radiology Radiology

Gap-filling method for suppressing grating lobes in ultrasound imaging: Experimental study with deep-learning approach.

In IEEE access : practical innovations, open solutions

Sparse arrays reduce the number of active channels, which effectively increases the inter-element spacing. Large inter-element spacing results in grating lobe artifacts that degrade ultrasound image quality and reduce the contrast-to-noise ratio. A deep learning-based custom algorithm is proposed to estimate inactive channel data in periodic sparse arrays. The algorithm uses data from multiple active channels to estimate inactive channels. The estimated inactive channel data effectively reduces the inter-element spacing for beamforming, thus suppressing the grating lobes. Estimated inactive-element channel data were combined with active-element channel data, resulting in a pseudo fully sampled array. The channel data were beamformed using a simple delay-and-sum method and compared with the sparse array and fully sampled array. The performance of the algorithm was validated using a wire target in a water tank, a multi-purpose tissue-mimicking phantom, and in-vivo carotid data. Grating lobe suppression of up to 15.25 dB was observed, with an increase in contrast-to-noise ratio (CNR) for the pseudo fully sampled array. Hypoechoic regions showed more improvement in CNR than hyperechoic regions. The root-mean-square error for the unwrapped phase between the fully sampled array and the pseudo fully sampled array was low, making the estimated data suitable for Doppler and elastography applications. The speckle pattern was also preserved; thus, the estimated data can also be used for quantitative ultrasound applications. The algorithm can improve the quality of sparse array images and has applications in small-scale ultrasound devices and 2D arrays.

Kumar Viksit, Lee Po-Yang, Kim Bae-Hyung, Fatemi Mostafa, Alizad Azra

2020

convolutional neural networks, deep learning, gap-filling, sparse array, ultrasound imaging

General General

Current status and future directions of high-throughput ADME screening in drug discovery.

In Journal of pharmaceutical analysis

During the last decade high-throughput in vitro absorption, distribution, metabolism and excretion (HT-ADME) screening has become an essential part of any drug discovery effort of synthetic molecules. The conduct of HT-ADME screening has been "industrialized" due to the extensive development of software and automation tools in cell culture, assay incubation, sample analysis and data analysis. The HT-ADME assay portfolio continues to expand in emerging areas such as drug-transporter interactions, early soft spot identification, and ADME screening of peptide drug candidates. Additionally, thanks to the very large and high-quality HT-ADME data sets available in many biopharma companies, in silico prediction of ADME properties using machine learning has also gained much momentum in recent years. In this review, we discuss the current state-of-the-art practices in HT-ADME screening including assay portfolio, assay automation, sample analysis, data processing, and prediction model building. In addition, we also offer perspectives in future development of this exciting field.

Shou Wilson Z

2020-Jun

Acoustic ejection mass spectrometry, Automation, Bioanalysis, HT-ADME, In vitro, Mass spectrometry

General General

Telemedicine, Artificial Intelligence and Humanisation of Clinical Pathways in Heart Failure Management: Back to the Future and Beyond.

In Cardiac failure review

New technologies have been recently introduced to improve the monitoring of patients with chronic syndromes such as heart failure. Devices can now be employed to gather large amounts of data and data processing through artificial intelligence techniques may improve heart failure management and reduce costs. The analysis of large datasets using an artificial intelligence technique is leading to a paradigm shift in the era of precision medicine. However, the assessment of clinical safety and the evaluation of the potential benefits is still a matter of debate. In this article, the authors aim to focus on the development of these new tools and to draw the attention to their transition in daily clinical practice.

D’Amario Domenico, Canonico Francesco, Rodolico Daniele, Borovac Josip A, Vergallo Rocco, Montone Rocco Antonio, Galli Mattia, Migliaro Stefano, Restivo Attilio, Massetti Massimo, Crea Filippo

2020-Mar

Artificial intelligence, big data, data analysis, devices, heart failure, patient monitoring, personalised medicine, telemedicine

General General

Seq-ing answers: Current data integration approaches to uncover mechanisms of transcriptional regulation.

In Computational and structural biotechnology journal

Advancements in the field of next generation sequencing lead to the generation of ever-more data, with the challenge often being how to combine and reconcile results from different OMICs studies such as genome, epigenome and transcriptome. Here we provide an overview of the standard processing pipelines for ChIP-seq and RNA-seq as well as common downstream analyses. We describe popular multi-omics data integration approaches used to identify target genes and co-factors, and we discuss how machine learning techniques may predict transcriptional regulators and gene expression.

Höllbacher Barbara, Balázs Kinga, Heinig Matthias, Uhlenhaut N Henriette

2020

ChIP-seq, Data integration, Multi-omics, NGS, RNA-seq, Transcriptional regulation

General General

Deep learning methods in protein structure prediction.

In Computational and structural biotechnology journal

Protein Structure Prediction is a central topic in Structural Bioinformatics. Since the 1960s, statistical methods, followed by increasingly complex Machine Learning and recently Deep Learning methods, have been employed to predict protein structural information at various levels of detail. In this review, we briefly introduce the problem of protein structure prediction and essential elements of Deep Learning (such as Convolutional Neural Networks, Recurrent Neural Networks and the basic feed-forward Neural Networks they are founded on), after which we discuss the evolution of predictive methods for one-dimensional and two-dimensional Protein Structure Annotations, from the simple statistical methods of the early days to the computationally intensive, highly sophisticated Deep Learning algorithms of the last decade. In the process, we review the growth of the databases these algorithms are based on and how this has impacted our ability to leverage knowledge about evolution and co-evolution to achieve improved predictions. We conclude this review by outlining the current role of Deep Learning techniques within the wider pipelines to predict protein structures and by anticipating what challenges and opportunities may arise next.

Torrisi Mirko, Pollastri Gianluca, Le Quan

2020

Deep learning, Machine learning, Protein structure prediction

General General

Deep learning predicts microbial interactions from self-organized spatiotemporal patterns.

In Computational and structural biotechnology journal

Microbial communities organize into spatial patterns that are largely governed by interspecies interactions. This phenomenon is an important metric for understanding community functional dynamics, yet the use of spatial patterns for predicting microbial interactions is currently lacking. Here we propose supervised deep learning as a new tool for network inference. An agent-based model was used to simulate the spatiotemporal evolution of two interacting organisms under diverse growth and interaction scenarios, the data of which was subsequently used to train deep neural networks. For small-size domains (100 µm × 100 µm) over which interaction coefficients are assumed to be invariant, we obtained fairly accurate predictions, as indicated by an average R2 value of 0.84. In application to relatively larger domains (450 µm × 450 µm) where interaction coefficients are varying in space, deep learning models correctly predicted spatial distributions of interaction coefficients without any additional training. Lastly, we evaluated our model against real biological data obtained using Pseudomonas fluorescens and Escherichia coli co-cultures treated with polymeric chitin or N-acetylglucosamine, the hydrolysis product of chitin. While P. fluorescens can utilize both substrates for growth, E. coli lacked the ability to degrade chitin. Consistent with our expectations, our model predicted context-dependent interactions across two substrates, i.e., degrader-cheater relationship on chitin polymers and competition on monomers. The combined use of the agent-based model and machine learning algorithm successfully demonstrates how to infer microbial interactions from spatially distributed data, presenting itself as a useful tool for the analysis of more complex microbial community interactions.

Lee Joon-Yong, Sadler Natalie C, Egbert Robert G, Anderton Christopher R, Hofmockel Kirsten S, Jansson Janet K, Song Hyun-Seob

2020

Agent-based modeling, Machine learning, Microscopy imaging technology, Network inference, Soil microbiomes

General General

Tomato Diseases and Pests Detection Based on Improved Yolo V3 Convolutional Neural Network.

In Frontiers in plant science

Tomato is affected by various diseases and pests during its growth process, and if control is not timely, these lead to yield reduction or even crop failure. Controlling diseases and pests effectively and helping vegetable farmers improve tomato yield is very important, and the essential first step is to accurately identify the diseases and insect pests. Compared with traditional pattern recognition methods, disease and pest recognition based on deep learning can take the original image directly as input. The end-to-end structure replaces the tedious steps of image preprocessing, feature extraction and feature classification in traditional methods, simplifying the recognition process, and it addresses the difficulty that manually designed feature extractors struggle to obtain feature representations closest to the natural attributes of the object. Applying deep learning object detection not only saves time and effort but also enables real-time judgment, greatly reducing the losses caused by diseases and pests, which has important research value and significance. Building on the latest deep learning object detection research and the characteristics of tomato disease and pest images, this study constructs a dataset of tomato diseases and pests under real natural conditions, optimizes the feature layers of the Yolo V3 model using an image pyramid to achieve multi-scale feature detection, improves the detection accuracy and speed of the Yolo V3 model, and detects the location and category of tomato diseases and pests accurately and quickly. Through this work, the key technology of tomato pest image recognition in the natural environment is addressed, providing a reference for intelligent recognition and engineering applications of plant disease and pest detection.
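A minimal sketch of one concrete ingredient suggested by the keywords above: using k-means on the training-set bounding boxes (with a 1 - IoU distance) to choose anchor box sizes for a Yolo-style detector. The box data and cluster count are generic assumptions, not the paper's dataset or settings.

```python
# Illustrative sketch: k-means anchor-box clustering for a Yolo-style detector.
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of (w, h) anchors, both centred at the origin."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor with the highest IoU (distance = 1 - IoU).
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

boxes = np.random.default_rng(1).uniform(10, 300, size=(500, 2))   # (w, h) in pixels (synthetic)
print(kmeans_anchors(boxes, k=9))
```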

Liu Jun, Wang Xuewei

2020

K-means, deep learning, multiscale training, object detection, small object

General General

Cognitive Models in Cybersecurity: Learning From Expert Analysts and Predicting Attacker Behavior.

In Frontiers in psychology ; h5-index 92.0

Cybersecurity stands to benefit greatly from models able to generate predictions of attacker and defender behavior. On the defender side, there is promising research suggesting that Symbolic Deep Learning (SDL) may be employed to automatically construct cognitive models of expert behavior based on small samples of expert decisions. Such models could then be employed to provide decision support for non-expert users in the form of explainable expert-based suggestions. On the attacker side, there is promising research suggesting that model-tracing with dynamic parameter fitting may be used to automatically construct models during live attack scenarios, and to predict individual attacker preferences. Predicted attacker preferences could then be exploited for mitigating risk of successful attacks. In this paper we examine how these two cognitive modeling approaches may be useful for cybersecurity professionals via two human experiments. In the first experiment participants play the role of cyber analysts performing a task based on Intrusion Detection System alert elevation. Experiment results and analysis reveal that SDL can help to reduce missed threats by 25%. In the second experiment participants play the role of attackers picking among four attack strategies. Experiment results and analysis reveal that model-tracing with dynamic parameter fitting can be used to predict (and exploit) most attackers' preferences 40-70% of the time. We conclude that studies and models of human cognition are highly valuable for advancing cybersecurity.

Veksler Vladislav D, Buchler Norbou, LaFleur Claire G, Yu Michael S, Lebiere Christian, Gonzalez Cleotilde

2020

XAI (eXplainable Artificial Intelligence), behavioral simulations, cognitive modeling, cyber-security, decision support, deep learning, human-agent teaming, reinforcement learning

General General

Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization.

In Proceedings. IEEE International Conference on Computer Vision

Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raises an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN "virtually heal" anyone by turning his medical image, with an unknown health status (diseased or healthy), into a healthy one, so that diseased regions could be revealed by subtracting those two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency loss. Based on fixed-point translation, we further derive a novel framework for disease detection and localization using only image-level annotation. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and that it surpasses predominant weakly-supervised localization methods in both disease detection and localization. Implementation is available at https://github.com/jlianglab/Fixed-Point-GAN.

Siddiquee Md Mahfuzur Rahman, Zhou Zongwei, Tajbakhsh Nima, Feng Ruibin, Gotway Michael B, Bengio Yoshua, Liang Jianming

2019-Nov

Surgery Surgery

The major effects of health-related quality of life on 5-year survival prediction among lung cancer survivors: applications of machine learning.

In Scientific reports ; h5-index 158.0

The primary goal of this study was to evaluate the major roles of health-related quality of life (HRQOL) in a 5-year lung cancer survival prediction model using machine learning techniques (MLTs). The predictive performances of the models were compared using data from 809 survivors who underwent lung cancer surgery. Each modeling technique was applied to two feature sets: feature set 1 included clinical and sociodemographic variables, and feature set 2 added HRQOL factors to the variables from feature set 1. Prediction models were trained with the decision tree (DT), logistic regression (LR), bagging, random forest (RF), and adaptive boosting (AdaBoost) methods, and the best algorithm for modeling was then determined. The models' performances were compared using fivefold cross-validation. For feature set 1, there were no significant differences in model accuracies (ranging from 0.647 to 0.713). Among the models in feature set 2, the AdaBoost and RF models outperformed the other prognostic models [area under the curve (AUC) = 0.850, 0.898, 0.981, 0.966, and 0.949 for the DT, LR, bagging, RF and AdaBoost models, respectively] in the test set. Overall, 5-year disease-free lung cancer survival prediction models with MLTs that included HRQOL as well as clinical variables improved predictive performance.
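A minimal sketch of the model comparison described above: the same feature matrix is fed to DT, LR, bagging, RF, and AdaBoost, and AUCs are compared with fivefold cross-validation. The synthetic features stand in for the clinical, sociodemographic, and HRQOL variables.

```python
# Illustrative sketch: compare five classifiers with fivefold cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

X, y = make_classification(n_samples=809, n_features=25, n_informative=10, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=2000),
    "Bagging": BaggingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f}")
```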

Sim Jin-Ah, Kim Young Ae, Kim Ju Han, Lee Jong Mog, Kim Moon Soo, Shim Young Mog, Zo Jae Ill, Yun Young Ho

2020-Jul-01

General General

Rate-induced transitions and advanced takeoff in power systems.

In Chaos (Woodbury, N.Y.)

One of the most common causes of failures in complex systems in nature or engineering is an abrupt transition from a stable to an alternate stable state. Such transitions cause failures in the dynamic power systems. We focus on this transition from a stable to an unstable manifold for a rate-dependent mechanical power input via a numerical investigation in a theoretical power system model. Our studies uncover early transitions that depend on the rate of variation of mechanical input. Furthermore, we determine the dependency of a critical rate on initial conditions of the system. Accordingly, this knowledge of the critical rate can be used in devising an effective control strategy based on artificial intelligence (AI).

Suchithra K S, Gopalakrishnan E A, Surovyatkina Elena, Kurths Jürgen

2020-Jun

Ophthalmology Ophthalmology

COVID-19 pandemic from an ophthalmology point of view.

In The Indian journal of medical research

Coronavirus disease 2019 (COVID-19) is caused by a highly contagious RNA virus termed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Ophthalmologists are at high risk due to their proximity and short working distance at the time of slit-lamp examination. Eye care professionals can be caught unaware because conjunctivitis may be one of the first signs of COVID-19 at presentation, even preceding the emergence of additional symptoms such as dry cough and anosmia. Breath and eye shields, as well as N95 masks, should be worn while examining patients with fever, breathlessness, or any history of international travel or travel from any hotspot, besides maintaining hand hygiene. All elective surgeries need to be deferred. Adults or children with sudden-onset painful or painless visual loss, sudden-onset squint, sudden-onset floaters or severe lid oedema need referral for urgent care. Patients should be told to discontinue contact lens wear if they have any symptoms of COVID-19. Cornea retrieval should be avoided in confirmed cases and suspects, and long-term preservation medium for storage of corneas should be encouraged. Retinal screening is unnecessary for coronavirus patients taking chloroquine or hydroxychloroquine, as the probability of toxic damage to the retina is low owing to the short duration of drug therapy. Tele-ophthalmology and artificial intelligence should be preferred for increasing doctor-patient interaction.

Gupta Parul Chawla, Kumar M Praveen, Ram Jagat

2020-May

Chloroquine - contact lens - coronavirus - eye donation - eye shields - hydroxychloroquine - ophthalmologist

Pathology Pathology

Anatomy-Aware Siamese Network: Exploiting Semantic Asymmetry for Accurate Pelvic Fracture Detection in X-ray Images

ArXiv Preprint

Visual cues based on enforcing bilaterally symmetric anatomies as normal findings are widely used in clinical practice to disambiguate subtle abnormalities in medical images. So far, effectively emulating this practice in CAD methods has received inadequate research attention. In this work, we exploit semantic anatomical symmetry and asymmetry analysis in a complex CAD scenario, i.e., anterior pelvic fracture detection in trauma PXRs, where both semantically pathological (fracture) and non-pathological (e.g., pose) asymmetries occur. Visually subtle yet pathologically critical fracture sites can be missed even by experienced clinicians when limited diagnosis time is permitted in emergency care. We propose a novel fracture detection framework that builds upon a Siamese network enhanced with a spatial transformer layer to holistically analyze symmetric image features. Image features are spatially formatted to encode bilaterally symmetric anatomies. A new contrastive feature learning component in our Siamese network is designed to make the deep image features more salient with respect to the underlying semantic asymmetries (caused by pelvic fracture occurrences). Our proposed method has been extensively evaluated on 2,359 PXRs from unique patients (the largest study to date), reporting an area under the ROC curve of 0.9771, the highest among state-of-the-art fracture detection methods, with improved clinical indications.

Haomin Chen, Yirui Wang, Kang Zheng, Weijian Li, Chi-Tung Cheng, Adam P. Harrison, Jing Xiao, Gregory D. Hager, Le Lu, Chien-Hung Liao, Shun Miao

2020-07-03

Oncology Oncology

Genome-wide investigation of gene-cancer associations for the prediction of novel therapeutic targets in oncology.

In Scientific reports ; h5-index 158.0

A major cause of failed drug discovery programs is suboptimal target selection, resulting in the development of drug candidates that are potent inhibitors, but ineffective at treating the disease. In the genomics era, the availability of large biomedical datasets with genome-wide readouts has the potential to transform target selection and validation. In this study we investigate how computational intelligence methods can be applied to predict novel therapeutic targets in oncology. We compared different machine learning classifiers applied to the task of drug target classification for nine different human cancer types. For each cancer type, a set of "known" target genes was obtained and equally-sized sets of "non-targets" were sampled multiple times from the human protein-coding genes. Models were trained on mutation, gene expression (TCGA), and gene essentiality (DepMap) data. In addition, we generated a numerical embedding of the interaction network of protein-coding genes using deep network representation learning and included the results in the modeling. We assessed feature importance using a random forests classifier and performed feature selection based on measuring permutation importance against a null distribution. Our best models achieved good generalization performance based on the AUROC metric. With the best model for each cancer type, we ran predictions on more than 15,000 protein-coding genes to identify potential novel targets. Our results indicate that this approach may be useful to inform early stages of the drug discovery pipeline.
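A minimal sketch of the feature-selection idea mentioned above: random-forest permutation importances for the observed labels are compared against importances obtained with shuffled ("null") labels, and features clearly above the null are kept. Data, replicate counts, and the threshold are synthetic assumptions, not the study's settings.

```python
# Illustrative sketch: permutation importance measured against a null distribution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def importances(labels_tr, labels_te):
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, labels_tr)
    return permutation_importance(rf, X_te, labels_te, n_repeats=10,
                                  random_state=0).importances_mean

observed = importances(y_tr, y_te)

# Null distribution: repeat the procedure with permuted labels.
rng = np.random.default_rng(0)
null = np.array([importances(rng.permutation(y_tr), rng.permutation(y_te))
                 for _ in range(20)])

threshold = np.percentile(null, 95)        # keep features clearly above the null
print("selected features:", np.where(observed > threshold)[0])
```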

Bazaga Adrián, Leggate Dan, Weisser Hendrik

2020-Jul-01

General General

Domain-specific cues improve robustness of deep learning-based segmentation of CT volumes.

In Scientific reports ; h5-index 158.0

Machine learning has considerably improved medical image analysis in the past years. Although data-driven approaches are intrinsically adaptive and thus, generic, they often do not perform the same way on data from different imaging modalities. In particular computed tomography (CT) data poses many challenges to medical image segmentation based on convolutional neural networks (CNNs), mostly due to the broad dynamic range of intensities and the varying number of recorded slices of CT volumes. In this paper, we address these issues with a framework that adds domain-specific data preprocessing and augmentation to state-of-the-art CNN architectures. Our major focus is to stabilise the prediction performance over samples as a mandatory requirement for use in automated and semi-automated workflows in the clinical environment. To validate the architecture-independent effects of our approach we compare a neural architecture based on dilated convolutions for parallel multi-scale processing (a modified Mixed-Scale Dense Network: MS-D Net) to traditional scaling operations (a modified U-Net). Finally, we show that an ensemble model combines the strengths across different individual methods. Our framework is simple to implement into existing deep learning pipelines for CT analysis. It performs well on a range of tasks such as liver and kidney segmentation, without significant differences in prediction performance on strongly differing volume sizes and varying slice thickness. Thus our framework is an essential step towards performing robust segmentation of unknown real-world samples.
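A minimal sketch of the kind of domain-specific CT preprocessing discussed above: clip the broad Hounsfield-unit range to an organ-appropriate window, rescale to [0, 1], and resample volumes to a fixed number of slices. Window values, shapes, and the resampling choice are assumptions, not the paper's exact settings.

```python
# Illustrative sketch: Hounsfield-unit windowing, normalisation, and slice resampling.
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu, hu_min=-100.0, hu_max=400.0, target_slices=64):
    """volume_hu: (slices, H, W) array in Hounsfield units."""
    vol = np.clip(volume_hu, hu_min, hu_max)            # soft-tissue window
    vol = (vol - hu_min) / (hu_max - hu_min)             # normalise to [0, 1]
    factor = target_slices / vol.shape[0]                 # unify the slice count
    return zoom(vol, (factor, 1.0, 1.0), order=1)

example = np.random.default_rng(0).integers(-1000, 1500, size=(47, 128, 128)).astype(float)
print(preprocess_ct(example).shape)                       # (64, 128, 128)
```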

Kloenne Marie, Niehaus Sebastian, Lampe Leonie, Merola Alberto, Reinelt Janis, Roeder Ingo, Scherf Nico

2020-Jul-01

General General

Unsupervised Quantum Gate Control for Gate-Model Quantum Computers.

In Scientific reports ; h5-index 158.0

In near-term quantum computers, the operations are realized by unitary quantum gates. The precise and stable working mechanism of quantum gates is essential for the implementation of any complex quantum computations. Here, we define a method for the unsupervised control of quantum gates in near-term quantum computers. We model a scenario in which a tensor product structure of non-stable quantum gates is not controllable in terms of control theory. We prove that the non-stable quantum gate becomes controllable via a machine learning method if the quantum gates formulate an entangled gate structure.

Gyongyosi Laszlo

2020-Jul-01

General General

Reconstruction of Compressed-sensing MR Imaging Using Deep Residual Learning in the Image Domain.

In Magnetic resonance in medical sciences : MRMS : an official journal of Japan Society of Magnetic Resonance in Medicine

PURPOSE : A deep residual learning convolutional neural network (DRL-CNN) was applied to improve image quality and speed up the reconstruction of compressed-sensing magnetic resonance imaging. The reconstruction performance of the proposed method was compared with iterative reconstruction methods.

METHODS : The proposed method adopted a DRL-CNN to learn the residual component between the input and output images (i.e., aliasing artifacts) for image reconstruction. The CNN-based reconstruction was compared with iterative reconstruction methods. To clarify the reconstruction performance of the proposed method, reconstruction experiments using 1D-, 2D-random under-sampling and sampling patterns that mix random and non-random under-sampling were executed. The peak-signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) were examined for various numbers of training images, sampling rates, and numbers of training epochs.
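A minimal sketch of image-domain residual learning as described above: a small CNN is trained to predict the aliasing-artifact component of a zero-filled reconstruction, which is then subtracted from the input. The depth, sizes, and synthetic training pairs are simplified assumptions, not the paper's DRL-CNN.

```python
# Illustrative sketch: residual-learning CNN that removes aliasing artifacts.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(128, 128, 1))            # aliased (zero-filled) image
x = inp
for _ in range(5):
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
residual = layers.Conv2D(1, 3, padding="same")(x)     # predicted aliasing artifacts
clean = layers.Subtract()([inp, residual])            # artifact-free estimate
model = tf.keras.Model(inp, clean)
model.compile(optimizer="adam", loss="mae")

# Training pairs: (aliased image, fully sampled reference image) - synthetic here.
rng = np.random.default_rng(0)
reference = rng.random((32, 128, 128, 1)).astype("float32")
aliased = reference + 0.1 * rng.standard_normal((32, 128, 128, 1)).astype("float32")
model.fit(aliased, reference, epochs=1, batch_size=8, verbose=0)
```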

RESULTS : The experimental results demonstrated that reconstruction time is drastically reduced to 0.022 s per image compared with that for conventional iterative reconstruction. The PSNR and SSIM were improved as the coherence of the sampling pattern increases. These results indicate that a deep CNN can learn coherent artifacts and is effective especially for cases where the randomness of k-space sampling is rather low. Simulation studies showed that variable density non-random under-sampling was a promising sampling pattern in 1D-random under-sampling of 2D image acquisition.

CONCLUSION : A DRL-CNN can recognize and predict aliasing artifacts with low incoherence. It was demonstrated that reconstruction time is significantly reduced and the improvement in the PSNR and SSIM is higher in 1D-random under-sampling than in 2D. The requirement of incoherence for aliasing artifacts is different from that for iterative reconstruction.

Ouchi Shohei, Ito Satoshi

2020-Jul-02

compressed sensing, deep learning, reconstruction

General General

Circulating transcripts in maternal blood reflect a molecular signature of early-onset preeclampsia.

In Science translational medicine ; h5-index 138.0

Circulating RNA (C-RNA) is continually released into the bloodstream from tissues throughout the body, offering an opportunity to noninvasively monitor all aspects of pregnancy health from conception to birth. We asked whether C-RNA analysis could robustly detect aberrations in patients diagnosed with preeclampsia (PE), a prevalent and potentially fatal pregnancy complication. As an initial examination, we sequenced the circulating transcriptome from 40 pregnancies at the time of severe, early-onset PE diagnosis and 73 gestational age-matched controls. Differential expression analysis identified 30 transcripts with gene ontology annotations and tissue expression patterns consistent with the placental dysfunction, impaired fetal development, and maternal immune and cardiovascular system dysregulation characteristic of PE. Furthermore, machine learning identified combinations of 49 C-RNA transcripts that classified an independent cohort of patients (early-onset PE, n = 12; control, n = 12) with 85 to 89% accuracy. C-RNA may thus hold promise for improving the diagnosis and identification of at-risk pregnancies.

Munchel Sarah, Rohrback Suzanne, Randise-Hinchliff Carlo, Kinnings Sarah, Deshmukh Shweta, Alla Nagesh, Tan Catherine, Kia Amirali, Greene Grainger, Leety Linda, Rhoa Matthew, Yeats Scott, Saul Matthew, Chou Julia, Bianco Kimberley, O’Shea Kevin, Bujold Emmanuel, Norwitz Errol, Wapner Ronald, Saade George, Kaper Fiona

2020-Jul-01

Radiology Radiology

Staying abreast of imaging - Current status of breast cancer detection in high density breast.

In Radiography (London, England : 1995)

OBJECTIVES : The aim of this paper is to illustrate the current status of imaging in high breast density as we enter a new decade of advancing medicine and technology to diagnose breast lesions.

KEY FINDINGS : Early detection of breast cancer has become the chief focus of research, from governments to individuals. With varying breast densities across the globe, the explosion of breast density information related to imaging, phenotypes, diet, computer-aided diagnosis and artificial intelligence has driven a dramatic shift in screening recommendations in mammography, physical examination, screening of younger women and women with comorbid conditions, screening of women at high risk, and new screening technologies. Breast density is well known to be a risk factor in patients with suspected or known breast neoplasia, and extensive research into qualitative and quantitative analysis of different breast tissue characteristics has rapidly become the chief focus of breast imaging. We summarize the available guidelines and modalities of breast imaging, as well as new emerging techniques under study that could potentially augment or even replace those currently available.

CONCLUSION : Despite all the advances in technology and all the research directed towards breast cancer, detection of breast cancer in dense breasts remains a dilemma.

IMPLICATIONS FOR PRACTICE : It is of utmost importance to develop highly sensitive screening modalities for early detection of breast cancer.

Ghieh D, Saade C, Najem E, El Zeghondi R, Rawashdeh M A, Berjawi G

2020-Jun-28

Breast cancer, Breast cancer screening, Breast density, Mammography

General General

Comparative RNA-Seq transcriptome analyses reveal dynamic time-dependent effects of 56Fe, 16O, and 28Si irradiation on the induction of murine hepatocellular carcinoma.

In BMC genomics ; h5-index 78.0

BACKGROUND : One of the health risks posed to astronauts during deep space flights is exposure to high charge, high-energy (HZE) ions (Z > 13), which can lead to the induction of hepatocellular carcinoma (HCC). However, little is known on the molecular mechanisms of HZE irradiation-induced HCC.

RESULTS : We performed comparative RNA-Seq transcriptomic analyses to assess the carcinogenic effects of 600 MeV/n 56Fe (0.2 Gy), 1 GeV/n 16O (0.2 Gy), and 350 MeV/n 28Si (0.2 Gy) ions in a mouse model for irradiation-induced HCC. C3H/HeNCrl mice were subjected to total body irradiation to simulate space environment HZE-irradiation, and liver tissues were extracted at five different time points post-irradiation to investigate the time-dependent carcinogenic response at the transcriptomic level. Our data demonstrated a clear difference in the biological effects of these HZE ions, particularly immunological, such as Acute Phase Response Signaling, B Cell Receptor Signaling, IL-8 Signaling, and ROS Production in Macrophages. Also seen in this study were novel unannotated transcripts that were significantly affected by HZE. To investigate the biological functions of these novel transcripts, we used a machine learning technique known as self-organizing maps (SOMs) to characterize the transcriptome expression profiles of 60 samples (45 HZE-irradiated, 15 non-irradiated control) from liver tissues. A handful of localized modules in the maps emerged as groups of co-regulated and co-expressed transcripts. The functional context of these modules was discovered using overrepresentation analysis. We found that these spots typically contained enriched populations of transcripts related to specific immunological molecular processes (e.g., Acute Phase Response Signaling, B Cell Receptor Signaling, IL-3 Signaling), and RNA Transcription/Expression.
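A minimal sketch of a self-organizing map over expression profiles in the spirit of the SOM analysis described above: each transcript's profile across samples is mapped to a unit, and transcripts sharing a unit form a co-expression module. The MiniSom package, map size, and the synthetic expression matrix are assumptions used only to show the mechanics.

```python
# Illustrative sketch: SOM-based grouping of co-expressed transcripts.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
expression = rng.normal(size=(2000, 60))     # 2000 transcripts x 60 samples (synthetic)

som = MiniSom(x=20, y=20, input_len=60, sigma=2.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(expression)
som.train_random(expression, num_iteration=10000)

# Transcripts mapped to the same map unit form a candidate co-expression module.
modules = {}
for i, profile in enumerate(expression):
    modules.setdefault(som.winner(profile), []).append(i)
print("number of occupied map units:", len(modules))
```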

CONCLUSIONS : A large number of transcripts were found differentially expressed post-HZE irradiation. These results provide valuable information for uncovering the differences in molecular mechanisms underlying HZE specific induced HCC carcinogenesis. Additionally, a handful of novel differentially expressed unannotated transcripts were discovered for each HZE ion. Taken together, these findings may provide a better understanding of biological mechanisms underlying risks for HCC after HZE irradiation and may also have important implications for the discovery of potential countermeasures against and identification of biomarkers for HZE-induced HCC.
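For readers unfamiliar with the self-organizing map step described above, the following minimal Python sketch (using the MiniSom library) shows how transcript expression profiles can be mapped onto a small 2D grid and grouped into candidate co-expression modules. The matrix shape, grid size, and training settings below are hypothetical placeholders, not values from the study.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Hypothetical matrix: rows = transcripts, columns = 60 liver samples
rng = np.random.default_rng(0)
expression = rng.normal(size=(5000, 60))

# Normalize each transcript profile so the map groups by shape, not magnitude
expression = (expression - expression.mean(axis=1, keepdims=True)) / \
             (expression.std(axis=1, keepdims=True) + 1e-8)

# 10x10 map; each node becomes a candidate co-expression module
som = MiniSom(x=10, y=10, input_len=expression.shape[1],
              sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(expression)
som.train_random(expression, num_iteration=10000)

# Assign every transcript to its best-matching node (module)
modules = {}
for idx, profile in enumerate(expression):
    modules.setdefault(som.winner(profile), []).append(idx)

print(f"{len(modules)} occupied nodes; largest module has "
      f"{max(len(v) for v in modules.values())} transcripts")
```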

Nia Anna M, Khanipov Kamil, Barnette Brooke L, Ullrich Robert L, Golovko George, Emmett Mark R

2020-Jul-01

Carcinogenesis, Novel transcripts, RNA-Sequencing, Self-organizing maps, Tumor microenvironment

General General

Hypernatremia at admission predicts poor survival in patients with terminal cancer: a retrospective cohort study.

In BMC palliative care ; h5-index 29.0

BACKGROUND : Although palliative care providers, patients, and their families rely heavily on accurate prognostication, the prognostic value of electrolyte imbalance has received little attention.

METHODS : As a retrospective review, we screened inpatients with terminal cancer admitted between January 2017 and May 2019 to a single hospice-palliative care unit. Clinical characteristics and laboratory results were obtained from medical records for multivariable Cox regression analysis of independent prognostic factors.

RESULTS : Of the 487 patients who qualified, 15 (3%) were hypernatremic upon admission. The median survival time was 26 days. Parameters associated with shortened survival included male sex, advanced age (> 70 years), lung cancer, poor performance status, elevated inflammatory markers, azotemia, impaired liver function, and hypernatremia. In a multivariable Cox proportional hazards model, male sex (hazard ratio [HR] = 1.53, 95% confidence interval [CI]: 1.15-2.04), poor performance status (HR = 1.45, 95% CI: 1.09-1.94), leukocytosis (HR = 1.98, 95% CI: 1.47-2.66), hypoalbuminemia (HR = 2.06, 95% CI: 1.49-2.73), and hypernatremia (HR = 1.55, 95% CI: 1.18-2.03) emerged as significant predictors of poor prognosis.

CONCLUSION : Hypernatremia may be a useful gauge of prognosis in patients with terminal cancer. Further large-scale prospective studies are needed to corroborate this finding.
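The multivariable Cox proportional hazards analysis reported above can be reproduced in outline with the lifelines package. The sketch below uses the library's bundled example dataset in place of the patient table, so the columns and hazard ratios are purely illustrative.

```python
from lifelines import CoxPHFitter            # pip install lifelines
from lifelines.datasets import load_rossi    # bundled example survival dataset

# In the study, the data frame would hold survival time, a death indicator,
# and covariates such as sex, performance status, and laboratory values.
df = load_rossi()

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```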

Seo Min-Seok, Hwang In Cheol, Jung Jaehun, Lee Hwanhee, Choi Jae Hee, Shim Jae-Yong

2020-Jul-01

Electrolyte imbalance, Hypernatremia, Prognosis, Terminal cancer

General General

The Classification of Scientific Literature for Its Topical Tracking on a Small Human-Prepared Dataset.

In Studies in health technology and informatics ; h5-index 23.0

The number of scientific publications is constantly growing, making their processing extremely time-consuming. We hypothesized that user-defined literature tracking may be augmented by machine learning on article summaries. A specific dataset of 671 article abstracts was obtained, and nineteen binary classification options using machine learning (ML) techniques on various text representations were proposed in a pilot study. Three hundred tests with resamples were performed for each classification option. The best classification option demonstrated AUC = 0.78, proving the concept in general and indicating potential for further improvement.
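As a rough illustration of binary abstract classification evaluated by AUC, the sketch below uses a TF-IDF plus logistic regression pipeline from scikit-learn. This is not the paper's specific set of classifiers, and the toy abstracts and labels are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical placeholders: article abstracts and relevance labels (1 = track)
abstracts = ["deep learning for glioma segmentation on MRI",
             "survey of hospital billing workflows"] * 50
labels = np.array([1, 0] * 50)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))

# ROC AUC estimated by cross-validation, loosely mirroring resampled testing
scores = cross_val_score(clf, abstracts, labels, cv=5, scoring="roc_auc")
print(f"mean AUC = {scores.mean():.2f}")
```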

Danilov Gleb, Ishankulov Timur, Orlov Yuriy, Shifrin Mikhail, Kotik Konstantin, Potapov Alexander

2020-Jun-26

Text classification, artificial intelligence, machine learning, natural language processing, neurosurgery, topic modeling

General General

A Comparative Study of the Arden Syntax and GDL Clinical Knowledge Representation Languages.

In Studies in health technology and informatics ; h5-index 23.0

The expressiveness of a medical knowledge representation language has a significant impact on the effectiveness of a knowledge-based clinical decision support system. We assess the expressiveness of two such languages, Arden Syntax and the Guideline Definition Language (GDL). Using data extracted from both languages' specifications, we quantify expressiveness by means of language syntax and the number of supported operators. Preliminary results show that Arden Syntax is a more dynamic standard, offering better readability and a larger, more diverse set of operators than GDL. In contrast, GDL is a more rigid language that utilizes an underlying data model specification in the openEHR framework.

de Bruin Jeroen S, Chen Rong, Rappelsberger Andrea, Adlassnig Klaus-Peter

2020-Jun-26

Arden Syntax, Clinical, Decision Support Systems, Guideline Definition Language

General General

PrositNG - A Machine Learning Supported Disease Model Generation Software.

In Studies in health technology and informatics ; h5-index 23.0

Decision models (DM), especially Markov models, play an essential role in the economic evaluation of new medical interventions. The process of DM generation requires expert knowledge of the medical domain and is a time-consuming task. Therefore, the authors propose new model generation software, PrositNG, that can be connected to database systems of real-world routine care data. The structure of the model is derived from the entries in a database system with the help of machine learning algorithms. The software was implemented in the Java programming language. Two data sources were successfully utilized to demonstrate the value of PrositNG. However, a good understanding of the local documentation routine and software is paramount when using real-world data for model generation.

Pobiruchin Monika, Zowalla Richard, Kurscheidt Maximilian, Schramm Wendelin

2020-Jun-26

electronic health records, machine learning, markov processes, medical economics, real-world data

General General

Epileptic seizure detection using deep learning techniques: A Review

ArXiv Preprint

A variety of screening approaches have been proposed to diagnose epileptic seizures, using Electroencephalography (EEG) and Magnetic Resonance Imaging (MRI) modalities. Artificial intelligence encompasses a variety of areas, and one of its branches is deep learning. Before the rise of deep learning, conventional machine learning algorithms relied on handcrafted feature extraction, which limited their performance to the skill of those crafting the features. In deep learning, by contrast, feature extraction and classification are entirely automated. The advent of these techniques in many areas of medicine, such as the diagnosis of epileptic seizures, has enabled significant advances. In this study, a comprehensive overview of the types of deep learning methods exploited to diagnose epileptic seizures from various modalities is provided. Additionally, hardware implementations and cloud-based works are discussed, as they are well suited to applied medicine.

Afshin Shoeibi, Navid Ghassemi, Marjane Khodatars, Mahboobeh Jafari, Sadiq Hussain, Roohallah Alizadehsani, Parisa Moridian, Abbas Khosravi, Hossein Hosseini-Nejad, Modjtaba Rouhani, Assef Zare, Saeid Nahavandi, Dipti Srinivasan, Amir F. Atiya, U. Rajendra Acharya

2020-07-02

Public Health Public Health

Predicting Vibrio cholerae infection and disease severity using metagenomics in a prospective cohort study.

In The Journal of infectious diseases ; h5-index 82.0

BACKGROUND : Susceptibility to Vibrio cholerae infection is impacted by blood group, age, and pre-existing immunity, but these factors only partially explain who becomes infected. A recent study used 16S rRNA amplicon sequencing to quantify the composition of the gut microbiome and identify predictive biomarkers of infection with limited taxonomic resolution.

METHODS : To achieve increased resolution of gut microbial factors associated with V. cholerae susceptibility and identify predictors of symptomatic disease, we applied deep shotgun metagenomic sequencing to a cohort of household contacts of patients with cholera.

RESULTS : Using machine learning, we resolved species, strains, gene families, and cellular pathways in the microbiome at the time of exposure to V. cholerae to identify markers that predict infection and symptoms. Use of metagenomic features improved the precision and accuracy of prediction relative to 16S sequencing. We also predicted disease severity, although with greater uncertainty than our infection prediction. Species within the genera Prevotella and Bifidobacterium predicted protection from infection, and genes involved in iron metabolism also correlated with protection.

CONCLUSION : Our results highlight the power of metagenomics to predict disease outcomes and suggest specific species and genes for experimental testing to investigate mechanisms of microbiome-related protection from cholera.

Levade Inès, Saber Morteza M, Midani Firas, Chowdhury Fahima, Khan Ashraful I, Begum Yasmin A, Ryan Edward T, David Lawrence A, Calderwood Stephen B, Harris Jason B, LaRocque Regina C, Qadri Firdausi, Shapiro B Jesse, Weil Ana A

2020-Jul-01

Vibrio cholerae, cholera, machine learning, metagenomics, microbiome

General General

Machine learning for predicting greenhouse gas emissions from agricultural soils.

In The Science of the total environment

Machine learning (ML) models are increasingly used to study complex environmental phenomena with high variability in time and space. In this study, the potential of exploiting three categories of ML regression models, including classical regression, shallow learning and deep learning, for predicting soil greenhouse gas (GHG) emissions from an agricultural field was explored. Carbon dioxide (CO2) and nitrous oxide (N2O) fluxes, as well as various environmental, agronomic and soil data, were measured at the site over a five-year period in Quebec, Canada. The rigorous analysis, which included statistical comparison and cross-validation for the prediction of CO2 and N2O fluxes, confirmed that the LSTM model performed the best among the considered ML models, with the highest R coefficient and the lowest root mean squared error (RMSE) values (R = 0.87 and RMSE = 30.3 mg·m^-2·hr^-1 for CO2 flux prediction, and R = 0.86 and RMSE = 0.19 mg·m^-2·hr^-1 for N2O flux prediction). The predictive performances of LSTM were more accurate than those simulated in a previous study conducted with the biophysically based Root Zone Water Quality Model (RZWQM2). The classical regression models (namely RF, SVM and LASSO) satisfactorily simulated cyclical and seasonal variations of CO2 fluxes (R = 0.75, 0.71 and 0.68, respectively); however, they failed to reasonably predict the peak values of N2O fluxes (R < 0.25). Shallow ML was found to be less effective in predicting GHG fluxes than the other considered ML models (R < 0.7 for CO2 flux and R < 0.3 for N2O fluxes) and was the most sensitive to hyperparameter tuning. Based on this comprehensive comparison study, it was concluded that the LSTM model can be employed successfully in simulating GHG emissions from agricultural soils, providing a new perspective on the application of machine learning modeling for predicting GHG emissions to the environment.
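A minimal Keras sketch of an LSTM regressor for flux time series of the kind compared above is shown below. The sequence length, number of driver variables, and random training arrays are assumptions rather than the study's configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical shapes: 1000 sequences of 30 daily steps with 8 drivers
# (temperature, moisture, etc.); targets are CO2 or N2O flux values.
X = np.random.rand(1000, 30, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = models.Sequential([
    layers.Input(shape=(30, 8)),
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),  # regression output: predicted flux
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```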

Hamrani Abderrachid, Akbarzadeh Abdolhamid, Madramootoo Chandra A

2020-Jun-19

Agricultural soil, Classical regression, Deep learning, Greenhouse gas emissions, Machine learning, Shallow learning

General General

Blurred lines: integrating emerging technologies to advance plant biosecurity.

In Current opinion in plant biology

Plant diseases threaten global food security and biodiversity. Rapid dispersal of pathogens particularly via human means has accelerated in recent years. Timely detection of plant pathogens is essential to limit their spread. At the same time, international regulations must keep abreast of advances in plant disease diagnostics. In this review we describe recent progress in developing modern plant disease diagnostics based on detection of pathogen components, high-throughput image analysis, remote sensing, and machine learning. We discuss how different diagnostic approaches can be integrated in detection frameworks that can work at different scales and account for sampling biases. Lastly, we briefly discuss the requirements to apply these advances under regulatory settings to improve biosecurity measures globally.

Hu Yiheng, Wilson Salome, Schwessinger Benjamin, Rathjen John P

2020-Jun-28

Oncology Oncology

Development and Clinical Validation of a 90-Gene Expression Assay for Identifying Tumor Tissue Origin.

In The Journal of molecular diagnostics : JMD

The accurate identification of tissue origin in patients with metastatic cancer is critical for effective treatment selection but remains a challenge. In this study, we aim to develop a gene expression assay for tumor molecular classification and integrate it with clinicopathological evaluations to identify the tissue origin for cancer of uncertain primary (CUP). A 90-gene expression signature covering 21 tumor types was identified and validated with an overall accuracy of 89.8% (95% CI, 0.87-0.92) in 609 tumor samples. More specifically, the classification accuracy reached 90.4% (95% CI: 0.87-0.93) for 323 primary tumors and 89.2% (95% CI: 0.85-0.92) for 286 metastatic tumors, with no statistically significant difference (P = 0.71). Furthermore, in a real-life cohort of 141 CUP patients, predictions by the 90-gene expression signature were consistent or compatible with the clinicopathologic features in 71.6% of patients (101/141). Our findings suggest that this novel gene expression assay can efficiently predict the primary origin for a broad spectrum of tumor types and support the diagnostic utility of molecular classification in difficult-to-diagnose metastatic cancer. Additional studies are ongoing to further evaluate the clinical utility of this novel gene expression assay in predicting the primary site and directing therapy for CUP patients.

Ye Qing, Wang Qifeng, Qi Peng, Chen Jinying, Sun Yifeng, Jin Shichai, Ren Wanli, Chen Chengshu, Liu Mei, Xu Midie, Ji Gang, Yang Jun, Nie Ling, Xu Qinghua, Huang Deshuang, Du Xiang, Zhou Xiaoyan

2020-Jun-28

General General

A Chromatin Accessibility Atlas of the Developing Human Telencephalon.

In Cell ; h5-index 250.0

To discover regulatory elements driving the specificity of gene expression in different cell types and regions of the developing human brain, we generated an atlas of open chromatin from nine dissected regions of the mid-gestation human telencephalon, as well as microdissected upper and deep layers of the prefrontal cortex. We identified a subset of open chromatin regions (OCRs), termed predicted regulatory elements (pREs), that are likely to function as developmental brain enhancers. pREs showed temporal, regional, and laminar differences in chromatin accessibility and were correlated with gene expression differences across regions and gestational ages. We identified two functional de novo variants in a pRE for autism risk gene SLC6A1, and using CRISPRa, demonstrated that this pRE regulates SLC6A1. Additionally, mouse transgenic experiments validated enhancer activity for pREs proximal to FEZF2 and BCL11A. Thus, this atlas serves as a resource for decoding neurodevelopmental gene regulation in health and disease.

Markenscoff-Papadimitriou Eirene, Whalen Sean, Przytycki Pawel, Thomas Reuben, Binyameen Fadya, Nowakowski Tomasz J, Kriegstein Arnold R, Sanders Stephan J, State Matthew W, Pollard Katherine S, Rubenstein John L

2020-Jun-24

ATAC-seq, autism, chromatin, enhancers, gene regulation, machine learning, neurodevelopment, neuropsychiatric disorders

Radiology Radiology

Accelerating quantitative MR imaging with the incorporation of B1 compensation using deep learning.

In Magnetic resonance imaging

Quantitative magnetic resonance imaging (MRI) attracts attention due to its support to quantitative image analysis and data driven medicine. However, the application of quantitative MRI is severely limited by the long data acquisition time required by repetitive image acquisition and measurement of field map. Inspired by recent development of artificial intelligence, we propose a deep learning strategy to accelerate the acquisition of quantitative MRI, where every quantitative T1 map is derived from two highly undersampled variable-contrast images with radiofrequency field inhomogeneity automatically compensated. In a multi-step framework, variable-contrast images are first jointly reconstructed from incoherently undersampled images using convolutional neural networks; then T1 map and B1 map are predicted from reconstructed images employing deep learning. Thus, the acceleration includes undersampling in every input image, a reduction in the number of variable contrast images, as well as a waiver of B1 map measurement. The strategy is validated in T1 mapping of cartilage. Acquired with a consistent imaging protocol, 1224 image sets from 51 subjects are used for the training of the prediction models, and 288 image sets from 12 subjects are used for testing. High degree of acceleration is achieved with image fidelity well maintained. The proposed method can be broadly applied to quantify other tissue properties (e.g. T2, T) as well.

Wu Yan, Ma Yajun, Du Jiang, Xing Lei

2020-Jun-28

General General

High-resolution bathymetry by deep-learning-based image superresolution.

In PloS one ; h5-index 176.0

Seafloor mapping to create bathymetric charts of the oceans is important for various applications. However, making high-resolution bathymetric charts requires measuring underwater depths at many points in sea areas, and thus, is time-consuming and costly. In this work, treating gridded bathymetric data as digital images, we employ the image-processing technique known as superresolution to enhance the resolution of bathymetric charts by estimating high-resolution images from low-resolution ones. Specifically, we use the recently-developed deep-learning methodology to automatically learn the geometric features of ocean floors and recover their details. Through an experiment using bathymetric data around Japan, we confirmed that the proposed method outperforms naive interpolation both qualitatively and quantitatively, observing an eight-dB average improvement in peak signal-to-noise ratio. Deep-learning-based bathymetric image superresolution can significantly reduce the number of sea areas or points that must be measured, thereby accelerating the detailed mapping of the seafloor and the creation of high-resolution bathymetric charts around the globe.
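The general idea of learned superresolution on gridded data can be sketched with an SRCNN-style network in Keras; the paper's actual architecture is not specified here, so the layer sizes and toy training pairs below are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_srcnn(channels: int = 1) -> models.Model:
    """SRCNN-style network: input is a bicubically upsampled low-res grid."""
    inp = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    out = layers.Conv2D(channels, 5, padding="same")(x)
    return models.Model(inp, out)

model = build_srcnn()
model.compile(optimizer="adam", loss="mse")

# Hypothetical pairs: upsampled low-res bathymetry vs. the true high-res grid
low_res_up = np.random.rand(16, 64, 64, 1).astype("float32")
high_res = np.random.rand(16, 64, 64, 1).astype("float32")
model.fit(low_res_up, high_res, epochs=2, batch_size=4, verbose=0)
```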

Sonogashira Motoharu, Shonai Michihiro, Iiyama Masaaki

2020

Public Health Public Health

Using Machine Learning Algorithms to Predict Antimicrobial Resistance and Assist Empirical Treatment.

In Studies in health technology and informatics ; h5-index 23.0

Multi-drug-resistant (MDR) infections and their devastating consequences constitute a global problem and a constant threat to public health with immense costs for their treatment. Early identification of the pathogen and its antibiotic resistance profile is crucial for a favorable outcome. Given the fact that more than 24 hours are usually required to perform common antibiotic resistance tests after the sample collection, the implementation of machine learning methods could be of significant help in selecting empirical antibiotic treatment based only on the sample type, Gram stain, and patient's basic characteristics. In this paper, five machine learning (ML) algorithms have been tested to determine antibiotic susceptibility predictions using simple demographic data of the patients, as well as culture results and antibiotic susceptibility tests. Implementing ML algorithms to antimicrobial susceptibility data may offer insightful antibiotic susceptibility predictions to assist clinicians in decision-making regarding empirical treatment.
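A minimal scikit-learn sketch of this general setup is given below: categorical sample and patient features are one-hot encoded and fed to a classifier that predicts resistance to a given antibiotic. The column names, toy data, and choice of a random forest are assumptions, not the paper's exact models.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical rows: sample type, Gram stain, demographics, and a binary label
data = pd.DataFrame({
    "sample_type": ["urine", "blood", "sputum", "urine", "blood", "wound"] * 20,
    "gram_stain":  ["neg", "pos", "neg", "neg", "pos", "neg"] * 20,
    "sex":         ["M", "F", "M", "F", "F", "M"] * 20,
    "age":         [71, 55, 80, 63, 47, 69] * 20,
    "resistant":   [1, 0, 1, 0, 0, 1] * 20,
})
X, y = data.drop(columns="resistant"), data["resistant"]

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["sample_type", "gram_stain", "sex"])],
    remainder="passthrough")  # numeric age passes through unchanged

clf = Pipeline([("pre", pre), ("rf", RandomForestClassifier(random_state=0))])
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```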

Feretzakis Georgios, Loupelis Evangelos, Sakagianni Aikaterini, Kalles Dimitris, Lada Malvina, Christopoulos Constantinos, Dimitrellos Evangelos, Martsoukou Maria, Skarmoutsou Nikoleta, Petropoulou Stavroula, Alexiou Konstantinos, Velentza Aikaterini, Michelidou Sophia, Valakis Konstantinos

2020-Jun-26

AMR, Antibiotic resistance, Machine Learning

Pathology Pathology

MSA-MIL: A deep residual multiple instance learning model based on multi-scale annotation for classification and visualization of glomerular spikes

ArXiv Preprint

Membranous nephropathy (MN) is a frequent type of adult nephrotic syndrome, with a high clinical incidence and various possible complications. In biopsy microscope slides of membranous nephropathy, spike-like projections on the glomerular basement membrane are a prominent feature of MN. However, because a whole biopsy slide contains a large number of glomeruli and each glomerulus includes many spike lesions, the pathological features of the spikes are not obvious. It is therefore time-consuming for doctors to diagnose glomeruli one by one, and difficult for less experienced pathologists. In this paper, we establish a visualized classification model based on multi-scale annotation multiple instance learning (MSA-MIL) to achieve glomerular classification and spike visualization. The MSA-MIL model mainly involves three parts. First, U-Net is used to extract the region of the glomeruli to ensure that the features learned by the subsequent algorithm are focused inside the glomerulus itself. Second, we use MIL to train an instance-level classifier combined with the MSA method, enhancing the learning ability of the network by adding a location-level labeled reinforced dataset and thereby obtaining an instance-level feature representation with rich semantics. Finally, the predicted scores of each tile in the image are summarized to obtain the glomerular classification and a visualization of the spike locations via a sliding-window method. The experimental results confirm that the proposed MSA-MIL model can effectively and accurately classify normal and spiked glomeruli and visualize the position of spikes within the glomerulus. The proposed model can therefore provide a good foundation for assisting clinicians in diagnosing glomerular membranous nephropathy.

Yilin Chen, Ming Li, Yongfei Wu, Xueyu Liu, Fang Hao, Daoxiang Zhou, Xiaoshuang Zhou, Chen Wang

2020-07-02

Oncology Oncology

Feature sensitivity criterion-based sampling strategy from the Optimization based on Phylogram Analysis (Fs-OPA) and Cox regression applied to mental disorder datasets.

In PloS one ; h5-index 176.0

Digital datasets in several health care facilities, such as hospitals and prehospital services, have accumulated data from thousands of patients over more than a decade. In general, there is no local team with enough experts with the required range of skills to analyze them in their entirety, and integrating those abilities usually demands a relatively long period and is costly. Considering that scenario, this paper proposes a new feature sensitivity technique that can automatically deal with a large dataset. It uses a criterion-based sampling strategy from the Optimization based on Phylogram Analysis. Called FS-opa, the new approach is suited to dealing with any type of raw data from health centers and can process their entire datasets. Besides, FS-opa can find the principal features for the construction of inference models without depending on expert knowledge of the problem domain. The selected features can be combined with usual statistical or machine learning methods to perform predictions. The new method can mine entire datasets from scratch. FS-opa was evaluated using a relatively large dataset from electronic health records of mental disorder prehospital services in Brazil. Cox's approach was integrated with FS-opa to generate survival analysis models for the length of stay (LOS) in hospitals, assuming that LOS is a relevant aspect that can benefit estimates of hospital efficiency and of the quality of patient treatment. Since FS-opa can work with raw datasets, no knowledge from the problem domain was used to obtain the preliminary prediction models. Results show that FS-opa succeeded in performing a feature sensitivity analysis using only the raw data available. In this way, FS-opa can find the principal features without the bias of an inference model, since the proposed method does not use one. Moreover, the experiments show that FS-opa can provide models with a useful trade-off between representativeness and parsimony. It can benefit further analyses by experts, since they can focus on the aspects that most benefit problem modeling.

Gholi Zadeh Kharrat Fatemeh, Shydeo Brandão Miyoshi Newton, Cobre Juliana, Mazzoncini De Azevedo-Marques João, Mazzoncini de Azevedo-Marques Paulo, Cláudio Botazzo Delbem Alexandre

2020

General General

Fiber directional position sensor based on multimode interference imaging and machine learning.

In Applied optics

A fiber directional position sensor based on multimode interference and image processing by machine learning is presented. Upon single-mode injection, light in the multimode fiber generates a multi-ring-shaped interference pattern at the end facet, which is sensitive to the amplitude and direction of fiber distortions. The fiber is mounted on an automatic translation stage, with repeated movement in four directions. The images are captured with an infrared camera and fed to a machine-learning program to train, validate, and test the fiber conditions. As a result, accuracy over 97% is achieved in recognizing fiber positions in these four directions, each with 10 classes, totaling an 8 mm span. The number of images taken for each class is merely 320. Detailed investigation reveals that the system can achieve over 60% accuracy in recognizing positions at a 5 µm resolution with a larger dataset, approaching the limit of the chosen translation stage.

Sun Kai, Ding Zhenming, Zhang Ziyang

2020-Jul-01

General General

Accurate stacked-sheet counting method based on deep learning.

In Journal of the Optical Society of America. A, Optics, image science, and vision

The accurate counting of laminated sheets, such as packing or printing sheets in industry, is extremely important because it greatly affects the economic cost. However, the different thicknesses, adhesion properties, and breakage points and the low contrast of sheets remain challenges to traditional counting methods based on image processing. This paper proposes a new stacked-sheet counting method with a deep learning approach using the U-Net architecture. A specific dataset according to the characteristics of stack side images is collected. The stripe of the center line of each sheet is used for semantic segmentation, and the complete side images of the slices are segmented via training with small image patches and testing with original large images. With this model, each pixel is classified by multi-layer convolution and deconvolution to determine whether it is the target object to be detected. After the model is trained, the test set is used to test the model, and a center region segmentation map based on the pixel points is obtained. By calculating the statistical median value of centerline points across different sections in these segmented images, the number of sheets can be obtained. Compared with traditional image algorithms in real product counting experiments, the proposed method can achieve better performance with higher accuracy and a lower error rate.

Pham Dieuthuy, Ha Minhtuan, San Cao, Xiao Changyan

2020-Jul-01

Radiology Radiology

AI in Medical Imaging Informatics: Current Challenges and Future Directions.

In IEEE journal of biomedical and health informatics

This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. Concluding, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.

Panayides Andreas S, Amini Amir, Filipovic Nenad D, Sharma Ashish, Tsaftaris Sotirios A, Young Alistair, Foran David, Do Nhan, Golemati Spyretta, Kurc Tahsin, Huang Kun, Nikita Konstantina S, Veasey Ben P, Zervakis Michalis, Saltz Joel H, Pattichis Constantinos S

2020-Jul

General General

Automated Detection of TMJ Osteoarthritis Based on Artificial Intelligence.

In Journal of dental research ; h5-index 65.0

The purpose of this study was to develop a diagnostic tool to automatically detect temporomandibular joint osteoarthritis (TMJOA) from cone beam computed tomography (CBCT) images with artificial intelligence. CBCT images of patients diagnosed with temporomandibular disorder were included for image preparation. Single-shot detection, an object detection model, was trained with 3,514 sagittal CBCT images of the temporomandibular joint that showed signs of osseous changes in the mandibular condyle. The region of interest (condylar head) was defined and classified into 2 categories (indeterminate for TMJOA and TMJOA) according to image analysis criteria for the diagnosis of temporomandibular disorder. The model was tested with 2 sets of 300 images in total. The average accuracy, precision, recall, and F1 score over the 2 test sets were 0.86, 0.85, 0.84, and 0.84, respectively. Automated detection of TMJOA from sagittal CBCT images is possible by using a deep neural network model. It may be used to support clinicians with diagnosis and decision making for treatments of TMJOA.

Lee K S, Kwak H J, Oh J M, Jha N, Kim Y J, Kim W, Baik U B, Ryu J J

2020-Jul-01

automatic diagnosis, cone beam computed tomography, diagnostic accuracy, disease classification, lesion detection, single-shot detection

General General

Artificial Intelligence and Robotics in Nursing: Ethics of Caring as a Guide to Dividing Tasks Between AI and Humans.

In Nursing philosophy : an international journal for healthcare professionals

Nurses have traditionally been regarded as clinicians that deliver compassionate, safe, and empathetic health care (Nurses again outpace other professions for honesty & ethics, 2018). Caring is a fundamental characteristic, expectation, and moral obligation of the nursing and caregiving professions (Nursing: Scope and standards of practice, American Nurses Association, Silver Spring, MD, 2015). Along with caring, nurses are expected to undertake ever-expanding duties and complex tasks. In part because of the growing physical, intellectual, and emotional demandingness of nursing, as well as technological advances, artificial intelligence (AI) and AI care robots are rapidly changing the healthcare landscape. As technology becomes more advanced, efficient, and economical, opportunities and pressure to introduce AI into nursing care will only increase. In the first part of the article, we review recent and existing applications of AI in nursing and speculate on future use. Second, we situate our project within the recent literature on the ethics of nursing and AI. Third, we explore three dominant theories of caring and the two paradigmatic expressions of caring (touch and presence) and conclude that AI, at least for the foreseeable future, is incapable of caring in the sense central to nursing and caregiving ethics. We conclude that for AI to be implemented ethically, it must not transgress the core values of nursing or usurp aspects of caring that can only meaningfully be carried out by human beings, and it must support, open, or improve opportunities for nurses to provide the uniquely human aspects of care.

Stokes Felicia, Palmer Amitabha

2020-Jul-01

artificial intelligence, ethics, ethics of caring, nursing, robotics

General General

An Ensemble Learning Strategy for Eligibility Criteria Text Classification for Clinical Trial Recruitment: Algorithm Development and Validation.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Eligibility criteria are the main strategy for screening appropriate participants for clinical trials. Automatic analysis of clinical trial eligibility criteria by digital screening, leveraging natural language processing techniques, can improve recruitment efficiency and reduce the costs involved in promoting clinical research.

OBJECTIVE : We aimed to create a natural language processing model to automatically classify clinical trial eligibility criteria.

METHODS : We proposed a classifier for short text eligibility criteria based on ensemble learning, where a set of pretrained models was integrated. The pretrained models included state-of-the-art deep learning methods for training and classification, including Bidirectional Encoder Representations from Transformers (BERT), XLNet, and A Robustly Optimized BERT Pretraining Approach (RoBERTa). The classification results by the integrated models were combined as new features for training a Light Gradient Boosting Machine (LightGBM) model for eligibility criteria classification.

RESULTS : Our proposed method obtained an accuracy of 0.846, a precision of 0.803, and a recall of 0.817 on a standard data set from a shared task of an international conference. The macro F1 value was 0.807, outperforming the state-of-the-art baseline methods on the shared task.

CONCLUSIONS : We designed a short text classification model for screening clinical trial eligibility criteria based on multi-model ensemble learning. Through experiments, we concluded that performance was improved significantly with a model ensemble compared to a single model. The introduction of focal loss could reduce the impact of class imbalance to achieve better performance.
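A minimal sketch of the stacking step described in the methods follows: per-class probabilities from the fine-tuned language models are concatenated as meta-features for a LightGBM classifier. The random probabilities and labels below are placeholders, and the fine-tuning of BERT, XLNet, and RoBERTa itself is out of scope.

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical stand-ins for per-class probabilities from the three models
n_samples, n_classes = 2000, 10
rng = np.random.default_rng(0)
p_bert, p_xlnet, p_roberta = (rng.dirichlet(np.ones(n_classes), n_samples)
                              for _ in range(3))
y = rng.integers(0, n_classes, n_samples)

# Concatenate the probability vectors as features for the boosted trees
X = np.hstack([p_bert, p_xlnet, p_roberta])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

booster = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
booster.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, booster.predict(X_te), average="macro"))
```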

Zeng Kun, Pan Zhiwei, Xu Yibin, Qu Yingying

2020-Jul-01

Clinical trial, Deep learning, Eligibility criteria, Ensemble learning, Text classification

Surgery Surgery

Prospective Study Evaluating a Pain Assessment Tool in a Postoperative Environment: Protocol for Algorithm Testing and Enhancement.

In JMIR research protocols ; h5-index 26.0

BACKGROUND : Assessment of pain is critical to its optimal treatment. There is a high demand for accurate objective pain assessment for effectively optimizing pain management interventions. However, pain is a multivalent, dynamic, and ambiguous phenomenon that is difficult to quantify, particularly when the patient's ability to communicate is limited. The criterion standard of pain intensity assessment is self-reporting. However, this unidimensional model is disparaged for its oversimplification and limited applicability in several vulnerable patient populations. Researchers have attempted to develop objective pain assessment tools through analysis of physiological pain indicators, such as electrocardiography, electromyography, photoplethysmography, and electrodermal activity. However, pain assessment by using only these signals can be unreliable, as various other factors alter these vital signs and the adaptation of vital signs to pain stimulation varies from person to person. Objective pain assessment using behavioral signs such as facial expressions has recently gained attention.

OBJECTIVE : Our objective is to further the development and research of a pain assessment tool for use with patients who are likely experiencing mild to moderate pain. We will collect observational data through wearable technologies, measuring facial electromyography, electrocardiography, photoplethysmography, and electrodermal activity.

METHODS : This protocol focuses on the second phase of a larger study of multimodal signal acquisition through facial muscle electrical activity, cardiac electrical activity, and electrodermal activity as indicators of pain and for building predictive models. We used state-of-the-art standard sensors to measure bioelectrical electromyographic signals and changes in heart rate, respiratory rate, and oxygen saturation. Based on the results, we further developed the pain assessment tool and reconstituted it with modern wearable sensors, devices, and algorithms. In this second phase, we will test the smart pain assessment tool in communicative patients after elective surgery in the recovery room.

RESULTS : Our human research protections application for institutional review board review was approved for this part of the study. We expect to have the pain assessment tool developed and available for further research in early 2021. Preliminary results will be ready for publication during fall 2020.

CONCLUSIONS : This study will help to further the development of and research on an objective pain assessment tool for monitoring patients likely experiencing mild to moderate pain.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) : DERR1-10.2196/17783.

Kasaeyan Naeini Emad, Jiang Mingzhe, Syrjälä Elise, Calderon Michael-David, Mieronkoski Riitta, Zheng Kai, Dutt Nikil, Liljeberg Pasi, Salanterä Sanna, Nelson Ariana M, Rahmani Amir M

2020-Jul-01

acute pain, health monitoring, machine learning, multimodal biosignals, pain measurement, pain, postoperative, wearable electronic devices

General General

Metabolite Structure Assignment Using in silico NMR Techniques.

In Analytical chemistry

A major challenge for metabolomic analysis is to obtain an unambiguous identification of the metabolites detected in a sample. Among metabolomics techniques, NMR spectroscopy is a sophisticated, powerful, and generally applicable spectroscopic tool that can be used to ascertain the correct structure of newly isolated biogenic molecules. However, accurate structure prediction using computational NMR techniques depends on how much of the relevant conformational space of a particular compound is considered. It is intrinsically challenging to calculate NMR chemical shifts using high-level DFT when the conformational space of a metabolite is extensive. In this work, we developed NMR chemical shift calculation protocols using a machine learning model in conjunction with standard DFT methods. The pipeline encompasses the following steps: (1) conformation generation using a force field (FF) based method, (2) filtering the FF-generated conformations using the ASE-ANI machine learning model, (3) clustering of the optimized conformations based on structural similarity to identify chemically unique conformations, (4) DFT structural optimization of the unique conformations, and (5) DFT NMR chemical shift calculation. This protocol can calculate the NMR chemical shifts of a set of molecules using any available combination of DFT theory, solvent model, and NMR-active nuclei, using both user-selected reference compounds and/or linear regression methods. Our protocol reduces the overall computational time by two orders of magnitude over methods that optimize the conformations using fully ab initio methods, while still producing good agreement with experimental observations. The complete protocol is designed in such a manner that it makes the computation of chemical shifts tractable for a large number of conformationally flexible metabolites.
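The first two pipeline steps (force-field conformer generation and a quick optimization) can be sketched with RDKit as below; the molecule, conformer count, and settings are illustrative, and the ASE-ANI filtering, clustering, and DFT stages are not shown.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical metabolite stand-in; the study's compounds are not listed here
mol = Chem.AddHs(Chem.MolFromSmiles("CC(O)C(=O)O"))  # lactic acid

# Step 1: force-field-based conformer generation with ETKDG
params = AllChem.ETKDGv3()
params.randomSeed = 42
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=50, params=params)

# Quick MMFF94 optimization of every generated conformer
results = AllChem.MMFFOptimizeMoleculeConfs(mol, maxIters=500)
energies = [energy for _converged, energy in results]
print(f"{len(conf_ids)} conformers; lowest MMFF energy "
      f"= {min(energies):.2f} kcal/mol")

# Later stages (ANI re-ranking, clustering, DFT geometry and NMR shift
# calculations) would operate on the retained low-energy conformers.
```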

Das Susanta, Edison Arthur S, Merz Kenneth M

2020-Jul-01

General General

[PM2.5 Inversion Using Remote Sensing Data in Eastern China Based on Deep Learning].

In Huan jing ke xue= Huanjing kexue

PM2.5, which is a major source of air pollution, has a considerable impact on human health. In this study, a multi-element joint PM2.5 inversion method based on a deep learning model is proposed. With PM2.5 concentration as the ground truth, 10 elements including the Himawari-AOD daily data products, temperature, relative humidity, and pressure, were introduced as inversion elements. To verify the effectiveness of the method, the experiment was carried out by season using remote sensing data in Eastern China during 2016-2018. The results demonstrate that PM2.5 concentrations were positively correlated with AOD, precipitation, wind speed, and high vegetation cover index and negatively correlated with dwarf vegetation cover index. The correlation with temperature, humidity, pressure, and DEM changed with seasons. Comparative experiments indicated that the accuracy of PM2.5 inversion based on the deep neural network is higher than that of traditional linear and nonlinear models. R2 was above 0.5, and the error was small in each season. The R2 value for autumn, which showed the best inversion, was 0.86, that for summer was 0.75, that for winter was 0.613, and that for spring was 0.566. The visualization of the model illustrates that the inversion result of the DNN model is closer to the PM2.5 concentration distribution interpolated by the ground monitoring station, and the resolution is higher and more accurate.

Liu Lin-Yu, Zhang Yong-Jun, Li Yan-Sheng, Liu Xin-Yi, Wan Yi

2020-Apr-08

Eastern China, Himawari data, PM2.5, deep learning, inversion

General General

Semiautomated Approach for Muscle Weakness Detection in Clinical Texts.

In Studies in health technology and informatics ; h5-index 23.0

The automated detection of adverse events in medical records might be a cost-effective solution for patient safety management or pharmacovigilance. Our group previously proposed an information extraction algorithm (IEA) for detecting adverse events in neurosurgery using documents written in a morphologically rich natural language. In this paper, we optimize and evaluate its performance for the detection of any extremity muscle weakness in clinical texts. Our algorithm achieves an accuracy of 0.96 and ROC AUC = 0.96 and might be easily implemented in other medical domains.

Danilov Gleb, Shifrin Michael, Strunina Yuliya, Kotik Konstantin, Tsukanova Tatyana, Pronkina Tatiana, Ishankulov Timur, Makashova Elizaveta, Kosyrkova Alexandra, Melchenko Semen, Zagidullin Timur, Potapov Alexander

2020-Jun-26

Adverse Events, Annotation, Natural Language Processing, Neurosurgery

Surgery Surgery

4D Spatio-Temporal Convolutional Networks for Object Position Estimation in OCT Volumes

ArXiv Preprint

Tracking and localizing objects is a central problem in computer-assisted surgery. Optical coherence tomography (OCT) can be employed as an optical tracking system, due to its high spatial and temporal resolution. Recently, 3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single volumetric OCT images. While this approach relied on spatial information only, OCT allows for a temporal stream of OCT image volumes capturing the motion of an object at high volume rates. In this work, we systematically extend 3D CNNs to 4D spatio-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking. Across various architectures, our results demonstrate that using a stream of OCT volumes and employing 4D spatio-temporal convolutions leads to a 30% lower mean absolute error compared to single volume processing with 3D CNNs.

Marcel Bengs, Nils Gessert, Alexander Schlaefer

2020-07-02

General General

Bread And Durum Wheat Classification Using Wavelet Based Image Fusion.

In Journal of the science of food and agriculture

BACKGROUND : Wheat, an essential nutrient, is an important food source for human beings because it is used in flour and feed production. Wheat plays an important role in both macaroni and bread production, but the types of wheat used for these two foods, durum and bread wheat, are different. A reliable separation of these two wheat types is therefore important for product quality. This paper differs from the traditional methods available for the identification of bread and durum wheat species. In this study, ultraviolet (UV) and white light (WL) images of wheat are obtained for both species. Wheat types in these images are classified by various machine learning (ML) methods. Afterwards, the UV and WL images are fused by a wavelet-based image fusion method.

RESULTS : The highest accuracy obtained using only the UV image and only the WL image is 94.8276%, achieved by the SVM and MLP algorithms, respectively. For the fused image, however, the accuracy is 98.2759%, with both MLP and SVM achieving the same result.

CONCLUSION : Wavelet-based fusion increased the classification accuracy of all three learning algorithms, indicating that the identification ability of the resulting fused image is higher than that of the two raw images.
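A minimal PyWavelets sketch of wavelet-based fusion of two co-registered grayscale images is given below. The fusion rule (averaged approximation band, maximum-magnitude detail coefficients) and the random test images are assumptions, not necessarily the rule used in the paper.

```python
import numpy as np
import pywt  # pip install PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db1"):
    """Fuse two same-sized grayscale images in the wavelet domain."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)

    # Average the approximation bands, keep the stronger detail coefficients
    fuse = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    cA = (cA_a + cA_b) / 2.0
    details = (fuse(cH_a, cH_b), fuse(cV_a, cV_b), fuse(cD_a, cD_b))
    return pywt.idwt2((cA, details), wavelet)

# Hypothetical UV and white-light grain images on the same grid
uv = np.random.rand(128, 128)
wl = np.random.rand(128, 128)
fused = wavelet_fuse(uv, wl)
print(fused.shape)  # the fused image would feed the downstream ML classifier
```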

Sabanci Kadir, Aslan Muhammet Fatih, Durdu Akif

2020-Jul-01

bread wheat, durum wheat, machine learning, wavelet based image fusion

General General

DeepTorrent: a deep learning-based approach for predicting DNA N4-methylcytosine sites.

In Briefings in bioinformatics

DNA N4-methylcytosine (4mC) is an important epigenetic modification that plays a vital role in regulating DNA replication and expression. However, it is challenging to detect 4mC sites through experimental methods, which are time-consuming and costly. Thus, computational tools that can identify 4mC sites would be very useful for understanding the mechanism of this important type of DNA modification. Several machine learning-based 4mC predictors have been proposed in the past 3 years, although their performance is unsatisfactory. Deep learning is a promising technique for the development of more accurate 4mC site predictions. In this work, we propose a deep learning-based approach, called DeepTorrent, for improved prediction of 4mC sites from DNA sequences. It combines four different feature encoding schemes to encode raw DNA sequences and employs multi-layer convolutional neural networks with an inception module integrated with bidirectional long short-term memory to effectively learn the higher-order feature representations. Dimension reduction and concatenated feature maps from the filters of different sizes are then applied to the inception module. In addition, an attention mechanism and transfer learning techniques are also employed to train the robust predictor. Extensive benchmarking experiments demonstrate that DeepTorrent significantly improves the performance of 4mC site prediction compared with several state-of-the-art methods.
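The core ingredients named above (one-hot DNA encoding, a convolutional layer, and a bidirectional LSTM) can be sketched in Keras as follows; the window length, layer sizes, and random toy data are assumptions, and the sketch omits the inception module, attention, and transfer learning components.

```python
import numpy as np
from tensorflow.keras import layers, models, metrics

def one_hot(seq):
    """One-hot encode a DNA window into an (L, 4) array over A, C, G, T."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4), dtype="float32")
    for i, base in enumerate(seq):
        out[i, table[base]] = 1.0
    return out

window = 41  # hypothetical window length centered on the candidate cytosine
model = models.Sequential([
    layers.Input(shape=(window, 4)),
    layers.Conv1D(64, 7, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of a 4mC site
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[metrics.AUC()])

# Toy batch of random sequences stands in for labeled training windows
X = np.stack([one_hot("".join(np.random.choice(list("ACGT"), window)))
              for _ in range(64)])
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```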

Liu Quanzhong, Chen Jinxiang, Wang Yanze, Li Shuqin, Jia Cangzhi, Song Jiangning, Li Fuyi

2020-Jul-01

DNA N4-methylcytosine sites, bioinformatics, deep learning, machine learning, sequence analysis

General General

Biventricular imaging markers to predict outcomes in non-compaction cardiomyopathy: a machine learning study.

In ESC heart failure

AIMS : Left ventricular non-compaction cardiomyopathy (LVNC) is a genetic heart disease, with heart failure, arrhythmias, and embolic events as main clinical manifestations. The goal of this study was to analyse a large set of echocardiographic (echo) and cardiac magnetic resonance imaging (CMRI) parameters using machine learning (ML) techniques to find imaging predictors of clinical outcomes in a long-term follow-up of LVNC patients.

METHODS AND RESULTS : Patients with echo and/or CMRI criteria of LVNC, followed from January 2011 to December 2017 in the heart failure section of a tertiary referral cardiologic hospital, were enrolled in a retrospective study. Two-dimensional colour Doppler echocardiography and subsequent CMRI were carried out. Twenty-four hour Holter monitoring was also performed in all patients. Death, cardiac transplantation, heart failure hospitalization, aborted sudden cardiac death, complex ventricular arrhythmias (sustained and non-sustained ventricular tachycardia), and embolisms (i.e. stroke, pulmonary thromboembolism and/or peripheral arterial embolism) were registered and were referred to as major adverse cardiovascular events (MACEs) in this study. Recruited for the study were 108 LVNC patients, aged 38.3 ± 15.5 years, 48.1% men, diagnosed by echo and CMRI criteria. They were followed for 5.8 ± 3.9 years, and MACEs were registered. CMRI and echo parameters were analysed via a supervised ML methodology. Forty-seven (43.5%) patients had at least one MACE. The best performance of imaging variables was achieved by combining four parameters: left ventricular (LV) ejection fraction (by CMRI), right ventricular (RV) end-systolic volume (by CMRI), RV systolic dysfunction (by echo), and RV lower diameter (by CMRI) with accuracy, sensitivity, and specificity rates of 75.5%, 77%, 75%, respectively.

CONCLUSIONS : Our findings show the importance of biventricular assessment to detect the severity of this cardiomyopathy and to plan for early clinical intervention. In addition, this study shows that even patients with normal LV function and negative late gadolinium enhancement had MACE. ML is a promising tool for analysing a large set of parameters to stratify and predict prognosis in LVNC patients.

Rocon Camila, Tabassian Mahdi, Tavares de Melo Marcelo Dantas, de Araujo Filho Jose Arimateia, Grupi Cesar José, Parga Filho Jose Rodrigues, Bocchi Edimar Alcides, D’hooge Jan, Salemi Vera Maria Cury

2020-Jun-30

Cardiomyopathy, Echocardiography, Follow-up, Machine learning, Magnetic resonance imaging, Non-compaction

Surgery Surgery

Achalasia subtypes can be identified with functional luminal imaging probe (FLIP) panometry using a supervised machine learning process.

In Neurogastroenterology and motility : the official journal of the European Gastrointestinal Motility Society

BACKGROUND : Achalasia subtypes on high-resolution manometry (HRM) prognosticate treatment response and help direct management plan. We aimed to utilize parameters of distension-induced contractility and pressurization on functional luminal imaging probe (FLIP) panometry and machine learning to predict HRM achalasia subtypes.

METHODS : One hundred eighty adult patients with treatment-naïve achalasia defined by HRM per Chicago Classification (40 type I, 99 type II, 41 type III achalasia) who underwent FLIP panometry were included: 140 patients were used as the training cohort and 40 patients as the test cohort. FLIP panometry studies performed with 16-cm FLIP assemblies were retrospectively analyzed to assess distensive pressure and distension-induced esophageal contractility. Correlation analysis, single tree, and random forest were adopted to develop classification trees to identify achalasia subtypes.

KEY RESULTS : Intra-balloon pressure at 60 mL fill volume, and proportions of patients with absent contractile response, repetitive retrograde contractile pattern, occluding contractions, sustained occluding contractions (SOC), contraction-associated pressure changes >10 mm Hg all differed between HRM achalasia subtypes and were used to build the decision tree-based classification model. The model identified spastic (type III) vs non-spastic (types I and II) achalasia with 90% and 78% accuracy in the train and test cohorts, respectively. Achalasia subtypes I, II, and III were identified with 71% and 55% accuracy in the train and test cohorts, respectively.

CONCLUSIONS AND INFERENCES : Using a supervised machine learning process, a preliminary model was developed that distinguished type III achalasia from non-spastic achalasia with FLIP panometry. Further refinement of the measurements and more experience (data) may improve its ability for clinically relevant application.
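The single-tree variant of the classification approach described above can be sketched with scikit-learn; the feature list loosely follows the abstract, but the simulated values and tree settings below are placeholders rather than the study's model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

# Hypothetical FLIP panometry features per patient (simulated, not study data)
rng = np.random.default_rng(1)
n = 180
X = np.column_stack([
    rng.normal(40, 15, n),   # intra-balloon pressure at 60 mL fill (mm Hg)
    rng.integers(0, 2, n),   # absent contractile response (0/1)
    rng.integers(0, 2, n),   # repetitive retrograde contractile pattern (0/1)
    rng.integers(0, 2, n),   # sustained occluding contractions (0/1)
])
y = rng.integers(1, 4, n)    # achalasia subtype I, II, or III

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.22, random_state=1)
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)

print("test accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=["P60", "ACR", "RRC", "SOC"]))
```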

Carlson Dustin A, Kou Wenjun, Rooney Katharine P, Baumann Alexandra J, Donnan Erica, Triggs Joseph R, Teitelbaum Ezra N, Holmstrom Amy, Hungness Eric, Sethi Sajiv, Kahrilas Peter J, Pandolfino John E

2020-Jul-01

dysphagia, endoscopy, impedance, manometry, peristalsis

General General

A Self-Powered Angle Sensor at Nanoradian-Resolution for Robotic Arms and Personalized Medicare.

In Advanced materials (Deerfield Beach, Fla.)

As the dominant component for precise motion measurement, angle sensors play a vital role in robotics, machine control, and personalized rehabilitation. Various forms of angle sensors have been developed and optimized over the past decades, but none of them can function without an electric power supply. Here, a highly sensitive triboelectric self-powered angle sensor (SPAS) exhibiting the highest resolution (2.03 nano-radian) after a comprehensive optimization is reported. In addition, the SPAS is lightweight and thin, which enables its extensive integrated applications with minimized energy consumption: a palletizing robotic arm equipped with the SPAS can precisely reproduce traditional Chinese calligraphy via the angular data it collects. In addition, the SPAS can be assembled in a medicare brace to record the flexion/extension of joints, which may benefit personalized orthopedic recuperation. The SPAS paves a new approach for applications in the emerging fields of robotics, sensing, personalized medicare, and artificial intelligence.

Wang Ziming, An Jie, Nie Jinhui, Luo Jianjun, Shao Jiajia, Jiang Tao, Chen Baodong, Tang Wei, Wang Zhong Lin

2020-Jun-30

Internet-of-Things, personalized healthcare, robotics, self-powered sensors, triboelectric nanogenerators

Radiology Radiology

Deep Learning Pre-training Strategy for Mammogram Image Classification: an Evaluation Study.

In Journal of digital imaging

In this work, we assess how pre-training strategy affects deep learning performance for the task of distinguishing false-recall from malignancy and normal (benign) findings in digital mammography images. A cohort of 1303 breast cancer screening patients (4935 digital mammogram images in total) was retrospectively analyzed as the target dataset for this study. We assessed six different convolutional neural network model structures utilizing four different imaging datasets (in total > 1.4 million images, including ImageNet; medical images differing in scale, modality, organ, and source) for pre-training on six classification tasks, to assess how the performance of CNN models varies with training strategy. Representative pre-training strategies included transfer learning with medical and non-medical datasets, layer freezing, varied network structure, and multi-view input for both binary and triple-class classification of mammogram images. The area under the receiver operating characteristic curve (AUC) was used as the model performance metric. The best performing model out of all experimental settings was an AlexNet model incrementally pre-trained on ImageNet and a large Breast Density dataset. The AUC for the six classification tasks using this model ranged from 0.68 to 0.77. In the case of distinguishing recalled-benign mammograms from others, four out of five pre-training strategies tested produced significant performance differences from the baseline model. This study suggests that the pre-training strategy can lead to significant performance differences, especially in the case of distinguishing recalled-benign from malignant and benign screening patients.

Clancy Kadie, Aboutalib Sarah, Mohamed Aly, Sumkin Jules, Wu Shandong

2020-Jun-30

Breast cancer, Deep learning, Digital mammography, Training strategy, Transfer learning

General General

MR Image-Based Attenuation Correction of Brain PET Imaging: Review of Literature on Machine Learning Approaches for Segmentation.

In Journal of digital imaging

Recent emerging hybrid technology of positron emission tomography/magnetic resonance (PET/MR) imaging has generated a great need for an accurate MR image-based PET attenuation correction. MR image segmentation, as a robust and simple method for PET attenuation correction, has been clinically adopted in commercial PET/MR scanners. The general approach in this method is to segment the MR image into different tissue types, each assigned an attenuation constant as in an X-ray CT image. Machine learning techniques such as clustering, classification and deep networks are extensively used for brain MR image segmentation. However, only limited work has been reported on using deep learning in brain PET attenuation correction. In addition, there is a lack of clinical evaluation of machine learning methods in this application. The aim of this review is to study the use of machine learning methods for MR image segmentation and its application in attenuation correction for PET brain imaging. Furthermore, challenges and future opportunities in MR image-based PET attenuation correction are discussed.

Mecheter Imene, Alic Lejla, Abbod Maysam, Amira Abbes, Ji Jim

2020-Jun-30

Deep learning, Image segmentation, MR image-based attenuation correction, Machine learning, PET/MR

General General

Multi-model Ensemble Learning Architecture Based on 3D CNN for Lung Nodule Malignancy Suspiciousness Classification.

In Journal of digital imaging

Classification of lung nodules as benign or malignant using chest CT images is a key step in the diagnosis of early-stage lung cancer, as well as an effective way to improve patients' survival rates. However, due to the diversity of lung nodules and their visual similarity to surrounding tissues, it is difficult to construct a robust classification model with conventional deep learning-based diagnostic methods. To address this problem, we propose a multi-model ensemble learning architecture based on 3D convolutional neural networks (MMEL-3DCNN). This approach incorporates three key ideas: (1) a multi-model network architecture that adapts well to the heterogeneity of lung nodules; (2) an input formed by concatenating the intensity image corresponding to the nodule mask, the original image, and the corresponding enhanced image, which helps the model extract higher-level features with more discriminative capacity; (3) dynamic selection of the model corresponding to the nodule size at prediction time, which effectively improves the generalization ability of the model. In addition, ensemble learning is applied to further improve the robustness of the nodule classification model. The proposed method has been experimentally verified on the public LIDC-IDRI dataset. The experimental results show that the proposed MMEL-3DCNN architecture obtains satisfactory classification results.
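
A toy PyTorch sketch of two of the ideas above, channel-wise concatenation of the three image variants and size-dependent model selection; the tiny network, the size thresholds, and the 32-voxel patch size are assumptions for illustration, not the MMEL-3DCNN configuration:

    import torch
    import torch.nn as nn

    class Small3DCNN(nn.Module):
        """Toy 3D CNN standing in for one member of the multi-model architecture."""
        def __init__(self, in_channels=3, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(16, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One model per nodule-size range (thresholds in mm are illustrative only).
    models_by_size = {"small": Small3DCNN(), "medium": Small3DCNN(), "large": Small3DCNN()}

    def predict(original, masked_intensity, enhanced, diameter_mm):
        """Concatenate the three image variants as channels and route by nodule size."""
        x = torch.stack([original, masked_intensity, enhanced], dim=0).unsqueeze(0)
        if diameter_mm < 10:
            model = models_by_size["small"]
        elif diameter_mm < 20:
            model = models_by_size["medium"]
        else:
            model = models_by_size["large"]
        return torch.softmax(model(x), dim=1)

    vol = lambda: torch.randn(32, 32, 32)   # random 32^3 patches
    print(predict(vol(), vol(), vol(), diameter_mm=14.0))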

Liu Hong, Cao Haichao, Song Enmin, Ma Guangzhi, Xu Xiangyang, Jin Renchao, Liu Chuhua, Hung Chih-Cheng

2020-Jun-30

3D CNN, Benign and malignant classification, Computer-aided diagnosis, Image enhancement, Multi-model ensemble architecture

General General

Classification of Skin Lesions into Seven Classes Using Transfer Learning with AlexNet.

In Journal of digital imaging

Melanoma is a deadly skin cancer. There is high similarity between different kinds of skin lesions, which can lead to incorrect classification. Accurate classification of a skin lesion in its early stages saves lives. In this paper, a highly accurate method is proposed for skin lesion classification. The proposed method utilizes transfer learning with a pre-trained AlexNet. The parameters of the original model are used as initial values, while the weights of the last three replaced layers are randomly initialized. The proposed method was tested using the most recent public dataset, ISIC 2018. Based on the obtained results, the proposed method accurately classifies skin lesions into seven classes: melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion. The achieved percentages are 98.70%, 95.60%, 99.27%, and 95.06% for accuracy, sensitivity, specificity, and precision, respectively.
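
The abstract reports multi-class accuracy, sensitivity, specificity, and precision; a small sketch of how such figures can be computed from a seven-class confusion matrix is given below. The one-vs-rest macro-averaging scheme is an assumption (the abstract does not state it), and the labels are random placeholders:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    CLASSES = ["MEL", "NV", "BCC", "AKIEC", "BKL", "DF", "VASC"]  # ISIC 2018 categories

    def per_class_metrics(y_true, y_pred, n_classes=7):
        """Accuracy plus macro-averaged sensitivity, specificity and precision
        from a multi-class confusion matrix (one-vs-rest per class)."""
        cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
        sens, spec, prec = [], [], []
        for k in range(n_classes):
            tp = cm[k, k]
            fn = cm[k, :].sum() - tp
            fp = cm[:, k].sum() - tp
            tn = cm.sum() - tp - fn - fp
            sens.append(tp / (tp + fn) if tp + fn else 0.0)
            spec.append(tn / (tn + fp) if tn + fp else 0.0)
            prec.append(tp / (tp + fp) if tp + fp else 0.0)
        accuracy = np.trace(cm) / cm.sum()
        return accuracy, np.mean(sens), np.mean(spec), np.mean(prec)

    # Toy example with random predictions over the seven classes.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 7, size=200)
    y_pred = rng.integers(0, 7, size=200)
    print(per_class_metrics(y_true, y_pred))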

Hosny Khalid M, Kassem Mohamed A, Fouad Mohamed M

2020-Jun-30

AlexNet, Classification of skin lesions, ISIC 2018, Melanoma, Transfer learning

Radiology Radiology

Accurate prediction of lumbar microdecompression level with an automated MRI grading system.

In Skeletal radiology

OBJECTIVE : Lumbar spine MRI interpretations have high variability, reducing their utility for surgical planning. This study evaluated a convolutional neural network (CNN) framework that generates automated MRI grading for its ability to predict the level that was surgically decompressed.

MATERIALS AND METHODS : Patients who had single-level decompression were retrospectively evaluated. Sagittal T2 images were processed by a CNN (SpineNet), which provided grading for the following: central canal stenosis, disc narrowing, disc degeneration, spondylolisthesis, upper/lower endplate morphologic changes, and upper/lower marrow changes. The grades were used to calculate an aggregate score. The variables and the aggregate score were analyzed for their ability to predict the surgical level. For each surgical level subgroup, the surgical level aggregate scores were compared with the non-surgical levels.

RESULTS : A total of 141 patients met the inclusion criteria (82 women, 59 men; mean age 64 years; age range 28-89 years). SpineNet did not identify central canal stenosis in 32 patients. Of the remaining 109, 96 (88%) patients had a decompression at the level of greatest stenosis. The higher stenotic grade was present only at the surgical level in 82/96 (85%) patients. The level with the highest aggregate score matched the surgical level in 103/141 (73%) patients and was unique to the surgical level in 91/103 (88%) patients. Overall, the highest aggregate score identified the surgical level in 91/141 (65%) patients. The aggregate MRI score mean was significantly higher for the L3-S1 surgical levels.

CONCLUSION : A previously developed CNN framework accurately predicts the level of microdecompression for degenerative spinal stenosis in most patients.
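
A toy illustration of the aggregate-score step described in the methods above: sum the automated grades at each lumbar level and take the level with the highest total as the predicted decompression level. The simple sum is an assumption (the abstract does not state the exact aggregation), and the grade values below are invented:

    grades_by_level = {
        "L3-L4": {"canal_stenosis": 1, "disc_narrowing": 1, "disc_degeneration": 2,
                  "spondylolisthesis": 0, "endplate_changes": 0, "marrow_changes": 0},
        "L4-L5": {"canal_stenosis": 3, "disc_narrowing": 2, "disc_degeneration": 3,
                  "spondylolisthesis": 1, "endplate_changes": 1, "marrow_changes": 1},
        "L5-S1": {"canal_stenosis": 2, "disc_narrowing": 2, "disc_degeneration": 2,
                  "spondylolisthesis": 0, "endplate_changes": 1, "marrow_changes": 0},
    }

    aggregate = {level: sum(g.values()) for level, g in grades_by_level.items()}
    predicted_level = max(aggregate, key=aggregate.get)
    print(aggregate, "->", predicted_level)   # L4-L5 has the highest aggregate score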

Roller Brandon L, Boutin Robert D, O’Gara Tadhg J, Knio Ziyad O, Jamaludin Amir, Tan Josh, Lenchik Leon

2020-Jul-01

Automated diagnosis, Low back pain, Lumbar degenerative disc disease, MRI, Machine learning, Microdecompression, Spinal stenosis

General General

Comparison of Machine Learning Algorithms for Classifying Adverse-Event Related 30-Day Hospital Readmissions: Potential Implications for Patient Safety.

In Studies in health technology and informatics ; h5-index 23.0

Studies in the last decade have focused on identifying patients at risk of readmission using predictive models, with the objective of decreasing costs to the healthcare system. However, real-time models that specifically identify readmissions related to hospital adverse events have yet to be developed. A supervised learning approach was adopted using different machine learning algorithms based on features available directly from the hospital information system and on a validated dataset produced by a multidisciplinary expert consensus panel. Accuracy results upon testing were in line with comparable studies and varied across algorithms, with the best prediction given by artificial neural networks. Feature importances relative to the prediction were identified in order to provide a better representation and interpretation of the results. Such a model can pave the way for predictive models for readmissions related to patient harm, the establishment of a learning platform for clinical quality measurement and improvement, and, in some cases, improved clinical management of readmitted patients.
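
A sketch of the general workflow described above, comparing several supervised classifiers and inspecting feature importances; the synthetic data, the specific estimators, and permutation importance as the importance measure are all stand-in assumptions, not the study's pipeline:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.inspection import permutation_importance

    # Placeholder data standing in for features extracted from the hospital
    # information system (length of stay, prior admissions, lab flags, ...).
    X, y = make_classification(n_samples=2000, n_features=15, n_informative=6,
                               weights=[0.85, 0.15], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    classifiers = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    }
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))

    # Feature importances for interpretation, as discussed above
    # (permutation importance works uniformly across model types).
    imp = permutation_importance(classifiers["random_forest"], X_te, y_te,
                                 n_repeats=10, random_state=0)
    print("top features:", np.argsort(imp.importances_mean)[::-1][:5])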

Saab Antoine, Saikali Melody, Lamy Jean-Baptiste

2020-Jun-26

Readmissions, adverse events, artificial intelligence, classification, machine learning, patient safety

General General

Learning Individualized Treatment Rules with Estimated Translated Inverse Propensity Score

ArXiv Preprint

Randomized controlled trials typically analyze the effectiveness of treatments with the goal of making treatment recommendations for patient subgroups. With the advance of electronic health records, a great variety of data has been collected in clinical practice, enabling the evaluation of treatments and treatment policies based on observational data. In this paper, we focus on learning individualized treatment rules (ITRs) to derive a treatment policy that is expected to generate a better outcome for an individual patient. In our framework, we cast ITR learning as a contextual bandit problem and minimize the expected risk of the treatment policy. We conduct experiments with the proposed framework both in a simulation study and on a real-world dataset. In the latter case, we apply our proposed method to learn the optimal ITRs for the administration of intravenous (IV) fluids and vasopressors (VP). Based on various offline evaluation methods, we show that the policy derived in our framework demonstrates better performance than both the physicians and other baselines, including a simple treatment prediction approach. As a long-term goal, our derived policy might eventually lead to better clinical guidelines for the administration of IV fluids and VP.
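
The framework above builds on off-policy evaluation with inverse propensity scores. The sketch below shows the textbook IPS estimator of a candidate policy's value from logged observational data; it is not the paper's "translated" variant, and the data, propensity model, and candidate policy are synthetic placeholders:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d = 5000, 5
    X = rng.normal(size=(n, d))                                      # patient covariates
    a = (rng.random(n) < 0.3 + 0.4 * (X[:, 0] > 0)).astype(int)      # logged treatments
    r = 0.5 * X[:, 1] + (a == (X[:, 0] > 0)) + rng.normal(scale=0.1, size=n)  # outcomes

    # Estimate the logging (behaviour) policy's propensities from the logged data.
    prop_model = LogisticRegression(max_iter=1000).fit(X, a)
    e_hat = prop_model.predict_proba(X)[np.arange(n), a]

    def ips_value(policy, X, a, r, e_hat, clip=10.0):
        """Inverse-propensity-score estimate of a deterministic policy's expected outcome."""
        w = (policy(X) == a) / np.clip(e_hat, 1.0 / clip, None)      # clipped importance weights
        return float(np.mean(w * r))

    candidate_policy = lambda X: (X[:, 0] > 0).astype(int)   # e.g. treat if first covariate > 0
    print("estimated policy value:", ips_value(candidate_policy, X, a, r, e_hat))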

Zhiliang Wu, Yinchong Yang, Yunpu Ma, Yushan Liu, Rui Zhao, Michael Moor, Volker Tresp

2020-07-02

Radiology Radiology

Neuroimage-Based Consciousness Evaluation of Patients with Secondary Doubtful Hydrocephalus Before and After Lumbar Drainage.

In Neuroscience bulletin

Hydrocephalus is often treated with a cerebrospinal fluid shunt (CFS) to drain excessive cerebrospinal fluid from the brain. However, it is very difficult to distinguish whether ventricular enlargement is due to hydrocephalus or to other causes, such as brain atrophy after brain damage and surgery. The non-trivial evaluation of the consciousness level, along with a continuous drainage test of the lumbar cistern, is thus clinically important before the decision for CFS is made. We studied 32 patients with secondary mild hydrocephalus and different consciousness levels, who received T1 and diffusion tensor imaging magnetic resonance scans before and after lumbar cerebrospinal fluid drainage. We applied a novel machine-learning method to find the most discriminative features from the multi-modal neuroimages. Then, we built a regression model to regress the JFK Coma Recovery Scale-Revised (CRS-R) scores and quantify the level of consciousness. The experimental results showed that our method not only approximated the CRS-R scores but also tracked the temporal changes in individual patients. The regression model has high potential for the evaluation of consciousness in clinical practice.

Huo Jiayu, Qi Zengxin, Chen Sen, Wang Qian, Wu Xuehai, Zang Di, Hiromi Tanikawa, Tan Jiaxing, Zhang Lichi, Tang Weijun, Shen Dinggang

2020-Jul-01

Disorder of consciousness, Feature selection, Hydrocephalus, Regression, Structural imaging

General General

Detection of COVID-19 Infection from Routine Blood Exams with Machine Learning: A Feasibility Study.

In Journal of medical systems ; h5-index 48.0

The COVID-19 pandemic due to the SARS-CoV-2 coronavirus, in the first 4 months since its outbreak, has to date reached more than 200 countries worldwide with more than 2 million confirmed cases (probably a much higher number of infections) and almost 200,000 deaths. Amplification of viral RNA by (real-time) reverse transcription polymerase chain reaction (rRT-PCR) is the current gold standard test for confirmation of infection, although it has known shortcomings: long turnaround times (3-4 hours to generate results), potential shortages of reagents, false-negative rates as high as 15-20%, and the need for certified laboratories, expensive equipment, and trained personnel. Thus there is a need for alternative, faster, less expensive, and more accessible tests. We developed two machine learning classification models using hematochemical values from routine blood exams (namely: white blood cell counts, and the platelet, CRP, AST, ALT, GGT, ALP, and LDH plasma levels) drawn from 279 patients who, after being admitted to the San Raffaele Hospital (Milan, Italy) emergency room with COVID-19 symptoms, were screened with the rRT-PCR test performed on respiratory tract specimens. Of these patients, 177 tested positive, whereas 102 tested negative. The two machine learning models discriminate between patients who are positive or negative for SARS-CoV-2: their accuracy ranges between 82% and 86%, and their sensitivity between 92% and 95%, thus performing comparably to the gold standard. We also developed an interpretable decision tree model as a simple decision aid for clinicians interpreting blood tests (even offline) for suspected COVID-19 cases. This study demonstrates the feasibility and clinical soundness of using blood test analysis and machine learning as an alternative to rRT-PCR for identifying COVID-19-positive patients. This is especially useful in countries, such as developing ones, suffering from shortages of rRT-PCR reagents and specialized laboratories. We made available a Web-based tool for clinical reference and evaluation (available at https://covid19-blood-ml.herokuapp.com/ ).
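
A minimal sketch of the interpretable decision-tree idea on routine blood values. The feature names follow the abstract, but the data and labels below are synthetic, not the San Raffaele cohort, and the tree depth is an arbitrary choice:

    import numpy as np
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    features = ["WBC", "platelets", "CRP", "AST", "ALT", "GGT", "ALP", "LDH"]
    X = pd.DataFrame(rng.normal(size=(279, len(features))), columns=features)
    y = rng.integers(0, 2, size=279)   # 1 = rRT-PCR positive (synthetic labels)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    print("CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())

    tree.fit(X, y)
    print(export_text(tree, feature_names=features))  # human-readable rules for clinicians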

Brinati Davide, Campagner Andrea, Ferrari Davide, Locatelli Massimo, Banfi Giuseppe, Cabitza Federico

2020-Jul-01

Blood tests, COVID-19, Machine learning, RT-PCR test, Random forest, Three-way

General General

A study of MRI-based radiomics biomarkers for sacroiliitis and spondyloarthritis.

In International journal of computer assisted radiology and surgery

PURPOSE : To evaluate the performance of texture-based biomarkers by radiomic analysis using magnetic resonance imaging (MRI) of patients with sacroiliitis secondary to spondyloarthritis (SpA).

RELEVANCE : The determination of sacroiliac joints inflammatory activity supports the drug management in these diseases.

METHODS : Sacroiliac joints (SIJ) MRI examinations of 47 patients were evaluated. Thirty-seven patients had SpA diagnoses (27 axial SpA and ten peripheral SpA) which was established previously after clinical and laboratory follow-up. To perform the analysis, the SIJ MRI was first segmented and warped. Second, radiomics biomarkers were extracted from the warped MRI images for associative analysis with sacroiliitis and the SpA subtypes. Finally, statistical and machine learning methods were applied to assess the associations of the radiomics texture-based biomarkers with clinical outcomes.

RESULTS : All diagnostic performances obtained with individual or combined biomarkers reached areas under the receiver operating characteristic curve ≥ 0.80 for SpA-related sacroiliitis and for SpA subtype classification. Radiomics texture-based analysis showed significant differences between the positive and negative SpA groups and differentiated the axial and peripheral subtypes (P < 0.001). In addition, the radiomics analysis was able to correctly identify the disease even in the absence of active inflammation.

CONCLUSION : We concluded that the application of the radiomic approach constitutes a potential noninvasive tool to aid the diagnosis of sacroiliitis and for SpA subclassifications based on MRI of sacroiliac joints.
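
A sketch of the extract-then-classify radiomics pipeline outlined in the methods above, using the pyradiomics package as a stand-in toolkit (the abstract does not name the software used). The extraction helper expects image and mask files on disk; the downstream classification step is shown on synthetic precomputed feature vectors so the sketch runs without imaging data:

    import numpy as np
    from radiomics import featureextractor          # pyradiomics, used here as a stand-in
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    extractor = featureextractor.RadiomicsFeatureExtractor()

    def texture_features(image_path, mask_path):
        """Extract radiomic features from one (MRI, sacroiliac-joint mask) pair on disk."""
        result = extractor.execute(image_path, mask_path)
        # Keep numeric features (shape, GLCM, GLRLM, ...), dropping diagnostic metadata.
        return {k: float(v) for k, v in result.items() if not k.startswith("diagnostics")}

    # Associative analysis on synthetic precomputed feature vectors
    # (47 patients x 30 texture features, binary sacroiliitis label).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(47, 30))
    y = rng.integers(0, 2, size=47)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())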

Tenório Ariane Priscilla Magalhães, Faleiros Matheus Calil, Junior José Raniery Ferreira, Dalto Vitor Faeda, Assad Rodrigo Luppino, Louzada-Junior Paulo, Yoshida Hiroyuki, Nogueira-Barbosa Marcello Henrique, de Azevedo-Marques Paulo Mazzoncini

2020-Jun-30

Magnetic resonance imaging, Radiomic biomarkers, Sacroiliitis, Spondyloarthritis

General General

Machine learning for the identification of clinically significant prostate cancer on MRI: a meta-analysis.

In European radiology ; h5-index 62.0

OBJECTIVES : The aim of this study was to systematically review the literature and perform a meta-analysis of machine learning (ML) diagnostic accuracy studies focused on clinically significant prostate cancer (csPCa) identification on MRI.

METHODS : Multiple medical databases were systematically searched for studies on ML applications in csPCa identification up to July 31, 2019. Two reviewers screened all papers independently for eligibility. The area under the receiver operating characteristic curves (AUC) was pooled to quantify predictive accuracy. A random-effects model estimated overall effect size while statistical heterogeneity was assessed with the I2 value. A funnel plot was used to investigate publication bias. Subgroup analyses were performed based on reference standard (biopsy or radical prostatectomy) and ML type (deep and non-deep).

RESULTS : After the final revision, 12 studies were included in the analysis. Statistical heterogeneity was high both in overall and in subgroup analyses. The overall pooled AUC for ML in csPCa identification was 0.86, with 0.81-0.91 95% confidence intervals (95%CI). The biopsy subgroup (n = 9) had a pooled AUC of 0.85 (95%CI = 0.79-0.91) while the radical prostatectomy one (n = 3) of 0.88 (95%CI = 0.76-0.99). Deep learning ML (n = 4) had a 0.78 AUC (95%CI = 0.69-0.86) while the remaining 8 had AUC = 0.90 (95%CI = 0.85-0.94).

CONCLUSIONS : ML pipelines using prostate MRI to identify csPCa showed good accuracy and should be further investigated, possibly with better standardisation in design and reporting of results.

KEY POINTS : • Overall pooled AUC was 0.86 with 0.81-0.91 95% confidence intervals. • In the reference standard subgroup analysis, algorithm accuracy was similar with pooled AUCs of 0.85 (0.79-0.91 95% confidence intervals) and 0.88 (0.76-0.99 95% confidence intervals) for studies employing biopsies and radical prostatectomy, respectively. • Deep learning pipelines performed worse (AUC = 0.78, 0.69-0.86 95% confidence intervals) than other approaches (AUC = 0.90, 0.85-0.94 95% confidence intervals).
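
The random-effects pooling and I² heterogeneity measure mentioned above are standard; a sketch of the usual DerSimonian-Laird form, consistent with (but not spelled out in) the abstract, is:

    \hat{\theta}_{\mathrm{RE}} = \frac{\sum_{i=1}^{k} w_i^{*}\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i^{*}},
    \qquad w_i^{*} = \frac{1}{v_i + \hat{\tau}^2}, \qquad w_i = \frac{1}{v_i},

    Q = \sum_{i=1}^{k} w_i\,\bigl(\hat{\theta}_i - \hat{\theta}_{\mathrm{FE}}\bigr)^2, \qquad
    \hat{\tau}^2 = \max\!\Bigl(0,\; \frac{Q-(k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\Bigr), \qquad
    I^2 = \max\!\Bigl(0,\; \frac{Q-(k-1)}{Q}\Bigr)\times 100\%,

where \hat{\theta}_i is the effect size (here the AUC or its transform) from study i, v_i its within-study variance, \hat{\theta}_{\mathrm{FE}} the fixed-effect pooled estimate, \hat{\tau}^2 the between-study variance, and k the number of studies.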

Cuocolo Renato, Cipullo Maria Brunella, Stanzione Arnaldo, Romeo Valeria, Green Roberta, Cantoni Valeria, Ponsiglione Andrea, Ugga Lorenzo, Imbriaco Massimo

2020-Jun-30

Machine learning, Magnetic resonance imaging, Meta-analysis, Prostatic neoplasms

General General

Predictive Modeling of Pressure Injury Risk in Patients Admitted to an Intensive Care Unit.

In American journal of critical care : an official publication, American Association of Critical-Care Nurses

BACKGROUND : Pressure injuries are an important problem in hospital care. Detecting the population at risk for pressure injuries is the first step in any preventive strategy. Available tools such as the Norton and Braden scales do not take into account all of the relevant risk factors. Data mining and machine learning techniques have the potential to overcome this limitation.

OBJECTIVES : To build a model to detect pressure injury risk in intensive care unit patients and to put the model into production in a real environment.

METHODS : The sample comprised adult patients admitted to an intensive care unit (N = 6694) at University Hospital of Torrevieja and University Hospital of Vinalopó. A retrospective design was used to train (n = 2508) and test (n = 1769) the model and then a prospective design was used to test the model in a real environment (n = 2417). Data mining was used to extract variables from electronic medical records and a predictive model was built with machine learning techniques. The sensitivity, specificity, area under the curve, and accuracy of the model were evaluated.

RESULTS : The final model used logistic regression and incorporated 23 variables. The model had sensitivity of 0.90, specificity of 0.74, and area under the curve of 0.89 during the initial test, and thus it outperformed the Norton scale. The model performed well 1 year later in a real environment.

CONCLUSIONS : The model effectively predicts risk of pressure injury. This allows nurses to focus on patients at high risk for pressure injury without increasing workload.

Ladios-Martin Mireia, Fernández-de-Maya José, Ballesta-López Francisco-Javier, Belso-Garzas Adrián, Mas-Asencio Manuel, Cabañero-Martínez María José

2020-Jul-01

General General

When Does Stand-Alone Software Qualify as A Medical Device in the European Union?-The Cjeu's Decision in Snitem and What it Implies for the Next Generation of Medical Devices.

In Medical law review

This contribution analyses the first decision by the Court of Justice of the European Union (CJEU) on the qualification and regulation of stand-alone software as medical devices. Referring to the facts of the case and the applicable European Union (EU) regulatory framework, the Court specifically found that prescription support software may constitute a medical device. This would even be the case where the software does not act directly in or on the human body. Yet, according to the CJEU, it is necessary that the intended purpose falls within one or more of the 'medical purpose' categories of the regulatory definition of 'medical device'. The case has important implications, not only for specific legal debates, but it also signifies a paradigm shift with a rapidly increasing digitalisation of the health and life sciences. This highlights the demand for continuous debates over the necessary evolution of the regulatory framework applying to the interface of medical artificial intelligence (AI) and Big Data.

Minssen Timo, Mimler Marc, Mak Vivian

2020-Jun-30

Health care, Medical Devices Directive, Medical devices, Software, eHealth

Cardiology Cardiology

Adapting and evaluating a deep learning language model for clinical why-question answering.

In JAMIA open

Objectives : To adapt and evaluate a deep learning language model for answering why-questions based on patient-specific clinical text.

Materials and Methods : Bidirectional encoder representations from transformers (BERT) models were trained with varying data sources to perform SQuAD 2.0 style why-question answering (why-QA) on clinical notes. The evaluation focused on: (1) comparing the merits from different training data and (2) error analysis.

Results : The best model achieved an accuracy of 0.707 (or 0.760 by partial match). Customizing the training to clinical language increased accuracy by 6%.

Discussion : The error analysis suggested that the model did not really perform deep reasoning and that clinical why-QA might warrant more sophisticated solutions.

Conclusion : The BERT model achieved moderate accuracy in clinical why-QA and should benefit from the rapidly evolving technology. Despite the identified limitations, it could serve as a competent proxy for question-driven clinical information extraction.
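
A minimal sketch of SQuAD 2.0-style extractive why-question answering with a BERT reader, assuming the Hugging Face transformers library; the checkpoint name is an assumed publicly available SQuAD 2.0 model (not the study's customized clinical model), and the clinical note below is invented:

    from transformers import pipeline

    qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")

    note = ("The patient was started on metoprolol because of persistent "
            "atrial fibrillation with rapid ventricular response.")
    result = qa(question="Why was the patient started on metoprolol?",
                context=note,
                handle_impossible_answer=True)   # SQuAD 2.0 style allows "no answer"
    print(result["answer"], result["score"])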

Wen Andrew, Elwazir Mohamed Y, Moon Sungrim, Fan Jungwei

2020-Apr

artificial intelligence, clinical decision-making, evaluation studies, natural language processing, question answering

Internal Medicine Internal Medicine

Stigma, biomarkers, and algorithmic bias: recommendations for precision behavioral health with artificial intelligence.

In JAMIA open

Effective implementation of artificial intelligence in behavioral healthcare delivery depends on overcoming challenges that are pronounced in this domain. Self and social stigma contribute to under-reported symptoms, and under-coding worsens ascertainment. Health disparities contribute to algorithmic bias. Lack of reliable biological and clinical markers hinders model development, and model explainability challenges impede trust among users. In this perspective, we describe these challenges and discuss design and implementation recommendations to overcome them in intelligent systems for behavioral and mental health.

Walsh Colin G, Chaudhry Beenish, Dua Prerna, Goodman Kenneth W, Kaplan Bonnie, Kavuluru Ramakanth, Solomonides Anthony, Subbian Vignesh

2020-Apr

artificial intelligence, behavioral health, ethics, health disparities, algorithms, mental health, precision medicine, predictive modeling

Cardiology Cardiology

From Local Explanations to Global Understanding with Explainable AI for Trees.

In Nature machine intelligence

Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are popular non-linear predictive models, yet comparatively little attention has been paid to explaining their predictions. Here, we improve the interpretability of tree-based models through three main contributions: 1) The first polynomial time algorithm to compute optimal explanations based on game theory. 2) A new type of explanation that directly measures local feature interaction effects. 3) A new set of tools for understanding global model structure based on combining many local explanations of each prediction. We apply these tools to three medical machine learning problems and show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to i) identify high magnitude but low frequency non-linear mortality risk factors in the US population, ii) highlight distinct population sub-groups with shared risk characteristics, iii) identify non-linear interaction effects among risk factors for chronic kidney disease, and iv) monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of tree-based machine learning models, these improvements to their interpretability have implications across a broad set of domains.
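
The shap package provides a public TreeExplainer implementing this family of tree explanations; a minimal sketch on a generic synthetic regression task (not the paper's medical cohorts) showing local explanations, pairwise interaction values, and a simple local-to-global importance summary:

    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)                       # one local explanation per prediction
    interactions = explainer.shap_interaction_values(X[:100])    # local pairwise interaction effects

    # Combining many local explanations yields a global view: mean |SHAP value|
    # per feature is a simple global importance ranking.
    global_importance = np.abs(shap_values).mean(axis=0)
    print("feature ranking:", np.argsort(global_importance)[::-1])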

Lundberg Scott M, Erion Gabriel, Chen Hugh, DeGrave Alex, Prutkin Jordan M, Nair Bala, Katz Ronit, Himmelfarb Jonathan, Bansal Nisha, Lee Su-In

2020-Jan

General General

Logistic regression has similar performance to optimised machine learning algorithms in a clinical setting: application to the discrimination between type 1 and type 2 diabetes in young adults.

In Diagnostic and prognostic research

Background : There is much interest in the use of prognostic and diagnostic prediction models in all areas of clinical medicine. The use of machine learning to improve prognostic and diagnostic accuracy in this area has been increasing at the expense of classic statistical models. Previous studies have compared performance between these two approaches but their findings are inconsistent and many have limitations. We aimed to compare the discrimination and calibration of seven models built using logistic regression and optimised machine learning algorithms in a clinical setting, where the number of potential predictors is often limited, and externally validate the models.

Methods : We trained models using logistic regression and six commonly used machine learning algorithms to predict if a patient diagnosed with diabetes has type 1 diabetes (versus type 2 diabetes). We used seven predictor variables (age, BMI, GADA islet-autoantibodies, sex, total cholesterol, HDL cholesterol and triglyceride) using a UK cohort of adult participants (aged 18-50 years) with clinically diagnosed diabetes recruited from primary and secondary care (n = 960, 14% with type 1 diabetes). Discrimination performance (ROC AUC), calibration and decision curve analysis of each approach was compared in a separate external validation dataset (n = 504, 21% with type 1 diabetes).

Results : Average performance obtained in internal validation was similar in all models (ROC AUC ≥ 0.94). In external validation, there were very modest reductions in discrimination, with ROC AUC remaining ≥ 0.93 for all methods. Logistic regression had the numerically highest value in external validation (ROC AUC 0.95). Logistic regression had good performance in terms of calibration and decision curve analysis. Neural network and gradient boosting machine had the best calibration performance. Both logistic regression and support vector machine had good decision curve analysis for clinically useful threshold probabilities.

Conclusion : Logistic regression performed as well as optimised machine learning algorithms in classifying patients with type 1 and type 2 diabetes. This study highlights the utility of comparing traditional regression modelling to machine learning, particularly when using a small number of well-understood, strong predictor variables.
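
A sketch of the kind of comparison described above on synthetic data: a small set of predictors, logistic regression versus a boosted-tree model, judged on both discrimination (ROC AUC) and calibration. The data, models, and binning are illustrative assumptions, not the study's cohort or tuning:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.calibration import calibration_curve

    X, y = make_classification(n_samples=1500, n_features=7, n_informative=5,
                               weights=[0.85, 0.15], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "gradient_boosting": GradientBoostingClassifier(random_state=0),
    }
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        p = m.predict_proba(X_te)[:, 1]
        frac_pos, mean_pred = calibration_curve(y_te, p, n_bins=5)
        print(name,
              "| ROC AUC:", round(roc_auc_score(y_te, p), 3),
              "| calibration (observed vs predicted):",
              np.round(frac_pos, 2), np.round(mean_pred, 2))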

Lynam Anita L, Dennis John M, Owen Katharine R, Oram Richard A, Jones Angus G, Shields Beverley M, Ferrat Lauric A

2020

Logistic regression, Machine learning, Model selection