Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.


General

Hybrid support vector machine optimization model for inversion of tunnel transient electromagnetic method.

In Mathematical biosciences and engineering : MBE

The transient electromagnetic method (TEM) can effectively predict adverse geological conditions, and is widely used in underground engineering fields such as coal mining and tunneling. Accurate evaluation of adverse geological features is a crucial problem that requires urgent solutions. TEM inversion is an essential tool in solving such problems. However, the three-dimensional full-space detection of tunnels and its inversion are not sufficiently developed. Therefore, combining a least-squares support vector machine (LSSVM) with particle swarm optimization (PSO), this paper proposes a tunnel TEM inversion approach. Firstly, the PSO algorithm is adopted to optimize the LSSVM model, thus overcoming the randomness and uncertainty of model parameter selection. An orthogonal test method is adopted to optimize the initial parameter combination of the PSO algorithm, which further improves the accuracy of our PSO-LSSVM model. Numerical simulations are conducted to generate 125 sets of original data. The optimized PSO-LSSVM model is then used to predict certain values of the original data. Finally, the optimization model is compared with conventional machine learning methods, and the results show that the randomness of the initial parameters of the PSO algorithm has been reduced and the optimization effect has been improved. The optimized PSO algorithm further improves the stability and accuracy of the generalization ability of the model. Through a comparison of different machine learning methods and laboratory model tests, it is verified that the optimized PSO-LSSVM model proposed in this paper is an effective technique for tunnel TEM detection inversion.
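For readers unfamiliar with the PSO-LSSVM combination, here is a minimal illustrative sketch, not the authors' implementation: plain kernel ridge regression stands in for the LSSVM, and a standard global-best PSO (with typical, assumed inertia and acceleration constants) tunes the kernel width and regularization on a toy validation set.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, gamma, lam):
    # Kernel ridge regression, used here as a stand-in for LSSVM regression
    K = rbf_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return rbf_kernel(Xte, Xtr, gamma) @ alpha

def pso(objective, lo, hi, n_particles=20, iters=40, seed=0):
    # Standard global-best PSO; constants 0.7 / 1.5 / 1.5 are common defaults
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Toy problem: tune (gamma, log10 lambda) to fit a noisy sine wave
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (40, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=40)
Xtr, ytr, Xva, yva = X[:30], y[:30], X[30:], y[30:]
obj = lambda p: float(((fit_predict(Xtr, ytr, Xva, p[0], 10 ** p[1]) - yva) ** 2).mean())
best, best_f = pso(obj, np.array([0.1, -6.0]), np.array([50.0, 0.0]))
```

The orthogonal-test tuning of PSO's own initial parameters, a key step in the paper, is not reproduced here.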

Liang Xiao, Qi Tai Yue, Jin Zhi Yi, Qian Wang Ping

2020-Jun-01

Hybrid support vector machine, inversion method, particle swarm optimization, transient electromagnetic method

General

An improved spotted hyena optimizer for PID parameters in an AVR system.

In Mathematical biosciences and engineering : MBE

In this paper, an improved spotted hyena optimizer (ISHO) with a nonlinear convergence factor is proposed for proportional integral derivative (PID) parameter optimization in an automatic voltage regulator (AVR). In the proposed ISHO, an opposition-based learning strategy is used to initialize the spotted hyena individual's position in the search space, which strengthens the diversity of individuals in the global searching process. A novel nonlinear update equation for the convergence factor is used to enhance the SHO's exploration and exploitation abilities. The experimental results show that the proposed ISHO algorithm performed better than other algorithms in terms of the solution precision and convergence rate.
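The two ISHO ingredients are simple to state in code. Below is a hedged sketch: the quadratic decay of the control factor h is one plausible nonlinear form (the abstract does not give the exact equation; standard SHO decreases h linearly from 5 to 0), and the opposition-based initializer shown here only generates the opposite points, whereas the real algorithm would keep whichever of each pair has better fitness.

```python
import numpy as np

def opposition_init(n, lb, ub, seed=0):
    """Opposition-based learning initializer: for each random point x,
    also form its 'opposite' lb + ub - x. (A full ISHO would evaluate
    fitness and keep the better of each pair.)"""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, size=(n, len(lb)))
    return x, (lb + ub) - x

def convergence_factor(t, T, h0=5.0):
    """One plausible nonlinear decay of SHO's control factor h from h0
    to 0; the standard SHO uses a linear decrease."""
    return h0 * (1.0 - (t / T) ** 2)

lb, ub = np.zeros(2), np.array([10.0, 10.0])
x, x_opp = opposition_init(6, lb, ub)
```

The nonlinear schedule stays larger than the linear one early on (more exploration) and shrinks faster late (more exploitation).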

Zhou Guo, Li Jie, Tang Zhong Hua, Luo Qi Fang, Zhou Yong Quan

2020-May-25

PID parameter optimization, metaheuristic, nonlinear convergence factor, opposition-based learning, spotted hyena optimizer

Radiology

The Value of Quantitative Musculoskeletal Imaging.

In Seminars in musculoskeletal radiology

Musculoskeletal imaging is mainly based on the subjective and qualitative analysis of imaging examinations. However, integration of quantitative assessment of imaging data could increase the value of imaging in both research and clinical practice. Some imaging modalities, such as perfusion magnetic resonance imaging (MRI), diffusion MRI, or T2 mapping, are intrinsically quantitative. But conventional morphological imaging can also be analyzed through the quantification of various parameters. The quantitative data retrieved from imaging examinations can serve as biomarkers and be used to support diagnosis, determine patient prognosis, or monitor therapy. We focus on the value, or clinical utility, of quantitative imaging in the musculoskeletal field. There is currently a trend to move from volume- to value-based payments. This review contains definitions and examines the role that quantitative imaging may play in the implementation of value-based health care. The influence of artificial intelligence on the value of quantitative musculoskeletal imaging is also discussed.

Visser Jacob J, Goergen Stacy K, Klein Stefan, Noguerol Teodoro Martín, Pickhardt Perry J, Fayad Laura M, Omoumi Patrick

2020-Aug

Radiology

Improving Quantitative Magnetic Resonance Imaging Using Deep Learning.

In Seminars in musculoskeletal radiology

Deep learning methods have shown promising results for accelerating quantitative musculoskeletal (MSK) magnetic resonance imaging (MRI) for T2 and T1ρ relaxometry. These methods have been shown to improve musculoskeletal tissue segmentation on parametric maps, allowing efficient and accurate T2 and T1ρ relaxometry analysis for monitoring and predicting MSK diseases. Deep learning methods have shown promising results for disease detection on quantitative MRI with diagnostic performance superior to conventional machine-learning methods for identifying knee osteoarthritis.
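As context for what the deep learning methods accelerate: conventional T2 relaxometry fits a mono-exponential decay S(TE) = S0 · exp(-TE/T2) voxel by voxel. A minimal log-linear least-squares version of that conventional fit (not the paper's network) looks like this:

```python
import numpy as np

def fit_t2(te, signal):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2).
    Returns (S0, T2); assumes positive signals, e.g. one voxel's echo train."""
    te = np.asarray(te, float)
    y = np.log(np.asarray(signal, float))
    slope, intercept = np.polyfit(te, y, 1)  # log S = log S0 - TE / T2
    return float(np.exp(intercept)), float(-1.0 / slope)

# Synthetic echo train: S0 = 1000, T2 = 40 ms, 8 echoes at 10-80 ms
te = np.arange(10, 90, 10.0)
sig = 1000.0 * np.exp(-te / 40.0)
s0, t2 = fit_t2(te, sig)
```

Accelerated acquisitions undersample this echo train; the cited deep learning work reconstructs the parametric maps that such fits would otherwise require fully sampled data to produce.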

Liu Fang

2020-Aug

General

Patient-adaptable intracranial pressure morphology analysis using a probabilistic model-based approach.

In Physiological measurement ; h5-index 36.0

OBJECTIVE : We present a framework for analyzing the intracranial pressure (ICP) morphology. Analyzing ICP signals is challenging due to the non-linear and non-Gaussian characteristics of the signal dynamics, inevitable corruption with noise and artifacts, and variations in the ICP pulse morphology among individuals with different neurological conditions. Existing frameworks make unrealistic assumptions regarding ICP dynamics and are not tuned for individual patients.

APPROACH : We propose a dynamic Bayesian network (DBN) for automated detection of three major ICP pulsatile components. The proposed model captures the non-linear and non-Gaussian dynamics of the ICP morphology and further adapts to a patient as the individual's ICP measurements are received. To make the approach more robust, we leverage evidence reversal and present an inference algorithm to obtain the posterior distribution over the locations of pulsatile components.

RESULTS : We evaluate our approach on a dataset with over 700 hours of recordings from 66 neurological patients, where the pulsatile components have been annotated in prior studies. The algorithm obtains an accuracy of 96.56%, 92.39%, and 94.04% for detecting each pulsatile component on the test set, showing significant improvements over existing approaches.

SIGNIFICANCE : Continuous ICP monitoring is essential in guiding the treatment of neurological conditions such as traumatic brain injuries. An automated approach for ICP morphology analysis takes a step toward enhancing patient care with minimal supervision. Compared to previous methods, our framework offers several advantages. It learns the parameters that model each patient's ICP in an unsupervised manner, resulting in an accurate morphology analysis. The Bayesian model-based framework provides uncertainty estimates and reveals interesting facts about ICP dynamics. The framework can be readily applied to replace existing morphological analysis methods and support the application of ICP pulse morphological features to aid the monitoring of pathophysiological changes of relevance to the care of patients with acute brain injuries.
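The paper's DBN inference (with evidence reversal) is more elaborate, but the model-based filtering idea can be illustrated with a generic bootstrap particle filter on a toy random-walk signal; everything below (model, noise levels) is illustrative, not the authors' ICP model.

```python
import numpy as np

def bootstrap_pf(obs, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns the posterior-mean estimate per step."""
    rng = np.random.default_rng(seed)
    parts = rng.normal(0.0, 1.0, n_particles)
    est = []
    for y in obs:
        parts = parts + rng.normal(0.0, q, n_particles)   # propagate
        logw = -0.5 * ((y - parts) / r) ** 2              # likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(float(np.sum(w * parts)))
        parts = parts[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(est)

# Toy track: simulate a random walk, observe it noisily, filter
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 0.1, 200))
y = x + rng.normal(0, 0.5, 200)
xhat = bootstrap_pf(y)
rmse = float(np.sqrt(np.mean((xhat - x) ** 2)))
```

The filtered estimate should track the latent state substantially better than the raw observations, which is the payoff of the model-based approach.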

Rashidinejad Paria, Hu Xiao, Russell Stuart

2020-Sep-29

Artificial Intelligence in Healthcare, Dynamic Bayesian Network, ICP, Model-Based Probabilistic Inference, Particle Filter, Patient Monitoring

General

Deep Learning-Based Approach for the Diagnosis of Moyamoya Disease.

In Journal of stroke and cerebrovascular diseases : the official journal of National Stroke Association

OBJECTIVES : Moyamoya disease is a unique cerebrovascular disorder that is characterized by chronic bilateral stenosis of the internal carotid arteries and by the formation of an abnormal vascular network called moyamoya vessels. In this study, the authors inspected whether differentiation between patients with moyamoya disease and those with atherosclerotic disease or normal controls might be possible by using deep machine learning technology.

MATERIALS AND METHODS : This study included 84 consecutive patients diagnosed with moyamoya disease at our hospital between April 2009 and July 2016. In each patient, two axial continuous slices of T2-weighted imaging at the level of the basal cistern, basal ganglia, and centrum semiovale were acquired. The image sets were processed using code written in the programming language Python 3.7. A deep learning model with fine-tuning was developed using VGG16 and comprised several layers.

RESULTS : The accuracies of distinguishing between patients with moyamoya disease and those with atherosclerotic disease or controls in the basal cistern, basal ganglia, and centrum semiovale levels were 92.8, 84.8, and 87.8%, respectively.

CONCLUSION : The authors showed excellent results in terms of the accuracy of differential diagnosis of moyamoya disease using AI with conventional T2-weighted images. The authors suggest the possibility of diagnosing moyamoya disease using an AI technique and demonstrate the area of interest on which the AI focuses while processing magnetic resonance images.

Akiyama Yukinori, Mikami Takeshi, Mikuni Nobuhiro

2020-Sep-25

Artificial intelligence, Deep learning, Diagnostic accuracy, Moyamoya disease

General

Development and Validation of Machine Learning-Based Prediction for Dependence in the Activities of Daily Living after Stroke Inpatient Rehabilitation: A Decision-Tree Analysis.

In Journal of stroke and cerebrovascular diseases : the official journal of National Stroke Association

BACKGROUND AND PURPOSE : Accurate prediction using simple and changeable variables is clinically meaningful because some known predictors, such as stroke severity and patient age, cannot be modified with rehabilitative treatment. There are limited clinical prediction rules (CPRs) that have been established using only changeable variables to predict the activities of daily living (ADL) dependence of stroke patients. This study aimed to develop and assess CPRs using machine learning-based methods to identify ADL dependence in stroke patients.

METHODS : In total, 1125 stroke patients were investigated. We used a maintained database of all stroke patients who were admitted to the convalescence rehabilitation ward of our facility. The classification and regression tree (CART) methodology with only the Functional Independence Measure (FIM) subscores was used to predict ADL dependence.

RESULTS : The CART method identified FIM transfer (bed, chair, and wheelchair) (score ≤ 4.0 or > 4.0) as the best single discriminator for ADL dependence. Among those with FIM transfer (bed, chair, and wheelchair) score > 4.0, the next best predictor was FIM bathing (score ≤ 2.0 or > 2.0). Among those with FIM transfer (bed, chair, and wheelchair) score ≤ 4.0, the next predictor was FIM transfer toilet (score ≤ 3 or > 3). The accuracy of the CART model was 0.830 (95% confidence interval, 0.804-0.856).
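CART arrives at thresholds like "FIM transfer ≤ 4.0" by exhaustively testing candidate cut points and keeping the one that minimizes weighted Gini impurity. A minimal single-feature split finder (illustrative only; the toy scores and labels below are made up, not the study's data):

```python
def gini(labels):
    # Gini impurity of a binary label list
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2.0 * p * (1.0 - p)

def best_split(scores, labels):
    """Find the threshold t on a single feature (e.g., an FIM subscore)
    minimizing the weighted Gini impurity of the split 'score <= t'."""
    best_t, best_g = None, float("inf")
    for t in sorted(set(scores)):
        left = [l for s, l in zip(scores, labels) if s <= t]
        right = [l for s, l in zip(scores, labels) if s > t]
        if not left or not right:
            continue
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

# Toy cohort: dependence (label 1) concentrated at low transfer scores
scores = [1, 2, 3, 4, 5, 6, 7, 2, 3, 6]
labels = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
t, g = best_split(scores, labels)
```

Applied recursively to each resulting branch, this is exactly the mechanism that produced the transfer/bathing/toilet-transfer tree reported above.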

CONCLUSION : Machine learning-based CPRs with moderate predictive ability for the identification of ADL dependence in the stroke patients were developed.

Iwamoto Yuji, Imura Takeshi, Tanaka Ryo, Imada Naoki, Inagawa Tetsuji, Araki Hayato, Araki Osamu

2020-Sep-26

Activities of daily living, Decision-tree analysis, Prediction, Rehabilitation, Stroke

Radiology

Radiomic Model for Distinguishing Dissecting Aneurysms from Complicated Saccular Aneurysms on high-Resolution Magnetic Resonance Imaging.

In Journal of stroke and cerebrovascular diseases : the official journal of National Stroke Association

OBJECTIVE : To build a radiomic model for differentiating dissecting aneurysms (DA) from complicated saccular aneurysms (SA) on high-resolution magnetic resonance imaging (HR-MRI) using a machine-learning algorithm.

METHODS : Overall, 851 radiomic features from 77 cases were retrospectively analyzed, and the ElasticNet algorithm was used to build the radiomic model. A clinico-radiological model using clinical features and conventional MRI findings was also built. An integrated model was then built by incorporating the radiomic model and the clinico-radiological model. The diagnostic abilities of these models were evaluated using leave-one-out cross-validation and quantified using receiver operating characteristic (ROC) analysis. The diagnostic performance of radiologists was also evaluated for comparison.

RESULTS : Five features were used to form the radiomic model, which yielded an area under the ROC curve (AUC) of 0.912 (95 % CI 0.846-0.976), sensitivity of 0.852, and specificity of 0.861. The radiomic model achieved a better diagnostic performance than the clinico-radiological model (AUC=0.743, 95 % CI 0.623-0.862), integrated model (AUC=0.888, 95 % CI 0.811-0.965), and even many radiologists.
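The AUC values reported here have a simple rank-statistic interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (Mann-Whitney U). A minimal computation, independent of the paper's code:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    P(random positive scores above random negative), ties counting 1/2."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins) / (len(pos) * len(neg))

a = auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])  # perfectly ranked toy scores
```

This is the quantity the leave-one-out predictions were summarized with (AUC 0.912 for the radiomic model).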

CONCLUSION : Radiomic features derived from HR-MRI can reliably be used to build a radiomic model for effectively differentiating between DA and complicated SA, and it can provide an objective basis for the selection of clinical treatment plan.

Cao Xin, Xia Wei, Tang Ye, Zhang Bo, Yang Jinming, Zeng Yanwei, Geng Daoying, Zhang Jun

2020-Sep-08

Aneurysm, High-resolution magnetic resonance imaging, Machine-learning, Radiomics

General

Automated diagnostic tool for hypertension using convolutional neural network.

In Computers in biology and medicine

BACKGROUND : Hypertension (HPT) occurs when there is increase in blood pressure (BP) within the arteries, causing the heart to pump harder against a higher afterload to deliver oxygenated blood to other parts of the body.

PURPOSE : Due to fluctuation in BP, 24-h ambulatory blood pressure monitoring has emerged as a useful tool for diagnosing HPT but is limited by its inconvenience. So, an automatic diagnostic tool using electrocardiogram (ECG) signals is used in this study to detect HPT automatically.

METHOD : The pre-processed signals are fed to a convolutional neural network model. The model learns and identifies unique ECG signatures for the classification of normal and hypertension ECG signals. The proposed model is evaluated with both 10-fold and leave-one-patient-out validation techniques.
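The distinction between the two validation schemes matters clinically: leave-one-patient-out guarantees no patient's beats appear in both training and test sets. A minimal hand-rolled fold generator illustrating the idea (scikit-learn's GroupKFold provides the same guarantee; the round-robin assignment here is an illustrative choice):

```python
def grouped_folds(patient_ids, k=None):
    """Split sample indices into folds so no patient appears in more
    than one fold. k=None gives leave-one-patient-out."""
    patients = sorted(set(patient_ids))
    k = k or len(patients)
    groups = [patients[i::k] for i in range(k)]  # round-robin patients to folds
    folds = []
    for g in groups:
        test = [i for i, p in enumerate(patient_ids) if p in g]
        train = [i for i in range(len(patient_ids)) if i not in test]
        folds.append((train, test))
    return folds

ids = ["a", "a", "b", "b", "c", "c"]   # two samples per patient
lopo = grouped_folds(ids)              # leave-one-patient-out
cv2 = grouped_folds(ids, k=2)          # 2 patient-grouped folds
```

Plain record-level 10-fold validation, by contrast, can place beats from the same patient on both sides of the split and inflate accuracy.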

RESULTS : A high classification accuracy of 99.99% is achieved for both validation techniques. This is one of the first few studies to have employed deep learning algorithm coupled with ECG signals for the detection of HPT. Our results imply that the developed tool is useful in a hospital setting as an automated diagnostic tool, enabling the effortless detection of HPT using ECG signals.

Soh Desmond Chuang Kiat, Ng E Y K, Jahmunah V, Oh Shu Lih, Tan Ru San, Acharya U Rajendra

2020-Sep-17

10-Fold validation, Automated diagnostic tool, Convolutional neural network, Hypertension, Leave one patient out validation, Masked hypertension

General

Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms.

In International journal of medical informatics ; h5-index 49.0

OBJECTIVE : This study aims to develop and test a new computer-aided diagnosis (CAD) scheme of chest X-ray images to detect coronavirus (COVID-19) infected pneumonia.

METHOD : The CAD scheme first applies two image preprocessing steps: removing the majority of the diaphragm region, then processing the original image with a histogram equalization algorithm and a bilateral low-pass filter. The original image and the two filtered images are used to form a pseudo color image, which is fed into the three input channels of a transfer learning-based convolutional neural network (CNN) model to classify chest X-ray images into three classes: COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases. To build and test the CNN model, a publicly available dataset of 8474 chest X-ray images is used, comprising 415, 5179, and 2880 cases in the three classes, respectively. The dataset is randomly divided into training, validation, and testing subsets, preserving the same frequency of cases in each class.
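Of the two filtering operations, histogram equalization is the simpler to sketch: remap intensities through the normalized cumulative histogram so contrast spreads across the full range. A minimal numpy version on a synthetic low-contrast image (illustrative; the paper's exact implementation and the bilateral filter are not reproduced):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization of an 8-bit image via its CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]  # apply the lookup table pixel-wise

# A dark, low-contrast synthetic image gets stretched toward full range
img = np.clip(np.random.default_rng(0).normal(40, 10, (64, 64)), 0, 255).astype(np.uint8)
eq = hist_equalize(img)
```

Stacking the original, equalized, and bilateral-filtered images as three channels is what turns a grayscale X-ray into the pseudo color input the pretrained CNN expects.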

RESULTS : The CNN-based CAD scheme yields an overall accuracy of 94.5 % (2404/2544) with a 95 % confidence interval of [0.93,0.96] in classifying 3 classes. CAD also yields 98.4 % sensitivity (124/126) and 98.0 % specificity (2371/2418) in classifying cases with and without COVID-19 infection. However, without using two preprocessing steps, CAD yields a lower classification accuracy of 88.0 % (2239/2544).

CONCLUSION : This study demonstrates that adding two image preprocessing steps and generating a pseudo color image plays an important role in developing a deep learning CAD scheme of chest X-ray images to improve accuracy in detecting COVID-19 infected pneumonia.

Heidari Morteza, Mirniaharikandehei Seyedehnafiseh, Khuzani Abolfazl Zargari, Danala Gopichandh, Qiu Yuchen, Zheng Bin

2020-Sep-23

COVID-19 diagnosis, Computer-aided diagnosis, Convolution neural network (CNN), Coronavirus, Disease classification, VGG16 network

Oncology

A tutorial review of mathematical techniques for quantifying tumor heterogeneity.

In Mathematical biosciences and engineering : MBE

Intra-tumor and inter-patient heterogeneity are two challenges in developing mathematical models for precision medicine diagnostics. Here we review several techniques that can be used to aid the mathematical modeller in inferring and quantifying both sources of heterogeneity from patient data. These techniques include virtual populations, nonlinear mixed effects modeling, non-parametric estimation, Bayesian techniques, and machine learning. We create simulated virtual populations in this study and then apply the four remaining methods to these datasets to highlight the strengths and weak-nesses of each technique. We provide all code used in this review at https://github.com/jtnardin/Tumor-Heterogeneity/ so that this study may serve as a tutorial for the mathematical modelling community. This review article was a product of a Tumor Heterogeneity Working Group as part of the 2018-2019 Program on Statistical, Mathematical, and Computational Methods for Precision Medicine which took place at the Statistical and Applied Mathematical Sciences Institute.
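The first technique, a virtual population, can be sketched in a few lines: draw each virtual patient's model parameters from plausible distributions and simulate their trajectories. The logistic growth model and the lognormal parameter ranges below are illustrative choices, not the review's; see the linked repository for the authors' actual code.

```python
import numpy as np

def logistic_growth(t, v0, r, K):
    """Closed-form logistic tumor volume V(t) with rate r, capacity K."""
    return K / (1.0 + (K / v0 - 1.0) * np.exp(-r * t))

def virtual_population(n, seed=0):
    """Sample (r, K) per virtual patient; lognormal parameters are
    illustrative assumptions."""
    rng = np.random.default_rng(seed)
    r = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=n)
    K = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n)
    return r, K

t = np.linspace(0.0, 60.0, 61)
r, K = virtual_population(100)
curves = np.array([logistic_growth(t, 0.1, ri, Ki) for ri, Ki in zip(r, K)])
```

Each row of `curves` is one virtual patient; the spread across rows is the inter-patient heterogeneity the other four methods then try to infer back from data.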

Everett Rebecca, Flores Kevin B, Henscheid Nick, Lagergren John, Larripa Kamila, Li Ding, Nardini John T, Nguyen Phuong T T, Pitman E Bruce, Rutter Erica M

2020-May-19

Bayesian estimation, cancer heterogeneity, generative adversarial networks, glioblastoma multiforme, machine learning, mathematical oncology, non-parametric estimation, nonlinear mixed effects, spatiotemporal data, tumor growth, variational autoencoders, virtual populations

Radiology

Pattern analysis of glucose metabolic brain data for lateralization of MRI-negative temporal lobe epilepsy.

In Epilepsy research

In this paper, we assessed the reliability of glucose metabolic brain data for identifying lateralization of magnetic resonance imaging (MRI)-negative temporal lobe epilepsy (TLE) patients. We designed and developed an efficacious and automatic metabolic-wise lateralization framework. The proposed lateralization framework comprises three main systematic stages. In the first stage, we pre-processed interictal fluorodeoxyglucose positron emission tomography images to extract glucose metabolic brain data. In the second stage, we used a voxel selection method involving a feature-ranking strategy to select the most discriminative metabolic voxels. Finally, we used a support vector machine followed by a 10-fold cross-validation strategy to assess the proposed lateralization framework in 27 patients with right MRI-negative TLE and 29 patients with left MRI-negative TLE. The proposed lateralization framework achieved an excellent accuracy of 96.43 %, in concordance with an experienced PET interpreter. Thus, we show that pattern analysis of glucose metabolic brain data can accurately lateralize MRI-negative TLE patients in the clinical setting.
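The feature-ranking stage can be sketched generically: score every voxel by how well it separates the two groups and keep the top-ranked ones. The two-sample t-score used below is one common ranking criterion, an assumption here since the abstract does not specify the strategy; the data are synthetic.

```python
import numpy as np

def rank_voxels(X, y, top_k=10):
    """Rank voxels (columns of X) by absolute two-sample t-score between
    classes y==0 and y==1; return indices of the top_k most discriminative."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(0) - b.mean(0)
    den = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    t = np.abs(num / (den + 1e-12))
    return np.argsort(t)[::-1][:top_k]

# Synthetic "metabolic" data: only voxel 3 truly differs between groups
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (40, 20))
y = np.repeat([0, 1], 20)
X[y == 1, 3] += 3.0
top = rank_voxels(X, y, top_k=5)
```

The selected voxels then feed the SVM; restricting the classifier to discriminative voxels is what keeps it tractable at whole-brain resolution.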

Beheshti Iman, Sone Daichi, Maikusa Norihide, Kimura Yukio, Shigemoto Yoko, Sato Noriko, Matsuda Hiroshi

2020-Sep-22

Brain metabolism, Epilepsy, FDG-PET, Lateralization, Machine learning

Public Health

From discourse to pathology: Automatic identification of Parkinson's disease patients via morphological measures across three languages.

In Cortex; a journal devoted to the study of the nervous system and behavior

Embodied cognition research on Parkinson's disease (PD) points to disruptions of frontostriatal language functions as sensitive targets for clinical assessment. However, no existing approach has been tested for crosslinguistic validity, let alone by combining naturalistic tasks with machine-learning tools. To address these issues, we conducted the first classifier-based examination of morphological processing (a core frontostriatal function) in spontaneous monologues from PD patients across three typologically different languages. The study comprised 330 participants, encompassing speakers of Spanish (61 patients, 57 matched controls), German (88 patients, 88 matched controls), and Czech (20 patients, 16 matched controls). All subjects described the activities they perform during a regular day, and their monologues were automatically coded via morphological tagging, a computerized method that labels each word with a part-of-speech tag (e.g., noun, verb) and specific morphological tags (e.g., person, gender, number, tense). The ensuing data were subjected to machine-learning analyses to assess whether differential morphological patterns could classify between patients and controls and reflect the former's degree of motor impairment. Results showed robust classification rates, with over 80% of patients being discriminated from controls in each language separately. Moreover, the most discriminative morphological features were associated with the patients' motor compromise (as indicated by Pearson r correlations between predicted and collected motor impairment scores that ranged from moderate to moderate-to-strong across languages). Taken together, our results suggest that morphological patterning, an embodied frontostriatal domain, may be distinctively affected in PD across languages and even under ecological testing conditions.
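The pipeline's feature step, turning a tagged monologue into classifier input, reduces to counting tags. A minimal illustration (the tagset and toy sentence are invented for the example; the study used full morphological tags per language, not just parts of speech):

```python
from collections import Counter

def morph_features(tagged_tokens, tagset):
    """Turn a morphologically tagged monologue into a normalized
    tag-frequency vector usable by a classifier."""
    counts = Counter(tag for _, tag in tagged_tokens)
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in tagset]

tags = ["NOUN", "VERB", "ADJ"]
doc = [("I", "PRON"), ("wake", "VERB"), ("up", "ADP"),
       ("early", "ADJ"), ("morning", "NOUN")]
vec = morph_features(doc, tags)
```

Because such frequency vectors are language-agnostic once a tagger exists, the same classifier recipe transfers across Spanish, German, and Czech, which is the crosslinguistic point of the study.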

Eyigoz Elif, Courson Melody, Sedeño Lucas, Rogg Katharina, Orozco-Arroyave Juan Rafael, Nöth Elmar, Skodda Sabine, Trujillo Natalia, Rodríguez Mabel, Rusz Jan, Muñoz Edinson, Cardona Juan F, Herrera Eduar, Hesse Eugenia, Ibáñez Agustín, Cecchi Guillermo, García Adolfo M

2020-Sep-08

Automated speech analysis, Cross-linguistic validity, Linguistic assessments, Morphology, Parkinson's disease

Oncology

Clinical evaluation of atlas- and deep learning-based automatic segmentation of multiple organs and clinical target volumes for breast cancer.

In Radiotherapy and oncology : journal of the European Society for Therapeutic Radiology and Oncology

Manual segmentation is the gold standard method for radiation therapy planning; however, it is time-consuming and prone to inter- and intra-observer variation, giving rise to interest in auto-segmentation methods. We evaluated the feasibility of deep learning-based auto-segmentation (DLBAS) in comparison to commercially available atlas-based segmentation solutions (ABAS) for breast cancer radiation therapy. This study used contrast-enhanced planning computed tomography scans from 62 patients with breast cancer who underwent breast-conservation surgery. Contours of clinical target volumes (CTVs), organs, and heart substructures were generated using two commercial ABAS solutions and DLBAS using a fully convolutional DenseNet. The accuracy of the segmentation was assessed in 14 test patients using the Dice Similarity Coefficient and Hausdorff Distance, referencing the expert contours. A sensitivity analysis was performed using non-contrast planning CT scans from 14 additional patients. Compared to ABAS, the proposed DLBAS model yielded more consistent results and the highest average Dice Similarity Coefficient values and lowest Hausdorff Distances, especially for CTVs and the substructures of the heart. ABAS showed limited performance in soft-tissue-based regions, such as the esophagus, cardiac arteries, and smaller CTVs. The results of sensitivity analysis between contrast and non-contrast CT test sets showed little difference in the performance of DLBAS and, conversely, a large discrepancy for ABAS. The proposed DLBAS algorithm was more consistent and robust in its performance than ABAS across the majority of structures when examining both CTVs and normal organs. DLBAS has great potential to aid a key process in the radiation therapy workflow, helping optimise and reduce the clinical workload.
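The headline metric here, the Dice Similarity Coefficient, is a one-liner on binary masks (the Hausdorff Distance, also reported, measures worst-case contour disagreement and is not shown):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

A DSC of 1.0 means perfect overlap with the expert contour; the values reported for DLBAS vs ABAS are averages of this quantity over test patients and structures.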

Choi Min Seo, Choi Byeong Su, Chung Seung Yeun, Kim Nalee, Chun Jaehee, Kim Yong Bae, Chang Jee Suk, Kim Jin Sung

2020-Sep-26

Artificial Intelligence, Breast cancer, Clinical Target Volume, Commercial Atlas-based autosegmentation, Deep learning-based autosegmentation, Organs at risk, Radiation Therapy

Surgery

Digital pattern recognition for the identification and classification of hypospadias using artificial intelligence vs. experienced pediatric urologist.

In Urology ; h5-index 45.0

OBJECTIVE : To improve the hypospadias classification system, we show the use of machine learning/image recognition to increase the objectivity of hypospadias recognition and classification. Hypospadias anatomical variables such as meatal location, quality of the urethral plate, glans size, and ventral curvature have been identified as predictors of post-operative outcomes, but there is still significant subjectivity between evaluators.

MATERIALS AND METHODS : A hypospadias image database with 1169 anonymized images (837 distal and 332 proximal) was used. Images were standardized (ventral aspect of the penis including the glans, shaft, and scrotum), classified into distal or proximal, and uploaded for training with TensorFlow®. Data from the training were outputted to TensorBoard to assess the loss function. The model was then run on a randomly selected set of 29 "test" images. The same set of images was distributed amongst expert clinicians in pediatric urology. Inter- and intra-rater analyses were performed using Fleiss' kappa on the same 29 images shown to the algorithm.

RESULTS : After training with 627 images, detection accuracy was 60%. With 1169 images, accuracy increased to 90%. Inter-rater agreement amongst expert pediatric urologists was k = 0.86 and intra-rater agreement was 0.74. The image recognition model emulates the almost-perfect inter-rater agreement between experts.
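Fleiss' kappa, the agreement statistic used here, generalizes Cohen's kappa to more than two raters. A minimal computation on an invented toy rating matrix (3 raters, 2 categories, 4 images; not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (items x categories) matrix of rating
    counts; every item must have the same number of ratings."""
    counts = np.asarray(counts, float)
    n = counts.sum(axis=1)[0]                  # raters per item
    p_cat = counts.sum(axis=0) / counts.sum()  # overall category proportions
    P_i = (counts * (counts - 1)).sum(axis=1) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), (p_cat ** 2).sum()
    return (P_bar - P_e) / (1.0 - P_e)

# 3 raters classify 4 images as distal (col 0) or proximal (col 1)
ratings = [[3, 0], [0, 3], [3, 0], [2, 1]]
kappa = fleiss_kappa(ratings)
```

Values above 0.8 are conventionally read as almost-perfect agreement, which is the benchmark the model is said to emulate.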

CONCLUSION : Our model emulates expert human classification of patients with distal/proximal hypospadias. Future applicability will be on standardizing the use of these technologies and their clinical applicability. The ability of using variables different than only anatomical will feed deep learning algorithms and possibly better assessments and predictions for surgical outcomes.

Fernandez Nicolas, Lorenzo Armando J, Rickard Mandy, Chua Michael, Pippi-Salle Joao L, Perez Jaime, Braga Luis H, Matava Clyde

2020-Sep-26

artificial intelligence, classification system, hypospadias, machine learning, penile curvature, prognosis

Dermatology

High-Resolution Mapping of Multiway Enhancer-Promoter Interactions Regulating Pathogen Detection.

In Molecular cell ; h5-index 132.0

Eukaryotic gene expression regulation involves thousands of distal regulatory elements. Understanding the quantitative contribution of individual enhancers to gene expression is critical for assessing the role of disease-associated genetic risk variants. Yet, we lack the ability to accurately link genes with their distal regulatory elements. To address this, we used 3D enhancer-promoter (E-P) associations identified using split-pool recognition of interactions by tag extension (SPRITE) to build a predictive model of gene expression. Our model dramatically outperforms models using genomic proximity and can be used to determine the quantitative impact of enhancer loss on gene expression in different genetic backgrounds. We show that genes that form stable E-P hubs have less cell-to-cell variability in gene expression. Finally, we identified transcription factors that regulate stimulation-dependent E-P interactions. Together, our results provide a framework for understanding quantitative contributions of E-P interactions and associated genetic variants to gene expression.

Vangala Pranitha, Murphy Rachel, Quinodoz Sofia A, Gellatly Kyle, McDonel Patrick, Guttman Mitchell, Garber Manuel

2020-Sep-19

chromosome conformation, cis-regulatory elements, dendritic cells, enhancers, genetic variation, innate immunity, machine learning, multiway promoter interactions, single cell, single molecule

General

Temporal Differential Expression of Physiomarkers Predicts Sepsis in Critically Ill Adults.

In Shock (Augusta, Ga.)

BACKGROUND : Sepsis is a life-threatening condition with high mortality rates. Early detection and treatment are critical to improving outcomes. Our primary objective was to develop artificial intelligence capable of predicting sepsis earlier using a minimal set of streaming physiological data in real-time.

METHODS AND FINDINGS : A total of 29,552 adult patients were admitted to the intensive care unit across five regional hospitals in Memphis, TN over 18 months from January 2017 to July 2018. From these, 5,958 patients were selected after filtering for continuous (minute-by-minute) physiological data availability. A total of 617 (10.4%) patients were identified as sepsis cases, using the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) criteria. Physiomarkers, a set of signal processing features, were derived from five physiological data streams including heart rate, respiratory rate, and blood pressure (systolic, diastolic, and mean), captured every minute from the bedside monitors. A support vector machine (SVM) classifier was used for classification. The model accurately predicted sepsis up to a mean and 95% confidence interval of 17.4 ± 0.22 hours before sepsis onset, with an average test accuracy of 83.0% (average sensitivity, specificity, and area under the receiver operating characteristics curve of 0.757, 0.902, and 0.781, respectively).
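The "physiomarker" idea, signal-processing features computed over windows of minute-by-minute vitals, can be sketched generically. The particular features and window length below are illustrative assumptions, not the paper's feature set:

```python
import numpy as np

def window_features(signal, win=60):
    """Slice a minute-by-minute vital sign into non-overlapping windows
    and compute simple physiomarker-style features per window:
    mean, standard deviation, and range."""
    n = len(signal) // win
    w = np.asarray(signal[: n * win], float).reshape(n, win)
    return np.column_stack([w.mean(1), w.std(1), w.max(1) - w.min(1)])

# 6 hours of synthetic heart rate with a slow oscillation
hr = 80 + 5 * np.sin(np.arange(360) / 30.0)
feats = window_features(hr, win=60)
```

Concatenating such features across the five vital-sign streams yields the per-window vectors an SVM can classify, which is the structure the study exploits for early prediction.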

CONCLUSIONS : This study demonstrates that salient physiomarkers derived from continuous bedside monitoring are temporally and differentially expressed in septic patients. Using this information, minimalistic artificial intelligence models can be developed to predict sepsis earlier in critically ill patients.

Mohammed Akram, Van Wyk Franco, Chinthala Lokesh K, Khojandi Anahita, Davis Robert L, Coopersmith Craig M, Kamaleswaran Rishikesan

2020-Sep-28

General

Which Sounds Better: Analog or Digital Psychiatry?

In Bipolar disorders

Mental disorders are highly prevalent, heterogeneous conditions comorbid with multiple chronic physical illnesses, remaining the leading cause of disability worldwide.1 Their diagnosis and management are limited by the absence of available biomarkers and have largely been dependent on patients' subjective self-reporting obtained at periodic clinician evaluations, which are frequently influenced by recall bias, decreased illness insight, and differences in clinical assessment experience.2 In-person visits are poorly suited for the diagnosis and treatment of mental illnesses, yielding only cross-sectional measurements of continuous, fluctuating parameters such as mood/emotions, cognition, sleep, blood pressure, and physical/social activities. Also, no continuous monitoring of those subtle features is usually possible between visits.

Dargél Aroldo A

2020-Sep-29

Algorithms, Artificial Intelligence, Cyber-Semiology, Digital Music, Digital Phenotyping, Digital Psychiatry, Machine Learning, Mobile apps, Smartphone sensors, mHealth

Oncology

Automatic segmentation and applicator reconstruction for CT-based brachytherapy of cervical cancer using 3D convolutional neural networks.

In Journal of applied clinical medical physics ; h5-index 28.0

In this study, we present deep learning-based approaches to automatic segmentation and applicator reconstruction with high accuracy and efficiency in the planning computed tomography (CT) for cervical cancer brachytherapy (BT). A novel three-dimensional (3D) convolutional neural network (CNN) architecture, referred to as DSD-UNET, was proposed. A dataset of 91 patients who received CT-based BT for cervical cancer was used to train and test the DSD-UNET model for auto-segmentation of the high-risk clinical target volume (HR-CTV) and organs at risk (OARs). Automatic applicator reconstruction was achieved with DSD-UNET-based segmentation of the applicator components followed by 3D skeletonization and polynomial curve fitting. Digitization of the channel paths for the tandem-and-ovoid applicator in the planning CT was evaluated using data from 32 patients. Dice similarity coefficient (DSC), Jaccard index (JI), and Hausdorff distance (HD) were used to quantitatively evaluate accuracy. The segmentation performance of DSD-UNET was compared with that of 3D U-Net. Results showed that the DSD-UNET method outperformed 3D U-Net on segmentation of all structures. The mean DSC values of the DSD-UNET method were 86.9%, 82.9%, and 82.1% for the bladder, HR-CTV, and rectum, respectively. For automatic applicator reconstruction, outstanding segmentation accuracy was first achieved for the intrauterine and ovoid tubes (average DSC of 92.1%, average HD of 2.3 mm). Finally, HDs between the automatically and manually determined channel paths were 0.88 ± 0.12 mm, 0.95 ± 0.16 mm, and 0.96 ± 0.15 mm for the intrauterine, left ovoid, and right ovoid tubes, respectively. The proposed DSD-UNET method outperformed 3D U-Net and could segment the HR-CTV, bladder, and rectum with relatively good accuracy. Accurate digitization of the channel paths could be achieved with the DSD-UNET-based method.
The proposed approaches could be useful for improving the efficiency and consistency of treatment planning for cervical cancer BT.
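The applicator digitization step described above (3D skeletonization followed by polynomial curve fitting) can be illustrated with a toy fit: given ordered skeleton points for one channel, each coordinate is fit as a polynomial in normalized arc length. The degree and sampling density are illustrative choices, not the paper's.

```python
import numpy as np

def fit_channel_path(points, degree=3, n_samples=100):
    """points: (N, 3) array of ordered skeleton coordinates (z, y, x)."""
    # Parameterize by cumulative arc length along the skeleton
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)])
    t /= t[-1]
    # Fit one polynomial per coordinate and resample uniformly
    ts = np.linspace(0, 1, n_samples)
    return np.stack(
        [np.polyval(np.polyfit(t, points[:, i], degree), ts) for i in range(3)],
        axis=1,
    )

# Toy example: a gently curving "tandem" skeleton
u = np.linspace(0, 1, 40)
skel = np.stack([40 * u, 5 * np.sin(2 * u), 5 * np.cos(2 * u)], axis=1)
path = fit_channel_path(skel)
print(path.shape)  # (100, 3)
```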

Zhang Daguang, Yang Zhiyong, Jiang Shan, Zhou Zeyang, Meng Maobin, Wang Wei

2020-Sep-29

automatic segmentation, brachytherapy, cervical cancer, convolutional neural networks

Dermatology

A machine learning-based, decision support, mobile phone application for diagnosis of common dermatological diseases.

In Journal of the European Academy of Dermatology and Venereology : JEADV

BACKGROUND : The integration of machine learning algorithms into decision support tools for physicians is gaining popularity. These tools can tackle disparities in healthcare access, as the technology can be implemented on smartphones. We present the first large-scale study on patients with skin of color, in which the feasibility of a novel mobile health application (mHealth app) was investigated in actual clinical workflows.

OBJECTIVE : To develop an mHealth app to diagnose 40 common skin diseases and test it in clinical settings.

METHODS : A convolutional neural network-based algorithm was trained with clinical images of 40 skin diseases. A smartphone app was generated and validated on 5,014 patients, attending rural and urban outpatient dermatology departments in India. The results of this mHealth app were compared against the dermatologists' diagnoses.

RESULTS : The machine-learning model, in an in silico validation study, demonstrated an overall top-1 accuracy of 76.93±0.88% and mean area-under-curve of 0.95±0.02 on a set of clinical images. In the clinical study, on patients with skin of color, the app achieved an overall top-1 accuracy of 75.07% (95% CI=73.75-76.36), top-3 accuracy of 89.62% (95% CI=88.67-90.52) and mean area-under-curve of 0.90±0.07.

CONCLUSION : This study underscores the utility of artificial intelligence-driven smartphone applications as point-of-care clinical decision support tools for dermatological diagnosis across a wide spectrum of skin diseases in patients with skin of color.
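The top-1 and top-3 accuracies reported above have a simple operational definition: a case counts as correct if the true diagnosis appears among the k highest-scoring of the 40 classes. A minimal sketch on synthetic scores (not the study's data):

```python
import numpy as np

def top_k_accuracy(probs, labels, k):
    """Fraction of cases whose true class is among the k top-scoring classes."""
    topk = np.argsort(probs, axis=1)[:, -k:]
    return np.mean([labels[i] in topk[i] for i in range(len(labels))])

rng = np.random.default_rng(1)
probs = rng.random((200, 40))            # toy score rows over 40 classes
labels = rng.integers(0, 40, size=200)   # toy ground-truth diagnoses
top1 = top_k_accuracy(probs, labels, 1)
top3 = top_k_accuracy(probs, labels, 3)
print(top1, top3)                        # top3 is always >= top1
```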

Pangti Rashi, Mathur Jyoti, Chouhan Vikas, Kumar Sharad, Rajput Lavina, Shah Sandesh, Gupta Atula, Dixit Ambika, Dholakia Dhwani, Gupta Sanjeev, Gupta Savera, George Mariam, Sharma Vinod Kumar, Gupta Somesh

2020-Sep-29

Artificial Intelligence, Community Dermatology, Machine Learning, Pattern recognition, mHealth

Radiology

Artificial Intelligence in Spine Care.

In Clinical spine surgery

Artificial intelligence is an exciting and growing field in medicine that can assist in the proper diagnosis of patients. Although the use of artificial intelligence in orthopedics is currently limited, its utility in other fields has been extremely valuable and could be useful in orthopedics, especially spine care. Automated systems can analyze complex patterns and images, allowing for enhanced analysis of imaging. Although the potential impact of artificial intelligence integration into spine care is promising, several limitations must be overcome. Our goal is to review current advances in the use of machine learning in orthopedics and to discuss potential applications to spine care in clinical settings where there is a need for the development of automated systems.

Gutman Michael J, Schroeder Gregory D, Murphy Hamadi, Flanders Adam E, Vaccaro Alexander R

2020-Sep-25

General

Fast and effective biomedical named entity recognition using temporal convolutional network with conditional random field.

In Mathematical biosciences and engineering : MBE

Biomedical named entity recognition (Bio-NER) is the prerequisite for mining knowledge from biomedical texts. The state-of-the-art models for Bio-NER are mostly based on bidirectional long short-term memory (BiLSTM) and bidirectional encoder representations from transformers (BERT) models. However, both BiLSTM and BERT models are extremely computationally intensive. To this end, this paper proposes a temporal convolutional network (TCN) with a conditional random field (TCN-CRF) layer for Bio-NER. The model uses the TCN to extract features, which are then decoded by the CRF to obtain the final result. We improve the original TCN model by fusing features extracted by convolution kernels of different sizes to enhance Bio-NER performance. We compared our model with five deep learning models on the GENIA and CoNLL-2003 datasets. The experimental results show that our model achieves comparable performance with much less training time. The implemented code has been made available to the research community.
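Since the argument above rests on TCNs being cheaper than BiLSTM/BERT, a minimal numpy sketch of the dilated causal convolution at a TCN's core may help: left-only padding keeps each output step from seeing future inputs, and doubling the dilation per layer grows the receptive field exponentially without recurrence. Kernel size, widths, and weights here are illustrative assumptions, not the paper's configuration, and the CRF decoding layer is omitted.

```python
import numpy as np

def causal_conv1d(x, w, dilation):
    """x: (T, C_in); w: (k, C_in, C_out). Left-only padding ensures the
    output at step t never depends on inputs after t (causality)."""
    k = w.shape[0]
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros((pad, x.shape[1])), x], axis=0)
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        for i in range(k):                     # tap i looks i*dilation steps back
            out[t] += xp[t + pad - i * dilation] @ w[k - 1 - i]
    return out

rng = np.random.default_rng(0)
h = rng.normal(size=(20, 8))                   # 20 tokens, 8 embedding dims
for d in (1, 2, 4):                            # stacked layers, dilation doubles
    w = rng.normal(size=(3, h.shape[1], 8)) * 0.1
    h = np.maximum(causal_conv1d(h, w, d), 0)  # ReLU
print(h.shape)  # (20, 8)
```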

Sun Guang Xun, Zhou Cheng Jie, Zhao Han Yu, Jin Bo, Gao Zhan

2020-May-12

biomedical named entity recognition, conditional random field, temporal convolutional network

General

Identification of lncRNA Signature Associated With Pan-cancer Prognosis.

In IEEE journal of biomedical and health informatics

Long noncoding RNAs (lncRNAs) have emerged as potential prognostic markers in various human cancers, as they participate in many malignant behaviors. However, the value of lncRNAs as prognostic markers among diverse human cancers is still under investigation, and a systematic signature based on these transcripts related to pan-cancer prognosis has yet to be reported. In this study, we proposed a framework that incorporates statistical power, biological rationale, and machine learning models for pan-cancer prognosis analysis. The framework identified a 5-lncRNA signature (ENSG00000206567, PCAT29, ENSG00000257989, LOC388282, and LINC00339) from TCGA training studies (n=1,878). The identified lncRNAs are significantly associated (all P < 1.48E-11) with overall survival (OS) of the TCGA cohort (n=4,231). The signature stratified the cohort into low- and high-risk groups with significantly distinct survival outcomes (median OS of 9.84 years versus 4.37 years, log-rank P=1.48E-38) and achieved a time-dependent ROC/AUC of 0.66 at 5 years. After routine clinical factors were incorporated, the signature demonstrated better performance for long-term prognostic estimation (AUC of 0.72). Moreover, the signature was further evaluated on two independent external cohorts (TARGET, n=1,122; CPTAC, n=391; National Cancer Institute), which yielded similar prognostic values (AUC of 0.60 and 0.75; log-rank P=8.6E-09 and P=2.7E-06). An indexing system was developed to map the 5-lncRNA signature to prognoses of pan-cancer patients. In silico functional analysis indicated that the lncRNAs are associated with common biological processes driving human cancers. The five lncRNAs, especially ENSG00000206567, ENSG00000257989, and LOC388282, which have not been reported before, may serve as viable molecular targets common among diverse cancers.

Bao Guoqing, Xu Ran, Wang Xiuying, Ji Jianxiong, Wang Linlin, Li Wenjie, Zhang Qing, Huang Bin, Chen Anjing, Kong Beihua, Yang Qifeng, Wang Xinyu, Wang Jian, Li Xingang

2020-Sep-29

General

Spatial Pyramid Pooling with 3D Convolution Improves Lung Cancer Detection.

In IEEE/ACM transactions on computational biology and bioinformatics

Lung cancer is the leading cause of cancer deaths. Low-dose computed tomography (CT) screening has been shown to significantly reduce lung cancer mortality but suffers from a high false positive rate that leads to unnecessary diagnostic procedures. The development of deep learning techniques has the potential to help improve lung cancer screening technology. Here we present the algorithm, DeepScreener, which can predict a patient's cancer status from a volumetric lung CT scan. DeepScreener is based on our model of spatial pyramid pooling, which ranked 16th of 1972 teams (top 1%) in the Data Science Bowl 2017 (DSB2017) competition, evaluated with the challenge datasets. Here we test the algorithm on an independent set of 1449 low-dose CT scans from the National Lung Screening Trial (NLST) cohort, and we find that DeepScreener delivers consistently high accuracy. Furthermore, by combining spatial pyramid pooling and 3D convolution, it achieves an AUC of 0.892, surpassing previous state-of-the-art algorithms that use only 3D convolution. The advancement of deep learning algorithms can potentially help improve lung cancer detection with low-dose CT scans.
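The fixed-length pooling that gives spatial pyramid pooling its name can be sketched in a few lines (shown in 2D for readability; the paper pairs it with 3D convolution). The pyramid levels and sizes below are illustrative assumptions, not the paper's.

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Max-pool a variable-size (H, W, C) feature map into a fixed-length
    vector: one l-by-l grid of bins per pyramid level."""
    H, W, C = feature_map.shape
    out = []
    for l in levels:
        hs = np.linspace(0, H, l + 1).astype(int)
        ws = np.linspace(0, W, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                cell = feature_map[hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(cell.max(axis=(0, 1)))
    return np.concatenate(out)

# Different input sizes yield the same fixed output length
a = spp(np.random.default_rng(0).random((13, 17, 8)))
b = spp(np.random.default_rng(1).random((31, 9, 8)))
print(a.shape, b.shape)  # both (168,): 8 channels * (1 + 4 + 16) bins
```

This size-invariance is what lets a downstream classifier accept scans of varying extent.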

Causey Jason, Li Keyu, Chen Xianghao, Dong Wei, Walker Karl, Qualls Jake, Stubblefield Jonathan, Moore Jason H, Guan Yuanfang, Huang Xiuzhen

2020-Sep-29

General

Prediction of Essential Genes in Comparison States Using Machine Learning.

In IEEE/ACM transactions on computational biology and bioinformatics

Identifying essential genes in comparison states (EGS) is vital to understanding cell differentiation, performing drug discovery, and identifying disease causes. Here, we present a machine learning method termed Prediction of Essential Genes in Comparison States (PreEGS). To capture the alteration of the network in comparison states, PreEGS extracts topological and gene expression features of each gene into a five-dimensional vector. PreEGS also employs a positive sample expansion method to address the problem of unbalanced positive and negative samples, which is often encountered in practical applications. Different classifiers were applied to the simulated datasets, and PreEGS based on the random forest model (PreEGSRF) was chosen for its optimal performance. PreEGSRF was then compared with six other methods, including three machine learning methods, to predict EGS in a specific state. On real datasets with four gene regulatory networks, PreEGSRF predicted five essential genes related to leukemia and five enriched KEGG pathways. Four of the predicted essential genes and all predicted pathways were consistent with previous studies and highly correlated with leukemia. With high prediction accuracy and generalization ability, PreEGSRF is broadly applicable for the discovery of disease-causing genes, driver genes for cell fate decisions, and complex biomarkers of biological systems.

Xie Jiang, Zhao Chang, Sun Jiamin, Li Jiaxin, Yang Fuzhang, Wang Jiao, Nie Qing

2020-Sep-29

General

Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography Via Iterative Multi-Modal Registration and Learning.

In IEEE transactions on medical imaging ; h5-index 74.0

We propose a deep-learning based annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as the tentative training data but inevitably contain incorrect (noisy) labels due to the differences between FA and FP modalities and the errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are used for the registration in the following iteration. The registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIME-FP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection, without requiring manually labeled UWF FP training data.

Ding Li, Kuriyan Ajay E, Ramchandran Rajeev S, Wykoff Charles C, Sharma Gaurav

2020-Sep-29

Dermatology

Progressive Desmoid Tumor: Radiomics Compared With Conventional Response Criteria for Predicting Progression During Systemic Therapy-A Multicenter Study by the French Sarcoma Group.

In AJR. American journal of roentgenology

OBJECTIVE. The response of desmoid tumors (DTs) to chemotherapy is evaluated with Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST 1.1) in daily practice and clinical trials. MRI shows early change in heterogeneity in responding tumors due to a decrease in cellular area and an increase in fibronecrotic content before dimensional response. Heterogeneity can be quantified with radiomics. Our aim was to develop radiomics-based response criteria and to compare their performances with clinical and radiologic response criteria. MATERIALS AND METHODS. Forty-two patients (median age, 38.2 years) were included in this retrospective multicenter study because they presented with progressive DT and had an MRI examination at baseline, which we refer to as "MRI-0," and an early MRI evaluation performed after the first chemotherapy cycle (mean time after first chemotherapy cycle, 3 months [SD, 28 days]), which we refer to as "MRI-1." After signal intensity normalization, voxel size standardization, discretization, and segmentation of DT volume on fat-suppressed contrast-enhanced T1-weighted imaging, 90 baseline and delta 3D radiomics features were extracted. Using cross-validation and least absolute shrinkage and selection operator-penalized Cox regression, a radiomics score was generated. The performances of models based on the radiomics score, modified Response Evaluation Criteria in Solid Tumors, European Association for the Study of the Liver criteria, Cheson criteria, Choi criteria, and revised Choi criteria from MRI-0 to MRI-1 to predict progression-free survival (PFS, as defined by RECIST 1.1) were assessed with the concordance index. The results were adjusted for performance status, tumor volume, prior chemotherapy, current chemotherapy, and β-catenin mutation. RESULTS. There were 10 cases of progression. The radiomics score included four variables. A high score indicated a poor prognosis. 
The radiomics score independently correlated with PFS (adjusted hazard ratio = 5.60, p = 0.003), and none of the usual response criteria independently correlated with PFS. The prognostic model based on the radiomics score had the highest concordance index (0.84; 95% CI, 0.71-0.96). CONCLUSION. Quantifying early changes in heterogeneity through a dedicated radiomics score could improve response evaluation for patients with DT undergoing chemotherapy.
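The concordance index used above to compare models can be computed directly from pairwise orderings; a minimal sketch on toy data (not the study's) follows, where censored patients are only usable as the later member of a comparable pair.

```python
import itertools

def concordance_index(risk, time, event):
    """Fraction of comparable patient pairs in which the higher-risk
    patient progresses first (ties in risk count half)."""
    num = den = 0
    for i, j in itertools.combinations(range(len(risk)), 2):
        if time[j] < time[i]:
            i, j = j, i                  # order so i has the earlier time
        if not event[i]:
            continue                     # earlier time censored: not comparable
        den += 1
        if risk[i] > risk[j]:
            num += 1
        elif risk[i] == risk[j]:
            num += 0.5
    return num / den

risk  = [2.0, 1.5, 0.7, 0.3]             # toy radiomics-style scores
time  = [3, 5, 9, 12]                    # months to progression/censoring
event = [1, 1, 1, 0]                     # 0 = censored
print(concordance_index(risk, time, event))  # 1.0 (perfectly ranked)
```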

Crombé Amandine, Kind Michèle, Ray-Coquard Isabelle, Isambert Nicolas, Chevreau Christine, André Thierry, Lebbe Celeste, Cesne Axel Le, Bompas Emmanuelle, Piperno-Neumann Sophie, Saada Esma, Bouhamama Amine, Blay Jean-Yves, Italiano Antoine

2020-Sep-29

MRI, Response Evaluation Criteria in Solid Tumors (RECIST), aggressive, antineoplastic agents, fibromatosis, supervised machine learning

General

The Coming of Age for Big Data in Systems Radiobiology, an Engineering Perspective.

In Big data

As high-throughput approaches in biological and biomedical research are transforming the life sciences into information-driven disciplines, modern analytics platforms for big data have started to address the needs for efficient and systematic data analysis and interpretation. We observe that radiobiology is following this general trend, with -omics information providing unparalleled depth into the biomolecular mechanisms of radiation response, an approach defined as systems radiobiology. We outline the design of computational frameworks and discuss the analysis of big data in low-dose ionizing radiation (LDIR) responses of the mammalian brain. Following successful examples and best practices of approaches for the analysis of big data in life sciences and health care, we present the needs and requirements for radiation research. Our goal is to raise awareness for the radiobiology community about the new technological possibilities that can capture complex information and execute data analytics on a large scale. The production of large data sets from genome-wide experiments (quantity) and the complexity of radiation research with multidimensional experimental designs (quality) will necessitate the adoption of latest information technologies. The main objective was to translate research results into applied clinical and epidemiological practice and understand the responses of biological tissues to LDIR to define new radiation protection policies. We envisage a future where multidisciplinary teams include data scientists, artificial intelligence experts, DevOps engineers, and of course radiation experts to fulfill the augmented needs of the radiobiology community, accelerate research, and devise new strategies.

Karapiperis Christos, Chasapi Anastasia, Angelis Lefteris, Scouras Zacharias G, Mastroberardino Pier G, Tapio Soile, Atkinson Michael J, Ouzounis Christos A

2020-Sep-29

big data analytics, bioinformatics, biomarker discovery, genomics, low-dose ionizing radiation, network science, radiation protection, systems radiobiology

General

A Convolutional Neural Network to Perform Object Detection and Identification in Visual Large-Scale Data.

In Big data

In recent years, big data has become a hard challenge: analyzing it demands both speed and precision. In this article, we describe a deep learning-based method for handling big data with a focus on precision and speed. In our case, the data are images, among the hardest data types to manipulate because their complex structure requires substantial computation power. Moreover, we tackle a hard task on images, object detection and identification, in which every object in an image is localized and classified according to the range of classes provided by the training data set. To solve this challenge, we propose an approach based on a deep convolutional neural network (CNN), the most widely used deep learning model in computer vision tasks such as image classification and object recognition because of its strength in automatic feature extraction and its usefulness for prediction and decision-making. Our approach outperforms state-of-the-art models such as R-CNN, Fast R-CNN, Faster R-CNN, and YOLO (you only look once), with 77% mean average precision on the Pascal_voc 2007 test set and a speed of 16.54 FPS on an Nvidia Geforce GTX 960 GPGPU.

Ayachi Riadh, Said Yahia, Atri Mohamed

2020-Sep-29

big data, computer vision, deep learning, image processing

Pathology

Prediction of the Age at Onset of Spinocerebellar Ataxia Type 3 with Machine Learning.

In Movement disorders : official journal of the Movement Disorder Society

BACKGROUND : In polyglutamine (polyQ) disease, investigating the prediction of a patient's age at onset (AAO) facilitates the development of disease-modifying interventions and underpins efforts to delay disease onset and progression. Few polyQ disease studies have evaluated AAO predicted by machine-learning algorithms and linear regression methods.

OBJECTIVE : The objective of this study was to develop a machine-learning model for AAO prediction in the largest spinocerebellar ataxia type 3/Machado-Joseph disease (SCA3/MJD) population from mainland China.

METHODS : In this observational study, we introduced an innovative approach by systematically comparing the performance of 7 machine-learning algorithms with linear regression to explore AAO prediction in SCA3/MJD using CAG expansions of 10 polyQ-related genes, sex, and parental origin.

RESULTS : Similar prediction performance on the training and testing sets was identified for each model, and little overfitting to the training data was observed. Overall, the machine-learning-based XGBoost model exhibited the most favorable performance in AAO prediction over the traditional linear regression method and the other 6 machine-learning algorithms for both the training and testing sets. The optimal XGBoost model achieved a mean absolute error, root mean square error, and median absolute error of 5.56, 7.13, and 4.15 years, respectively, in testing set 1, with a mean absolute error (4.78 years), root mean square error (6.31 years), and median absolute error (3.59 years) in testing set 2.
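The three error metrics reported for the XGBoost model can be reproduced from predicted and true onset ages; a minimal sketch on toy values (not the study's data):

```python
import numpy as np

def regression_errors(y_true, y_pred):
    err = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    mae = err.mean()                      # mean absolute error
    rmse = np.sqrt((err ** 2).mean())     # root mean square error
    medae = np.median(err)                # median absolute error
    return mae, rmse, medae

# Toy predicted vs. true ages at onset (years)
mae, rmse, medae = regression_errors([40, 35, 50, 28], [44, 30, 52, 30])
print(mae, rmse, medae)  # 3.25 3.5 3.0
```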

CONCLUSION : Machine-learning algorithms can be used to predict AAO in patients with SCA3/MJD. The optimal XGBoost algorithm can provide a good reference for the establishment and optimization of prediction models for SCA3/MJD or other polyQ diseases. © 2020 International Parkinson and Movement Disorder Society.

Peng Linliu, Chen Zhao, Chen Tiankai, Lei Lijing, Long Zhe, Liu Mingjie, Deng Qi, Yuan Hongyu, Zou Guangdong, Wan Linlin, Wang Chunrong, Peng Huirong, Shi Yuting, Wang Puzhi, Peng Yun, Wang Shang, He Lang, Xie Yue, Tang Zhichao, Wan Na, Gong Yiqing, Hou Xuan, Shen Lu, Xia Kun, Li Jinchen, Chen Chao, Zhang Zuping, Qiu Rong, Tang Beisha, Jiang Hong

2020-Sep-29

spinocerebellar ataxia type 3/Machado-Joseph disease, CAG repeats, age at onset prediction, machine learning

General

Bruise dating using deep learning.

In Journal of forensic sciences

Bruise dating can have important medicolegal implications in cases of family violence and violence against women. However, studies show that medical specialists achieve only about 50% accuracy in classifying a bruise by age, mainly due to variability in the images and the color of the bruise. This research proposes a model based on deep convolutional neural networks for dating bruises from images alone, by age ranges from 0-2 days to 17-30 days, plus images of healthy skin. A dataset of 2140 experimental bruise photographs was constructed, for which a data capture protocol and a preprocessing procedure are proposed. In addition, 20 classification models were trained with the Inception V3, ResNet50, MobileNet, and MnasNet architectures, using combinations of transfer learning, cross-validation, and data augmentation. Numerical experiments show that classification models based on MnasNet have the best results, reaching 97.00% precision and sensitivity and 99.50% specificity, exceeding the 40% precision reported in the literature. It was also observed that the precision of the model decreases with the age of the bruise.
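The precision, sensitivity, and specificity figures above follow directly from a per-class (one-vs-rest) confusion matrix; a minimal sketch with toy counts (not the study's data):

```python
def clf_metrics(tp, fp, tn, fn):
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return precision, sensitivity, specificity

# Toy confusion-matrix counts for one age-range class
p, se, sp = clf_metrics(tp=97, fp=3, tn=199, fn=3)
print(round(p, 2), round(se, 2), round(sp, 3))  # 0.97 0.97 0.985
```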

Tirado Jhonatan, Mauricio David

2020-Sep-29

MnasNet, bruise dating, convolutional neural network, deep learning

Radiology

Potential use of deep learning techniques for postmortem imaging.

In Forensic science, medicine, and pathology

The use of postmortem computed tomography in forensic medicine, in addition to conventional autopsy, is now a standard procedure in several countries. However, the large number of cases, the large amount of data, and the lack of postmortem radiology experts have pushed researchers to develop solutions that are able to automate diagnosis by applying deep learning techniques to postmortem computed tomography images. While deep learning techniques require a good understanding of image analysis and mathematical optimization, the goal of this review was to provide to the community of postmortem radiology experts the key concepts needed to assess the potential of such techniques and how they could impact their work.

Dobay Akos, Ford Jonathan, Decker Summer, Ampanozi Garyfalia, Franckenberg Sabine, Affolter Raffael, Sieberth Till, Ebert Lars C

2020-Sep-29

Computed tomography, Convolutional neural networks, Deep learning, Forensic sciences, PMCT

General

A novel semi-supervised multi-view clustering framework for screening Parkinson's disease.

In Mathematical biosciences and engineering : MBE

In recent years, many studies have addressed the diagnosis of Parkinson's disease (PD) from brain magnetic resonance imaging (MRI) using traditional unsupervised machine learning methods and supervised deep learning models. However, unsupervised learning methods are not good at extracting accurate features from MRIs, and it is difficult to collect enough data in the PD field to satisfy the training needs of deep learning models. Moreover, most existing studies are based on single-view MRI data, whose characteristics are not sufficiently rich. In this paper, to tackle these drawbacks, we propose a novel semi-supervised learning framework called Semi-supervised Multi-view learning Clustering (SMC). The model first introduces a sliding window method to capture different features, then applies the dimensionality reduction algorithm of Linear Discriminant Analysis (LDA) to process the data for each feature. Finally, traditional single-view and multi-view clustering methods are employed on the multiple feature views to obtain the results. Experiments show that our proposed method is superior to state-of-the-art unsupervised learning models in clustering performance. Our work could therefore help improve the identification of PD from previously labeled and subsequently unlabeled medical MRI data in realistic medical environments.

Zhang Xiao Bo, Zhai Dong Hai, Yang Yan, Zhang Yi Ling, Wang Chun Lin

2020-Apr-30

Parkinson’s disease (PD), clustering, dimensionality reduction, feature extraction, semi-supervised learning

General

Visual interpretation of [18F]Florbetaben PET supported by deep learning-based estimation of amyloid burden.

In European journal of nuclear medicine and molecular imaging ; h5-index 66.0

PURPOSE : Amyloid PET, which is widely used for noninvasive assessment of cortical amyloid burden, is visually interpreted in the clinical setting. We analyze whether deep learning-based end-to-end estimation of amyloid burden, used as a fast and easy-to-use visual interpretation support system, improves inter-reader agreement as well as the confidence of the visual reading.

METHODS : A total of 121 clinical routine [18F]Florbetaben PET images were collected for the randomized blind-reader study. The amyloid PET images were visually interpreted by three experts independently, blinded to other information. The readers qualitatively interpreted images without quantification in the first reading session. After an interval of more than 2 weeks, the readers reinterpreted the images with the quantification results provided by the deep learning system. The qualitative assessment was based on a 3-point BAPL score (1: no amyloid load, 2: minor amyloid load, and 3: significant amyloid load). The confidence of each reading was rated on a 3-point score (0: ambiguous, 1: probable, and 2: definite).

RESULTS : Inter-reader agreement for the visual reading based on the 3-point BAPL scale, calculated by Fleiss kappa coefficients, was 0.46 without and 0.76 with the deep learning system, respectively. Across the two reading sessions, the confidence score of visual reading improved when the deep learning output was provided (1.27 ± 0.078 for the visual-reading-only session vs. 1.66 ± 0.63 for the session with the deep learning system).
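The Fleiss kappa values above measure agreement among the three readers beyond chance; a minimal sketch on a toy rating table (rows are images, columns count how many readers chose each BAPL category; not the study's data):

```python
import numpy as np

def fleiss_kappa(ratings):
    """ratings[i, c]: number of raters assigning category c to subject i."""
    N = ratings.shape[0]
    n = ratings[0].sum()                         # raters per subject
    p_cat = ratings.sum(axis=0) / (N * n)        # overall category proportions
    P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_cat ** 2)  # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy table: 4 images rated by 3 readers on the 3-point BAPL scale
ratings = np.array([[3, 0, 0], [0, 3, 0], [0, 0, 3], [2, 1, 0]])
k = fleiss_kappa(ratings)
print(round(k, 2))  # 0.74
```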

CONCLUSION : Our results highlight the impact of deep learning-based one-step amyloid burden estimation system on inter-reader agreement and confidence of reading when applied to clinical routine amyloid PET reading.

Kim Ji-Young, Oh Dongkyu, Sung Kiyoung, Choi Hongyoon, Paeng Jin Chul, Cheon Gi Jeong, Kang Keon Wook, Lee Dong Young, Lee Dong Soo

2020-Sep-29

Alzheimer’s disease, Amyloid PET, Deep learning, PET, Visual quantification, [18F]Florbetaben

General

Beyond abstinence and relapse: cluster analysis of drug-use patterns during treatment as an outcome measure for clinical trials.

In Psychopharmacology

RATIONALE : Many people being treated for opioid use disorder continue to use drugs during treatment. This use occurs in patterns that rarely conform to well-defined cycles of abstinence and relapse. Systematic identification and evaluation of these patterns could enhance analysis of clinical trials and provide insight into drug use.

OBJECTIVES : To evaluate such an approach, we analyzed patterns of opioid and cocaine use from three randomized clinical trials of contingency management in methadone-treated participants.

METHODS : Sequences of drug test results were analyzed with unsupervised machine-learning techniques, including hierarchical clustering of categorical results (i.e., whether any samples were positive during each week) and K-means longitudinal clustering of quantitative results (i.e., the proportion positive each week). The sensitivity of cluster membership as an experimental outcome was assessed based on the effects of contingency management. External validation of clusters was based on drug craving and other symptoms of substance use disorder.
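As a minimal sketch of the K-means longitudinal clustering step, the toy example below clusters 12-week proportion-positive trajectories. For clarity it uses k=2 and a deliberately stable initialization (one seed point from each pattern), whereas the study identified four clusters.

```python
import numpy as np

def kmeans(X, init_centers, iters=50):
    """Plain K-means; init_centers fixed for a deterministic toy demo
    (real implementations use k-means++ or random restarts)."""
    centers = init_centers.astype(float).copy()
    k = len(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy trajectories: weekly proportion-positive over 12 weeks, two patterns
rng = np.random.default_rng(1)
X = np.vstack([np.full((5, 12), 0.1), np.full((5, 12), 0.9)])
X = X + rng.normal(0, 0.02, X.shape)
labels, _ = kmeans(X, X[[0, -1]])   # seed one center in each pattern
print(labels)
```

Each resulting label corresponds to a use pattern (e.g., near-abstinence vs. sustained use), which is the person-level output the abstract contrasts with traditional outcome measures.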

RESULTS : In each clinical trial, we identified four clusters of use patterns, which can be described as opioid use, cocaine use, dual use (opioid and cocaine), and partial/complete abstinence. Different clustering techniques produced substantially similar classifications of individual participants, with strong above-chance agreement. Contingency management increased membership in clusters with lower levels of drug use and fewer symptoms of substance use disorder.

CONCLUSIONS : Cluster analysis provides person-level output that is more interpretable and actionable than traditional outcome measures, providing a concrete answer to the question of what clinicians can tell patients about the success rates of new treatments.

Panlilio Leigh V, Stull Samuel W, Bertz Jeremiah W, Burgess-Hull Albert J, Kowalczyk William J, Phillips Karran A, Epstein David H, Preston Kenzie L

2020-Sep-29

Cluster analysis, Cocaine, Contingency management, Methadone, Opioids, Substance use disorder, Treatment outcomes

General General

TCRdb: a comprehensive database for T-cell receptor sequences with powerful search function.

In Nucleic acids research ; h5-index 217.0

T cells and the T-cell receptor (TCR) repertoire play pivotal roles in immune response and immunotherapy. TCR sequencing (TCR-Seq) technology has enabled accurate profiling of the TCR repertoire, and a large amount of TCR-Seq data is now publicly available. Given the urgent need to re-use these data effectively, we developed TCRdb, a comprehensive database of human TCR sequences, using a uniform pipeline to characterize TCR sequences in TCR-Seq data. TCRdb contains more than 277 million highly reliable TCR sequences from over 8265 TCR-Seq samples across hundreds of tissues/clinical conditions/cell types. The unique features of TCRdb include: (i) comprehensive and reliable TCR repertoire sequences for different samples, generated by a strict and uniform pipeline; (ii) a powerful search function, allowing users to identify TCR sequences of interest under different conditions; (iii) categorized sample metadata, enabling comparison of TCRs across sample types; (iv) interactive data visualization charts describing the TCR repertoire in terms of TCR diversity, length distribution and V-J gene usage. The TCRdb database is freely available at http://bioinfo.life.hust.edu.cn/TCRdb/ and will be a useful resource for the research and application community of T cell immunology.

Chen Si-Yi, Yue Tao, Lei Qian, Guo An-Yuan

2020-Sep-29

General General

miRNASNP-v3: a comprehensive database for SNPs and disease-related variations in miRNAs and miRNA targets.

In Nucleic acids research ; h5-index 217.0

MicroRNA (miRNA)-related single-nucleotide variations (SNVs), including single-nucleotide polymorphisms (SNPs) and disease-related variations (DRVs) in miRNAs and miRNA-target binding sites, can affect miRNA functions and/or biogenesis and thereby impact phenotypes. miRNASNP is a widely used database for miRNA-related SNPs and their effects. Here, we updated it to miRNASNP-v3 (http://bioinfo.life.hust.edu.cn/miRNASNP/) with a greatly expanded set of SNVs and new features, especially the DRV data. We analyzed the effects of 7 161 741 SNPs and 505 417 DRVs on 1897 pre-miRNAs (2630 mature miRNAs) and the 3'UTRs of 18 152 genes. miRNASNP-v3 provides a one-stop resource for miRNA-related SNV research with the following functions: (i) explore associations between miRNA-related SNPs/DRVs and diseases; (ii) browse the effects of SNPs/DRVs on miRNA-target binding; (iii) perform functional enrichment analysis of miRNA target gains/losses caused by SNPs/DRVs; (iv) investigate correlations between drug sensitivity and miRNA expression; (v) inquire into expression profiles of miRNAs and their targets in cancers; (vi) browse the effects of SNPs/DRVs on pre-miRNA secondary structure changes; and (vii) predict the effects of user-defined variations on miRNA-target binding or pre-miRNA secondary structure. miRNASNP-v3 is a valuable, long-term supported resource for functional variation screening and miRNA function studies.

Liu Chun-Jie, Fu Xin, Xia Mengxuan, Zhang Qiong, Gu Zhifeng, Guo An-Yuan

2020-Sep-29

Radiology Radiology

An extensible big data software architecture managing a research resource of real-world clinical radiology data linked to other health data from the whole Scottish population.

In GigaScience

AIM : To enable a world-leading research dataset of routinely collected clinical images linked to other routinely collected data from the whole Scottish national population. This includes more than 30 million different radiological examinations from a population of 5.4 million and >2 PB of data collected since 2010.

METHODS : Scotland has a central archive of radiological data used to directly provide clinical care to patients. We have developed an architecture and platform to securely extract a copy of those data, link it to other clinical or social datasets, remove personal data to protect privacy, and make the resulting data available to researchers in a controlled Safe Haven environment.

RESULTS : An extensive software platform has been developed to host, extract, and link data from cohorts to answer research questions. The platform has been tested on 5 different test cases and is currently being further enhanced to support 3 exemplar research projects.

CONCLUSIONS : The data available are from a range of radiological modalities and scanner types and were collected under different environmental conditions. These real-world, heterogeneous data are valuable for training algorithms to support clinical decision making, especially for deep learning where large data volumes are required. The resource is now available for international research access. The platform and data can support new health research using artificial intelligence and machine learning technologies, as well as enabling discovery science.

Nind Thomas, Sutherland James, McAllister Gordon, Hardy Douglas, Hume Ally, MacLeod Ruairidh, Caldwell Jacqueline, Krueger Susan, Tramma Leandro, Teviotdale Ross, Abdelatif Mohammed, Gillen Kenny, Ward Joe, Scobbie Donald, Baillie Ian, Brooks Andrew, Prodan Bianca, Kerr William, Sloan-Murphy Dominic, Herrera Juan F R, McManus Dan, Morris Carole, Sinclair Carol, Baxter Rob, Parsons Mark, Morris Andrew, Jefferson Emily

2020-Sep-29

AI, Big Data, ML, Radiology

General General

Artificial Intelligence in Healthcare.

In Studies in health technology and informatics ; h5-index 23.0

Modern technological development has created significant innovations in the delivery of healthcare. Artificial intelligence, as "a branch of computer science dealing with the simulation of intelligent behaviour in computers", has, when applied in healthcare, resulted in intelligent support for decision-making, optimised business processes, increased quality, monitoring and delivery of personalised treatment plans, and many other applications. Even though the benefits are clear and numerous, there are still open issues in automating healthcare processes, ensuring data protection and integrity, and reducing medical waste. However, given the rapid development of AI techniques, further advances and improvements are to be expected.

Ognjanovic Ivana

2020-Sep-25

Artificial intelligence, expert systems, machine learning, natural language processing, speech recognition

General General

eHealth and Clinical Documentation Systems.

In Studies in health technology and informatics ; h5-index 23.0

eHealth is the use of modern information and communication technology (ICT) for trans-institutional healthcare purposes. Important subtopics of eHealth are health data sharing and telemedicine. Most of the clinical documentation to be shared is collected in patient records to support patient care. More sophisticated approaches to electronic patient records are trans-institutional or (inter-)national. Other purposes of clinical documentation are quality management, reimbursement, legal issues, and medical research. A basic prerequisite for eHealth is interoperability, which can be divided into technical, semantic and process interoperability. A variety of international standards exist to support interoperability. Telemedicine is a subtopic of eHealth that bridges spatial distance by using ICT for medical (inter-)actions. We distinguish telemedicine among healthcare professionals from telemedicine between healthcare professionals and patients. Both have great potential to address the challenges of aging societies, the increasing number of chronically ill patients, multimorbidity, and the low number of physicians in remote areas. With ongoing digitalization, more and more data are available digitally. Clinical documentation is an important source for big data analysis and artificial intelligence. The patient also has an important role: telemonitoring, wearable technologies, and smart home devices provide digital health data from daily life. These are high-quality data that can be used for medical decisions.

Knaup Petra, Benning Nils-Hendrik, Seitz Max Wolfgang, Eisenmann Urs

2020-Sep-25

eHealth, interoperability, medical documentation, patient records, telemedicine

General General

Healthcare Data Analytics.

In Studies in health technology and informatics ; h5-index 23.0

Health analytics is a branch of analysis that focuses on complex and large amounts of health data characterized by high dimensionality, irregularities and rarities. Its aim is to improve the efficiency of healthcare providers' processes, work with patients, management of costs and resources, and diagnostic procedures and treatments. The prime focus is investigating historical data and finding patterns for different scenarios. As a final product, different visualisation tools are usually produced to support practitioners in patient care, to provide better services, and to improve existing procedures.

Ognjanovic Ivana

2020-Sep-25

Health analytics, health data, machine learning, patient similarity, phenotyping, predictive modelling

General General

Using Machine Learning and Smartphone and Smartwatch Data to Detect Emotional States and Transitions: Exploratory Study.

In JMIR mHealth and uHealth

BACKGROUND : Emotional state in everyday life is an essential indicator of health and well-being. However, daily assessment of emotional states largely depends on active self-reports, which are often inconvenient and prone to incomplete information. Automated detection of emotional states and transitions on a daily basis could be an effective solution to this problem. However, the relationship between emotional transitions and everyday context remains largely unexplored.

OBJECTIVE : This study aims to explore the relationship between contextual information and emotional transitions and states to evaluate the feasibility of detecting emotional transitions and states from daily contextual information using machine learning (ML) techniques.

METHODS : This study was conducted on the data of 18 individuals from a publicly available data set called ExtraSensory. Contextual and sensor data were collected using smartphone and smartwatch sensors in a free-living condition, where the number of days for each person varied from 3 to 9. Sensors included an accelerometer, a gyroscope, a compass, location services, a microphone, a phone state indicator, light, temperature, and a barometer. The users self-reported approximately 49 discrete emotions at different intervals via a smartphone app throughout the data collection period. We mapped the 49 reported discrete emotions to the 3 dimensions of the pleasure, arousal, and dominance model and considered 6 emotional states: discordant, pleased, dissuaded, aroused, submissive, and dominant. We built general and personalized models for detecting emotional transitions and states every 5 min. The transition detection problem is a binary classification problem that detects whether a person's emotional state has changed over time, whereas state detection is a multiclass classification problem. In both cases, a wide range of supervised ML algorithms were leveraged, in addition to data preprocessing, feature selection, and data imbalance handling techniques. Finally, an assessment was conducted to shed light on the association between everyday context and emotional states.

RESULTS : This study obtained promising results for emotional state and transition detection. The best area under the receiver operating characteristic curve (AUROC) for emotional state detection reached 60.55% in the general models and an average of 96.33% across personalized models. Despite the highly imbalanced data, the best AUROC for emotional transition detection reached 90.5% in the general models and an average of 88.73% across personalized models. In general, feature analyses show that spatiotemporal context, phone state, and motion-related information are the most informative factors for emotional state and transition detection. Our assessment showed that lifestyle has an impact on the predictability of emotion.

CONCLUSIONS : Our results demonstrate a strong association of daily context with emotional states and transitions as well as the feasibility of detecting emotional states and transitions using data from smartphone and smartwatch sensors.

Sultana Madeena, Al-Jefri Majed, Lee Joon

2020-Sep-29

artificial intelligence, digital biomarkers, digital phenotyping, emotion detection, emotional transition detection, mHealth, mental health, mobile phone, spatiotemporal context, supervised machine learning

General General

Development of a Social Network for People Without a Diagnosis (RarePairs): Evaluation Study.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : Diagnostic delay in rare disease (RD) is common, occasionally lasting up to more than 20 years. In attempting to reduce it, diagnostic support tools have been studied extensively. However, social platforms have not yet been used for systematic diagnostic support. This paper illustrates the development and prototypic application of a social network using scientifically developed questions to match individuals without a diagnosis.

OBJECTIVE : The study aimed to outline, create, and evaluate a prototype tool (a social network platform named RarePairs), helping patients with undiagnosed RDs to find individuals with similar symptoms. The prototype includes a matching algorithm, bringing together individuals with similar disease burden in the lead-up to diagnosis.

METHODS : We divided our project into 4 phases. In phase 1, we used known data and findings in the literature to understand and specify the context of use. In phase 2, we specified the user requirements. In phase 3, we designed a prototype based on the results of phases 1 and 2, as well as incorporating a state-of-the-art questionnaire with 53 items for recognizing an RD. Lastly, we evaluated this prototype with a data set of 973 questionnaires from individuals suffering from different RDs using 24 distance calculating methods.

RESULTS : Based on a step-by-step construction process, the digital patient platform prototype, RarePairs, was developed. To match individuals with similar experiences, it uses answer patterns generated by a specifically designed questionnaire (Q53). A total of 973 questionnaires answered by patients with RDs were used to construct and test an artificial intelligence (AI) algorithm based on k-nearest neighbor search. With this, we found matches for every one of the 973 records. Cross-validation of those matches showed that the algorithm significantly outperforms random matching. For every data set, the algorithm found at least one other record (match) with the same diagnosis.
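
The nearest-neighbor matching idea can be sketched in a few lines: compare binary questionnaire answer patterns by Hamming distance and return the closest record. The eight-item vectors and record identifiers below are invented for illustration; the study's Q53 questionnaire has 53 items and the distance measure was one of 24 candidates evaluated.

```python
# Hedged sketch of nearest-neighbor matching over binary answer patterns.

def hamming(a, b):
    # Number of questionnaire items answered differently.
    return sum(x != y for x, y in zip(a, b))

def best_match(query, records):
    # records: list of (record_id, answer_vector); returns the closest id.
    return min(records, key=lambda r: hamming(query, r[1]))[0]

records = [
    ("patient_A", [1, 0, 1, 1, 0, 0, 1, 0]),
    ("patient_B", [0, 1, 0, 0, 1, 1, 0, 1]),
    ("patient_C", [1, 0, 1, 0, 0, 0, 1, 0]),
]
match = best_match([1, 0, 1, 1, 0, 0, 1, 1], records)
```

Here the query differs from patient_A on only one item, so patient_A is returned as the match.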

CONCLUSIONS : Diagnostic delay is torturous for patients without a diagnosis, and shortening it is important for both doctors and patients. Diagnostic support using AI can be implemented in different ways. The prototype social media platform RarePairs may offer a low-threshold patient platform, and it proved suitable for matching and connecting individuals with comparable symptoms. The exchange fostered by RarePairs might be used to speed up the diagnostic process. Further work includes evaluation in a prospective setting and implementation of RarePairs as a mobile phone app.

Kühnle Lara, Mücke Urs, Lechner Werner M, Klawonn Frank, Grigull Lorenz

2020-Sep-29

artificial intelligence, diagnostic support tool, machine learning, prototype, rare disease, social network

General General

Achieving better connections between deposited lines in additive manufacturing via machine learning.

In Mathematical biosciences and engineering : MBE

Additive manufacturing is becoming increasingly popular because of its unique advantages, especially fused deposition modelling (FDM), which is widely used for its simplicity and comparatively low price. All FDM process parameters can be varied to achieve different goals; for example, a lower print speed may lead to higher strength of the fabricated parts. Changing these parameters (e.g., print speed, layer height, filament extrusion speed, and path distance within a layer) changes the connection between paths (lines) in a layer. To achieve the best connection among paths in a real printing process, the relationship between these parameters and the resulting connection quality must be studied. In this paper, a machine learning (deep neural network) model is proposed to predict the connection between paths under different process parameters. Four hundred experiments were conducted on an FDM machine to obtain the corresponding connection status data. Of these, 280 groups of data were used to train the machine learning model, while the remaining 120 groups were used for testing. The results show that the model can predict the connection status with an accuracy of around 83%. In the future, this model can be used to select the best process parameters in additive manufacturing processes for given objectives.
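
As a minimal stand-in for the paper's deep neural network, the parameter-to-connection mapping can be sketched with a logistic-regression classifier trained by gradient descent. Everything here is invented: the two scaled features, the labels, and the learning settings are illustrative only, not the authors' model or data.

```python
import math

# Sketch: classify (scaled print speed, scaled path distance) into
# good (1) vs poor (0) connection, on made-up, linearly separable data.

def train_logreg(X, y, lr=0.5, epochs=1000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

X = [[0.2, 0.3], [0.3, 0.2], [0.1, 0.4],   # low speed/distance: good bond
     [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]]   # high speed/distance: poor bond
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
```

A real replication would use a multi-layer network over all four process parameters, but the train/predict loop has the same shape.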

Jiang Jing Chao, Yu Chun Ling, Xu Xun, Ma Yong Sheng, Liu Ji Kai

2020-Apr-30

additive manufacturing, connection, deep neural network, machine learning

Radiology Radiology

Detecting Large Vessel Occlusion at Multiphase CT Angiography by Using a Deep Convolutional Neural Network.

In Radiology ; h5-index 91.0

Background Large vessel occlusion (LVO) stroke is one of the most time-sensitive diagnoses in medicine and requires emergent endovascular therapy to reduce morbidity and mortality. Leveraging recent advances in deep learning may facilitate rapid detection and reduce time to treatment. Purpose To develop a convolutional neural network to detect LVOs at multiphase CT angiography. Materials and Methods This multicenter retrospective study evaluated 540 adults with CT angiography examinations for suspected acute ischemic stroke from February 2017 to June 2018. Examinations positive for LVO (n = 270) were confirmed by catheter angiography and LVO-negative examinations (n = 270) were confirmed through review of clinical and radiology reports. Preprocessing of the CT angiography examinations included vasculature segmentation and the creation of maximum intensity projection images to emphasize the contrast agent-enhanced vasculature. Seven experiments were performed by using combinations of the three phases (arterial, phase 1; peak venous, phase 2; and late venous, phase 3) of the CT angiography. Model performance was evaluated on the held-out test set. Metrics included area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results The test set included 62 patients (mean age, 69.5 years; 48% women). Single-phase CT angiography achieved an AUC of 0.74 (95% confidence interval [CI]: 0.63, 0.85) with sensitivity of 77% (24 of 31; 95% CI: 59%, 89%) and specificity of 71% (22 of 31; 95% CI: 53%, 84%). Phases 1, 2, and 3 together achieved an AUC of 0.89 (95% CI: 0.81, 0.96), sensitivity of 100% (31 of 31; 95% CI: 99%, 100%), and specificity of 77% (24 of 31; 95% CI: 59%, 89%), a statistically significant improvement relative to single-phase CT angiography (P = .01). Likewise, phases 1 and 3 and phases 2 and 3 also demonstrated improved fit relative to single phase (P = .03). 
Conclusion This deep learning model was able to detect the presence of large vessel occlusion and its diagnostic performance was enhanced by using delayed phases at multiphase CT angiography examinations. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Ospel and Goyal in this issue.

Stib Matthew T, Vasquez Justin, Dong Mary P, Kim Yun Ho, Subzwari Sumera S, Triedman Harold J, Wang Amy, Wang Hsin-Lei Charlene, Yao Anthony D, Jayaraman Mahesh, Boxerman Jerrold L, Eickhoff Carsten, Cetintemel Ugur, Baird Grayson L, McTaggart Ryan A

2020-Sep-29

Public Health Public Health

Machine learning prediction of the adverse outcome for nontraumatic subarachnoid hemorrhage patients.

In Annals of clinical and translational neurology

OBJECTIVE : Subarachnoid hemorrhage (SAH) is often devastating with increased early mortality, particularly in those with presumed delayed cerebral ischemia (DCI). The ability to accurately predict survival for SAH patients during the hospital course would provide valuable information for healthcare providers, patients, and families. This study aims to utilize electronic health record (EHR) data and machine learning approaches to predict the adverse outcome for nontraumatic SAH adult patients.

METHODS : The cohort included nontraumatic SAH patients treated with vasopressors for presumed DCI from a large EHR database, the Cerner Health Facts® EMR database (2000-2014). The outcome of interest was the adverse outcome, defined as death in hospital or discharged to hospice. Machine learning-based models were developed and primarily assessed by area under the receiver operating characteristic curve (AUC).

RESULTS : A total of 2467 nontraumatic SAH patients (64% female; median age [interquartile range]: 56 [47-66]) who were treated with vasopressors for presumed DCI were included in the study. 934 (38%) patients died or were discharged to hospice. The model achieved an AUC of 0.88 (95% CI, 0.84-0.92) with only the initial 24 h EHR data, and 0.94 (95% CI, 0.92-0.96) after the next 24 h.

INTERPRETATION : EHR data and machine learning models can accurately predict the risk of the adverse outcome for critically ill nontraumatic SAH patients. It is possible to use EHR data and machine learning techniques to help with clinical decision-making.

Yu Duo, Williams George W, Aguilar David, Yamal José-Miguel, Maroufy Vahed, Wang Xueying, Zhang Chenguang, Huang Yuefan, Gu Yuxuan, Talebi Yashar, Wu Hulin

2020-Sep-29

oncology Oncology

Deep learning analysis of the primary tumour and the prediction of lymph node metastases in gastric cancer.

In The British journal of surgery

BACKGROUND : Lymph node metastasis (LNM) in gastric cancer is a prognostic factor and has implications for the extent of lymph node dissection. The lymphatic drainage of the stomach involves multiple nodal stations with different risks of metastases. The aim of this study was to develop a deep learning system for predicting LNMs in multiple nodal stations based on preoperative CT images in patients with gastric cancer.

METHODS : Preoperative CT images from patients who underwent gastrectomy with lymph node dissection at two medical centres were analysed retrospectively. Using a discovery patient cohort, a system of deep convolutional neural networks was developed to predict pathologically confirmed LNMs at 11 regional nodal stations. To gain understanding about the networks' prediction ability, gradient-weighted class activation mapping for visualization was assessed. The performance was tested in an external cohort of patients by analysis of area under the receiver operating characteristic (ROC) curves (AUC), sensitivity and specificity.

RESULTS : The discovery and external cohorts included 1172 and 527 patients respectively. The deep learning system demonstrated excellent prediction accuracy in the external validation cohort, with a median AUC of 0.876 (range 0.856-0.893), sensitivity of 0.743 (0.551-0.859) and specificity of 0.936 (0.672-0.966) for the 11 nodal stations. The imaging models substantially outperformed clinicopathological variables for predicting LNMs (median AUC 0.652, range 0.571-0.763). By visualizing nearly 19 000 subnetworks, imaging features related to intratumoral heterogeneity and the invasive front were found to be most useful for predicting LNMs.

CONCLUSION : A deep learning system for the prediction of LNMs was developed based on preoperative CT images of gastric cancer. The models require further validation but may be used to inform prognosis and guide individualized surgical treatment.

Jin C, Jiang Y, Yu H, Wang W, Li B, Chen C, Yuan Q, Hu Y, Xu Y, Zhou Z, Li G, Li R

2020-Sep-29

Radiology Radiology

Tubular gastric adenocarcinoma: Machine learning-based CT texture analysis for predicting lymphovascular and perineural invasion.

In Diagnostic and interventional radiology (Ankara, Turkey)

PURPOSE : Lymphovascular invasion (LVI) and perineural invasion (PNI) are associated with poor prognosis in gastric cancers. In this work, we aimed to investigate the potential role of computed tomography (CT) texture analysis in predicting LVI and PNI in patients with tubular gastric adenocarcinoma (GAC) using a machine learning (ML) approach.

METHODS : Sixty-eight patients who underwent total gastrectomy with curative (R0) resection and D2-lymphadenectomy were included in this retrospective study. Texture features were extracted from the portal venous phase CT images. Dimension reduction was first done with a reproducibility analysis by two radiologists. Then, a feature selection algorithm was used to further reduce the high-dimensionality of the radiomic data. Training and test splits were created with 100 random samplings. ML-based classifications were done using adaptive boosting, k-nearest neighbors, Naive Bayes, neural network, random forest, stochastic gradient descent, support vector machine, and decision tree. Predictive performance of the ML algorithms was mainly evaluated using the mean area under the curve (AUC) metric.
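
The AUC metric used to compare the classifiers has a simple probabilistic reading: it is the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney view), with ties counting half. A small sketch, on invented scores:

```python
# AUROC via the Mann-Whitney U statistic; scores are illustrative only.

def auroc(pos_scores, neg_scores):
    # Each positive/negative pair contributes 1 if the positive scores
    # higher, 0.5 on a tie, 0 otherwise.
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

perfect = auroc([0.9, 0.8, 0.7], [0.1, 0.2, 0.3])  # fully separated classes
chance = auroc([0.5, 0.5], [0.5, 0.5])             # uninformative classifier
```

Fully separated scores give an AUROC of 1.0 and an uninformative classifier gives 0.5, which is why values such as 0.777-0.894 for LVI indicate useful but imperfect discrimination.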

RESULTS : Among 271 texture features, 150 features had excellent reproducibility, which were included in the further feature selection process. Dimension reduction steps yielded five texture features for LVI and five for PNI. Considering all eight ML algorithms, mean AUC and accuracy ranges for predicting LVI were 0.777-0.894 and 76%-81.5%, respectively. For predicting PNI, mean AUC and accuracy ranges were 0.482-0.754 and 54%-68.2%, respectively. The best performances for predicting LVI and PNI were achieved with the random forest and Naive Bayes algorithms, respectively.

CONCLUSION : ML-based CT texture analysis has a potential for predicting LVI and PNI of the tubular GACs. Overall, the method was more successful in predicting LVI than PNI.

Yardımcı Aytül Hande, Koçak Burak, Turan Bektaş Ceyda, Sel İpek, Yarıkkaya Enver, Dursun Nevra, Bektaş Hasan, Usul Afşar Çiğdem, Gürsu Rıza Umar, Kılıçkesmez Özgür

2020-Sep-29

General General

Claims-Based Algorithms for Identifying Patients With Pulmonary Hypertension: A Comparison of Decision Rules and Machine-Learning Approaches.

In Journal of the American Heart Association ; h5-index 70.0

Background Real-world healthcare data are an important resource for epidemiologic research. However, accurate identification of patient cohorts, a crucial first step underpinning the validity of research results, remains a challenge. We developed and evaluated claims-based case ascertainment algorithms for pulmonary hypertension (PH), comparing conventional decision rules with state-of-the-art machine-learning approaches. Methods and Results We analyzed an electronic health record-Medicare linked database from two large academic tertiary care hospitals (years 2007-2013). Electronic health record charts were reviewed to form a gold standard cohort of patients with (n=386) and without PH (n=164). Using health encounter data captured in Medicare claims (including patients' demographics, diagnoses, medications, and procedures), we developed and compared 2 approaches for identifying patients with PH: decision rules and machine-learning algorithms using penalized lasso regression, random forest, and gradient boosting machine. The best-performing rule-based algorithm (requiring ≥3 PH-related healthcare encounters and right heart catheterization) attained an area under the receiver operating characteristic curve of 0.64 (sensitivity, 0.75; specificity, 0.48). All 3 machine-learning algorithms outperformed this rule-based algorithm (P<0.001). A model derived from the random forest algorithm achieved an area under the receiver operating characteristic curve of 0.88 (sensitivity, 0.87; specificity, 0.70), and gradient boosting machine achieved comparable results (area under the receiver operating characteristic curve, 0.85; sensitivity, 0.87; specificity, 0.70). Penalized lasso regression achieved an area under the receiver operating characteristic curve of 0.73 (sensitivity, 0.70; specificity, 0.68). Conclusions Research-grade case identification algorithms for PH can be derived and rigorously validated using machine-learning algorithms.
Simple decision rules commonly applied in published literature performed poorly; more complex rule-based algorithms may potentially address the limitation of this approach. PH research using claims data would be considerably strengthened through the use of validated algorithms for cohort ascertainment.
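
The rule-based baseline described in the abstract is easy to sketch: a hard AND over two claims-derived criteria, evaluated for sensitivity and specificity against a chart-review gold standard. The four toy patient records below are invented; the real study evaluated such rules on 550 chart-reviewed patients.

```python
# Sketch of the decision-rule baseline (>=3 PH-related encounters AND
# right heart catheterization); data are illustrative, not from the study.

def rule_based_ph(record):
    return record["ph_encounters"] >= 3 and record["rhc"]

patients = [
    {"ph_encounters": 5, "rhc": True,  "true_ph": True},
    {"ph_encounters": 2, "rhc": True,  "true_ph": True},   # missed by rule
    {"ph_encounters": 4, "rhc": False, "true_ph": False},
    {"ph_encounters": 1, "rhc": False, "true_ph": False},
]
tp = sum(rule_based_ph(p) and p["true_ph"] for p in patients)
tn = sum(not rule_based_ph(p) and not p["true_ph"] for p in patients)
sens = tp / sum(p["true_ph"] for p in patients)
spec = tn / sum(not p["true_ph"] for p in patients)
```

Hard rules like this trade sensitivity for specificity in a fixed way, which is one reason flexible learners (random forest, gradient boosting) outperformed them here.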

Ong Mei-Sing, Klann Jeffrey G, Lin Kueiyu Joshua, Maron Bradley A, Murphy Shawn N, Natter Marc D, Mandl Kenneth D

2020-Sep-29

computable phenotype, machine learning, pulmonary hypertension

General General

Penalized Least Squares for Structural Equation Modeling with Ordinal Responses.

In Multivariate behavioral research

Statistical modeling with sparsity has become an active research topic in statistics and machine learning. Because the true sparsity pattern of a model is generally unknown beforehand, it is often explored by a sparse estimation procedure such as the least absolute shrinkage and selection operator (lasso). In this study, a penalized least squares (PLS) method for structural equation modeling (SEM) with ordinal data is developed. PLS describes data generation by an underlying response approach and uses a least squares (LS) fitting function to construct a penalized estimation criterion. A numerical simulation was used to compare PLS with existing penalized likelihood (PL) methods in terms of averaged mean square error, absolute bias, and model correctness. Based on these empirical findings, a hybrid PLS was also proposed to improve on both PL and PLS: it first chooses an optimal sparsity pattern by PL, then estimates model parameters by unpenalized LS under the model selected by PL. We also extended PLS to mixed-type data and multi-group analysis. All proposed methods are implemented in the R package lslx.

Huang Po-Hsien

2020-Sep-29

Structural equation modeling, factor analysis, lasso, penalized least squares, polychoric correlation

Surgery Surgery

Artificial intelligence in cardiothoracic surgery.

In Minerva cardioangiologica

The tremendous and rapid technological advances that humans have achieved in the last decade have definitely impacted how surgical tasks are performed in the operating room (OR). As a high-tech work environment, the contemporary OR has incorporated novel computational systems into the clinical workflow, aiming to optimize processes and support the surgical team. Artificial intelligence (AI) is increasingly important for surgical decision making to help address diverse sources of information, such as patient risk factors, anatomy, disease natural history, patient values and cost, and assist surgeons and patients to make better predictions regarding the consequences of surgical decisions. In this review, we discuss the current initiatives that are using AI in cardiothoracic surgery and surgical care in general. We also address the future of AI and how high-tech ORs will leverage human-machine teaming to optimize performance and enhance patient safety.

Dias Roger D, Shah Julie A, Zenati Marco A

2020-Sep-29

Public Health Public Health

Factors Affecting the Incidence of Hospitalized Pneumonia after Influenza Infection in Korea Using the National Health Insurance Research Database, 2014-2018: Focusing on the Effect of Antiviral Therapy in the 2017 Flu Season.

In Journal of Korean medical science

BACKGROUND : This study aimed to investigate the effect of antiviral therapy following influenza outpatient episodes on the incidence of hospitalized pneumonia episodes, one of the secondary complications of influenza.

METHODS : Data from July 2013 to June 2018 in the National Health Insurance Research Database were used. All claim data with diagnoses of influenza and pneumonia were converted to episodes of care after applying a 100-day window period. Using these 100-day episodes of care, we investigated the characteristics of influenza outpatient episodes and antiviral therapy for influenza, the incidence of hospitalized pneumonia episodes following influenza, and the effect of antiviral therapy for influenza on hospitalized pneumonia episodes.

RESULTS : The crude incidence rate of hospitalized pneumonia after influenza infection was 0.57% in both males and females. Factors affecting hospitalized pneumonia included age, income level (except the self-employed highest bracket, in females only), municipality, medical institution type, precedent chronic diseases (except hepatitis, in females only), and antiviral therapy. Regarding the effect of antiviral therapy on pneumonia in the 2017 flu season, the relative risk was 0.38 (95% confidence interval [CI], 0.29-0.50) in males aged 0-9 and 0.43 (95% CI, 0.32-0.57) in females aged 0-9 without chronic diseases, and 0.51 (95% CI, 0.42-0.61) in males aged 0-9 and 0.42 (95% CI, 0.35-0.50) in females aged 0-9 with one or more chronic diseases. This suggests that antiviral therapy may decrease the incidence of pneumonia after influenza infection.
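For readers unfamiliar with the relative-risk figures quoted above, the quantity and its 95% CI can be computed from 2×2 counts using the standard log-scale (Katz) method; the counts below are hypothetical and not taken from the study:

```python
import math

def relative_risk(a, n1, c, n0):
    """Relative risk of an event in a treated group (a events out of n1)
    versus an untreated group (c events out of n0), with a 95% CI
    computed on the log scale (Katz method)."""
    rr = (a / n1) / (c / n0)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical counts for illustration only:
# 30 pneumonia cases among 10,000 treated, 80 among 10,000 untreated.
rr, ci = relative_risk(a=30, n1=10000, c=80, n0=10000)
# rr = 0.375, i.e. treated risk is 37.5% of untreated risk.
```

A relative risk below 1 with a CI excluding 1, as in the study's age 0-9 strata, indicates a protective association.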

CONCLUSION : Following an outpatient influenza episode, antiviral treatment was shown to reduce the incidence of hospitalized pneumonia, especially in infants and children, during the 2017 pandemic season. Antiviral therapy for influenza is recommended to minimize the burden caused by influenza virus infection and to reduce pneumonia. In addition, antiviral therapy may decrease the medical costs of hospitalization, especially in infants and children.

Byeon Kyeong Hyang, Kim Jaiyong, Choi Bo Youl, Kim Jin Yong, Lee Nakyoung

2020-Sep-28

Antiviral Treatment, Episode of Care, Influenza, Pneumonia

Radiology Radiology

Accelerating T2 mapping of the brain by integrating deep learning priors with low-rank and sparse modeling.

In Magnetic resonance in medicine ; h5-index 66.0

PURPOSE : To accelerate T2 mapping with highly sparse sampling by integrating deep learning image priors with low-rank and sparse modeling.

METHODS : The proposed method achieves high-speed T2 mapping by highly sparsely sampling (k, TE)-space. Image reconstruction from the undersampled data was done by exploiting the low-rank structure and sparsity in the T2-weighted image sequence and image priors learned from training data. The image priors for a single TE were generated from the public Human Connectome Project data using a tissue-based deep learning method; the image priors were then transferred to other TEs using a generalized series-based method. With these image priors, the proposed reconstruction method used a low-rank model and a sparse model to capture subject-dependent novel features.
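The low-rank-plus-sparse split at the heart of such reconstructions can be sketched, in heavily simplified form, as alternating proximal steps: singular-value thresholding for the low-rank part and soft thresholding for the sparse part. This generic numpy sketch ignores the learned priors, the undersampling operator, and the (k, TE)-space structure of the actual method, and the thresholds are illustrative:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft thresholding: proximal operator of the l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def lowrank_sparse_split(M, tau_l=1.0, tau_s=0.1, n_iter=100):
    # Alternating proximal steps for M ≈ L + S, L low rank and S sparse.
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau_l)
        S = soft(M - L, tau_s)
    return L, S

# Synthetic test matrix: rank-1 background plus sparse outliers.
rng = np.random.default_rng(0)
u = rng.standard_normal(30)
v = rng.standard_normal(30)
L0 = np.outer(u, v)                          # rank-1 "slowly varying" part
S0 = np.zeros((30, 30))
idx = rng.random((30, 30)) < 0.05            # ~5% sparse spikes
S0[idx] = 5.0 * rng.choice([-1.0, 1.0], idx.sum())
M = L0 + S0
L, S = lowrank_sparse_split(M)
```

In the MRI setting, the low-rank term models the smooth TE-dependence shared across voxels while the sparse term captures subject-dependent novel features, analogous to the split above.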

RESULTS : The proposed method was evaluated using experimental data obtained from both healthy subjects and tumor patients using a turbo spin-echo sequence. High-quality T2 maps at a resolution of 0.9 × 0.9 × 3.0 mm³ were obtained successfully from highly undersampled data with an acceleration factor of 8. Compared with the existing compressed sensing-based methods, the proposed method produced significantly reduced reconstruction errors. Compared with the deep learning-based methods, the proposed method recovered novel features better.

CONCLUSION : This paper demonstrates the feasibility of learning T2-weighted image priors for multiple TEs using tissue-based deep learning and generalized series-based learning. A new method was proposed to effectively integrate these image priors with low-rank and sparse modeling to reconstruct high-quality images from highly undersampled data. The proposed method will supplement other acquisition-based methods to achieve high-speed T2 mapping.

Meng Ziyu, Guo Rong, Li Yudu, Guan Yue, Wang Tianyao, Zhao Yibo, Sutton Brad, Li Yao, Liang Zhi-Pei

2020-Sep-29

T2 mapping, deep learning, low-rank modeling, quantitative imaging, sparse modeling

Oncology Oncology

A Generative Adversarial Network-Based (GAN-Based) Architecture for Automatic Fiducial Marker Detection in Prostate MRI-Only Radiotherapy Simulation Images.

In Medical physics ; h5-index 59.0

PURPOSE : Clinical sites utilizing MRI-only simulation imaging for prostate radiotherapy planning typically use fiducial markers for pretreatment patient positioning and alignment. Fiducial markers appear as small signal voids in MRI images and are often difficult to discern. Existing clinical methods for fiducial marker localization require multiple MRI sequences and/or manual interaction and specialized expertise. In this study, we develop a robust method for automatic fiducial marker detection in MRI simulation images of the prostate and quantify the prostate organ localization accuracy using automatically detected fiducial markers in MRI for pretreatment alignment using cone beam CT (CBCT) images.

METHODS AND MATERIALS : In this study, a deep learning-based algorithm was used to convert MRI images into labelled fiducial marker volumes. 77 prostate cancer patients who received marker implantation prior to MRI and CT simulation imaging were selected for this study. Multiple-echo T1-VIBE MRI images were acquired, and images were stratified (at the patient level) based on the presence of intraprostatic calcifications. Ground truth (GT) contours were defined by an expert on MRI using CT images. Training was done using the pix2pix generative adversarial network (GAN) image-to-image translation package and model testing was done using five-fold cross validation. For performance comparison, an experienced medical dosimetrist and a medical physicist each manually contoured fiducial markers in MRI images. The percent of correct detections and F1 classification scores are reported for markers detected using the automatic detection algorithm and human observers. The patient positioning errors were quantified by calculating the target registration errors (TREs) from fiducial marker driven rigid registration between MRI and CBCT images. TREs were quantified for fiducial marker contours defined on MRI by the automatic detection algorithm and the two expert human observers.

RESULTS : 96% of implanted fiducial markers were correctly identified using the automatic detection algorithm. Two expert raters correctly identified 97% and 96% of fiducial markers, respectively. The F1 classification score was 0.68, 0.75 and 0.72 for the automatic detection algorithm and two human raters, respectively. The main source of false discoveries was intraprostatic calcifications. The mean TRE differences between alignments from automatic detection algorithm and human detected markers and GT were less than 1 mm.

CONCLUSIONS : We have developed a deep learning-based approach to automatically detect fiducial markers in MRI-only simulation images in a clinically representative patient cohort. The automatic detection algorithm-predicted markers can allow for patient setup with similar accuracy to independent human observers.

Singhrao Kamal, Fu Jie, Parikh Neil R, Mikaeilian Argin G, Ruan Dan, Kishan Amar U, Lewis John H

2020-Sep-28

Deep Learning, Fiducial Markers, MRI in treatment planning, MRI-Only Simulation

General General

Robust table recognition for printed document images.

In Mathematical biosciences and engineering : MBE

The recognition and analysis of tables in printed document images is a popular research field in pattern recognition and image processing. Existing table recognition methods usually require a high degree of regularity, and their robustness still needs significant improvement. This paper focuses on a robust table recognition system that mainly consists of three parts: image preprocessing, cell location based on contour mutual exclusion, and recognition of printed Chinese characters based on a deep learning network. A table recognition app has been developed based on the proposed algorithms, which can transform captured images into editable text in real time. The effectiveness of the table recognition app has been verified on a dataset of 105 images. The test results show that it identifies high-quality tables well, and its recognition rate on low-quality tables with distortion and blur reaches 81%, which is considerably higher than those of existing methods. This work could give insights into the application of table recognition and analysis algorithms.

Liang Qiao Kang, Peng Jian Zhong, Li Zheng Wei, Xie Da Qi, Sun Wei, Wang Yao Nan, Zhang Dan

2020-Apr-23

Binarization algorithm, character recognition, deep learning, recurrent neural network, table image recognition

General General

Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms.

In International journal of medical informatics ; h5-index 49.0

OBJECTIVE : This study aims to develop and test a new computer-aided diagnosis (CAD) scheme of chest X-ray images to detect coronavirus (COVID-19) infected pneumonia.

METHOD : The CAD scheme first applies two image preprocessing steps: removing the majority of the diaphragm region, then filtering the original image with a histogram equalization algorithm and a bilateral low-pass filter. The original image and the two filtered images are then used to form a pseudo color image, which is fed into the three input channels of a transfer learning-based convolutional neural network (CNN) model to classify chest X-ray images into three classes: COVID-19 infected pneumonia, other community-acquired non-COVID-19 pneumonia, and normal (non-pneumonia) cases. To build and test the CNN model, a publicly available dataset of 8,474 chest X-ray images is used, which includes 415, 5,179 and 2,880 cases in the three classes, respectively. The dataset is randomly divided into three subsets (training, validation, and testing), preserving the same frequency of cases in each class, to train and test the CNN model.
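The filtering and pseudo-color stacking described above can be sketched as follows. This is a schematic reconstruction, not the authors' code: the box blur stands in for the paper's bilateral low-pass filter (which, unlike a box blur, preserves edges), and diaphragm removal is omitted:

```python
import numpy as np

def hist_equalize(img):
    # Map 8-bit gray levels through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # rescale to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def box_blur(img, k=3):
    # Simple k x k mean filter as a stand-in low-pass step.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

def pseudo_color(img):
    # Stack original + two filtered versions into three CNN input channels.
    return np.dstack([img, hist_equalize(img), box_blur(img)])

# Toy grayscale "X-ray": a smooth intensity ramp.
img = np.linspace(0, 255, 64 * 64).reshape(64, 64).astype(np.uint8)
rgb = pseudo_color(img)  # shape (64, 64, 3), ready for a 3-channel CNN
```

Feeding the three differently filtered views through the pretrained network's RGB channels is what lets a transfer-learning model reuse weights trained on color images.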

RESULTS : The CNN-based CAD scheme yields an overall accuracy of 94.5 % (2404/2544) with a 95 % confidence interval of [0.93,0.96] in classifying 3 classes. CAD also yields 98.4 % sensitivity (124/126) and 98.0 % specificity (2371/2418) in classifying cases with and without COVID-19 infection. However, without using two preprocessing steps, CAD yields a lower classification accuracy of 88.0 % (2239/2544).

CONCLUSION : This study demonstrates that adding two image preprocessing steps and generating a pseudo color image plays an important role in developing a deep learning CAD scheme of chest X-ray images to improve accuracy in detecting COVID-19 infected pneumonia.

Heidari Morteza, Mirniaharikandehei Seyedehnafiseh, Khuzani Abolfazl Zargari, Danala Gopichandh, Qiu Yuchen, Zheng Bin

2020-Sep-23

COVID-19 diagnosis, Computer-aided diagnosis, Convolution neural network (CNN), Coronavirus, Disease classification, VGG16 network

Surgery Surgery

Unravelling the effect of data augmentation transformations in polyp segmentation.

In International journal of computer assisted radiology and surgery

PURPOSE : Data augmentation is a common technique to overcome the lack of large annotated databases, a usual situation when applying deep learning to medical imaging problems. Nevertheless, there is no consensus on which transformations to apply for a particular field. This work aims at identifying the effect of different transformations on polyp segmentation using deep learning.

METHODS : A set of transformations and ranges have been selected, considering image-based (width and height shift, rotation, shear, zooming, horizontal and vertical flip and elastic deformation), pixel-based (changes in brightness and contrast) and application-based (specular lights and blurry frames) transformations. A model has been trained under the same conditions without data augmentation transformations (baseline) and for each of the transformation and ranges, using CVC-EndoSceneStill and Kvasir-SEG, independently. Statistical analysis is performed to compare the baseline performance against results of each range of each transformation on the same test set for each dataset.
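A minimal sketch of how such image- and pixel-based transformations are typically applied to an image/mask pair (generic numpy code, not the authors' pipeline; the probabilities and ranges are illustrative). Geometric transformations must be applied to both image and mask, while intensity transformations touch the image only:

```python
import numpy as np

def augment(img, mask, rng):
    """Apply a random subset of augmentation transformations to an
    image/segmentation-mask pair."""
    if rng.random() < 0.5:                  # horizontal flip (both)
        img, mask = img[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                  # vertical flip (both)
        img, mask = img[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))             # rotation by a multiple of 90° (both)
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:                  # brightness shift (image only)
        img = np.clip(img + rng.uniform(-0.2, 0.2), 0.0, 1.0)
    if rng.random() < 0.5:                  # contrast scaling (image only)
        m = img.mean()
        img = np.clip((img - m) * rng.uniform(0.8, 1.2) + m, 0.0, 1.0)
    return np.ascontiguousarray(img), np.ascontiguousarray(mask)

rng = np.random.default_rng(0)
img = rng.random((32, 32))                  # toy endoscopy frame in [0, 1]
mask = (img > 0.5).astype(np.uint8)         # toy binary polyp mask
aug_img, aug_mask = augment(img, mask, rng)
```

The paper's finding that pixel-based changes (brightness, contrast) help on one dataset while geometric ones help on another corresponds to tuning which of these branches fire and over what ranges.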

RESULTS : This basic method identifies the most adequate transformations for each dataset. For CVC-EndoSceneStill, changes in brightness and contrast significantly improve the model performance. On the contrary, Kvasir-SEG benefits to a greater extent from the image-based transformations, especially rotation and shear. Augmentation with synthetic specular lights also improves the performance.

CONCLUSION : Despite being infrequently used, pixel-based transformations show a great potential to improve polyp segmentation in CVC-EndoSceneStill. On the other hand, image-based transformations are more suitable for Kvasir-SEG. Problem-based transformations behave similarly in both datasets. Polyp area, brightness and contrast of the dataset have an influence on these differences.

Sánchez-Peralta Luisa F, Picón Artzai, Sánchez-Margallo Francisco M, Pagador J Blas

2020-Sep-28

Data augmentation, Deep learning, Polyp segmentation, Semantic segmentation, Transformations

General General

FRAX: re-adjust or re-think.

In Archives of osteoporosis

Since its development in 2008, FRAX has secured its place in the standard day-to-day management of osteoporosis. The FRAX tool has been appreciated for its simplicity and applicability in primary care, but criticised for the same reason, as it does not take exposure response into account. To address some of these limitations, relatively simple arithmetic procedures have been proposed for application to the conventional FRAX estimates of hip and major fracture probabilities, aiming to adjust the probability assessment. However, as the list of these adjustments has grown longer, this has affected its implementation in standard practice and given FRAX a patchy look. Consequently, there is a need to re-think the current FRAX and to ask whether a second generation of the tool is required to address the perceived limitations of the original. This article discusses both points of view: re-adjustment and re-thinking.

El Miedany Yasser

2020-Sep-28

Adjustment, Artificial intelligence, BMD, Clinical risk factors, FRAX, Fracture probability, Intervention thresholds, Osteoporosis, Risk assessment, Screening

General General

AI in the treatment of fertility: key considerations.

In Journal of assisted reproduction and genetics ; h5-index 39.0

Artificial intelligence (AI) has been proposed as a potential tool to help address many of the existing problems related to empirical or subjective assessments of clinical and embryological decision points during the treatment of infertility. AI technologies are reviewed and potential areas for implementing algorithms are discussed, highlighting the importance of following a proper path for the development and validation of algorithms, including regulatory requirements, and the need for ecosystems containing enough quality data to generate them. As a consensus of fertility experts indicates, it is believed that, if properly developed, AI algorithms may help practitioners from around the globe to standardize, automate, and improve IVF outcomes for the benefit of patients. Collaboration between AI developers and healthcare professionals is required to make this happen.

Swain Jason, VerMilyea Matthew Tex, Meseguer Marcos, Ezcurra Diego

2020-Sep-29

AI, algorithms, embryos, fertility

Internal Medicine Internal Medicine

Automated Lung Cancer Detection Using Artificial Intelligence (AI) Deep Convolutional Neural Networks: A Narrative Literature Review.

In Cureus

Lung cancer is the number one cause of cancer-related deaths in the United States as well as worldwide. Radiologists and physicians experience heavy daily workloads and are thus at high risk for burnout. To alleviate this burden, this narrative literature review compares the performance of four different artificial intelligence (AI) models in lung nodule cancer detection, as well as their performance against the reading accuracy of physicians/radiologists. A total of 648 articles were selected by two experienced physicians with over 10 years of experience in the fields of pulmonary critical care and hospital medicine. The databases used to search and select the articles are PubMed/MEDLINE, EMBASE, the Cochrane Library, Google Scholar, Web of Science, IEEE Xplore, and DBLP. The articles selected range between the years 2008 and 2019. Four of the 648 articles were selected using the following inclusion criteria: 1) 18-65 years old, 2) CT chest scans, 3) lung nodule, 4) lung cancer, 5) deep learning, 6) ensemble methods, and 7) classic methods. The exclusion criteria used in this narrative review include: 1) age greater than 65 years old, 2) positron emission tomography (PET) hybrid scans, 3) chest X-ray (CXR), and 4) genomics. The model performance outcome metrics are measured and evaluated in terms of sensitivity, specificity, accuracy, receiver operating characteristic (ROC) curve, and area under the curve (AUC). The hybrid deep-learning model reviewed is a state-of-the-art architecture with high accuracy and low false-positive rates. Future studies comparing each model's accuracy in depth are key. Automated physician-assist systems such as the models in this review can help preserve a quality doctor-patient relationship.

Sathyakumar Kaviya, Munoz Michael, Singh Jaikaran, Hussain Nowair, Babu Benson A

2020-Aug-25

artificial intelligence, computer-aided detection, convolutional neural networks, deep learning artificial intelligence, deep neural network, ensemble neural network, lung cancer, lung nodule

General General

Leveraging Computational Modeling to Understand Infectious Diseases.

In Current pathobiology reports

Purpose of Review : Computational and mathematical modeling have become a critical part of understanding in-host infectious disease dynamics and predicting effective treatments. In this review, we discuss recent findings pertaining to the biological mechanisms underlying infectious diseases, including etiology, pathogenesis, and the cellular interactions with infectious agents. We present advances in modeling techniques that have led to fundamental disease discoveries and impacted clinical translation.

Recent Findings : Combining mechanistic models and machine learning algorithms has led to improvements in the treatment of Shigella and tuberculosis through the development of novel compounds. Modeling of the epidemic dynamics of malaria at the within-host and between-host level has afforded the development of more effective vaccination and antimalarial therapies. Similarly, in-host and host-host models have supported the development of new HIV treatment modalities and an improved understanding of the immune involvement in influenza. In addition, large-scale transmission models of SARS-CoV-2 have furthered the understanding of coronavirus disease and allowed for rapid policy implementations on travel restrictions and contact tracing apps.

Summary : Computational modeling is now more than ever at the forefront of infectious disease research due to the COVID-19 pandemic. This review highlights how infectious diseases can be better understood by connecting scientists from medicine and molecular biology with those in computer science and applied mathematics.

Jenner Adrianne L, Aogo Rosemary A, Davis Courtney L, Smith Amber M, Craig Morgan

2020-Sep-24

Bacteria, Computational modeling, Infectious diseases, Mathematics, Parasites, Viruses

General General

Deep Learning of Warping Functions for Shape Analysis.

In Conference on Computer Vision and Pattern Recognition Workshops. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Workshops

Rate-invariant or reparameterization-invariant matching between functions and shapes of curves, respectively, is an important problem in computer vision and medical imaging. Often, the computational cost of matching using approaches such as dynamic time warping or dynamic programming is prohibitive for large datasets. Here, we propose a deep neural-network-based approach for learning the warping functions from training data consisting of a large number of optimal matches, and use it to predict optimal diffeomorphic warping functions. Results show prediction performance on a synthetic dataset of bump functions and two-dimensional curves from the ETH-80 dataset as well as a significant reduction in computational cost.
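For reference, the dynamic-programming matching whose cost motivates the learned approach looks like this in its classic 1-D dynamic time warping form (a textbook sketch, not the authors' elastic shape metric or network):

```python
import numpy as np

def dtw_cost(a, b):
    # Classic O(len(a) * len(b)) dynamic-programming alignment cost between
    # two 1-D sequences; this quadratic cost per pair is the expensive step
    # that a learned predictor of warping functions aims to avoid.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same bump, shifted in time
# The time-shifted bump aligns at zero cost, unlike a pointwise distance.
```

A network trained on many such optimal alignments can amortize this cost, predicting the warping function in a single forward pass.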

Nunez Elvis, Joshi Shantanu H

2020-Jun

General General

Learning distinctive filters for COVID-19 detection from chest X-ray using shuffled residual CNN.

In Applied soft computing

COVID-19 is a deadly viral infection that has brought a significant threat to human lives. Automatic diagnosis of COVID-19 from medical imaging enables precise medication, helps to control community outbreaks, and reinforces the coronavirus testing methods in place. While there exist several challenges in manually inferring traces of this viral infection from X-rays, a Convolutional Neural Network (CNN) can mine data patterns that capture subtle distinctions between infected and normal X-rays. To enable automated learning of such latent features, a custom CNN architecture has been proposed in this research. It learns unique convolutional filter patterns for each kind of pneumonia. This is achieved by restricting certain filters in a convolutional layer to maximally respond only to a particular class of pneumonia/COVID-19. The CNN architecture integrates different convolution types to provide better context for learning robust features and strengthen gradient flow between layers. The proposed work also visualizes regions of saliency on the X-ray that have had the most influence on the CNN's prediction outcome. To the best of our knowledge, this is the first attempt in deep learning to learn custom filters within a single convolutional layer for identifying specific pneumonia classes. Experimental results demonstrate that the proposed work has significant potential in augmenting current testing methods for COVID-19. It achieves an F1-score of 97.20% and an accuracy of 99.80% on the COVID-19 X-ray set.

Karthik R, Menaka R, M Hariharan

2020-Sep-23

CNN, COVID-19, Chest X-ray, Deep learning, Pneumonia

Radiology Radiology

Distant metastasis prediction via a multi-feature fusion model in breast cancer.

In Aging ; h5-index 49.0

This study aimed to develop a model fusing multiple features (multi-feature fusion model) for predicting metachronous distant metastasis (DM) in breast cancer (BC) based on clinicopathological characteristics and magnetic resonance imaging (MRI). A nomogram based on clinicopathological features (clinicopathological-feature model) and a nomogram based on the multi-feature fusion model were constructed from BC patients with DM (n=67) and matched patients without DM (n=134). DM was diagnosed on average 17.31 ± 13.12 months after the initial diagnosis. The clinicopathological-feature model included seven features: reproductive history, lymph node metastasis, estrogen receptor status, progesterone receptor status, CA153, CEA, and endocrine therapy. The multi-feature fusion model included the same features and three additional MRI features (multiple masses, fat-saturated T2WI signal, and mass size). The multi-feature fusion model was relatively better at predicting DM. The sensitivity, specificity, diagnostic accuracy and AUC of the multi-feature fusion model were 0.746 (95% CI: 0.623-0.841), 0.806 (0.727-0.867), 0.786 (0.723-0.841), and 0.854 (0.798-0.911), respectively. Both internal and external validations suggested good generalizability of the multi-feature fusion model to the clinic. The incorporation of MRI factors significantly improved the specificity and sensitivity of the nomogram. The constructed multi-feature fusion nomogram may guide DM screening and the implementation of prophylactic treatment for BC.

Ma Wenjuan, Wang Xin, Xu Guijun, Liu Zheng, Yin Zhuming, Xu Yao, Wu Haixiao, Baklaushev Vladimir P, Peltzer Karl, Sun Henian, Kharchenko Natalia V, Qi Lisha, Mao Min, Li Yanbo, Liu Peifang, Chekhonin Vladimir P, Zhang Chao

2020-Sep-28

artificial intelligence, breast neoplasms, early detection, neoplasm metastasis

General General

Near-hysteresis-free soft tactile electronic skins for wearables and reliable machine learning.

In Proceedings of the National Academy of Sciences of the United States of America

Electronic skins are essential for real-time health monitoring and tactile perception in robots. Although the use of soft elastomers and microstructures has improved the sensitivity and pressure-sensing range of tactile sensors, the intrinsic viscoelasticity of soft polymeric materials remains a long-standing challenge, resulting in cyclic hysteresis. This causes sensor data variations between contact events that negatively impact accuracy and reliability. Here, we introduce the Tactile Resistive Annularly Cracked E-Skin (TRACE) sensor to address the inherent trade-off between sensitivity and hysteresis in tactile sensors when using soft materials. We discovered that piezoresistive sensors made using an array of three-dimensional (3D) metallic annular cracks on polymeric microstructures possess high sensitivities (> 10⁷ Ω⋅kPa⁻¹), low hysteresis (2.99 ± 1.37%) over a wide pressure range (0-20 kPa), and fast response (400 Hz). We demonstrate that TRACE sensors can accurately detect and measure pulse wave velocity (PWV) when skin mounted. Moreover, we show that these tactile sensors, when arrayed, enable fast, reliable one-touch surface texture classification with neuromorphic encoding and deep learning algorithms.

Yao Haicheng, Yang Weidong, Cheng Wen, Tan Yu Jun, See Hian Hian, Li Si, Ali Hashina Parveen Anwar, Lim Brian Z H, Liu Zhuangjian, Tee Benjamin C K

2020-Sep-28

electronic skin, machine learning, robotics, sensor, wearable

General General

Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models.

In Proceedings of the National Academy of Sciences of the United States of America

Language is crucial for human intelligence, but what exactly is its role? We take language to be a part of a system for understanding and communicating about situations. In humans, these abilities emerge gradually from experience and depend on domain-general principles of biological neural networks: connection-based learning, distributed representation, and context-sensitive, mutual constraint satisfaction-based processing. Current artificial language processing systems rely on the same domain-general principles, embodied in artificial neural networks. Indeed, recent progress in this field depends on query-based attention, which extends the ability of these systems to exploit context and has contributed to remarkable breakthroughs. Nevertheless, most current models focus exclusively on language-internal tasks, limiting their ability to perform tasks that depend on understanding situations. These systems also lack memory for the contents of prior situations outside of a fixed contextual span. We describe the organization of the brain's distributed understanding system, which includes a fast learning system that addresses the memory problem. We sketch a framework for future models of understanding drawing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention. We highlight relevant current directions and consider further developments needed to fully capture human-level language understanding in a computational system.

McClelland James L, Hill Felix, Rudolph Maja, Baldridge Jason, Schütze Hinrich

2020-Sep-28

artificial intelligence, cognitive neuroscience, deep learning, natural language understanding, situation models

General General

Pre-trained language model augmented adversarial training network for Chinese clinical event detection.

In Mathematical biosciences and engineering : MBE

Clinical event detection (CED) is a hot topic and an essential task in medical artificial intelligence that has attracted attention from academia and industry in recent years. However, most studies focus on English clinical narratives; owing to the limited annotated Chinese medical corpora, there is a lack of relevant research on Chinese clinical narratives. Existing methods also ignore the importance of contextual information in semantic understanding, so research on multilingual clinical event detection is urgently needed. In this paper, we present a novel encoder-decoder structure based on a pre-trained language model for the Chinese CED task, which integrates contextual representations into Chinese character embeddings to assist the model in semantic understanding. Compared with existing methods, our proposed strategy helps the model acquire language inference skills. Besides, we introduce a punitive weight to adjust the proportion of the loss on each category to cope with the class imbalance problem. To evaluate the effectiveness of our proposed model, we conduct a range of experiments on the test set of our manually annotated corpus and compare its overall performance with baseline models. Experimental results demonstrate that our proposed model achieves the best precision of 83.73%, recall of 86.56% and F1-score of 85.12%. Moreover, when evaluated against the baselines on minority-category samples, our proposed model obtains a significant improvement.

Zhang Zhi Chang, Zhang Min Yu, Zhou Tong, Qiu Yan Long

2020-Mar-24

Chinese clinical event detection, Chinese clinical narratives, adversarial training network, class imbalance problem, medical artificial intelligence, pre-trained language model, semantic understanding, transfer learning

Pathology Pathology

Rocky road to digital diagnostics: implementation issues and exhilarating experiences.

In Journal of clinical pathology

Since 2007, we have gradually been building up our digital pathology infrastructure, starting with a whole-slide scanner park and a digital archive to streamline multidisciplinary meetings, student teaching, and research, and culminating in a fully digital diagnostic workflow into which we are currently integrating artificial intelligence algorithms. In this paper, we highlight the steps in this process towards digital diagnostics: at times a rocky road with real implementation issues, but ultimately an exciting, more modern, and more efficient way to practice pathology in which patient safety has clearly improved.

Stathonikos Nikolaos, Nguyen Tri Q, van Diest Paul J

2020-Sep-28

computer systems, computer-assisted, health care, hospital, image processing, medical informatics computing, pathology department, quality assurance

oncology Oncology

Characterizing CDK12-Mutated Prostate Cancers.

In Clinical cancer research : an official journal of the American Association for Cancer Research

PURPOSE : CDK12 aberrations have been reported as a biomarker of response to immunotherapy for metastatic castration-resistant prostate cancer (mCRPC). Herein, we characterize CDK12-mutated mCRPC, presenting clinical, genomic, and tumor-infiltrating lymphocyte data.

EXPERIMENTAL DESIGN : Patients with mCRPC consented to the molecular analyses of diagnostic and metastatic CRPC biopsies. Genomic analyses involved targeted next-generation sequencing (MiSeq™; Illumina) and exome sequencing (NovaSeq™; Illumina). Tumor-infiltrating lymphocytes (TIL) were assessed by validated immunocytochemistry coupled with deep learning-based artificial intelligence analyses, including multiplex immunofluorescence assays for CD4, CD8, and FOXP3 to evaluate TIL subsets. The control group comprised a randomly selected mCRPC cohort with sequencing and clinical data available.

RESULTS : Biopsies from 913 patients underwent targeted sequencing between Feb/15 and Oct/19. Forty-three patients (4.7%) had tumors with CDK12 alterations. CDK12-altered cancers had distinctive features, with some revealing high chromosomal break numbers in exome sequencing. Biallelic CDK12-aberrant mCRPC had shorter overall survival from diagnosis than controls (5.1 years [95% CI: 4.0, 7.9] vs 6.4 years [95% CI: 5.7, 7.8]; HR=1.65 [95% CI: 1.07, 2.53]; P=0.02). Median intratumoral CD3+ cell density was higher in CDK12 cancers, although this was not statistically significant (203.7 versus 86.7 cells/mm², P=0.07). This infiltrate primarily comprised CD4+FOXP3- cells (50.5 versus 6.2 cells/mm², P<0.0001), where high counts tended to be associated with worse survival from diagnosis (HR=1.64 [95% CI: 0.95, 2.84], P=0.077) in the overall population.

CONCLUSIONS : CDK12-altered mCRPCs carry a worse prognosis and, surprisingly, are primarily enriched for CD4+FOXP3- cells, which appear to be associated with worse outcome and may be immunosuppressive.

Rescigno Pasquale, Gurel Bora, Pereira Rita, Crespo Mateus, Rekowski Jan, Rediti Mattia, Barrero Maialen, Mateo Joaquin, Bianchini Diletta, Messina Carlo, Fenor de la Maza M D, Chandran Khobe, Carmichael Juliet, Guo Christina, Paschalis Alec, Sharp Adam, Seed George, Figueiredo Ines, Lambros Maryou Bk, Miranda Susana, Ferreira Ana, Bertan Claudia, Riisnaes Ruth, Porta Nuria, Yuan Wei, Carreira Suzanne, de Bono Johann S

2020-Sep-28

Surgery Surgery

Visual gaze patterns reveal surgeons' ability to identify risk of bile duct injury during laparoscopic cholecystectomy.

In HPB : the official journal of the International Hepato Pancreato Biliary Association

BACKGROUND : Bile duct injury is a serious surgical complication of laparoscopic cholecystectomy. The aim of this study was to identify distinct visual gaze patterns associated with the prompt detection of bile duct injury risk during laparoscopic cholecystectomy.

METHODS : Twenty-nine participants viewed a laparoscopic cholecystectomy that led to a serious bile duct injury ('BDI video') and an uneventful procedure ('control video') and reported when an error was perceived that could result in bile duct injury. Outcome parameters include fixation sequences on anatomical structures and eye tracking metrics. Surgeons were stratified into two groups based on performance and compared.

RESULTS : The 'early detector' group displayed reduced common bile duct dwell time in the first half of the BDI video, as well as increased cystic duct dwell time and Calot's triangle glances count during Calot's triangle dissection in the control video. Machine learning based classification of fixation sequences demonstrated clear separability between early and late detector groups.

CONCLUSION : There are discernible differences in gaze patterns associated with early recognition of impending bile duct injury. These results could be translated into real-time intraoperative early-warning systems and into educational settings to improve surgical safety and performance.

Sharma Chetanya, Singh Harsmirat, Orihuela-Espina Felipe, Darzi Ara, Sodergren Mikael H

2020-Sep-26

Radiology Radiology

Differentiating patients with schizophrenia from healthy controls by hippocampal subfields using radiomics.

In Schizophrenia research ; h5-index 61.0

BACKGROUND : Accurately diagnosing schizophrenia is still challenging due to the lack of validated biomarkers. Here, we aimed to investigate whether radiomic features in bilateral hippocampal subfields from magnetic resonance images (MRIs) can differentiate patients with schizophrenia from healthy controls (HCs).

METHODS : A total of 152 participants with MRI (86 schizophrenia and 66 HCs) were allocated to training (n = 106) and test (n = 46) sets. Radiomic features (n = 642) from the bilateral hippocampal subfields processed with automatic segmentation techniques were extracted from T1-weighted MRIs. After feature selection, various combinations of classifiers (logistic regression, extra-trees, AdaBoost, XGBoost, or support vector machine) and subsampling were trained. The performance of the classifier was validated in the test set by determining the area under the curve (AUC). Furthermore, the association between selected radiomic features and clinical symptoms in schizophrenia was assessed.
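The test-set AUC used to validate the classifiers above can be computed directly from classifier scores as the normalized Mann-Whitney statistic; a minimal stdlib-only sketch with made-up scores:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(a random positive scores above a random negative),
    counting ties as one half (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy classifier scores for patients (positive) and controls (negative).
score = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])  # 8/9, about 0.889
```

An AUC of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation of the two groups.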

RESULTS : Thirty radiomic features were identified to differentiate participants with schizophrenia from HCs. In the training set, the AUC exhibited poor to good performance (range: 0.683-0.861). The best performing radiomics model in the test set was achieved by the mutual information feature selection and logistic regression with an AUC, accuracy, sensitivity, and specificity of 0.821 (95% confidence interval 0.681-0.961), 82.1%, 76.9%, and 70%, respectively. Greater maximum values in the left cornu ammonis 1-3 subfield were associated with a higher severity of positive symptoms and general psychopathology in participants with schizophrenia.

CONCLUSION : Radiomic features from hippocampal subfields may be useful biomarkers for identifying schizophrenia.

Park Yae Won, Choi Dongmin, Lee Joonho, Ahn Sung Soo, Lee Seung-Koo, Lee Sang-Hyuk, Bang Minji

2020-Sep-25

Artificial intelligence, Hippocampus, Machine learning, Magnetic resonance imaging, Radiomics, Schizophrenia

Radiology Radiology

Pancreatic Cancer Imaging: A New Look at an Old Problem.

In Current problems in diagnostic radiology

Computed tomography is the most commonly used imaging modality to detect and stage pancreatic cancer. Previous advances in pancreatic cancer imaging have focused on optimizing image acquisition parameters and reporting standards. However, current state-of-the-art imaging approaches still misdiagnose some potentially curable pancreatic cancers and do not provide prognostic information or inform optimal management strategies beyond stage. Several recent developments in pancreatic cancer imaging, including artificial intelligence and advanced visualization techniques, are rapidly changing the field. The purpose of this article is to review how these recent advances have the potential to revolutionize pancreatic cancer imaging.

Chu Linda C, Park Seyoun, Kawamoto Satomi, Yuille Alan L, Hruban Ralph H, Fishman Elliot K

2020-Aug-26

Surgery Surgery

Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery.

In Surgery ; h5-index 54.0

BACKGROUND : Our previous work classified a taxonomy of needle driving gestures during a vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision to automate the identification and classification of suturing gestures for needle driving attempts.

METHODS : Two independent raters manually annotated live suturing video clips to label timepoints and gestures. Identification (2,395 videos) and classification (511 videos) datasets were compiled to train computer vision models to produce 2- and 5-class label predictions, respectively. Networks were trained on inputs of raw red/blue/green pixels as well as optical flow for each frame. We explore the effect of different recurrent models (long short-term memory versus convolutional long short-term memory). All models were trained on 80/20 train/test splits.

RESULTS : We observe that all models reliably predict both the presence of a gesture (identification, area under the curve: 0.88) and the type of gesture (classification, area under the curve: 0.87) at significantly above-chance levels. For both the gesture identification and classification datasets, we observed no effect of recurrent classification model choice on performance.

CONCLUSION : Our results demonstrate computer vision's ability to recognize features that not only can identify the action of suturing but also distinguish between different classifications of suturing gestures. This demonstrates the potential to utilize deep learning computer vision toward future automation of surgical skill assessment.

Luongo Francisco, Hakim Ryan, Nguyen Jessica H, Anandkumar Animashree, Hung Andrew J

2020-Sep-25

Ophthalmology Ophthalmology

Advanced vascular examinations of the retina and optic nerve head in glaucoma.

In Progress in brain research

Recent technological breakthroughs have facilitated vascular research in the field of glaucoma. In this chapter, we review several of these vascular-oriented technologies, with a special focus given to optical coherence tomography angiography. An update is given regarding recent findings in glaucoma, but also on the improvements needed to bring vascular assessments closer to everyday clinical practice.

Barbosa Breda João, Van Eijgen Jan, Stalmans Ingeborg

2020

Adaptive optics, Artificial intelligence, Doppler OCT, Glaucoma, OCT-A, OCTA, Retinal oximetry, Vascular

General General

A systematic review of statistical models and outcomes of predicting fatal and serious injury crashes from driver crash and offense history data.

In Systematic reviews

BACKGROUND : Expenditure on driver-related behavioral interventions and road use policy is often justified by their impact on the frequency of fatal and serious injury crashes. Given the rarity of such crashes, the offense and crash histories of drivers are sometimes used as an alternative measure of the impact of interventions and policy changes. The primary purpose of this systematic review was to assess the rigor of the statistical modeling used to predict fatal and serious crashes from offense and crash history, using a purpose-made quality assessment tool. A secondary purpose was to explore study outcomes.

METHODS : Only studies that used observational data and presented a statistical model of crash prediction from offense history or crash history were included. A quality assessment tool was developed for the systematic evaluation of statistical quality indicators across studies. The search was conducted in June 2019.

RESULTS : One thousand one hundred and five unique records were identified, 252 full texts were screened for inclusion, resulting in 20 studies being included in the review. The results indicate substantial and important limitations in the modeling methods used. Most studies demonstrated poor statistical rigor ranging from low to middle quality. There was a lack of confidence in published findings due to poor variable selection, poor adherence to statistical assumptions relating to multicollinearity, and lack of validation using new data.

CONCLUSIONS : It was concluded that future research should consider machine learning to overcome correlations in the data, use rigorous vetting procedures to identify predictor variables, and validate statistical models using new data to improve utility and generalizability of models.

SYSTEMATIC REVIEW REGISTRATION : PROSPERO CRD42019137081.

Slikboer Reneta, Muir Samuel D, Silva S S M, Meyer Denny

2020-Sep-28

Crash, Crash history, Driver offenses, Offense, Quality assessment tool, Statistical modeling, Statistics, Systematic review, Traffic

General General

Detection and Classification of Gastrointestinal Diseases using Machine Learning.

In Current medical imaging

BACKGROUND : Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), disfavored by physicians and patients alike. To address this, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. However, manual assessment of the captured images is impractical even for an expert physician, because thoroughly analyzing thousands of images is a time-consuming task. Hence the need for a computer-aided diagnosis (CAD) method to help doctors analyze the images. Many researchers have proposed techniques for the automated recognition and classification of abnormalities in captured images.

INTRODUCTION : In this article, existing methods for the automated classification, segmentation, and detection of several GI diseases are discussed, and a comprehensive account of these state-of-the-art methods is given. The literature is divided into several subsections covering preprocessing techniques, segmentation techniques, handcrafted-feature-based techniques, and deep learning-based techniques. Finally, issues, challenges, and limitations are also discussed.

CONCLUSION : This comprehensive review gathers information on a number of GI disease diagnosis methods in one place. It should help researchers develop new algorithms and approaches for the early detection of GI diseases, with more promising results than those in the existing literature.

RESULTS : A comparative analysis of different approaches for the detection and classification of GI infections is presented.

Naz Javeria, Sharif Muhammad, Yasmin Mussarat, Raza Mudassar, Khan Muhammad Attique

2020-Sep-28

Computer-aided diagnosis (CAD), Convolutional Neural Network (CNN), Gastrointestinal Tract (GIT), Handcrafted features, Machine learning, Wireless Capsule Endoscopy (WCE)

General General

Predicting Procedure Step Performance From Operator and Text Features: A Critical First Step Toward Machine Learning-Driven Procedure Design.

In Human factors

OBJECTIVE : The goal of this study is to assess machine learning for predicting procedure performance from operator and procedure characteristics.

BACKGROUND : Procedures are vital for the performance and safety of high-risk industries. Current procedure design guidelines are insufficient because they rely on subjective assessments and qualitative analyses that struggle to integrate and quantify the diversity of factors that influence procedure performance.

METHOD : We used data from a 25-participant study with four procedures, conducted on a high-fidelity oil extraction simulation to develop logistic regression (LR), random forest (RF), and decision tree (DT) algorithms that predict procedure step performance from operator, step, readability, and natural language processing-based features. Features were filtered using the Boruta approach. The algorithms were trained and optimized with a repeated 10-fold cross-validation. After training, inference was performed using variable importance and partial dependence plots.
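The repeated 10-fold cross-validation used to train the algorithms above can be sketched as a generic splitter (a plain stdlib implementation, not the authors' pipeline; the fold counts below are illustrative):

```python
import random

def repeated_kfold(n_samples, k=10, repeats=3, seed=0):
    """Yield (train_idx, test_idx) pairs for repeated k-fold CV:
    each repeat reshuffles the indices, then deals out k disjoint folds."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]  # k near-equal folds
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

# 2 repeats x 5 folds = 10 splits; every split partitions all 25 samples.
splits = list(repeated_kfold(25, k=5, repeats=2))
```

Repeating the shuffle-and-split reduces the variance of the performance estimate compared with a single k-fold pass.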

RESULTS : The RF, DT, and LR algorithms with all features had an area under the receiver operating characteristic curve (AUC) of 0.78, 0.77, and 0.75, respectively, and significantly outperformed the LR with only operator features (LROP), with an AUC of 0.61. The most important features were experience, familiarity, total words, and character-based metrics. The partial dependence plots showed that steps with fewer words, abbreviations, and characters were correlated with correct step performance.

CONCLUSION : Machine learning algorithms are a promising approach for predicting step-level procedure performance, with acknowledged limitations on interpolating to nonobserved data, and may help guide procedure design after validation with additional data on further tasks.

APPLICATION : After validation, the inferences from these models can be used to generate procedure design alternatives.

McDonald Anthony D, Ade Nilesh, Peres S Camille

2020-Sep-28

decision tree, machine learning, operator performance, procedure design, random forest

General General

The new design of cows' behavior classifier based on acceleration data and proposed feature set.

In Mathematical biosciences and engineering : MBE

Monitoring and classifying cows' behavioral activities is a helpful support solution for livestock farming, based on the analysis of data from sensors attached to the animal. Accelerometers are particularly suited to monitoring cow behaviors due to their small size, light weight, and high accuracy. Nevertheless, interpreting the data collected by such sensors to characterize behavior types still poses major challenges to developers, related to activity complexity (i.e., certain behaviors contain similar gestures). This paper presents a new design of cow behavior classifier based on acceleration data and a proposed feature set. Analysis of cow acceleration data is used to extract features for classification using machine learning algorithms. We found that with 5 features (mean, standard deviation, root mean square, median, range) and a 16-second window of data (1 sample/second), classification of seven cow behaviors (feeding, lying, standing, lying down, standing up, normal walking, active walking) achieved the highest overall performance. We validated the results with acceleration data from a public source. The performance of our proposed classifier was evaluated and compared to existing ones in terms of sensitivity, accuracy, positive predictive value, and negative predictive value.
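The five window features named above are straightforward to compute; a stdlib-only sketch over one toy 16-sample window (the acceleration values are made up):

```python
import math
import statistics

def window_features(samples):
    """The five features from the abstract, over one 16-sample window
    (1 sample/second): mean, standard deviation, RMS, median, range."""
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "rms": math.sqrt(statistics.fmean(s * s for s in samples)),
        "median": statistics.median(samples),
        "range": max(samples) - min(samples),
    }

window = [0.1, 0.3, -0.2, 0.4] * 4  # toy 16-second acceleration trace
feats = window_features(window)
```

In the paper's pipeline, one such feature vector per window (per accelerometer axis) would be fed to the machine learning classifier.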

Phi Khanh Phung Cong, Tran Duc-Tan, Duong Van Tu, Thinh Nguyen Hong, Tran Duc-Nghia

2020-Mar-11

acceleration, classification, cow, monitoring, sensor

General General

Learning distinctive filters for COVID-19 detection from chest X-ray using shuffled residual CNN.

In Applied soft computing

COVID-19 is a deadly viral infection that has brought a significant threat to human lives. Automatic diagnosis of COVID-19 from medical imaging enables precise medication, helps to control community outbreaks, and reinforces the coronavirus testing methods already in place. While there exist several challenges in manually inferring traces of this viral infection from X-ray, a Convolutional Neural Network (CNN) can mine data patterns that capture subtle distinctions between infected and normal X-rays. To enable automated learning of such latent features, a custom CNN architecture has been proposed in this research. It learns unique convolutional filter patterns for each kind of pneumonia. This is achieved by restricting certain filters in a convolutional layer to maximally respond only to a particular class of pneumonia/COVID-19. The CNN architecture integrates different convolution types to aid better context for learning robust features and strengthen gradient flow between layers. The proposed work also visualizes regions of saliency on the X-ray that have had the most influence on the CNN's prediction outcome. To the best of our knowledge, this is the first attempt in deep learning to learn custom filters within a single convolutional layer for identifying specific pneumonia classes. Experimental results demonstrate that the proposed work has significant potential in augmenting current testing methods for COVID-19. It achieves an F1-score of 97.20% and an accuracy of 99.80% on the COVID-19 X-ray set.

Karthik R, Menaka R, M Hariharan

2020-Sep-23

CNN, COVID-19, Chest X-ray, Deep learning, Pneumonia

General General

Single-pixel imaging 12 years on: a review.

In Optics express

Modern cameras typically use an array of millions of detector pixels to capture images. By contrast, single-pixel cameras use a sequence of mask patterns to filter the scene along with the corresponding measurements of the transmitted intensity which is recorded using a single-pixel detector. This review considers the development of single-pixel cameras from the seminal work of Duarte et al. up to the present state of the art. We cover the variety of hardware configurations, design of mask patterns and the associated reconstruction algorithms, many of which relate to the field of compressed sensing and, more recently, machine learning. Overall, single-pixel cameras lend themselves to imaging at non-visible wavelengths and with precise timing or depth resolution. We discuss the suitability of single-pixel cameras for different application areas, including infrared imaging and 3D situation awareness for autonomous vehicles.
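The measure-with-masks-then-reconstruct principle described above can be sketched with a complete set of ±1 Hadamard masks; this is a toy 8-pixel "scene" for illustration only (real systems typically display 0/1 masks on a digital micromirror device and use differential measurements):

```python
def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of 2."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def measure(H, scene):
    # Each row of H is one +/-1 mask; the single-pixel detector records
    # the masked total intensity (one number per displayed pattern).
    return [sum(h * s for h, s in zip(row, scene)) for row in H]

def reconstruct(H, y):
    # H is orthogonal (H H^T = n I), so x = H^T y / n recovers the scene.
    n = len(H)
    return [sum(H[k][i] * y[k] for k in range(n)) / n for i in range(n)]

scene = [0, 3, 1, 2, 5, 4, 0, 1]  # flattened toy "image", 8 pixels
H = hadamard(8)
recovered = reconstruct(H, measure(H, scene))
```

With a complete orthogonal mask set the recovery is exact; compressed sensing, as discussed in the review, reconstructs from far fewer patterns than pixels.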

Gibson Graham M, Johnson Steven D, Padgett Miles J

2020-Sep-14

General General

Plaintext attack on joint transform correlation encryption system by convolutional neural network.

In Optics express

The image encryption system based on joint transform correlation has attracted much attention because its ciphertext does not contain complex values and decryption can avoid strict pixel alignment of the ciphertext. This paper proves that the joint transform correlation architecture is vulnerable to attack by a deep learning method, the convolutional neural network. By giving the convolutional neural network a large number of ciphertexts and their corresponding plaintexts, it can simulate the key of the encryption system. Unlike traditional methods, which use a phase-retrieval algorithm to recover or estimate the optical encryption key, the key model trained in this paper can directly convert ciphertext to the corresponding plaintext. Compared with existing neural network systems, this paper uses the sigmoid activation function and adds dropout layers to make the neural network's computation faster and more accurate, and the equivalent key trained by the network shows a degree of robustness. Computer simulations prove the feasibility and effectiveness of this method.

Chen Linfei, Peng BoYan, Gan Wenwen, Liu Yuanqian

2020-Sep-14

General General

Quantitative phase imaging in dual-wavelength interferometry using a single wavelength illumination and deep learning.

In Optics express

In this manuscript, we propose a quantitative phase imaging method based on deep learning, using a single wavelength illumination to realize dual-wavelength phase-shifting phase recovery. By using the conditional generative adversarial network (CGAN), from one interferogram recorded at a single wavelength, we obtain interferograms at other wavelengths, the corresponding wrapped phases and then the phases at synthetic wavelengths. The feasibility of the proposed method is verified by simulation and experiments. The results demonstrate that the measurement range of single-wavelength interferometry (SWI) is improved by keeping a simple setup, avoiding the difficulty caused by using two wavelengths simultaneously. This will provide an effective solution for the problem of phase unwrapping and the measurement range limitation in phase-shifting interferometry.
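The extended measurement range in dual-wavelength interferometry comes from the synthetic (equivalent) wavelength, Lambda = lam1 * lam2 / |lam1 - lam2|, a standard relation in this field; a one-line sketch with illustrative wavelengths:

```python
def synthetic_wavelength(lam1, lam2):
    """Synthetic wavelength of dual-wavelength interferometry:
    Lambda = lam1 * lam2 / |lam1 - lam2| (same units in and out)."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Two visible wavelengths (nm, illustrative) yield a synthetic
# wavelength of several micrometers, so phase unwrapping succeeds
# over a much larger unambiguous height range.
L = synthetic_wavelength(532.0, 633.0)
```

The closer the two wavelengths, the larger the synthetic wavelength, which is exactly the range extension the abstract reports recovering from a single-wavelength interferogram via the CGAN.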

Li Jiaosheng, Zhang Qinnan, Zhong Liyun, Tian Jindong, Pedrini Giancarlo, Lu Xiaoxu

2020-Sep-14

General General

Learned SPARCOM: unfolded deep super-resolution microscopy.

In Optics express

The use of photo-activated fluorescent molecules to create long sequences of low emitter-density diffraction-limited images enables high-precision emitter localization, but at the cost of low temporal resolution. We suggest combining SPARCOM, a recent high-performing classical method, with model-based deep learning, using the algorithm unfolding approach, to design a compact neural network incorporating domain knowledge. Our results show that we can obtain super-resolution imaging from a small number of high emitter density frames without knowledge of the optical system and across different test sets using the proposed learned SPARCOM (LSPARCOM) network. We believe LSPARCOM can pave the way to interpretable, efficient live-cell imaging in many settings, and find broad use in single molecule localization microscopy of biological structures.

Dardikman-Yoffe Gili, Eldar Yonina C

2020-Sep-14

General General

Distributed fiber sensor and machine learning data analytics for pipeline protection against extrinsic intrusions and intrinsic corrosions.

In Optics express

This paper presents an integrated technical framework to protect pipelines against both malicious intrusions and piping degradation using a distributed fiber sensing technology and artificial intelligence. A distributed acoustic sensing (DAS) system based on phase-sensitive optical time-domain reflectometry (φ-OTDR) was used to detect acoustic wave propagation and scattering along pipeline structures consisting of straight piping and a sharp bend elbow. The signal-to-noise ratio of the DAS system was enhanced by femtosecond-laser-induced artificial Rayleigh scattering centers. Data harnessed by the DAS system were analyzed by neural network-based machine learning algorithms. The system identified various external impact events with over 85% accuracy, and identified defects with over 94% accuracy through supervised learning and 71% accuracy through unsupervised learning.

Peng Zhaoqiang, Jian Jianan, Wen Hongqiao, Gribok Andrei, Wang Mohan, Liu Hu, Huang Sheng, Mao Zhi-Hong, Chen Kevin P

2020-Sep-14

General General

Analyzing Malaria Disease Using Effective Deep Learning Approach.

In Diagnostics (Basel, Switzerland)

Medical tools used to bolster decision-making by specialists who treat malaria include image processing equipment and computer-aided diagnostic systems. Malaria images can be employed with these methods to identify and detect malaria and to monitor patients' symptoms, although atypical cases may need more time for assessment. This research used 7,000 images to verify and analyze the Xception, Inception-V3, ResNet-50, NasNetMobile, VGG-16, and AlexNet models: prevalent convolutional neural network models for precise image classification, here trained with rotation-based augmentation to improve performance on the validation and training datasets. Evaluation of these models for classifying malaria from thin blood smear images found that Xception, using a state-of-the-art activation function (Mish) and optimizer (Nadam), improved effectiveness: in terms of recall, accuracy, precision, and F1 measure, a combined score of 99.28% was achieved. Consequently, 10% of all images outside the training and testing datasets were evaluated using this model. Notable avenues for improving computer-aided diagnosis toward an optimal malaria detection approach were found, supported by a 98.86% accuracy level.
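The Mish activation credited above has a simple closed form, x · tanh(softplus(x)); a stdlib-only sketch (the numerically stable softplus is a standard identity, not from the paper):

```python
import math

def softplus(x):
    # Numerically stable log(1 + e^x): avoids overflow for large |x|.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def mish(x):
    """Mish activation: x * tanh(softplus(x)). Smooth and non-monotonic,
    passing through the origin with a small negative dip for x < 0."""
    return x * math.tanh(softplus(x))

values = [mish(x) for x in (-2.0, 0.0, 2.0)]
```

In a deep learning framework this would replace ReLU elementwise in each layer, which is how the paper applies it inside Xception.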

Sriporn Krit, Tsai Cheng-Fa, Tsai Chia-En, Wang Paohsi

2020-Sep-24

activation function (Mish), convolutional neural network, deep learning, image classification, image processing, malaria, optimization methods

General General

Development of a Prediction Model for Demolition Waste Generation Using a Random Forest Algorithm Based on Small DataSets.

In International journal of environmental research and public health ; h5-index 73.0

Recently, artificial intelligence (AI) technologies have been employed to predict construction and demolition (C&D) waste generation. However, most studies have used machine learning models with continuous input variables, applying algorithms such as artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines, linear regression analysis, decision trees, and genetic algorithms; these may not perform as well when applied to categorical data. This article uses machine learning algorithms to predict C&D waste generation from a dataset, as a way to improve the accuracy of waste management in C&D facilities. The dataset includes categorical variables (e.g., region, building structure, building use, wall material, and roofing material) and continuous ones (particularly gross floor area), and a random forest (RF) algorithm was used. Results indicate that RF is an adequate machine learning algorithm for a small dataset consisting of categorical data, and that even with a small dataset an adequate prediction model can be developed. Despite the small dataset, the predictive performance by demolition waste (DW) type was R (Pearson's correlation coefficient) = 0.691-0.871 and R² (coefficient of determination) = 0.554-0.800, showing stable prediction performance. High prediction performance was observed using three (for mortar), five (for other DW types), or six (for concrete) input variables. This study is significant because the proposed RF model can predict DW generation using a small amount of data, and it demonstrates the possibility of applying AI to multi-purpose DW management.
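The two performance metrics reported above are easy to compute from observed and predicted values; a stdlib-only sketch with made-up data (note R² is taken here as 1 - SS_res/SS_tot, one common definition; the study's exact convention is not stated in the abstract):

```python
import math

def pearson_r(y_true, y_pred):
    """Pearson correlation between observed and predicted values."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    st = math.sqrt(sum((a - mt) ** 2 for a in y_true))
    sp = math.sqrt(sum((b - mp) ** 2 for b in y_pred))
    return cov / (st * sp)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mt = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mt) ** 2 for a in y_true)
    return 1 - ss_res / ss_tot

y_obs = [10.0, 12.0, 9.0, 14.0, 11.0]  # toy demolition-waste amounts
y_hat = [11.0, 12.5, 8.5, 13.0, 11.5]  # toy model predictions
```

With leave-one-out cross-validation (as in the paper's keywords), `y_hat` would hold each sample's prediction from the model trained on all other samples.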

Cha Gi-Wook, Moon Hyeun Jun, Kim Young-Min, Hong Won-Hwa, Hwang Jung-Ha, Park Won-Jun, Kim Young-Chan

2020-Sep-24

construction waste management, demolition waste management, leave-one-out cross-validation, prediction model, random forest, small data

General General

Quantifying the Predictive Role of Temperament Dimensions and Attachment Styles on the Five Factor Model of Personality.

In Behavioral sciences (Basel, Switzerland)

BACKGROUND : The present study investigated the role of temperament and attachment security in predicting individual differences in the five factor personality traits among adults. As previous studies suggested the potential moderating role of attachment in the association between temperament and personality traits, the present study sought to examine an interactionist model combining attachment and temperament in explaining individual differences in personality traits.

METHODS : A sample of 1871 participants (1151 women and 719 men) completed self-report measures of adult attachment style (the Relationships Questionnaire-RQ), temperament dimension (the Fisher Temperament Inventory-FTI), and personality domain (the Five Factor Model-FFM).

RESULTS : Partial correlational analyses revealed associations between attachment security and each of the five domains of the FFM, and few associations between some temperament dimensions and several domains of the FFM. Moderated regression analyses showed that attachment security moderated the associations between temperament dimensions and the Agreeableness domain of the FFM. Among secure individuals, those with higher scores on the Curious/Energetic, Cautious/Social Norm Compliant and Prosocial/Empathetic scales exhibited higher Agreeableness scores, whereas among insecure individuals, those with higher scores on the Analytic/Tough-minded scale exhibited lower scores on the Agreeableness scale.

CONCLUSION : Overall, the current study provides evidence in support of the substantive role of social-environmental factors (Adult Attachment) as a moderating element bridging temperament-related personality elements and a number of their FFM manifestations.

Barel Efrat, Mizrachi Yonathan, Nachmani Mayyan

2020-Sep-24

attachment, personality, temperament

Public Health Public Health

Association of Virulence and Antibiotic Resistance in Salmonella-Statistical and Computational Insights into a Selected Set of Clinical Isolates.

In Microorganisms

The acquisition of antibiotic resistance (AR) by foodborne pathogens, such as Salmonella enterica, has emerged as a serious public health concern. The relationship between the two key survival mechanisms (i.e., antibiotic resistance and virulence) of bacterial pathogens is complex. However, it is unclear if the presence of certain virulence determinants (i.e., virulence genes) and AR have any association in Salmonella. In this study, we report the prevalence of selected virulence genes and their association with AR in a set of phenotypically tested antibiotic-resistant (n = 117) and antibiotic-susceptible (n = 94) clinical isolates of Salmonella collected from Tennessee, USA. Profiling of virulence genes (i.e., virulotyping) in Salmonella isolates (n = 211) was conducted by targeting 13 known virulence genes and a gene for class 1 integron. The association of the presence/absence of virulence genes in an isolate with their AR phenotypes was determined by the machine learning algorithm Random Forest. The analysis revealed that Salmonella virulotypes with gene clusters consisting of avrA, gipA, sodC1, and sopE1 were strongly associated with any resistant phenotypes. To conclude, the results of this exploratory study shed light on the association of specific virulence genes with drug-resistant phenotypes of Salmonella. The presence of certain virulence genes clusters in resistant isolates may become useful for the risk assessment and management of salmonellosis caused by drug-resistant Salmonella in humans.

Higgins Daleniece, Mukherjee Nabanita, Pal Chandan, Sulaiman Irshad M, Jiang Yu, Hanna Samir, Dunn John R, Karmaus Wilfried, Banerjee Pratik

2020-Sep-24

Salmonella, antibiotic resistance, virulence, virulotyping

General General

Robust detection of neural spikes using sparse coding based features.

In Mathematical biosciences and engineering : MBE

The detection of neural spikes plays an important role in studying and processing extracellular recording signals, as it promises to extract the spike data needed for all subsequent analyses. Existing algorithms for spike detection have achieved great progress, but there remains much room for improvement in terms of robustness to noise and flexibility in the spike shape. To address this issue, this paper presents a novel method for spike detection based on the theory of sparse representation. By analyzing the characteristics of extracellular neural recordings, a target-driven sparse representation framework is first constructed, with which the neural spike signals can be effectively separated from background noise. In addition, considering that the spikes emitted by different neurons have different shapes, we then learn a universal dictionary to give a sparse representation of various spike signals. Finally, the information (location and number) of spikes in the recorded signal is obtained by comprehensively analyzing the sparse features. Experimental results demonstrate that the proposed method outperforms existing methods on the spike detection problem.
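
The paper's learned-dictionary framework is not public, but the core idea of matching spike atoms against a noisy recording can be illustrated with a much simpler matched-filter stand-in in NumPy. The template shape, noise level, spike times, and threshold below are all invented:

```python
import numpy as np

def detect_spikes(signal, template, threshold):
    """Slide a normalized spike template over the signal and flag
    locations whose correlation with the template exceeds the threshold
    (a crude stand-in for selecting atoms in a sparse code)."""
    t = (template - template.mean()) / template.std()
    L = len(t)
    scores = np.array([
        np.dot(signal[i:i + L], t) / L for i in range(len(signal) - L + 1)
    ])
    # Keep only local maxima above threshold: one detection per spike
    hits = [i for i in np.flatnonzero(scores > threshold)
            if scores[i] == scores[max(0, i - L):i + L].max()]
    return np.array(hits)

# Synthetic recording: Gaussian noise with a spike waveform at known times
rng = np.random.default_rng(1)
template = np.exp(-0.5 * ((np.arange(20) - 7) / 2.0) ** 2)  # spike shape
signal = rng.normal(0, 0.2, 1000)
true_locs = [100, 400, 750]
for loc in true_locs:
    signal[loc:loc + 20] += 3.0 * template

detected = detect_spikes(signal, template, threshold=0.5)
```

A learned dictionary generalizes this sketch: instead of one fixed template, many atoms capture the different spike shapes emitted by different neurons.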

Liu Zuo Zhi, Wang Xiao Tian, Yuan Quan

2020-Jun-15

dictionary learning, neural spike detection, sparse feature, sparse representation

General General

EEG-Based Emotion Classification Using a Deep Neural Network and Sparse Autoencoder.

In Frontiers in systems neuroscience

Emotion classification based on brain-computer interface (BCI) systems is an appealing research topic. Recently, deep learning has been employed for emotion classification in BCI systems, and improved results have been obtained compared with traditional classification methods. In this paper, a novel deep neural network is proposed for emotion classification using EEG systems, which combines the Convolutional Neural Network (CNN), Sparse Autoencoder (SAE), and Deep Neural Network (DNN). In the proposed network, the features extracted by the CNN are first sent to the SAE for encoding and decoding. The data with reduced redundancy are then used as the input features of a DNN for the classification task. The public DEAP and SEED datasets are used for testing. Experimental results show that the proposed network is more effective than conventional CNN methods for emotion recognition. For the DEAP dataset, the highest recognition accuracies of 89.49% and 92.86% are achieved for valence and arousal, respectively. For the SEED dataset, the best recognition accuracy reaches 96.77%. By combining the CNN, SAE, and DNN and training them separately, the proposed network is shown to be an efficient method with faster convergence than the conventional CNN.
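
A sparse autoencoder typically enforces sparsity with a KL-divergence penalty on the mean activations of the hidden layer. A minimal NumPy sketch of that penalty follows; the target sparsity of 0.05 and the layer width of 64 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kl_sparsity_penalty(rho, rho_hat):
    """KL-divergence sparsity penalty used in sparse autoencoders:
    penalizes hidden-unit mean activations rho_hat that drift away
    from the target sparsity level rho."""
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)  # numerical safety
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Mean activations of a hypothetical 64-unit hidden layer over a batch
on_target = np.full(64, 0.05)   # units match the target sparsity
off_target = np.full(64, 0.5)   # units fire half the time

p0 = kl_sparsity_penalty(0.05, on_target)   # no penalty when on target
p1 = kl_sparsity_penalty(0.05, off_target)  # positive penalty otherwise
```

Adding this term to the reconstruction loss is what pushes the SAE toward the redundancy-reduced codes that are then fed to the DNN classifier.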

Liu Junxiu, Wu Guopei, Luo Yuling, Qiu Senhui, Yang Su, Li Wei, Bi Yifei

2020

EEG, convolutional neural network, deep neural network, emotion recognition, sparse autoencoder

General General

An optimized deep learning architecture for the diagnosis of COVID-19 disease based on gravitational search optimization.

In Applied soft computing

In this paper, a novel approach called GSA-DenseNet121-COVID-19 based on a hybrid convolutional neural network (CNN) architecture is proposed using an optimization algorithm. The CNN architecture used is called DenseNet121, and the optimization algorithm used is called the gravitational search algorithm (GSA). The GSA is used to determine the best values for the hyperparameters of the DenseNet121 architecture, helping it achieve a high level of accuracy in diagnosing COVID-19 from chest X-ray images. The obtained results showed that the proposed approach could classify 98.38% of the test set correctly. To test the efficacy of the GSA in setting the optimum values for the hyperparameters of DenseNet121, it was compared to another approach called SSD-DenseNet121, which combines DenseNet121 with the social ski driver (SSD) optimization algorithm. The comparison results demonstrated the efficacy of the proposed GSA-DenseNet121-COVID-19, which diagnosed COVID-19 better than SSD-DenseNet121: the latter correctly diagnosed only 94% of the test set. The proposed approach was also compared to a method based on the Inception-v3 CNN architecture with a manual search to set the hyperparameter values; GSA-DenseNet121-COVID-19 outperformed this method as well, which classified only 95% of the test set samples correctly. The proposed GSA-DenseNet121-COVID-19 was also compared with some related work, and the results showed that it is very competitive.
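
The gravitational search algorithm itself can be sketched compactly: candidate solutions act as agents attracted to one another by forces proportional to fitness-derived masses, under a gravitational constant that decays to shift from exploration to exploitation. The toy implementation below minimizes a sphere function rather than tuning DenseNet121 hyperparameters, and every constant in it is an illustrative assumption:

```python
import numpy as np

def gsa_minimize(f, dim, n_agents=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Minimal gravitational search algorithm sketch."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_agents, dim))   # agent positions
    V = np.zeros_like(X)                       # agent velocities
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        i = fit.argmin()
        if fit[i] < best_f:
            best_f, best_x = fit[i], X[i].copy()
        # Masses: best agent -> 1, worst -> 0, then normalized
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = 100 * np.exp(-20 * t / iters)      # decaying gravitational constant
        A = np.zeros_like(X)
        for j in range(n_agents):
            diff = X - X[j]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            A[j] = (rng.random(n_agents) * G * M / dist) @ diff
        V = rng.random(X.shape) * V + A        # stochastic velocity update
        X = np.clip(X + V, lo, hi)
    return best_x, best_f

# Toy objective: sphere function, minimum 0 at the origin
best_x, best_f = gsa_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
```

In the paper's setting, `f` would instead train DenseNet121 with a given hyperparameter vector and return a validation error, which makes each evaluation far more expensive.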

Ezzat Dalia, Hassanien Aboul Ella, Ella Hassan Aboul

2020-Sep-22

Convolutional neural networks, Deep learning, Gravitational search algorithm, Hyperparameters optimization, SARS-CoV-2, Transfer learning

General General

Clinical features of COVID-19 mortality: development and validation of a clinical prediction model.

In The Lancet. Digital health

Background : The COVID-19 pandemic has affected millions of individuals and caused hundreds of thousands of deaths worldwide. Predicting mortality among patients with COVID-19 who present with a spectrum of complications is very difficult, hindering the prognostication and management of the disease. We aimed to develop an accurate prediction model of COVID-19 mortality using unbiased computational methods, and identify the clinical features most predictive of this outcome.

Methods : In this prediction model development and validation study, we applied machine learning techniques to clinical data from a large cohort of patients with COVID-19 treated at the Mount Sinai Health System in New York City, NY, USA, to predict mortality. We analysed patient-level data captured in the Mount Sinai Data Warehouse database for individuals with a confirmed diagnosis of COVID-19 who had a health system encounter between March 9 and April 6, 2020. For initial analyses, we used patient data from March 9 to April 5, and randomly assigned (80:20) the patients to the development dataset or test dataset 1 (retrospective). Patient data for those with encounters on April 6, 2020, were used in test dataset 2 (prospective). We designed prediction models based on clinical features and patient characteristics during health system encounters to predict mortality using the development dataset. We assessed the resultant models in terms of the area under the receiver operating characteristic curve (AUC) score in the test datasets.

Findings : Using the development dataset (n=3841) and a systematic machine learning framework, we developed a COVID-19 mortality prediction model that showed high accuracy (AUC=0·91) when applied to test datasets of retrospective (n=961) and prospective (n=249) patients. This model was based on three clinical features: patient's age, minimum oxygen saturation over the course of their medical encounter, and type of patient encounter (inpatient vs outpatient and telehealth visits).
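
The reported AUC can be computed without plotting an ROC curve, via the Mann-Whitney identity: the AUC equals the probability that a random positive case receives a higher score than a random negative case. The risk scores below are invented for illustration:

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    identity; ties between a positive and a negative count half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical risk scores: higher score should mean higher mortality risk
labels = [0, 0, 0, 1, 1]
risk = [0.1, 0.3, 0.35, 0.8, 0.9]
auc = auc_score(labels, risk)  # perfectly separated scores give AUC = 1.0
```

An AUC of 0.91, as reported, means a randomly chosen patient who died was ranked riskier than a randomly chosen survivor 91% of the time.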

Interpretation : An accurate and parsimonious COVID-19 mortality prediction model based on three features might have utility in clinical settings to guide the management and prognostication of patients affected by this disease. External validation of this prediction model in other populations is needed.

Funding : National Institutes of Health.

Yadaw Arjun S, Li Yan-Chak, Bose Sonali, Iyengar Ravi, Bunyavanich Supinda, Pandey Gaurav

2020-Oct

General General

Predicting Psychological State Among Chinese Undergraduate Students in the COVID-19 Epidemic: A Longitudinal Study Using a Machine Learning.

In Neuropsychiatric disease and treatment

Background : The outbreak of the 2019 novel coronavirus disease (COVID-19) not only caused physical abnormalities, but also caused psychological distress, especially for undergraduate students who are facing the pressure of academic study and work. We aimed to explore the prevalence rate of probable anxiety and probable insomnia and to find the risk factors among a longitudinal study of undergraduate students using the approach of machine learning.

Methods : The baseline data (T1) were collected from freshmen who underwent psychological evaluation at two months after entering the university. At T2 stage (February 10th to 13th, 2020), we used a convenience cluster sampling to assess psychological state (probable anxiety was assessed by general anxiety disorder-7 and probable insomnia was assessed by insomnia severity index-7) based on a web survey. We integrated information attained at T1 stage to predict probable anxiety and probable insomnia at T2 stage using a machine learning algorithm (XGBoost).
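
As a rough sketch of this prediction setup (baseline binary survey features, boosted trees, feature relevance), the example below uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, which implements the same boosted-tree idea. The synthetic features and their effect sizes are invented and only loosely mimic indicators such as prior anxiety symptoms:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic binary survey features for 400 hypothetical students
rng = np.random.default_rng(0)
n = 400
X = rng.integers(0, 2, size=(n, 6)).astype(float)
# Outcome depends mainly on two "risk factor" columns plus noise
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 1.5 + rng.normal(0, 0.5, n)
y = (logits > 0).astype(int)

clf = GradientBoostingClassifier(n_estimators=100, random_state=0)
clf.fit(X[:300], y[:300])              # train on the first 300 students
acc = clf.score(X[300:], y[300:])      # hold-out accuracy
importances = clf.feature_importances_ # which baseline features mattered
```

The feature-importance vector is the analogue of the paper's "most relevant variables": here it should concentrate on the two columns that actually drive the synthetic outcome.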

Results : Finally, we included 2009 students (response rate: 80.36%). The prevalence rates of probable anxiety and probable insomnia were 12.49% and 16.87%, respectively. The XGBoost algorithm correctly predicted 1954 of the 2009 students (97.3% accuracy) for anxiety symptoms and 1932 of the 2009 students (96.2% accuracy) for insomnia symptoms. The most relevant variables in predicting probable anxiety included romantic relationship, suicidal ideation, sleep symptoms, and a history of anxiety symptoms. The most relevant variables in predicting probable insomnia included aggression, psychotic experiences, suicidal ideation, and romantic relationship.

Conclusion : Risks for probable anxiety and probable insomnia among undergraduate students can be identified at an individual level by baseline data. Thus, timely psychological intervention for anxiety and insomnia symptoms among undergraduate students is needed considering the above factors.

Ge Fenfen, Zhang Di, Wu Lianhai, Mu Hongwei

2020

COVID-19, anxiety, cohort, insomnia, machine learning

Radiology Radiology

Deep learning-based triage and analysis of lesion burden for COVID-19: a retrospective study with external validation.

In The Lancet. Digital health

Background : Prompt identification of patients suspected to have COVID-19 is crucial for disease control. We aimed to develop a deep learning algorithm on the basis of chest CT for rapid triaging in fever clinics.

Methods : We trained a U-Net-based model on unenhanced chest CT scans obtained from 2447 patients admitted to Tongji Hospital (Wuhan, China) between Feb 1, 2020, and March 3, 2020 (1647 patients with RT-PCR-confirmed COVID-19 and 800 patients without COVID-19) to segment lung opacities and alert cases with COVID-19 imaging manifestations. The ability of artificial intelligence (AI) to triage patients suspected to have COVID-19 was assessed in a large external validation set, which included 2120 retrospectively collected consecutive cases from three fever clinics inside and outside the epidemic centre of Wuhan (Tianyou Hospital [Wuhan, China; area of high COVID-19 prevalence], Xianning Central Hospital [Xianning, China; area of medium COVID-19 prevalence], and The Second Xiangya Hospital [Changsha, China; area of low COVID-19 prevalence]) between Jan 22, 2020, and Feb 14, 2020. To validate the sensitivity of the algorithm in a larger sample of patients with COVID-19, we also included 761 chest CT scans from 722 patients with RT-PCR-confirmed COVID-19 treated in a makeshift hospital (Guanggu Fangcang Hospital, Wuhan, China) between Feb 21, 2020, and March 6, 2020. Additionally, the accuracy of AI was compared with a radiologist panel for the identification of lesion burden increase on pairs of CT scans obtained from 100 patients with COVID-19.

Findings : In the external validation set, using radiological reports as the reference standard, AI-aided triage achieved an area under the curve of 0·953 (95% CI 0·949-0·959), with a sensitivity of 0·923 (95% CI 0·914-0·932), specificity of 0·851 (0·842-0·860), a positive predictive value of 0·790 (0·777-0·803), and a negative predictive value of 0·948 (0·941-0·954). AI took a median of 0·55 min (IQR 0·43-0·63) to flag a positive case, whereas radiologists took a median of 16·21 min (11·67-25·71) to draft a report and 23·06 min (15·67-39·20) to release a report. With regard to the identification of increases in lesion burden, AI achieved a sensitivity of 0·962 (95% CI 0·947-1·000) and a specificity of 0·875 (95% CI 0·833-0·923). The agreement between AI and the radiologist panel was high (Cohen's kappa coefficient 0·839, 95% CI 0·718-0·940).

Interpretation : A deep learning algorithm for triaging patients with suspected COVID-19 at fever clinics was developed and externally validated. Given its high accuracy across populations with varied COVID-19 prevalence, integration of this system into the standard clinical workflow could expedite identification of chest CT scans with imaging indications of COVID-19.

Funding : Special Project for Emergency of the Science and Technology Department of Hubei Province, China.

Wang Minghuan, Xia Chen, Huang Lu, Xu Shabei, Qin Chuan, Liu Jun, Cao Ying, Yu Pengxin, Zhu Tingting, Zhu Hui, Wu Chaonan, Zhang Rongguo, Chen Xiangyu, Wang Jianming, Du Guang, Zhang Chen, Wang Shaokang, Chen Kuan, Liu Zheng, Xia Liming, Wang Wei

2020-Oct

Cardiology Cardiology

Co-authorship network analysis in cardiovascular research utilizing machine learning (2009-2019).

In International journal of medical informatics ; h5-index 49.0

BACKGROUND : With the recent advances in computational science, machine-learning methods have been increasingly used in medical research. Because such projects usually require both a clinician and a computational data scientist, there is a need for interdisciplinary research collaboration. However, there has been no published analysis of research collaboration networks in cardiovascular medicine using machine intelligence.

METHODS : Co-authorship network analysis was conducted on 2857 research articles published between 2009 and 2019. Bibliographic data were collected from the Web of Science, and the co-authorship networks were represented as undirected multigraphs. The network density, average degree, clustering coefficient, and number of communities were calculated, and the chronological changes were assessed. Thereafter, the leading authors were identified according to the centrality metrics. Finally, we investigated the significance of the characteristics of the co-authorship network in the largest component via a Monte Carlo simulation with the Barabasi-Albert model.
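
The centrality metrics used to rank authors are straightforward to implement on a small undirected graph. A stdlib-only sketch of degree and closeness centrality on an invented toy co-authorship graph:

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of the other nodes each node is directly connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Inverse of the average shortest-path distance to all other
    nodes, computed with breadth-first search (connected graph)."""
    n = len(adj)
    out = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        out[src] = (n - 1) / sum(dist.values())
    return out

# Toy co-authorship graph: author A has co-authored with everyone
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
deg = degree_centrality(adj)
clo = closeness_centrality(adj)
```

Betweenness centrality, the third metric in the study, additionally counts how often a node lies on shortest paths between other pairs; it identifies brokers between communities rather than well-connected hubs.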

RESULTS : The co-authorship network of the entire period consisted of 13,979 nodes and 68,668 weighted edges. A time-series analysis revealed a linear correlation between the number of nodes and the number of edges (R2 = 0.9937, p < 0.001). Additionally, the number of communities was linearly correlated with the number of nodes (R2 = 0.9788, p < 0.001). The average shortest path increased by a greater degree than the logarithm of the number of nodes, indicating the scale-free structure of the network. We identified D. Berman as the most central author with regard to the degree centrality and closeness centrality. S. Neubauer was the top-ranking author with regard to the betweenness centrality. Among the 22 authors who were ranked in the top 10 for any centrality, 14 authors (63.6%) had a medical degree (medical doctor, MD). The remaining eight non-MD researchers had a PhD in computational science-related fields. The number of communities detected in the Barabasi-Albert model simulation was similar to that for the largest component of the real network (6.21 ± 0.07 vs. 6, p = 0.096).

CONCLUSIONS : A co-authorship network analysis revealed a structure of collaboration networks in the application of machine learning in the field of cardiovascular disease, which can be useful for planning future scientific collaboration.

Higaki Akinori, Uetani Teruyoshi, Ikeda Shuntaro, Yamaguchi Osamu

2020-Sep-19

Cardiovascular research, Co-authorship network analysis, Machine learning, Natural language processing, Social network analysis

General General

Construction of a convolutional neural network classifier developed by computed tomography images for pancreatic cancer diagnosis.

In World journal of gastroenterology ; h5-index 103.0

BACKGROUND : Efforts should be made to develop a deep-learning diagnosis system to distinguish pancreatic cancer from benign tissue due to the high morbidity of pancreatic cancer.

AIM : To identify pancreatic cancer in computed tomography (CT) images automatically by constructing a convolutional neural network (CNN) classifier.

METHODS : A CNN model was constructed using a dataset of 3494 CT images obtained from 222 patients with pathologically confirmed pancreatic cancer and 3751 CT images from 190 patients with normal pancreas from June 2017 to June 2018. We established three datasets from these images according to the image phases, evaluated the approach in terms of binary classification (i.e., cancer or not) and ternary classification (i.e., no cancer, cancer at tail/body, cancer at head/neck of the pancreas) using 10-fold cross validation, and measured the effectiveness of the model with regard to the accuracy, sensitivity, and specificity.

RESULTS : The overall diagnostic accuracy of the trained binary classifier was 95.47%, 95.76%, and 95.15% on the plain scan, arterial phase, and venous phase, respectively. The sensitivity was 91.58%, 94.08%, and 92.28% on the three phases, with no significant differences (χ2 = 0.914, P = 0.633). Considering that the plain phase had the same sensitivity, easier access, and lower radiation compared with the arterial and venous phases, it is sufficient for the binary classifier. Its accuracy on plain scans was 95.47%, sensitivity was 91.58%, and specificity was 98.27%. The CNN and board-certified gastroenterologists achieved higher accuracies than trainees on plain scan diagnosis (χ2 = 21.534, P < 0.001; χ2 = 9.524, P < 0.05, respectively). However, the difference between the CNN and gastroenterologists was not significant (χ2 = 0.759, P = 0.384). For the trained ternary classifier, the overall diagnostic accuracy was 82.06%, 79.06%, and 78.80% on the plain, arterial, and venous phases, respectively. The sensitivity scores for detecting cancers in the tail were 52.51%, 41.10%, and 36.03%, while sensitivity for cancers in the head was 46.21%, 85.24%, and 72.87% on the three phases, respectively. The difference in sensitivity for cancers in the head among the three phases was significant (χ2 = 16.651, P < 0.001), with the arterial phase having the highest sensitivity.

CONCLUSION : We proposed a deep learning-based pancreatic cancer classifier trained on medium-sized datasets of CT images. It was suitable for screening purposes in pancreatic cancer detection.

Ma Han, Liu Zhong-Xin, Zhang Jing-Jing, Wu Feng-Tian, Xu Cheng-Fu, Shen Zhe, Yu Chao-Hui, Li You-Ming

2020-Sep-14

Computed tomography, Convolutional neural networks, Deep learning, Pancreatic cancer

General General

Artificial intelligence in COVID-19 drug repurposing.

In The Lancet. Digital health

Drug repurposing or repositioning is a technique whereby existing drugs are used to treat emerging and challenging diseases, including COVID-19. Drug repurposing has become a promising approach because of the opportunity for reduced development timelines and overall costs. In the big data era, artificial intelligence (AI) and network medicine offer cutting-edge application of information science to defining disease, medicine, therapeutics, and identifying targets with the least error. In this Review, we introduce guidelines on how to use AI for accelerating drug repurposing or repositioning, for which AI approaches are not just formidable but are also necessary. We discuss how to use AI models in precision medicine, and as an example, how AI models can accelerate COVID-19 drug repurposing. Rapidly developing, powerful, and innovative AI and network medicine technologies can expedite therapeutic development. This Review provides a strong rationale for using AI-based assistive tools for drug repurposing medications for human disease, including during the COVID-19 pandemic.

Zhou Yadi, Wang Fei, Tang Jian, Nussinov Ruth, Cheng Feixiong

2020-Sep-18

Pathology Pathology

Unsupervised machine learning reveals lesional variability in focal cortical dysplasia at mesoscopic scale.

In NeuroImage. Clinical

OBJECTIVE : Focal cortical dysplasia (FCD) is the most common epileptogenic developmental malformation and a prevalent cause of surgically amenable epilepsy. While cellular and molecular biology data suggest that FCD lesional characteristics lie along a spectrum, this notion remains to be verified in vivo. We tested the hypothesis that machine learning applied to MRI captures FCD lesional variability at a mesoscopic scale.

METHODS : We studied 46 patients with histologically verified FCD Type II and 35 age- and sex-matched healthy controls. We applied consensus clustering, an unsupervised learning technique that identifies stable clusters based on bootstrap-aggregation, to 3 T multicontrast MRI (T1-weighted MRI and FLAIR) features of FCD normalized with respect to distributions in controls.
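
Consensus clustering as described, stable clusters from bootstrap-aggregation, can be sketched with repeated k-means on random subsamples: pairs of points that consistently land in the same cluster across runs get a high consensus value. The "lesion feature" blobs, subsample fraction, and run count below are illustrative, not the study's settings:

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_matrix(X, k, n_boot=30, seed=0):
    """For each pair of points, the fraction of bootstrap runs in which
    both were subsampled AND assigned to the same k-means cluster."""
    rng = np.random.default_rng(seed)
    n = len(X)
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for b in range(n_boot):
        idx = rng.choice(n, size=int(0.8 * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=b).fit_predict(X[idx])
        same = labels[:, None] == labels[None, :]
        sampled[np.ix_(idx, idx)] += 1
        together[np.ix_(idx, idx)] += same
    return together / np.maximum(sampled, 1)

# Two well-separated synthetic "lesion feature" blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
C = consensus_matrix(X, k=2)
```

Stable class structure shows up as a block-diagonal consensus matrix; the number of clusters is typically chosen where that block structure is cleanest.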

RESULTS : Lesions were parcellated into four classes with distinct structural profiles variably expressed within and across patients: Class-1 with isolated white matter (WM) damage; Class-2 combining grey matter (GM) and WM alterations; Class-3 with isolated GM damage; Class-4 with GM-WM interface anomalies. Class membership was replicated in two independent datasets. Classes with GM anomalies impacted local function (resting-state fMRI derived ALFF), while those with abnormal WM affected large-scale connectivity (assessed by degree centrality). Overall, MRI classes reflected typical histopathological FCD characteristics: Class-1 was associated with severe WM gliosis and interface blurring, Class-2 with severe GM dyslamination and moderate WM gliosis, Class-3 with moderate GM gliosis, Class-4 with mild interface blurring. A detection algorithm trained on class-informed data outperformed a class-naïve paradigm.

SIGNIFICANCE : Machine learning applied to widely available MRI contrasts uncovers FCD Type II variability at a mesoscopic scale and identifies tissue classes with distinct structural dimensions, functional and histopathological profiles. Integrating in vivo staging of FCD traits with automated lesion detection is likely to inform the development of novel personalized treatments.

Lee Hyo M, Gill Ravnoor S, Fadaie Fatemeh, Cho Kyoo H, Guiot Marie C, Hong Seok-Jun, Bernasconi Neda, Bernasconi Andrea

2020-Sep-18

Cortical dysplasia, Epilepsy, MRI

General General

Inter-subject pattern analysis for multivariate group analysis of functional neuroimaging. A unifying formalization.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE : In medical imaging, population studies have to overcome the differences that exist between individuals to identify invariant image features that can be used for diagnosis purposes. In functional neuroimaging, an appealing solution to identify neural coding principles that hold at the population level is inter-subject pattern analysis, i.e. to learn a predictive model on data from multiple subjects and evaluate its generalization performance on new subjects. Although it has gained popularity in recent years, its widespread adoption is still hampered by the blatant lack of a formal definition in the literature. In this paper, we precisely introduce the first principled formalization of inter-subject pattern analysis targeted at multivariate group analysis of functional neuroimaging.

METHODS : We propose to frame inter-subject pattern analysis as a multi-source transductive transfer question, thus grounding it within several well defined machine learning settings and broadening the spectrum of usable algorithms. We describe two sets of inter-subject brain decoding experiments that use several open datasets: a magneto-encephalography study with 16 subjects and a functional magnetic resonance imaging paradigm with 100 subjects. We assess the relevance of our framework by performing model comparisons, where one brain decoding model exploits our formalization while others do not.

RESULTS : The first set of experiments demonstrates the superiority of a brain decoder that uses subject-by-subject standardization compared to state-of-the-art models that use other standardization schemes, making the case for the transductive and multi-source components of our formalization. The second set of experiments quantitatively shows that, even after such a transformation, it is more difficult for a brain decoder to generalize to new participants than to new data from participants available in the training phase, thus highlighting the transfer gap that needs to be overcome.
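
The subject-by-subject standardization behind the first result is simple to state in code: z-score each subject's features using only that subject's own statistics, so that a decoder trained across subjects sees comparably scaled inputs. A NumPy sketch on invented two-subject data whose recordings differ in offset and gain:

```python
import numpy as np

def standardize_per_subject(X, subjects):
    """Z-score each feature within each subject separately."""
    X = np.asarray(X, dtype=float)
    out = np.empty_like(X)
    for s in np.unique(subjects):
        mask = subjects == s
        mu = X[mask].mean(axis=0)
        sd = X[mask].std(axis=0) + 1e-12
        out[mask] = (X[mask] - mu) / sd
    return out

# Two hypothetical subjects with very different signal offset and gain
rng = np.random.default_rng(0)
subjects = np.repeat([0, 1], 50)
X = np.vstack([rng.normal(10, 1, (50, 4)),     # subject 0
               rng.normal(-3, 5, (50, 4))])    # subject 1
Z = standardize_per_subject(X, subjects)
```

Note the transductive flavor: standardizing a test subject requires that subject's (unlabeled) data, which is exactly what the multi-source transductive transfer framing licenses.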

CONCLUSION : This paper describes the first formalization of inter-subject pattern analysis as a multi-source transductive transfer learning problem. We demonstrate the added value of this formalization using proof-of-concept experiments on several complementary functional neuroimaging datasets. This work should contribute to popularize inter-subject pattern analysis for functional neuroimaging population studies and pave the road for future methodological innovations.

Wang Qi, Artières Thierry, Takerkart Sylvain

2020-Sep-11

Functional neuroimaging, Machine learning, Neuroinformatics, Population studies

Surgery Surgery

Clinical data classification using an enhanced SMOTE and chaotic evolutionary feature selection.

In Computers in biology and medicine

Class imbalance and the presence of irrelevant or redundant features in training data can pose serious challenges to the development of a classification framework. This paper proposes a framework for developing a Clinical Decision Support System (CDSS) that addresses class imbalance and the feature selection problem. Under this framework, the dataset is balanced at the data level and a wrapper approach is used to perform feature selection. The following three clinical datasets from the University of California Irvine (UCI) machine learning repository were used for experimentation: the Indian Liver Patient Dataset (ILPD), the Thoracic Surgery Dataset (TSD) and the Pima Indian Diabetes (PID) dataset. The Synthetic Minority Over-sampling Technique (SMOTE), which was enhanced using Orchard's algorithm, was used to balance the datasets. A wrapper approach that uses Chaotic Multi-Verse Optimisation (CMVO) was proposed for feature subset selection. The arithmetic mean of the Matthews correlation coefficient (MCC) and F-score (F1), measured using a Random Forest (RF) classifier, was used as the fitness function. After selecting the relevant features, an RF comprising 100 estimators and using the Information Gain Ratio as the split criterion was used for classification. The classifier achieved a 0.65 MCC, a 0.84 F1 and 82.46% accuracy for the ILPD; a 0.74 MCC, a 0.87 F1 and 86.88% accuracy for the TSD; and a 0.78 MCC, a 0.89 F1 and 89.04% accuracy for the PID dataset. The effects of balancing and feature selection on the classifier were investigated and the performance of the framework was compared with existing works in the literature. The results showed that the proposed framework is competitive in terms of the three performance measures used. The results of a Wilcoxon test confirmed the statistical superiority of the proposed method.
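
SMOTE's core step, interpolating between a minority-class sample and one of its k nearest minority neighbours, can be sketched in a few lines of NumPy. This omits the paper's Orchard's-algorithm enhancement, and the data and sizes are invented:

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: each synthetic sample is a random
    interpolation between a minority sample and one of its k nearest
    minority neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # Pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours
    synth = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(len(X_min))
        b = nn[a, rng.integers(k)]
        lam = rng.random()                 # interpolation factor in [0, 1)
        synth[i] = X_min[a] + lam * (X_min[b] - X_min[a])
    return synth

# Minority-class toy data clustered around (1, 1)
rng = np.random.default_rng(0)
X_min = rng.normal(1.0, 0.1, (20, 2))
new_pts = smote(X_min, n_new=30)
```

Because every synthetic point lies on a segment between two real minority samples, oversampling fills out the minority region instead of merely duplicating points, which is what lets the downstream RF learn a less biased decision boundary.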

Sreejith S, Khanna Nehemiah H, Kannan A

2020-Sep-18

Chaotic maps, Class imbalance, Classification, Clinical decision support system, Feature selection, Multi Verse Optimisation, SMOTE

General General

A comprehensive review of deep learning in colon cancer.

In Computers in biology and medicine

Deep learning has emerged as a leading machine learning tool in object detection and has attracted attention with its achievements in progressing medical image analysis. Convolutional Neural Networks (CNNs) are the most preferred deep learning method for this purpose, and they have an essential role in the detection and potential early diagnosis of colon cancer. In this article, we hope to bring a perspective to progress in this area by reviewing deep learning practices for colon cancer analysis. This study first presents an overview of popular deep learning architectures used in colon cancer analysis. All studies at the intersection of colon cancer and deep learning are then collected and divided into five categories: detection, classification, segmentation, survival prediction, and inflammatory bowel diseases. The studies collected under each category are summarized in detail and listed. We conclude our work with a summary of recent deep learning practices for colon cancer analysis, a critical discussion of the challenges faced, and suggestions for future research. This study differs from other studies by including 135 recent academic papers, separating colon cancer analysis into five different classes, and providing a comprehensive structure. We hope that this study is beneficial to researchers interested in using deep learning techniques for the diagnosis of colon cancer.

Pacal Ishak, Karaboga Dervis, Basturk Alper, Akay Bahriye, Nalbantoglu Ufuk

2020-Sep-17

Colon cancer, Colorectal cancer, Convolutional neural networks, Deep learning, Inflammatory bowel diseases, Medical image analysis, Rectal cancer

General General

Contrasting gene decay in subterranean vertebrates: insights from cavefishes and fossorial mammals.

In Molecular biology and evolution

Evolution sometimes proceeds by loss, especially when structures and genes become dispensable after an environmental shift relaxes functional constraints. Subterranean vertebrates are outstanding models to analyze this process, and gene decay can serve as a readout. We sought to understand some general principles on the extent and tempo of the decay of genes involved in vision, circadian clock and pigmentation in cavefishes. The analysis of the genomes of two Cuban species belonging to the genus Lucifuga provided evidence for the largest loss of eye-specific genes and non-visual opsin genes reported so far in cavefishes. Comparisons with a recently evolved cave population of Astyanax mexicanus and three species belonging to the Chinese tetraploid genus Sinocyclocheilus revealed the combined effects of the level of eye regression, time and genome ploidy on eye-specific gene pseudogenization. The limited extent of gene decay in all these cavefishes and the very small number of loss of function (LoF) mutations per pseudogene suggest that their eye degeneration may not be very ancient, ranging from early to late Pleistocene. This is in sharp contrast with the identification of several vision genes carrying many LoF mutations in ancient fossorial mammals, further suggesting that blind fishes cannot thrive more than a few million years in cave ecosystems.

Policarpo Maxime, Fumey Julien, Lafargeas Philippe, Naquin Delphine, Thermes Claude, Naville Magali, Dechaud Corentin, Volff Jean-Nicolas, Cabau Cedric, Klopp Christophe, Møller Peter Rask, Bernatchez Louis, García-Machado Erik, Rétaux Sylvie, Casane Didier

2020-Sep-28

cavefishes, eye-specific genes, machine learning, molecular dating, pseudogenization, relaxed selection

General General

On the performance of fusion based planet-scope and Sentinel-2 data for crop classification using inception inspired deep convolutional neural network.

In PloS one ; h5-index 176.0

This research work aims to develop a deep learning-based crop classification framework for remotely sensed time series data. Tobacco is a major revenue-generating crop of the Khyber Pakhtunkhwa (KP) province of Pakistan, which accounts for over 90% of the country's Tobacco production. To analyze the performance of the developed classification framework, a pilot sub-region named Yar Hussain was selected for the experimental work. Yar Hussain is a tehsil of district Swabi, within the KP province of Pakistan, and makes the highest contribution to the gross production of the KP Tobacco crop. KP generally consists of diverse crop land with different varieties of vegetation of similar phenology, which makes crop classification a challenging task. In this study, a temporal convolutional neural network (TempCNN) model is implemented for crop classification using remotely sensed imagery of the selected pilot region, with a specific focus on the Tobacco crop. To improve the performance of the proposed classification framework, instead of using a single satellite source, both Sentinel-2 and Planet-Scope imagery are stacked together to provide more diverse features to the classifier. Furthermore, instead of using single-date imagery, multiple acquisitions covering the phenological cycle of the Tobacco crop are temporally stacked, which results in a higher temporal resolution. The developed framework is trained using ground truth data, and the final output is obtained from the SoftMax layer of the model in the form of probabilistic values for the selected classes. The proposed deep learning-based crop classification framework, utilizing multi-satellite temporally stacked imagery, achieved an overall classification accuracy of 98.15% and, given its specific focus on the Tobacco crop, a best Tobacco classification accuracy of 99%.
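The sensor- and time-stacking described above can be sketched in a few lines of NumPy. All shapes here are illustrative assumptions (dates, tile size, and band counts are not taken from the paper):

```python
import numpy as np

# Illustrative shapes only: 6 acquisition dates over the Tobacco phenological
# cycle, a 64x64-pixel tile, 4 Planet-Scope bands, and 10 Sentinel-2 bands
# resampled to a common grid. These numbers are assumptions, not the paper's.
dates, h, w = 6, 64, 64
planetscope = np.zeros((dates, h, w, 4))
sentinel2 = np.zeros((dates, h, w, 10))

# Band-wise fusion of the two sensors, keeping the time axis intact so a
# temporal CNN can convolve over the phenological sequence.
stacked = np.concatenate([planetscope, sentinel2], axis=-1)
print(stacked.shape)  # (6, 64, 64, 14)
```

The key design point is that fusion happens along the band axis while the date axis is preserved, which is what gives a temporal CNN both richer spectral features and a denser time series.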

Minallah Nasru, Tariq Mohsin, Aziz Najam, Khan Waleed, Rehman Atiq Ur, Belhaouari Samir Brahim

2020

General General

Automatic tooth roots segmentation of cone beam computed tomography image sequences using U-net and RNN.

In Journal of X-ray science and technology

BACKGROUND : Automatic segmentation of individual tooth roots is a key technology for reconstructing three-dimensional dental models from Cone Beam Computed Tomography (CBCT) images, which is of great significance for orthodontic, implant, and other dental diagnosis and treatment planning.

OBJECTIVES : Currently, tooth root segmentation is mainly done manually because the tooth root and the alveolar bone have similar gray levels in CBCT images. This study aims to explore an automatic tooth root segmentation algorithm for CBCT axial image sequences based on deep learning.

METHODS : We proposed a new automatic tooth root segmentation method based on a deep learning U-Net with attention gates (AGs). Since a CBCT sequence has a strong correlation between adjacent slices, a recurrent neural network (RNN) was applied to extract the intra-slice and inter-slice contexts. To develop and test this new method, 24 sets of CBCT sequences containing 1160 images and 5 sets of CBCT sequences containing 361 images were used to train and test the network, respectively.

RESULTS : On the testing dataset, the segmentation accuracy measured by intersection over union (IOU), Dice similarity coefficient (DICE), average precision rate (APR), average recall rate (ARR), and average symmetrical surface distance (ASSD) was 0.914, 0.955, 95.8%, 95.3%, and 0.145 mm, respectively.
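The IOU and DICE figures above are overlap measures between a predicted and a ground-truth binary mask; a minimal NumPy sketch on toy 4x4 masks (illustrative data, not the study's CBCT images):

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection-over-union and Dice coefficient for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + target.sum()) if (pred.sum() + target.sum()) else 1.0
    return iou, dice

# Toy masks: the prediction covers 2 of the 3 ground-truth foreground pixels.
pred = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
iou, dice = iou_and_dice(pred, target)
print(iou, dice)  # 0.666... 0.8
```

Note that Dice is always at least as large as IoU for the same masks, which is why the paper's 0.955 DICE accompanies a 0.914 IOU.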

CONCLUSIONS : The study demonstrates that the new method, combining an attention U-Net with an RNN, yields promising results for automatic tooth root segmentation and has the potential to improve segmentation efficiency and accuracy in future clinical practice.

Li Qingqing, Chen Ke, Han Lin, Zhuang Yan, Li Jingtao, Lin Jiangli

2020

RNN, Tooth roots, attention U-net, automatic segmentation, cone beam computed tomography

General General

Pointfilter: Point Cloud Filtering via Encoder-Decoder Modeling.

In IEEE transactions on visualization and computer graphics

Point cloud filtering is a fundamental problem in geometry modeling and processing. Despite advancements in recent years, existing methods still suffer from two issues: 1) they are either designed without preserving sharp features or are less robust in feature preservation; and 2) they usually have many parameters and require tedious parameter tuning. In this paper, we propose a novel deep learning approach that automatically and robustly filters point clouds, removing noise while preserving sharp features. Our point-wise learning architecture consists of an encoder and a decoder. The encoder directly takes points (a point and its neighbors) as input and learns a latent representation vector, which is passed through the decoder to produce a displacement vector relating the noisy point to its ground-truth position. The trained neural network can automatically generate a set of clean points from a noisy input. Extensive experiments show that our approach outperforms state-of-the-art deep learning techniques in terms of both visual quality and quantitative error metrics. We will make our code and dataset publicly available.

Zhang Dongbo, Lu Xuequan, Qin Hong, He Ying

2020-Sep-28

General General

Automated Skin Lesion Segmentation via an Adaptive Dual Attention Module.

In IEEE transactions on medical imaging ; h5-index 74.0

We present a convolutional neural network (CNN) equipped with a novel and efficient adaptive dual attention module (ADAM) for automated skin lesion segmentation from dermoscopic images, an essential yet challenging step in the development of a computer-assisted skin disease diagnosis system. The proposed ADAM has three compelling characteristics. First, we integrate two global context modeling mechanisms into the ADAM, one aiming at capturing the boundary continuity of skin lesions by global average pooling while the other deals with shape irregularity by pixel-wise correlation. In this regard, our network, thanks to the proposed ADAM, is capable of extracting more comprehensive and discriminative features for recognizing the boundary of skin lesions. Second, the proposed ADAM supports multi-scale resolution fusion, and hence can capture multi-scale features to further improve the segmentation accuracy. Third, as we harness a spatial information weighting method in the proposed network, our method can reduce much of the redundancy of traditional CNNs. The proposed network is implemented on a dual encoder architecture, which is able to enlarge the receptive field without greatly increasing the network parameters. In addition, we assign different dilation rates to different ADAMs so that the network can adaptively capture distinguishing features according to the size of a lesion. We extensively evaluate the proposed method on both the ISBI2017 and ISIC2018 datasets, and the experimental results demonstrate that, without using network ensemble schemes, our method is capable of achieving better segmentation performance than state-of-the-art deep learning models, particularly those equipped with attention mechanisms.

Wu Huisi, Pan Junquan, Li Zhuoying, Wen Zhenkun, Qin Jing

2020-Sep-28

General General

Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset.

In IEEE transactions on pattern analysis and machine intelligence ; h5-index 127.0

We investigate privacy-preserving action recognition in deep learning, a problem of growing importance in smart camera applications. A novel adversarial training framework is formulated to learn an anonymization transform for input videos such that the trade-off between target utility task performance and the associated privacy budgets is explicitly optimized on the anonymized videos. Notably, the privacy budget, often defined and measured in task-driven contexts, cannot be reliably indicated by any single model's performance, because strong protection of privacy should hold against any malicious model that tries to steal private information. To tackle this problem, we propose two new optimization strategies, model restarting and model ensemble, to achieve stronger universal privacy protection against any attacker model. Extensive experiments have been carried out and analyzed. On the other hand, given the few public datasets available with both utility and privacy labels, data-driven (supervised) learning cannot exert its full power on this task. To further address this dataset challenge, we have constructed a new dataset, termed PA-HMDB51, with both target task labels (action) and selected privacy attributes (gender, age, race, nudity, and relationship) annotated on a per-frame basis. This first-of-its-kind video dataset and evaluation protocol can greatly facilitate visual privacy research.

Wu Zhenyu, Wang Haotao, Wang Zhangyang, Wang Zhaowen, Jin Hailin

2020-Sep-28

Surgery Surgery

Potential applications of artificial intelligence in colorectal polyps and cancer: Recent advances and prospects.

In World journal of gastroenterology ; h5-index 103.0

Since the advent of artificial intelligence (AI) technology, it has been constantly studied and has achieved rapid development. The AI assistant system is expected to improve the quality of automatic polyp detection and classification. It could also help prevent endoscopists from missing polyps and make an accurate optical diagnosis. These functions provided by AI could result in a higher adenoma detection rate and decrease the cost of polypectomy for hyperplastic polyps. In addition, AI has good performance in the staging, diagnosis, and segmentation of colorectal cancer. This article provides an overview of recent research focusing on the application of AI in colorectal polyps and cancer and highlights the advances achieved.

Wang Ke-Wei, Dong Ming

2020-Sep-14

Artificial intelligence, Colorectal cancer, Colorectal polyps, Computer-assisted diagnosis, Deep learning

General General

Challenges and Opportunities of Preclinical Medical Education: COVID-19 Crisis and Beyond.

In SN comprehensive clinical medicine

The COVID-19 pandemic has disrupted face-to-face teaching in medical schools globally. The use of remote learning as an emergency measure has affected students, faculty, support staff, and administrators. The aim of this narrative review paper is to examine the challenges and opportunities faced by medical schools in implementing remote learning for basic science teaching in response to the COVID-19 crisis. We searched relevant literature in PubMed, Scopus, and Google Scholar using specific keywords, e.g., "COVID-19 pandemic," "preclinical medical education," "online learning," "remote learning," "challenges," and "opportunities." The pandemic has posed several challenges to preclinical medical education (e.g., suspension of face-to-face teaching, lack of cadaveric dissections and practical/laboratory sessions) but has provided many opportunities as well, such as the incorporation of online learning in the curriculum and upskilling and reskilling in new technologies. To date, many medical schools have successfully transitioned their educational environment to emergency remote teaching and assessments. During the COVID-19 crisis, the preclinical phase of medical curricula has successfully introduced the novel culture of "online home learning" using technology-oriented innovations, which may extend into the post-COVID era to maintain teaching and learning in medical education. However, the lack of hands-on training in the preclinical years may have serious implications for the training of the current cohort of students, who may struggle later in the clinical years. Emergent technologies (e.g., artificial intelligence for adaptive learning, virtual simulation, and telehealth) are likely to be indispensable components of the transformative change in post-COVID medical education.

Gaur Uma, Majumder Md Anwarul Azim, Sa Bidyadhar, Sarkar Sankalan, Williams Arlene, Singh Keerti

2020-Sep-22

COVID-19 pandemic, Challenges, Online learning, Opportunities, Preclinical medical education, Remote learning

General General

Artificial Intelligence-Enabled ECG Algorithm to Identify Patients With Left Ventricular Systolic Dysfunction Presenting to the Emergency Department With Dyspnea.

In Circulation. Arrhythmia and electrophysiology

BACKGROUND : Identification of systolic heart failure among patients presenting to the emergency department (ED) with acute dyspnea is challenging. The reasons for dyspnea are often multifactorial. A focused physical evaluation and diagnostic testing can lack sensitivity and specificity. The objective of this study was to assess the accuracy of an artificial intelligence-enabled ECG to identify patients presenting with dyspnea who have left ventricular systolic dysfunction (LVSD).

METHODS : We retrospectively applied a validated artificial intelligence-enabled ECG algorithm for the identification of LVSD (defined as LV ejection fraction ≤35%) to a cohort of patients aged ≥18 years who were evaluated in the ED at a Mayo Clinic site with dyspnea. Patients were included if they had at least one standard 12-lead ECG acquired on the date of the ED visit and an echocardiogram performed within 30 days of presentation. Patients with prior LVSD were excluded. We assessed the model performance using area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity.

RESULTS : A total of 1606 patients were included. Median time from ECG to echocardiogram was 1 day (Q1: 1, Q3: 2). The artificial intelligence-enabled ECG algorithm identified LVSD with an area under the receiver operating characteristic curve of 0.89 (95% CI, 0.86-0.91) and accuracy of 85.9%. Sensitivity, specificity, negative predictive value, and positive predictive value were 74%, 87%, 97%, and 40%, respectively. To identify an ejection fraction <50%, the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity were 0.85 (95% CI, 0.83-0.88), 86%, 63%, and 91%, respectively. NT-proBNP (N-terminal pro-B-type natriuretic peptide) alone at a cutoff of >800 identified LVSD with an area under the receiver operating characteristic curve of 0.80 (95% CI, 0.76-0.84).
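The gap between the model's high NPV (97%) and modest PPV (40%), despite 74% sensitivity and 87% specificity, is a prevalence effect. A short Bayes'-rule sketch makes this concrete; the ~10% prevalence below is an assumed value chosen for illustration, not a figure reported by the study:

```python
def ppv_npv(sens, spec, prevalence):
    """Predictive values from sensitivity, specificity, and prevalence (Bayes' rule)."""
    tp = sens * prevalence              # true positive rate in the cohort
    fp = (1 - spec) * (1 - prevalence)  # false positives among the healthy
    fn = (1 - sens) * prevalence        # missed cases
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# At an assumed LVSD prevalence of ~10%, the reported sensitivity and
# specificity imply predictive values close to the paper's 40% PPV and 97% NPV.
ppv, npv = ppv_npv(sens=0.74, spec=0.87, prevalence=0.10)
print(round(ppv, 2), round(npv, 2))  # 0.39 0.97
```

The same sensitivity and specificity would yield a much higher PPV in a higher-prevalence population, which is why predictive values should always be read against the cohort's base rate.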

CONCLUSIONS : The ECG is an inexpensive, ubiquitous, painless test which can be quickly obtained in the ED. It effectively identifies LVSD in selected patients presenting to the ED with dyspnea when analyzed with artificial intelligence and outperforms NT-proBNP.

Adedinsewo Demilade, Carter Rickey E, Attia Zachi, Johnson Patrick, Kashou Anthony H, Dugan Jennifer L, Albus Michael, Sheele Johnathan M, Bellolio Fernanda, Friedman Paul A, Lopez-Jimenez Francisco, Noseworthy Peter A

2020-Aug

artificial intelligence, cardiomyopathies, dyspnea, electrocardiogram, heart failure

General General

PI1M: A Benchmark Database for Polymer Informatics.

In Journal of chemical information and modeling

Large-scale open-source data are a cornerstone of data-driven research, but they are not readily available for polymers. In this work, we build a benchmark database, called PI1M (referring to ~1 million polymers for polymer informatics), to provide data resources that can be used for machine learning research in polymer informatics. A generative model is trained on ~12,000 polymers manually collected from the largest existing polymer database, PolyInfo, and the model is then used to generate ~1 million polymers. A new representation for polymers, polymer embedding (PE), is introduced and used to perform several polymer informatics regression tasks for density, glass transition temperature, melting temperature, and dielectric constant. By comparing the PE trained on the PolyInfo data with that trained on the PI1M data, we conclude that the PI1M database covers a chemical space similar to that of PolyInfo, but significantly populates regions where PolyInfo data are sparse. We believe PI1M will serve as a good benchmark database for future research in polymer informatics.

Ma Ruimin, Luo Tengfei

2020-Sep-28

General General

Deep-Learning-Assisted Assessment of DNA Damage Based on Foci Images and Its Application in High-Content Screening of Lead Compounds.

In Analytical chemistry

DNA damage is one of the major culprits in many complex diseases; thus, there is great interest in the discovery of novel lead compounds regulating DNA damage. However, evaluating DNA damage by counting intranuclear foci remains challenging. Herein, a deep-learning-based open-source pipeline, FociNet, was developed to automatically segment full-field fluorescent images and assess the DNA damage of each cell. We annotated 6000 single-nucleus images to train the classification ability of the proposed computational pipeline. Results showed that FociNet achieved satisfying performance in classifying a single cell into a normal, damaged, or nonsignaling (no fusion-protein expression) state and exhibited excellent compatibility in the assessment of DNA damage based on fluorescent foci images from various imaging platforms. Furthermore, FociNet was employed to analyze a data set of over 5000 foci images from a high-content screen of 315 natural compounds from traditional Chinese medicine. It successfully identified several novel active compounds, including evodiamine, isoliquiritigenin, and herbacetin, which were found to reduce 53BP1 foci for the first time. Among them, isoliquiritigenin from Glycyrrhiza uralensis Fisch. exerts a significant effect on attenuating double-strand breaks, as indicated by the comet assay. In conclusion, this work provides an artificial intelligence tool to evaluate DNA damage on the basis of microscopy images, as well as a potential strategy for high-content screening of active compounds.

Chen Xuechun, Xun Dejin, Zheng Ruzhang, Zhao Lu, Lu Yuqing, Huang Jun, Wang Rui, Wang Yi

2020-Sep-28

General General

Barriers for the Research, Prevention, and Treatment of Suicidal Behavior.

In Current topics in behavioral neurosciences

Efforts in research, prevention, and treatment of suicidal behavior have produced mixed results. One of the main barriers to combating suicidal behavior lies in the very conceptualization of suicide, a phenomenon that is at once sociological, psychiatric, and even philosophical, and one that has not always been included in the field of health care. There are also many barriers at the social level, ranging from stigma against people with suicidal behavior to stigma towards psychiatric care, as well as the controversial role of the media. The media plays an important role in society and, depending on its attitude, can be either beneficial or harmful in our fight against suicidal behavior. Differences between countries - in the provision of resources, in the way of understanding the phenomenon, or in the manner of providing official figures - pose an additional challenge to suicide prevention at a global level. In the field of research, predicting suicidal behavior by identifying effective risk markers is severely hampered by the low occurrence of suicide in the population, which limits the statistical power of studies. The authors recommend combining various risk factors to build predictive models. This, in addition to employing increasingly precise machine learning techniques, is a step in the right direction, although there is still a long way to go before the expected results can be obtained. Finally, adequate training of health professionals, both specialized and non-specialized, as well as gatekeeper training, is crucial for implementing suicide prevention strategies in the population.

Oquendo Maria A, Porras-Segovia Alejandro

2020-Sep-29

Suicidal behavior, Suicide, Suicide attempt, Suicide prevention

Radiology Radiology

Evaluation of dose reduction potential in scatter-corrected bedside chest radiography using U-net.

In Radiological physics and technology

Bedside radiography has increasingly attracted attention because it allows for immediate image diagnosis after X-ray imaging. Currently, wireless flat-panel detectors (FPDs) are used for digital radiography. However, adjusting the alignment of the X-ray tube and the FPD is an extremely difficult task. Furthermore, to prevent the poor image quality caused by scattered X-rays, scatter-removal grids are commonly used. In this study, we proposed a scatter-correction processing method for bedside chest radiography that reduces the radiation dose relative to that required with an X-ray grid, evaluated on the segmentation of a mass region using deep learning. A chest phantom and an acrylic cylinder simulating the mass were utilized to verify the image quality of the scatter-corrected chest X-rays at a low radiation dose. In addition, we used the peak signal-to-noise ratio and structural similarity to quantitatively compare the quality of the low-dose images with normal grid images. Furthermore, a U-Net was used to segment the mass region in the scatter-corrected chest X-ray at a low radiation dose. Our results showed that when scatter correction is used, an image with a quality equivalent to that obtained by grid radiography is produced, even when the imaging dose is reduced by approximately 20%. In addition, image contrast was improved using scatter-radiation correction as opposed to using scatter-removal grids. Our results can be utilized to further develop bedside chest radiography systems with reduced radiation doses.
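Of the two quality measures used above, PSNR is the simpler one: it is a logarithmic ratio of the signal's peak intensity to the mean squared error between the two images. A minimal NumPy sketch (synthetic arrays, not the phantom images; SSIM is not reimplemented here):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Synthetic example: a uniform error of 0.1 on a [0, 1] intensity scale gives
# MSE = 0.01 and hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
reference = np.zeros((8, 8))
noisy = reference + 0.1
print(round(psnr(reference, noisy), 6))  # 20.0
```

For 8-bit radiographs the same formula applies with peak=255; identical images give an infinite PSNR because the MSE is zero.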

Onodera Shu, Lee Yongbum, Tanaka Yoshitaka

2020-Sep-28

Bedside chest radiography, Flat-panel detectors, Radiation dose, Radiographic image enhancement, Scatter correction processing, X-ray scatter removal grids

Radiology Radiology

Practical applications of deep learning: classifying the most common categories of plain radiographs in a PACS using a neural network.

In European radiology ; h5-index 62.0

OBJECTIVES : The goal of the present study was to classify the most common types of plain radiographs using a neural network and to validate the network's performance on internal and external data. Such a network could help improve various radiological workflows.

METHODS : All radiographs from the year 2017 (n = 71,274) acquired at our institution were retrieved from the PACS. The 30 largest categories (n = 58,219, 81.7% of all radiographs performed in 2017) were used to develop and validate a neural network (MobileNet v1.0) using transfer learning. Image categories were extracted from DICOM metadata (study and image description) and mapped to the WHO manual of diagnostic imaging. As an independent, external validation set, we used images from other institutions that had been stored in our PACS (n = 5324).

RESULTS : In the internal validation, the overall accuracy of the model was 90.3% (95% CI: 89.2-91.3%), whereas, for the external validation set, the overall accuracy was 94.0% (95% CI: 93.3-94.6%).
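Confidence intervals like the 95% CIs quoted above are binomial-proportion intervals on classification accuracy. A stdlib sketch of the Wilson score interval, one common choice (the paper does not state which method it used, and the counts below are hypothetical, not the study's validation-set sizes):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion such as accuracy."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Hypothetical example: 903 of 1000 images classified correctly.
lo, hi = wilson_ci(903, 1000)
print(round(lo, 3), round(hi, 3))
```

The interval narrows as the validation set grows, which is one reason the larger internal set and the external set can report intervals of similar width at different accuracies.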

CONCLUSIONS : Using data from a single institution, we were able to classify the most common categories of radiographs with a neural network. The network showed good generalizability on the external validation set and could be used to automatically organize a PACS, preselect radiographs so that they can be routed to more specialized networks for abnormality detection, or help with other parts of the radiological workflow (e.g., automated hanging protocols, or checking whether the ordered and performed images match). The final AI algorithm is publicly available for evaluation and extension.

KEY POINTS : • Data from one single institution can be used to train a neural network for the correct detection of the 30 most common categories of plain radiographs. • The trained model achieved a high accuracy for the majority of categories and showed good generalizability to images from other institutions. • The neural network is made publicly available and can be used to automatically organize a PACS or to preselect radiographs so that they can be routed to more specialized neural networks for abnormality detection.

Dratsch Thomas, Korenkov Michael, Zopfs David, Brodehl Sebastian, Baessler Bettina, Giese Daniel, Brinkmann Sebastian, Maintz David, Pinto Dos Santos Daniel

2020-Sep-28

Artificial intelligence, Machine learning, Radiography

General General

Bone age assessment based on deep convolution neural network incorporated with segmentation.

In International journal of computer assisted radiology and surgery

PURPOSE : Bone age assessment is not only an important means of assessing the maturity of adolescents, but also plays an indispensable role in the fields of orthodontics, kinematics, pediatrics, forensic science, etc. Most studies, however, do not take into account the impact of background noise on the results of the assessment. In order to obtain accurate bone age, this paper presents an automatic bone age assessment method based on deep convolutional neural networks.

METHOD : Our method is divided into two phases. In the image segmentation stage, the segmentation network U-Net was used to acquire a mask image, which was then applied to the original image to extract the hand bone portion with the background interference removed. For the classification phase, in order to further improve the evaluation performance, an attention mechanism was added on top of the Visual Geometry Group Network (VGGNet). The attention mechanism helps the model invest more resources in important areas of the hand bone.
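The segmentation stage's output acts as a multiplicative mask on the radiograph. A toy NumPy sketch of that background-removal step (shapes and values are illustrative, not the RSNA data):

```python
import numpy as np

# Toy 4x4 "radiograph" and a binary mask as the U-Net stage would emit.
image = np.arange(16, dtype=np.float32).reshape(4, 4)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # predicted hand-bone region

# Element-wise product keeps only the hand-bone pixels; the background that
# would otherwise disturb the classifier is zeroed out.
hand_only = image * mask
print(hand_only.sum())  # 30.0 (= 5 + 6 + 9 + 10)
```

The masked image, rather than the raw radiograph, is what the attention-augmented classifier then sees, so background noise cannot influence the predicted bone age.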

RESULT : The assessment model was tested on the RSNA 2017 Pediatric Bone Age dataset. The results show that our adjusted model outperforms the plain VGGNet. The mean absolute error reaches 9.997 months, outperforming other common methods for bone age assessment.

CONCLUSION : We explored the establishment of an automated bone age assessment method based on deep learning. This method can efficiently eliminate the influence of background interference on bone age evaluation, improve the accuracy of bone age evaluation, provide important reference value for bone age determination, and can aid in the prevention of adolescent growth and development diseases.

Gao Yunyuan, Zhu Tao, Xu Xiaohua

2020-Sep-28

Attention mechanism, Bone age assessment, Deep learning, Segmentation

General General

Large-scale functional coactivation patterns reflect the structural connectivity of the medial prefrontal cortex.

In Social cognitive and affective neuroscience ; h5-index 61.0

The medial prefrontal cortex (MPFC) is among the most consistently implicated brain regions in social and affective neuroscience. Yet, this region is also highly functionally heterogeneous across many domains and has diverse patterns of connectivity. The extent to which the communication of functional networks in this area is facilitated by its underlying structural connectivity fingerprint is critical for understanding how psychological phenomena are represented within this region. In the current study, we combined diffusion magnetic resonance imaging and probabilistic tractography with large-scale meta-analysis to investigate the degree to which the functional coactivation patterns of the MPFC are reflected in its underlying structural connectivity. Using unsupervised machine learning techniques, we compared parcellations between the two modalities and found congruence between parcellations at multiple spatial scales. Additionally, using connectivity and coactivation similarity analyses, we found high correspondence in voxel-to-voxel similarity between the two modalities across most, but not all, subregions of the MPFC. These results provide evidence that meta-analytic functional coactivation patterns are meaningfully constrained by underlying neuroanatomical connectivity and provide convergent evidence of distinct subregions within the MPFC involved in affective processing and social cognition.

Tovar Dale T, Chavez Robert S

2020-Sep-28

dMRI, fMRI, meta-analysis, unsupervised learning

General General

Marrying Medical Domain Knowledge With Deep Learning on Electronic Health Records: A Deep Visual Analytics Approach.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : Deep learning models have attracted significant interest from health care researchers during the last few decades. There have been many studies that apply deep learning to medical applications and achieve promising results. However, there are three limitations to the existing models: (1) most clinicians are unable to interpret the results from the existing models, (2) existing models cannot incorporate complicated medical domain knowledge (eg, a disease causes another disease), and (3) most existing models lack visual exploration and interaction. Both the electronic health record (EHR) data set and the deep model results are complex and abstract, which impedes clinicians from exploring and communicating with the model directly.

OBJECTIVE : The objective of this study is to develop an interpretable and accurate risk prediction model as well as an interactive clinical prediction system to support EHR data exploration, knowledge graph demonstration, and model interpretation.

METHODS : A domain-knowledge-guided recurrent neural network (DG-RNN) model is proposed to predict clinical risks. The model takes medical event sequences as input and incorporates medical domain knowledge by attending to a subgraph of the whole medical knowledge graph. A global pooling operation and a fully connected layer are used to output the clinical outcomes. The middle results and the parameters of the fully connected layer are helpful in identifying which medical events cause clinical risks. DG-Viz is also designed to support EHR data exploration, knowledge graph demonstration, and model interpretation.

RESULTS : We conducted both risk prediction experiments and a case study on a real-world data set. A total of 554 patients with heart failure and 1662 control patients without heart failure were selected from the data set. The experimental results show that the proposed DG-RNN outperforms the state-of-the-art approaches by approximately 1.5%. The case study demonstrates how our medical physician collaborator can effectively explore the data and interpret the prediction results using DG-Viz.

CONCLUSIONS : In this study, we present DG-Viz, an interactive clinical prediction system, which brings together the power of deep learning (ie, a DG-RNN-based model) and visual analytics to predict clinical risks and visually interpret the EHR prediction results. Experimental results and a case study on heart failure risk prediction tasks demonstrate the effectiveness and usefulness of the DG-Viz system. This study will pave the way for interactive, interpretable, and accurate clinical risk predictions.

Li Rui, Yin Changchang, Yang Samuel, Qian Buyue, Zhang Ping

2020-Sep-28

electronic health records, interpretable deep learning, knowledge graph, visual analytics

oncology Oncology

Machine-Learning Models for Multicenter Prostate Cancer Treatment Plans.

In Journal of computational biology : a journal of computational molecular cell biology

Clinical factors, including T-stage, Gleason score, and baseline prostate-specific antigen, are used to stratify patients with prostate cancer (PCa) into risk groups. This provides prognostic information for a heterogeneous disease such as PCa and guides treatment selection. In this article, we hypothesize that nonclinical factors may also impact treatment selection and adherence to treatment guidelines. A total of 552 patients with intermediate- and high-risk PCa treated with definitive radiation with or without androgen deprivation therapy (ADT) between 2010 and 2017 were identified from 34 medical centers within the Veterans Health Administration. Medical charts were manually reviewed, and details regarding each patient's clinical history and treatment were extracted. Support vector machine- and random forest-based classification was used to identify clinical and nonclinical predictors of adherence to the treatment guidelines from the National Comprehensive Cancer Network (NCCN). We created models for predicting both initial treatment intent and treatment alterations. Our results demonstrate that besides clinical factors, the center in which the patient was treated (a nonclinical factor) played a significant role in adherence to NCCN guidelines. Furthermore, the treatment center served as an important predictor of whether or not to prescribe ADT; however, it was not associated with ADT duration and was only weakly associated with treatment alterations. This center bias motivates further investigation into center-specific barriers to NCCN guideline adherence and into their effect on oncological outcomes. In addition, we demonstrate that publicly available data sets, for example, that from Surveillance, Epidemiology, and End Results (SEER), may not be well equipped to build such predictive models of treatment plans.
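The classification setup the abstract describes, clinical features plus the treating center as a nonclinical predictor, can be sketched on synthetic data (the cohort size is borrowed from the abstract; the features, effect sizes, and center-bias simulation below are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 552  # cohort size from the abstract; everything else is a synthetic stand-in

# Clinical stand-ins (e.g., T-stage, Gleason score, PSA) and one nonclinical
# predictor: the treating center, encoded as an integer id across 34 centers.
clinical = rng.normal(size=(n, 3))
center = rng.integers(0, 34, size=n)

# Simulate center-dependent adherence: some centers deviate from guidelines.
center_bias = rng.normal(size=34)[center]
adherent = (clinical[:, 0] + 1.5 * center_bias + 0.5 * rng.normal(size=n)) > 0

X = np.column_stack([clinical, center])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, adherent)

# Cross-validated accuracy, and per-feature importances: in this simulation
# the center column (index 3) carries most of the signal.
acc = cross_val_score(clf, X, adherent, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
print("feature importances:", clf.feature_importances_.round(2))
```

A high importance for the center column mirrors the paper's finding that the treating center, not only clinical factors, predicts guideline adherence.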

Syed Khajamoinuddin, Sleeman William, Soni Payal, Hagan Michael, Palta Jatinder, Kapoor Rishabh, Ghosh Preetam

2020-Sep-25

SEER, SVM, androgen deprivation therapy, localized prostate cancer, radiation therapy, random forests

General General

Forecasting emergency department overcrowding: A deep learning framework.

In Chaos, solitons, and fractals

As the demand for medical care has considerably expanded, managing patient flow in hospitals, and especially in emergency departments (EDs), has become a key issue: poor flow management can lead to overcrowding and the degradation of the quality of the provided medical services. Accurate modeling and forecasting of ED visits are therefore critical for efficiently managing overcrowding and enabling appropriate optimization of the available resources. This paper proposes an effective method to forecast daily and hourly visits at an ED using the Variational AutoEncoder (VAE) algorithm. As a deep learning-based model, the VAE has gained special attention for feature extraction and modeling due to its distribution-free assumptions and superior nonlinear approximation. Two types of forecasting were conducted: one- and multi-step-ahead forecasting. To the best of our knowledge, this is the first time that the VAE has been investigated for improving the forecasting of patient-arrival time-series data. Data sets from the pediatric emergency department at Lille regional hospital center, France, are employed to evaluate the forecasting performance of the introduced method. The VAE model was evaluated and compared with seven methods, namely recurrent neural networks (RNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), convolutional LSTM networks (ConvLSTM), restricted Boltzmann machines (RBM), gated recurrent units (GRUs), and convolutional neural networks (CNN). The results clearly show the promising performance of these deep learning models in forecasting ED visits and emphasize the better performance of the VAE in comparison to the other models.
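The one- and multi-step-ahead framings mentioned above differ only in the length of the target window. A minimal, model-agnostic sketch on synthetic hourly arrival counts (not the Lille data; the daily sinusoid and Poisson noise are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly ED arrivals over 60 days: a daily cycle plus count noise.
hours = np.arange(24 * 60)
arrivals = 10 + 5 * np.sin(2 * np.pi * hours / 24) + rng.poisson(2, size=hours.size)

def make_windows(series, lookback, horizon):
    """Sliding windows: each row of X holds `lookback` past values,
    each row of y the next `horizon` values to be forecast."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback:t + lookback + horizon])
    return np.array(X), np.array(y)

# One-step-ahead: predict the next hour; multi-step-ahead: the next 24 hours.
X1, y1 = make_windows(arrivals, lookback=48, horizon=1)
X24, y24 = make_windows(arrivals, lookback=48, horizon=24)
print(X1.shape, y1.shape)    # (1392, 48) (1392, 1)
print(X24.shape, y24.shape)  # (1369, 48) (1369, 24)
```

Any of the compared models (VAE, LSTM, GRU, CNN, ...) would then be trained to map each X row to its y row; only the target window changes between the two forecasting modes.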

Harrou Fouzi, Dairi Abdelkader, Kadri Farid, Sun Ying

2020-Oct

Deep learning, ED demands, Emergency departments, Forecasting, Patient flows

General General

Machine Learning and Image Analysis Applications in the Fight against COVID-19 Pandemic: Datasets, Research Directions, Challenges and Opportunities.

In Materials today. Proceedings

COVID-19 pandemic has become the most devastating disease of the current century and spread over 216 countries around the world. The disease is spreading through outbreaks despite the availability of modern sophisticated medical treatment. Machine Learning and Image Analysis research has been making great progress in many directions in the healthcare field for providing support to subsequent medical diagnosis. In this paper, we have propose three research directions with methodologies in the fight against the pandemic namely: Chest X-Ray (CXR) images classification using deep convolution neural networks with transfer learning to assist diagnosis; Patient Risk prediction of pandemic based on risk factors such as patient characteristics, comorbidities, initial symptoms, vital signs for prognosis of disease; and forecasting of disease spread & case fatality rate using deep neural networks. Further, some of the challenges, open datasets and opportunities are discussed for researchers.

Somasekar J, Pavan Kumar Visulaization P, Sharma Avinash, Ramesh G

2020-Sep-22

COVID-19, Chest X-Ray Images, Classification, Diagnosis, Machine Learning, medical image analysis

General General

Machine Learning-Assisted Raman Spectroscopy for pH and Lactate Sensing in Body Fluids.

In Analytical chemistry

This study presents the combination of Raman spectroscopy with machine learning algorithms as a prospective diagnostic tool capable of detecting and monitoring relevant variations of pH and lactate as recognized biomarkers of several pathologies. The applicability of the method proposed here is tested both in vitro and ex vivo. As a first step, Raman spectra of aqueous solutions are evaluated to identify characteristic patterns resulting from changes in pH or in the concentration of lactate. The method is further validated with blood and plasma samples. Principal component analysis is used to highlight the relevant features that differentiate the Raman spectra with respect to their pH and concentration of lactate. Partial least squares regression models are developed to capture and model the spectral variability of the Raman spectra. The performance of these predictive regression models is demonstrated by clinically accurate predictions of pH and lactate from unknown samples in the physiologically relevant range. These results prove the potential of our method to develop a noninvasive technology, based on Raman spectroscopy, for continuous monitoring of pH and lactate in vivo.

Olaetxea Ion, Valero Ana, Lopez Eneko, Lafuente Héctor, Izeta Ander, Jaunarena Ibon, Seifert Andreas

2020-Sep-28

General General

Machine learning assistive rapid, label-free molecular phenotyping of blood with two-dimensional NMR correlational spectroscopy.

In Communications biology

Translation of the findings in basic science and clinical research into routine practice is hampered by large variations in human phenotype. Developments in genotyping and phenotyping, such as proteomics and lipidomics, are beginning to address these limitations. In this work, we developed a new methodology for rapid, label-free molecular phenotyping of biological fluids (e.g., blood) by exploiting the recent advances in fast and highly efficient multidimensional inverse Laplace decomposition technique. We demonstrated that using two-dimensional T1-T2 correlational spectroscopy on a single drop of blood (<5 μL), a highly time- and patient-specific 'molecular fingerprint' can be obtained in minutes. Machine learning techniques were introduced to transform the NMR correlational map into user-friendly information for point-of-care disease diagnostic and monitoring. The clinical utilities of this technique were demonstrated through the direct analysis of human whole blood in various physiological (e.g., oxygenated/deoxygenated states) and pathological (e.g., blood oxidation, hemoglobinopathies) conditions.

Peng Weng Kung, Ng Tian-Tsong, Loh Tze Ping

2020-Sep-28

General General

Pattern recognition of the fluid flow in a 3D domain by combination of Lattice Boltzmann and ANFIS methods.

In Scientific reports ; h5-index 158.0

Many numerical methods have been used to simulate fluid flow patterns in different industrial devices. However, they are limited by the modeling of complex geometries, numerical stability, expensive computational time, and large storage requirements. The evolution of artificial intelligence (AI) methods able to learn from large datasets with massive inputs and outputs of CFD results enables us to produce entirely artificial CFD results free of the problems of existing numerical methods. Because AI methods are not subject to the barriers of numerical methods, they can be used as an assistive tool alongside numerical methods to predict the process in complex geometries and numerically unstable regions within a short computational time. In this study, we use an adaptive neuro-fuzzy inference system (ANFIS) for the prediction and pattern recognition of fluid flow in a 3D cavity. This predictive overview can reduce the computational time for visualization of fluid in the 3D domain. The ANFIS method is used to predict the flow in the cavity and to illustrate artificial cavities at different times. The method is also compared with the genetic algorithm fuzzy inference system (GAFIS) method to assess numerical accuracy and prediction capability. The results show that the ANFIS method is very successful in the estimation of the flow compared with the GAFIS method. However, GAFIS provides a faster training and prediction platform than the ANFIS method.

Babanezhad Meisam, Nakhjiri Ali Taghvaie, Marjani Azam, Shirazian Saeed

2020-Sep-28

General General

Fragment Mass Spectrum Prediction Facilitates Site Localization of Phosphorylation.

In Journal of proteome research

Liquid chromatography tandem mass spectrometry (LC-MS/MS) has been the most widely used technology for phosphoproteomics studies. As an alternative to database searching and probability-based phosphorylation site localization approaches, spectral library searching has proven effective for the identification of phosphopeptides. However, the incompleteness of experimental spectral libraries limits identification capability. Herein, we utilize MS/MS spectrum prediction coupled with spectral matching for site localization of phosphopeptides. In silico MS/MS spectra are generated from peptide sequences by deep learning/machine learning models trained on non-phosphopeptides. Mass shifts according to phosphorylation sites, phosphoric acid neutral loss, and a "budding" strategy are then adopted to adjust the in silico mass spectra. In silico MS/MS spectra can also be generated in one step for phosphopeptides using models trained on phosphopeptides. The method is benchmarked on data sets of synthetic phosphopeptides and is used to process real biological samples. It is demonstrated to be a method requiring only computational resources that supplements the probability-based approaches for phosphorylation site localization of singly and multiply phosphorylated peptides.

Yang Yi, Horvatovich Péter, Qiao Liang

2020-Sep-28

General General

Artificial Intelligence Approaches to Social Determinants of Cognitive Impairment and Its Associated Conditions.

In Dementia and neurocognitive disorders

BACKGROUND AND PURPOSE : This study uses an artificial-intelligence model (recurrent neural network) for evaluating the following hypothesis: social determinants of disease association in a middle-aged or old population are different across gender and age groups. Here, the disease association indicates an association among cerebrovascular disease, hearing loss and cognitive impairment.

METHODS : Data came from the Korean Longitudinal Study of Ageing (2014-2016), with 6,060 participants aged 53 years or more, that is, 2,556 men, 3,504 women, 3,640 aged 70 years or less (70-), 2,420 aged 71 years or more (71+). The disease association was divided into 8 categories: 1 category for having no disease, 3 categories for having 1, 3 categories for having 2, and 1 category for having 3. Variable importance, the effect of a variable on model performance, was used for finding important social determinants of the disease association in a particular gender/age group, and evaluating the hypothesis above.
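The 8-category coding described above (1 + 3 + 3 + 1) corresponds to the eight possible subsets of the three conditions. A minimal sketch of such an encoding (variable and function names are illustrative, not the study's):

```python
from itertools import combinations

diseases = ("cerebrovascular disease", "hearing loss", "cognitive impairment")

# Enumerate the 8 categories: 1 with no disease, 3 with one disease,
# 3 with two diseases, and 1 with all three.
categories = [frozenset(c) for k in range(4) for c in combinations(diseases, k)]

def category_of(has_cvd: bool, has_hl: bool, has_ci: bool) -> int:
    """Map a participant's three binary disease indicators to one of 8 categories."""
    present = frozenset(d for d, flag
                        in zip(diseases, (has_cvd, has_hl, has_ci)) if flag)
    return categories.index(present)

print(len(categories))                       # 8 categories in total
print(category_of(False, False, False))      # category 0: no disease
print(category_of(True, True, True))         # category 7: all three diseases
```

This 8-way label is what the recurrent neural network predicts, with variable importance then computed per social determinant within each gender/age group.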

RESULTS : Based on variable importance from the recurrent neural network, the important social determinants of the disease association differed across gender and age groups: 1) leisure activity for men; 2) parents alive, income and economic activity for women; 3) children alive, education and family activity for 70-; and 4) brothers/sisters cohabiting, religious activity and leisure activity for 71+.

CONCLUSIONS : The findings of this study support the hypothesis, suggesting the development of new guidelines reflecting different social determinants of the disease association across gender and age groups.

Lee Kwang Sig, Park Kun Woo

2020-Sep

Age, Cerebrovascular Disease, Cognitive Impairment, Gender, Hearing Loss, Social Determinant

General General

Application of artificial intelligence models and optimization algorithms in plant cell and tissue culture.

In Applied microbiology and biotechnology

Artificial intelligence (AI) models and optimization algorithms (OA) are broadly employed in different fields of technology and science and have recently been applied to improve different stages of plant tissue culture. The usefulness of applying AI-OA has been demonstrated in the prediction and optimization of the length and number of microshoots or roots, of biomass in plant cell cultures or hairy root culture, and of environmental conditions to achieve maximum productivity and efficiency, as well as in the classification of microshoots and somatic embryos. Despite its potential, the use of AI and OA in this field has been limited by complex terminology and computational algorithms. Therefore, a systematic review to unravel modeling and optimization methods is important for plant researchers and is undertaken in this study. First, the main steps of AI-OA development (from data selection to the evaluation of prediction and classification models) are presented, along with several AI models such as artificial neural networks (ANNs), neurofuzzy logic, support vector machines (SVMs), decision trees, random forests (RF), and genetic algorithms (GA). Then, the application of AI-OA models in different steps of plant tissue culture is discussed and highlighted. This review also points out limitations in the application of AI-OA to different plant tissue culture processes and provides a new view for future study objectives. KEY POINTS: • Artificial intelligence models and optimization algorithms can be considered a novel and reliable computational method in plant tissue culture. • This review provides the main steps and concepts for model development. • The application of machine learning algorithms in different steps of plant tissue culture is discussed and highlighted.

Hesami Mohsen, Jones Andrew Maxwell Phineas

2020-Sep-28

Androgenesis, Computational approach, Data-driven model, Embryogenesis, In vitro culture, Machine learning algorithm, Organogenesis, Plant biotechnology, Rhizogenesis, Shoot proliferation

General General

A machine learning-based clinical decision support system to identify prescriptions with a high risk of medication error.

In Journal of the American Medical Informatics Association : JAMIA

OBJECTIVE : To improve patient safety and clinical outcomes by reducing the risk of prescribing errors, we tested the accuracy of a hybrid clinical decision support system in prioritizing prescription checks.

MATERIALS AND METHODS : Data from electronic health records were collated over a period of 18 months. Inferred scores at a patient level (probability of a patient's set of active orders requiring a pharmacist review) were calculated using a hybrid approach (machine learning and a rule-based expert system). A clinical pharmacist analyzed randomly selected prescription orders over a 2-week period to corroborate our findings. Predicted scores were compared with the pharmacist's review using the area under the receiver operating characteristic curve and the area under the precision-recall curve. These metrics were compared with existing tools: computerized alerts generated by a clinical decision support (CDS) system and a literature-based multicriteria query prioritization technique. Data from 10 716 individual patients (133 179 prescription orders) were used to train the algorithm on the basis of 25 features in a development dataset.
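The two evaluation metrics named above can be computed with scikit-learn. A minimal sketch on synthetic scores (not the study's data); with rare positives, the precision-recall curve is the more demanding of the two:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(3)

# Synthetic stand-in: roughly 1 in 5 orders needs a pharmacist review,
# and the model's risk scores are moderately informative.
needs_review = rng.random(2000) < 0.2
scores = needs_review * 1.0 + rng.normal(scale=1.0, size=2000)

auroc = roc_auc_score(needs_review, scores)
# Area under the precision-recall curve (average precision): more informative
# here because the positive class (orders needing review) is the rare one.
auprc = average_precision_score(needs_review, scores)
print(f"AUROC: {auroc:.2f}  AUPRC: {auprc:.2f}")
```

A useless scorer has AUROC 0.5 but AUPRC equal only to the positive prevalence (about 0.2 here), which is why the paper reports both.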

RESULTS : While the pharmacist analyzed 412 individual patients (3364 prescription orders) in an independent validation dataset, the areas under the receiver operating characteristic and precision-recall curves of our digital system were 0.81 and 0.75, respectively, thus demonstrating greater accuracy than the CDS system (0.65 and 0.56, respectively) and multicriteria query techniques (0.68 and 0.56, respectively).

DISCUSSION : Our innovative digital tool was notably more accurate than existing techniques (CDS system and multicriteria query) at intercepting potential prescription errors.

CONCLUSIONS : By primarily targeting high-risk patients, this novel hybrid decision support system improved the accuracy and reliability of prescription checks in a hospital setting.

Corny Jennifer, Rajkumar Asok, Martin Olivier, Dode Xavier, Lajonchère Jean-Patrick, Billuart Olivier, Bézie Yvonnick, Buronfosse Anne

2020-Sep-27

clinical, clinical pharmacy information systems, decision support systems, electronic prescribing, medication errors, supervised machine learning

General General

Functional analysis of BRCA1 RING domain variants: computationally derived structural data can improve upon experimental features for training predictive models.

In Integrative biology : quantitative biosciences from nano to macro

Advancements in the interpretation of variants of unknown significance are critical for improving clinical outcomes. In a recent study, massively parallel assays were used to experimentally quantify the effects of missense substitutions in the RING domain of BRCA1 on E3 ubiquitin ligase activity as well as BARD1 RING domain binding. These attributes were subsequently used for training a predictive model of homology-directed DNA repair levels for these BRCA1 variants relative to wild type, which is critical for tumor suppression. Here, relative structural changes characterizing BRCA1 variants were quantified by using an efficient and cost-free computational mutagenesis technique, and we show that these features lead to improvements in model performance. This work underscores the potential for bench researchers to gain valuable insights from computational tools, prior to implementing costly and time-consuming experiments.

Masso Majid

2020-Sep-28

computational mutagenesis, homology-directed DNA repair, machine learning, prediction, structure–function relationships, variants

General General

Clinical features of COVID-19 mortality: development and validation of a clinical prediction model.

In The Lancet. Digital health

Background : The COVID-19 pandemic has affected millions of individuals and caused hundreds of thousands of deaths worldwide. Predicting mortality among patients with COVID-19 who present with a spectrum of complications is very difficult, hindering the prognostication and management of the disease. We aimed to develop an accurate prediction model of COVID-19 mortality using unbiased computational methods, and identify the clinical features most predictive of this outcome.

Methods : In this prediction model development and validation study, we applied machine learning techniques to clinical data from a large cohort of patients with COVID-19 treated at the Mount Sinai Health System in New York City, NY, USA, to predict mortality. We analysed patient-level data captured in the Mount Sinai Data Warehouse database for individuals with a confirmed diagnosis of COVID-19 who had a health system encounter between March 9 and April 6, 2020. For initial analyses, we used patient data from March 9 to April 5, and randomly assigned (80:20) the patients to the development dataset or test dataset 1 (retrospective). Patient data for those with encounters on April 6, 2020, were used in test dataset 2 (prospective). We designed prediction models based on clinical features and patient characteristics during health system encounters to predict mortality using the development dataset. We assessed the resultant models in terms of the area under the receiver operating characteristic curve (AUC) score in the test datasets.

Findings : Using the development dataset (n=3841) and a systematic machine learning framework, we developed a COVID-19 mortality prediction model that showed high accuracy (AUC=0·91) when applied to test datasets of retrospective (n=961) and prospective (n=249) patients. This model was based on three clinical features: patient's age, minimum oxygen saturation over the course of their medical encounter, and type of patient encounter (inpatient vs outpatient and telehealth visits).

Interpretation : An accurate and parsimonious COVID-19 mortality prediction model based on three features might have utility in clinical settings to guide the management and prognostication of patients affected by this disease. External validation of this prediction model in other populations is needed.

Funding : National Institutes of Health.

Yadaw Arjun S, Li Yan-Chak, Bose Sonali, Iyengar Ravi, Bunyavanich Supinda, Pandey Gaurav

2020-Oct

Radiology Radiology

Deep learning-based triage and analysis of lesion burden for COVID-19: a retrospective study with external validation.

In The Lancet. Digital health

Background : Prompt identification of patients suspected to have COVID-19 is crucial for disease control. We aimed to develop a deep learning algorithm on the basis of chest CT for rapid triaging in fever clinics.

Methods : We trained a U-Net-based model on unenhanced chest CT scans obtained from 2447 patients admitted to Tongji Hospital (Wuhan, China) between Feb 1, 2020, and March 3, 2020 (1647 patients with RT-PCR-confirmed COVID-19 and 800 patients without COVID-19) to segment lung opacities and alert cases with COVID-19 imaging manifestations. The ability of artificial intelligence (AI) to triage patients suspected to have COVID-19 was assessed in a large external validation set, which included 2120 retrospectively collected consecutive cases from three fever clinics inside and outside the epidemic centre of Wuhan (Tianyou Hospital [Wuhan, China; area of high COVID-19 prevalence], Xianning Central Hospital [Xianning, China; area of medium COVID-19 prevalence], and The Second Xiangya Hospital [Changsha, China; area of low COVID-19 prevalence]) between Jan 22, 2020, and Feb 14, 2020. To validate the sensitivity of the algorithm in a larger sample of patients with COVID-19, we also included 761 chest CT scans from 722 patients with RT-PCR-confirmed COVID-19 treated in a makeshift hospital (Guanggu Fangcang Hospital, Wuhan, China) between Feb 21, 2020, and March 6, 2020. Additionally, the accuracy of AI was compared with a radiologist panel for the identification of lesion burden increase on pairs of CT scans obtained from 100 patients with COVID-19.

Findings : In the external validation set, using radiological reports as the reference standard, AI-aided triage achieved an area under the curve of 0·953 (95% CI 0·949-0·959), with a sensitivity of 0·923 (95% CI 0·914-0·932), specificity of 0·851 (0·842-0·860), a positive predictive value of 0·790 (0·777-0·803), and a negative predictive value of 0·948 (0·941-0·954). AI took a median of 0·55 min (IQR 0·43-0·63) to flag a positive case, whereas radiologists took a median of 16·21 min (11·67-25·71) to draft a report and 23·06 min (15·67-39·20) to release a report. With regard to the identification of increases in lesion burden, AI achieved a sensitivity of 0·962 (95% CI 0·947-1·000) and a specificity of 0·875 (95% CI 0·833-0·923). The agreement between AI and the radiologist panel was high (Cohen's kappa coefficient 0·839, 95% CI 0·718-0·940).
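The sensitivity, specificity, and chance-corrected agreement reported above come from standard confusion-matrix arithmetic. A minimal sketch on hypothetical reader calls (invented for illustration, not the study's 100 scan pairs):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical paired calls: 1 = lesion burden increased, 0 = not increased.
reference = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1])
ai_calls  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1])

tn, fp, fn, tp = confusion_matrix(reference, ai_calls).ravel()
sensitivity = tp / (tp + fn)           # true-positive rate
specificity = tn / (tn + fp)           # true-negative rate
kappa = cohen_kappa_score(reference, ai_calls)  # chance-corrected agreement
print(f"sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} kappa={kappa:.3f}")
```

On this toy panel of 20 calls, 9 of 10 increases and 9 of 10 non-increases are detected, so both rates are 0.9 and Cohen's kappa is 0.8; the confidence intervals in the paper would come from resampling or exact methods on the real counts.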

Interpretation : A deep learning algorithm for triaging patients with suspected COVID-19 at fever clinics was developed and externally validated. Given its high accuracy across populations with varied COVID-19 prevalence, integration of this system into the standard clinical workflow could expedite identification of chest CT scans with imaging indications of COVID-19.

Funding : Special Project for Emergency of the Science and Technology Department of Hubei Province, China.

Wang Minghuan, Xia Chen, Huang Lu, Xu Shabei, Qin Chuan, Liu Jun, Cao Ying, Yu Pengxin, Zhu Tingting, Zhu Hui, Wu Chaonan, Zhang Rongguo, Chen Xiangyu, Wang Jianming, Du Guang, Zhang Chen, Wang Shaokang, Chen Kuan, Liu Zheng, Xia Liming, Wang Wei

2020-Oct

General General

The Responsibility of Social Media in Times of Societal and Political Manipulation.

In European journal of operational research

The way electorates were influenced to vote for the Brexit referendum, and in presidential elections both in Brazil and the USA, has accelerated a debate about whether and how machine learning techniques can influence citizens' decisions. The access to balanced information is endangered if digital political manipulation can influence voters. The techniques of profiling and targeting on social media platforms can be used for advertising as well as for propaganda: Through tracking of a person's online behaviour, algorithms of social media platforms can create profiles of users. These can be used for the provision of recommendations or pieces of information to specific target groups. As a result, propaganda and disinformation can influence the opinions and (election) decisions of voters much more powerfully than previously. In order to counter disinformation and societal polarization, the paper proposes a responsibility-based approach for social media platforms in diverse political contexts. Based on the implementation requirements of the "Ethics Guidelines for Trustworthy Artificial Intelligence" of the European Commission, the ethical principles will be operationalized, as far as they are directly relevant for the safeguarding of democratic societies. The resulting suggestions show how the social media platform providers can minimize risks for societies through responsible action in the fields of human rights, education and transparency of algorithmic decisions.

Reisach Ulrike

2020-Sep-22

Artificial intelligence, Behavioral OR, Decision-making, Education, Ethics in OR

General General

Mitigating the Impact of the Novel Coronavirus Pandemic on Neuroscience and Music Research Protocols in Clinical Populations.

In Frontiers in psychology ; h5-index 92.0

The COVID-19 disease and the systemic responses to it have impacted lives, routines and procedures at an unprecedented level. While medical care and emergency response present immediate needs, the implications of this pandemic will likely be far-reaching. Most practices that clinical research in the neuroscience and music field relies on take place in hospitals or closely connected clinical settings, which have been hit hard by the contagion, as have its preventive and treatment measures. This means that clinical research protocols may have been altered, postponed or put in complete jeopardy. In this context, we would like to present and discuss the problems arising under the current crisis. We do so by critically examining an online discussion facilitated by an expert panel in the field of music and neuroscience. This effort is hoped to provide an efficient basis for orienting ourselves as we begin to map the needs and elements in this field of research, and we further propose ideas and solutions on how to overcome, or at least ease, with foresight, the problems and questions we encounter or will encounter. Among others, we hope to answer questions on the technical and social problems that can be expected, on possible solutions and preparatory steps for improving or easing research implementation, and on ethical implications and funding considerations. Finally, we hope to facilitate the process of creating new protocols in order to minimize the impact of this crisis on essential research with the potential to relieve health systems.

Papatzikis Efthymios, Zeba Fathima, Särkämö Teppo, Ramirez Rafael, Grau-Sánchez Jennifer, Tervaniemi Mari, Loewy Joanne

2020

COVID-19, music and neuroscience, music and neuroscience research protocols, music therapy, research crisis response

General General

Micro-Facial Expression Recognition in Video Based on Optimal Convolutional Neural Network (MFEOCNN) Algorithm

ArXiv Preprint

Facial expression is one of the most important features for human emotion recognition: people use facial expressions to demonstrate their emotional states. Nevertheless, recognizing facial expressions remains a challenging and interesting problem in computer vision. The main objective of the proposed approach is recognizing micro-facial expressions in video sequences. For efficient recognition, the proposed method utilizes an optimized convolutional neural network, taking the CK+ dataset as input. First, preprocessing is performed on the input image by means of adaptive median filtering. From the preprocessed output, geometric features, histogram of oriented gradients features and local binary pattern features are extracted. The novelty of the proposed method is that the optimal features are selected from the extracted features with the help of a Modified Lion Optimization (MLO) algorithm, which converges rapidly and selects features effectively within a short computational time. Finally, recognition is performed by a convolutional neural network (CNN). The performance of the proposed MFEOCNN method is then analysed in terms of false measures and recognition accuracy. This kind of emotion recognition is mainly used in medicine, marketing, e-learning, entertainment, law and monitoring. The simulations show that the proposed approach achieves a maximum recognition accuracy of 99.2% with a minimum Mean Absolute Error (MAE) value. These results are compared with the existing methods Micro-Facial Expression Based Deep-Rooted Learning (MFEDRL), Convolutional Neural Network with Lion Optimization (CNN+LO) and Convolutional Neural Network (CNN) without optimization. The proposed method is simulated in the MATLAB working platform.

S. D. Lalitha, K. K. Thyagharajan

2020-09-29

General General

Artificial intelligence in COVID-19 drug repurposing.

In The Lancet. Digital health

Drug repurposing or repositioning is a technique whereby existing drugs are used to treat emerging and challenging diseases, including COVID-19. Drug repurposing has become a promising approach because of the opportunity for reduced development timelines and overall costs. In the big data era, artificial intelligence (AI) and network medicine offer cutting-edge applications of information science to defining disease, medicine, and therapeutics, and to identifying targets with the least error. In this Review, we introduce guidelines on how to use AI to accelerate drug repurposing or repositioning, a task for which AI approaches are not only powerful but also necessary. We discuss how to use AI models in precision medicine and, as an example, how AI models can accelerate COVID-19 drug repurposing. Rapidly developing, powerful, and innovative AI and network medicine technologies can expedite therapeutic development. This Review provides a strong rationale for using AI-based assistive tools for repurposing medications for human disease, including during the COVID-19 pandemic.

Zhou Yadi, Wang Fei, Tang Jian, Nussinov Ruth, Cheng Feixiong

2020-Sep-18

General General

Challenges and Opportunities of Preclinical Medical Education: COVID-19 Crisis and Beyond.

In SN comprehensive clinical medicine

The COVID-19 pandemic has disrupted face-to-face teaching in medical schools globally. The use of remote learning as an emergency measure has affected students, faculty, support staff, and administrators. The aim of this narrative review is to examine the challenges and opportunities faced by medical schools in implementing remote learning for basic science teaching in response to the COVID-19 crisis. We searched the relevant literature in PubMed, Scopus, and Google Scholar using specific keywords, e.g., "COVID-19 pandemic," "preclinical medical education," "online learning," "remote learning," "challenges," and "opportunities." The pandemic has posed several challenges to preclinical medical education (e.g., suspension of face-to-face teaching and lack of cadaveric dissections and practical/laboratory sessions) but has also provided many opportunities, such as the incorporation of online learning into the curriculum and upskilling and reskilling in new technologies. To date, many medical schools have successfully transitioned their educational environment to emergency remote teaching and assessment. During the COVID-19 crisis, the preclinical phase of medical curricula has successfully introduced a novel culture of "online home learning" using technology-oriented innovations, which may extend into the post-COVID era to maintain teaching and learning in medical education. However, the lack of hands-on training in the preclinical years may have serious implications for the training of the current cohort of students, who may struggle later in the clinical years. Emergent technologies (e.g., artificial intelligence for adaptive learning, virtual simulation, and telehealth) are likely to be indispensable components of this transformative change and of post-COVID medical education.

Gaur Uma, Majumder Md Anwarul Azim, Sa Bidyadhar, Sarkar Sankalan, Williams Arlene, Singh Keerti

2020-Sep-22

COVID-19 pandemic, Challenges, Online learning, Opportunities, Preclinical medical education, Remote learning

General General

Use of computational intelligence techniques to predict flooding in places adjacent to the Magdalena River.

In Heliyon

Floods are among the worst natural disasters in the world, and Colombia has been greatly affected by them. For example, the heavy rainy season of 2010 and 2011 caused floods that affected at least two million people, with economic losses of 6.5 million dollars, equivalent to 5.7% of the country's Gross Domestic Product (GDP) at that time. The Magdalena River is the country's most important river: 128 municipalities and 43 cities with a population of 6.3 million people, 13% of the country's total, are located in its basin. The objective of this research is therefore to design and implement a model that helps predict flooding of the Magdalena River by examining three artificial intelligence techniques (Artificial Neural Networks, the Adaptive Neuro-Fuzzy Inference System, and Support Vector Machines) and determining which of them is the most effective for this case study. The research was limited to these three techniques owing to constraints of time, data, human and financial resources, and technological infrastructure. We conclude that Artificial Neural Networks are a suitable option for implementing the predictive system, as long as the network is not very complex and does not require a high-performance machine. However, to establish a rule-based model with better interpretability of the floods, the ANFIS model can be used.
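As a sketch of the simplest of the three techniques, a single-neuron (logistic) classifier trained by gradient descent can already learn a flood/no-flood decision from toy hydrological inputs. The features, values, and normalization below are invented and are far simpler than the paper's models.

```python
# Toy single-neuron flood classifier trained by gradient descent,
# a minimal stand-in for the ANN compared in the paper. All data
# (river level, rainfall, labels) are invented.
import math

def train(samples, labels, lr=0.5, epochs=2000):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5

# (river level m, rainfall mm/day) -> flood yes/no
data = [(2.0, 10.0), (2.5, 30.0), (6.0, 80.0), (7.0, 120.0)]
flood = [0, 0, 1, 1]
# normalize inputs to keep the gradients well behaved
data = [(lvl / 10.0, rain / 100.0) for lvl, rain in data]
w, b = train(data, flood)
print(predict(w, b, (0.65, 0.9)))  # a high-level, heavy-rain day
```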

Moreno Jenny Marcela, Sánchez Juan Manuel, Espitia Helbert Eduardo

2020-Sep

Artificial Neural Networks, Artificial intelligence, Climate variability, Climatology, Computer engineering, Control systems, Earth sciences, Environmental economics, Environmental science, Flood, Magdalena River, Neuro Fuzzy Systems, Support Vector Machine, Systems engineering

Radiology Radiology

The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database.

In NPJ digital medicine

At the beginning of the artificial intelligence (AI)/machine learning (ML) era, expectations are high, and experts foresee that AI/ML shows potential for diagnosing, managing, and treating a wide variety of medical conditions. However, the obstacles to implementing AI/ML in daily clinical practice are numerous, especially regarding the regulation of these technologies. We therefore provide an insight into the currently available AI/ML-based medical devices and algorithms that have been approved by the US Food and Drug Administration (FDA). We aimed to raise awareness of the importance of regulatory bodies clearly stating whether a medical device is AI/ML-based or not. Cross-checking and validating all approvals, we identified 64 AI/ML-based, FDA-approved medical devices and algorithms. Of those, only 29 (45%) mentioned any AI/ML-related expressions in the official FDA announcement. The majority (85.9%) were approved with a 510(k) clearance, while 8 (12.5%) received de novo pathway clearance and one (1.6%) premarket approval (PMA). Most of these technologies, namely 30 (46.9%), 16 (25.0%), and 10 (15.6%), were developed for the fields of Radiology, Cardiology, and Internal Medicine/General Practice, respectively. We have launched the first comprehensive, open-access database of strictly AI/ML-based medical technologies approved by the FDA. The database will be constantly updated.
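The pathway shares reported above follow directly from the raw counts (64 approvals in total, 8 de novo, 1 PMA, the remainder via 510(k)); a few lines verify the arithmetic:

```python
# Sanity check of the approval-pathway shares reported above.
total, de_novo, pma = 64, 8, 1
five_ten_k = total - de_novo - pma  # the remaining 510(k) clearances
share = lambda n: round(100 * n / total, 1)
print(five_ten_k, share(five_ten_k), share(de_novo), share(pma))
# 55 85.9 12.5 1.6
```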

Benjamens Stan, Dhunnoo Pranavsingh, Meskó Bertalan

2020

Health services, Outcomes research

General General

Data acquisition of timed-up and go test with older adults: accelerometer, magnetometer, electrocardiography and electroencephalography sensors' data.

In Data in brief

We present a dataset of sensor data acquired during performance of the Timed-Up and Go test, with a mobile device positioned in a waistband for the acquisition of accelerometer and magnetometer data, and a BITalino device positioned in a chest band for the acquisition of electrocardiography and electroencephalography data for further processing. The BITalino data are acquired simultaneously over a Bluetooth connection to the same mobile application. The data were acquired in five institutions: Centro Comunitário das Lameiras, Lar Nossa Senhora de Fátima, Centro Comunitário das Minas da Panasqueira, Lar da Misericórdia da Santa Casa da Misericórdia do Fundão, and Lar da Aldeia de Joanes da Santa Casa da Misericórdia do Fundão, in the Fundão and Covilhã municipalities (Portugal). For the accelerometer and magnetometer data, several subjects from these institutions each performed the Timed-Up and Go test three times, with every sensor output sampled at 100 Hz. For the sensors connected to the BITalino device, 31 persons performed the experiments, but only the electroencephalography and electrocardiography data from 14 individuals were considered valid; the BITalino device also sampled at 100 Hz. These data can be reused for testing machine learning methods that evaluate the performance of the Timed-Up and Go test with older adults.
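A typical first processing step for such an accelerometer stream is the per-sample acceleration magnitude, whose deviation from gravity (about 9.81 m/s²) marks the movement phases of the test. The sample values below are invented.

```python
# Acceleration magnitude per 100 Hz sample; rest sits near gravity,
# movement (e.g. sit-to-stand) pushes the magnitude away from it.
# Sample values are invented for illustration.
import math

def magnitude(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

samples = [(0.1, 0.2, 9.8), (1.2, 0.4, 9.9), (2.5, 1.0, 10.4)]  # m/s^2
mags = [round(magnitude(*s), 2) for s in samples]
print(mags)  # rises as the movement begins
```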

Ponciano Vasco, Pires Ivan Miguel, Ribeiro Fernando Reinaldo, Garcia Nuno M

2020-Oct

Accelerometer, Electrocardiography, Electroencephalography, Health, Magnetometer, Mobile devices, Sensors, Timed-up and go test

General General

Synthetic database of space objects encounter events subject to epistemic uncertainty.

In Data in brief

The databases included in this article contain variables and parameters from the fields of Space Traffic Management (STM), Evidence Theory, and Machine Learning (ML). They have been used to implement ML for autonomously predicting the risk associated with a close encounter between two space objects (Sanchez and Vasile, On the Use of Machine Learning and Evidence Theory to Improve Collision Risk Management, Acta Astronautica, Special Issue for ICSSA2020, In Press [1]). The positions of the objects are assumed to be affected by epistemic uncertainty, which has been modeled according to Dempster-Shafer evidence theory (DSt) [2]. Six datasets are presented. Two (DB1 and DB2) include samples of space-object close encounters subject to epistemic uncertainty on the relative position. Two further databases (DB3 and DB4) include the values of the Cumulative Plausibility and Belief Curves (CPC and CBC, respectively) of each sample included in DB1. The remaining databases (DB5 and DB6) contain the CPC and CBC values of each sample included in DB2. All of them are synthetic databases created by computer simulation to obtain the results presented in [1]. DB1 consists of 9,000 samples with 45 columns and a header, while DB2 consists of 28,800 samples with 45 columns and a header. These databases come from sets of, respectively, 5 and 14 different families of encounter geometries, defined by the range of values that can be assigned to the bounds of the intervals for the uncertain variables, assumed to be affected by epistemic uncertainty and considered to have been provided by two sources of information. The uncertain variables are: the miss distance on the impact plane (B plane), [µx, µy]; the standard deviation of the relative position projected on the B plane, [σx, σy]; and the Hard Body Radius of the combined objects, HBR.
The datasets are completed with STM-related parameters (the miss distance and covariance matrix of the uncertainty ellipse projected on the B plane enclosing all samples defined by the uncertainty intervals, the probability of collision (Pc) of this ellipse, and the elapsed time to the Time of Closest Approach (TCA)), with DSt-related parameters (Belief and Plausibility of certain values of Pc), and with the class of the event according to the classification detailed in [1]. DB3 and DB4 consist of 34 columns and 9,000 rows containing the Plausibility and Belief for Pc values and the corresponding probabilities of collision needed to build the CPC and CBC of the events in DB1, while DB5 and DB6 consist of 34 columns and 28,800 rows containing the same quantities for the events in DB2. These databases are potentially useful to the ML community interested in STM as well as to the space community, especially space operators interested in introducing epistemic uncertainty into collision risk assessment. They also help populate a scarce resource: databases of encounter events [3].
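The Belief and Plausibility quantities behind the CPC and CBC can be sketched with a few lines of Dempster-Shafer bookkeeping: given focal elements (interval, mass) for Pc, Belief of the event "Pc ≤ t" sums the masses of intervals entirely inside it, while Plausibility sums the masses of intervals that merely intersect it. The focal elements below are invented.

```python
# Belief/Plausibility over focal elements (interval, mass) for the
# collision probability Pc; masses below are invented.

def belief(focal, t):
    """Mass of evidence that certainly supports Pc <= t."""
    return sum(m for (lo, hi), m in focal if hi <= t)

def plausibility(focal, t):
    """Mass of evidence that does not contradict Pc <= t."""
    return sum(m for (lo, hi), m in focal if lo <= t)

# intervals of Pc with their assigned masses (summing to 1)
focal = [((1e-7, 1e-5), 0.5), ((1e-6, 1e-4), 0.3), ((1e-5, 1e-3), 0.2)]
for t in (1e-5, 1e-4, 1e-3):
    print(t, belief(focal, t), plausibility(focal, t))  # Bel <= Pl
```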

Sánchez Luis, Vasile Massimiliano

2020-Oct

Collision risk assessment, Epistemic uncertainty, Evidence theory, Risk assessment, Space traffic management

General General

HoloSelecta dataset: 10'035 GTIN-labelled product instances in vending machines for object detection of packaged products in retail environments.

In Data in brief

To assess the potential of current neural network architectures to reliably identify packaged products within a retail environment, we created an open-source dataset of 295 shelf images of vending machines with 10'035 labelled instances of 109 products. The dataset contains photos of vending machines by the provider Selecta, the largest European operator of vending machines. The machines are a mix of machines in public and private office spaces and contain food as well as beverage products. The product instances in the images are labelled with bounding boxes, where each bounding box encapsulates the entire product with as little overlap as possible. Each bounding box carries a structured, human-readable label including brand, product name, and size, as well as the GTIN of the product. The GTIN is the global standard for identifying products in the retail environment, which increases the dataset's value for the retail industry. In contrast to typical object-detection datasets, which use higher-level labels such as "can" or "bottle" for a much wider variety of objects, this dataset uses far more detailed labels that depend less on the shape and more on the exact design of the product. The dataset falls into the category of object-detection datasets with a large number of objects, which, next to the GTIN labels, is a main differentiator from other object-detection datasets.
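Detectors trained on such a dataset are typically scored by the intersection-over-union (IoU) between predicted and labelled boxes; a minimal sketch, with a made-up GTIN record in the spirit of the labels described above:

```python
# Intersection-over-union between two (x1, y1, x2, y2) boxes, the
# standard overlap metric for object detection. The label record and
# GTIN below are invented examples, not entries from the dataset.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

label = {"gtin": "7610000000000",  # made-up GTIN for illustration
         "name": "example bar 45g",
         "box": (10, 20, 60, 120)}
prediction = (15, 25, 60, 120)
print(round(iou(label["box"], prediction), 3))  # 0.855
```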

Fuchs K, Grundmann T, Haldimann M, Fleisch E

2020-Oct

Computer vision, Deep learning, GTIN, Object detection, Packaged products

General General

MYNursingHome: A fully-labelled image dataset for indoor object classification.

In Data in brief

A fully labelled image dataset serves as a valuable tool for reproducible research and data processing in various computational areas, such as machine learning, computer vision, artificial intelligence, and deep learning. Today's research on ageing aims to increase awareness of research results and their applications, to assist the public and private sectors in selecting the right equipment for the elderly. Much research on the development of support devices and care equipment has been done to improve the elderly's quality of life. Indoor object detection and classification for autonomous systems requires large annotated indoor image collections for training and testing smart computer vision applications. MYNursingHome is an image dataset of objects commonly found around the elderly in their care homes; researchers may use it to build recognition aids for the elderly. The dataset was collected from several nursing homes in Malaysia and comprises 37,500 digital images from 25 different indoor object categories, including basket bin, bed, bench, cabinet, and others.

Ismail Asmida, Ahmad Siti Anom, Che Soh Azura, Hassan Mohd Khair, Harith Hazreen Haizi

2020-Oct

Deep learning, Image dataset, Indoor objects, Object classification, Object detection

Radiology Radiology

The Framingham Heart Study: Populational CT-based phenotyping in the lungs and mediastinum.

In European journal of radiology open

The Framingham Heart Study (FHS) is one of the largest and most established longitudinal population cohorts. The CT cohorts of the FHS since 2002 have provided a unique opportunity to assess non-cardiac thoracic imaging findings. This review covers image-based phenotyping studies from recent major publications regarding interstitial lung abnormalities (ILAs), pulmonary cysts, emphysema, pulmonary nodules, pleural plaques, the normal spectrum of the thymus, and anterior mediastinal masses, concluding with a discussion of future directions of FHS CT cohort studies in the era of radiomics and artificial intelligence.

Araki Tetsuro, Washko George R, Schiebler Mark L, O’Connor George T, Hatabu Hiroto

2020

CT, Framingham Heart Study, Interstitial lung abnormalities, Lung, Pleural plaques, Thymus

General General

Inferring Relationship of Blood Metabolic Changes and Average Daily Gain With Feed Conversion Efficiency in Murrah Heifers: Machine Learning Approach.

In Frontiers in veterinary science ; h5-index 25.0

Machine learning algorithms were employed to predict feed conversion efficiency (FCE) in buffalo heifers, using blood parameters and average daily gain (ADG) as predictor variables. Isotonic regression outperformed the other machine learning algorithms used in the study. The best performance evaluation metrics were achieved by a model with additive regression as the meta learner and isotonic regression as the base learner, on both 10-fold cross-validation and leave-one-out cross-validation tests. Further, we created three separate partial least squares regression (PLSR) models using all 14 blood parameters and ADG as independent (explanatory) variables and FCE as the dependent variable, to understand the interactions of the blood parameters and ADG with FCE: (i) including all FCE values, (ii) including only higher FCE values (negative RFI), and (iii) including only lower FCE values (positive RFI). The PLSR model including only the higher FCE values was found to be the best, based on performance evaluation metrics, compared with the models built on the lower FCE values and on all FCE values. IGF1 and its interactions with the other blood parameters were found to be highly influential for higher FCE measures. The strength of the estimated interaction effects of the blood parameters in relation to FCE may facilitate understanding of the intricate dynamics of blood parameters underlying growth.
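Isotonic regression, the best-performing base learner here, reduces to the pool-adjacent-violators algorithm (PAVA); a minimal sketch on invented numbers (the real model maps blood parameters and ADG to FCE):

```python
# Pool-adjacent-violators algorithm (PAVA), the core of isotonic
# regression: fit the closest nondecreasing sequence in least squares.
# Input values are invented for illustration.

def isotonic_fit(y):
    """Return the nondecreasing sequence closest to y in least squares."""
    # each block holds [sum, count]; violating neighbours are pooled
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each block contributes its mean
    return out

print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```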

Sikka Poonam, Nath Abhigyan, Paul Shyam Sundar, Andonissamy Jerome, Mishra Dwijesh Chandra, Rao Atmakuri Ramakrishna, Balhara Ashok Kumar, Chaturvedi Krishna Kumar, Yadav Keerti Kumar, Balhara Sunesh

2020

blood, buffalo, feed conversion efficiency, partial least square regression, prediction models

General General

The use of artificial intelligence in computed tomography image reconstruction - A literature review.

In Journal of medical imaging and radiation sciences

BACKGROUND AND PURPOSE : The use of AI in the process of CT image reconstruction may improve image quality of resultant images and therefore facilitate low-dose CT examinations.

METHODS : Articles in this review were gathered from multiple databases (Google Scholar, Ovid, and the Monash University Library Database). A total of 17 articles regarding AI use in CT image reconstruction were reviewed, including one white paper from GE Healthcare.

RESULTS : Deep learning reconstruction (DLR) algorithms performed better in terms of noise-reduction ability and image-quality preservation at low doses when compared with other reconstruction techniques.

CONCLUSION : Further research is required to establish the clinical applicability and diagnostic accuracy of DLR algorithms, but AI is a promising dose-reduction technique that will benefit from future computational advances.

Zhang Ziyu, Seeram Euclid

2020-Sep-24

Convolutional neural networks, Deep learning, Dose reduction, Generative adversarial networks, Machine learning

General General

An optimized deep learning architecture for the diagnosis of COVID-19 disease based on gravitational search optimization.

In Applied soft computing

In this paper, a novel approach called GSA-DenseNet121-COVID-19, based on a hybrid convolutional neural network (CNN) architecture and an optimization algorithm, is proposed. The CNN architecture used is DenseNet121, and the optimization algorithm is the gravitational search algorithm (GSA). The GSA determines the best values for the hyperparameters of the DenseNet121 architecture, helping it achieve high accuracy in diagnosing COVID-19 from chest X-ray images. The proposed approach correctly classified 98.38% of the test set. To test the efficacy of the GSA in setting the optimal hyperparameter values for DenseNet121, it was compared with another approach, SSD-DenseNet121, which combines DenseNet121 with the social ski driver (SSD) optimization algorithm. The comparison demonstrated the efficacy of GSA-DenseNet121-COVID-19: it diagnosed COVID-19 better than SSD-DenseNet121, which classified only 94% of the test set correctly. The proposed approach was also compared with a method based on the Inception-v3 CNN architecture with manually searched hyperparameter values; GSA-DenseNet121-COVID-19 again performed better, as the latter classified only 95% of the test set samples correctly. Finally, GSA-DenseNet121-COVID-19 was compared with related work and shown to be highly competitive.
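The role the GSA plays, searching hyperparameter space for values that maximize validation accuracy, can be sketched with plain random search over a toy objective. The search space, objective function, and its optimum below are all invented; the real approach trains DenseNet121 at each candidate and uses the GSA rather than random sampling.

```python
# Toy hyperparameter search. Plain random search stands in for the
# gravitational search algorithm; the objective is a made-up proxy
# for "validation accuracy after training DenseNet121".
import random

random.seed(0)
space = {"learning_rate": (1e-5, 1e-2), "dropout": (0.0, 0.5)}

def sample():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in space.items()}

def objective(params):
    # invented surrogate that peaks near lr = 1e-3, dropout = 0.2
    return -((params["learning_rate"] - 1e-3) ** 2 * 1e4
             + (params["dropout"] - 0.2) ** 2)

best = max((sample() for _ in range(200)), key=objective)
print(best)
```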

Ezzat Dalia, Hassanien Aboul Ella, Ella Hassan Aboul

2020-Sep-22

Convolutional neural networks, Deep learning, Gravitational search algorithm, Hyperparameters optimization, SARS-CoV-2, Transfer learning

Pathology Pathology

Automatic Segmentation, Localization, and Identification of Vertebrae in 3D CT Images Using Cascaded Convolutional Neural Networks

ArXiv Preprint

This paper presents a method for automatic segmentation, localization, and identification of vertebrae in arbitrary 3D CT images. Many previous works do not perform the three tasks simultaneously and require a priori knowledge of which part of the anatomy is visible in the 3D CT images. Our method tackles all three tasks in a single multi-stage framework without any such assumptions. In the first stage, we train a 3D fully convolutional network to find the bounding boxes of the cervical, thoracic, and lumbar vertebrae. In the second stage, we train an iterative 3D fully convolutional network to segment individual vertebrae within each bounding box. The input to the second network has an auxiliary channel in addition to the 3D CT images: given the already-segmented vertebra regions in the auxiliary channel, the network outputs the next vertebra. The proposed method is evaluated in terms of segmentation, localization, and identification accuracy on two public datasets: 15 3D CT images from the MICCAI CSI 2014 workshop challenge and 302 3D CT images with various pathologies introduced in [1]. Our method achieved a mean Dice score of 96%, a mean localization error of 8.3 mm, and a mean identification rate of 84%, outperforming all existing works on all three metrics.
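The mean Dice score reported above measures volume overlap between predicted and ground-truth masks; a minimal sketch on voxel index sets (the toy masks below are invented):

```python
# Dice overlap between two voxel masks, here represented as sets of
# (x, y, z) indices; 1.0 means perfect agreement.

def dice(pred, truth):
    pred, truth = set(pred), set(truth)
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# toy cubes: prediction misses one slice of the ground truth
truth = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
pred = {(x, y, z) for x in range(1, 10) for y in range(10) for z in range(10)}
print(round(dice(pred, truth), 3))  # 0.947
```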

Naoto Masuzawa, Yoshiro Kitamura, Keigo Nakamura, Satoshi Iizuka, Edgar Simo-Serra

2020-09-29

General General

Dental Characteristics of Different Types of Cleft and Non-cleft Individuals.

In Frontiers in cell and developmental biology

Objective : The objective of this study was to compare 14 different dental characteristics (DC), measured by a novel artificial intelligence (AI)-driven lateral cephalometric (Late. Ceph.) analysis, among individuals with different types of cleft lip and palate (CLP) and non-cleft (NC) individuals.

Materials and Methods : A retrospective study was conducted on 123 individuals [31 NC, 29 BCLP (bilateral cleft lip and palate), 41 UCLP (unilateral cleft lip and palate), 9 UCLA (unilateral cleft lip and alveolus), and 13 UCL (unilateral cleft lip)] with an average age of 14.77 years. Demographic details were gathered from clinical records. The novel artificial intelligence-driven Webceph software was used for the Late. Ceph. analysis. A total of 14 angular and linear DC measurements were analyzed and compared among groups. Two-way ANOVA and multiple-comparison tests were applied to assess differences between genders and among the different types of CLP versus NC subjects.

Results : Of the 14 DC tested, no significant gender disparities were found (p > 0.05). Across the different types of CLP versus NC subjects, 8 of the 14 DC were statistically significant (p < 0.001 to p = 0.03). The remaining six DC showed no significant (p > 0.05) alterations in relation to the type of CLP.

Conclusion : Based on the results, type of CLP revealed significantly altered DC compared to NC. Among different types of CLP, BCLP exhibited a maximum alteration in different DC.

Alam Mohammad Khursheed, Alfawzan Ahmed Ali

2020

bilateral cleft lip and palate, dental characteristics, incisal display, non-syndromic cleft lip and palate, overbite, overjet, unilateral cleft lip and palate

General General

Named Entity Recognition and Relation Detection for Biomedical Information Extraction.

In Frontiers in cell and developmental biology

The number of scientific publications in the literature is steadily growing, containing our knowledge in the biomedical, health, and clinical sciences. Since there is currently no automatic archiving of the obtained results, much of this information remains buried in textual details not readily available for further usage or analysis. For this reason, natural language processing (NLP) and text mining methods are used for information extraction from such publications. In this paper, we review practices for Named Entity Recognition (NER) and Relation Detection (RD), which allow one, for example, to identify interactions between proteins and drugs or between genes and diseases. This information can be integrated into networks to summarize large-scale details on a particular biomedical or clinical problem, which is then amenable to easy data management and further analysis. Furthermore, we survey novel deep learning methods that have recently been introduced for such tasks.
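The simplest NER approach covered by such surveys is a dictionary (gazetteer) lookup, which the deep learning methods discussed above replace with trained sequence models; the toy gazetteer and sentence below are invented:

```python
# Minimal dictionary-based NER: tag tokens found in a small gazetteer.
# Entries and entity types are invented for illustration.

GAZETTEER = {
    "aspirin": "DRUG",
    "ibuprofen": "DRUG",
    "cox-1": "PROTEIN",
    "tp53": "GENE",
}

def tag(sentence):
    entities = []
    for token in sentence.replace(".", "").split():
        label = GAZETTEER.get(token.lower())
        if label:
            entities.append((token, label))
    return entities

print(tag("Aspirin inhibits COX-1."))
# [('Aspirin', 'DRUG'), ('COX-1', 'PROTEIN')]
```

A real system must also handle multi-word entities, spelling variants, and ambiguous mentions, which is what the surveyed sequence models are for.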

Perera Nadeesha, Dehmer Matthias, Emmert-Streib Frank

2020

artificial intelligence, deep learning, information extraction, named entity recognition, natural language processing, relation detection, text analytics, text mining

General General

Toward a Closed Loop, Integrated Biocompatible Biopolymer Wound Dressing Patch for Detection and Prevention of Chronic Wound Infections.

In Frontiers in bioengineering and biotechnology

Chronic wound infections represent a significant burden to healthcare providers globally. Often, chronic wound healing is impeded by the presence of infection within the wound or wound bed. This can result in increased healing time, higher healthcare costs, and poor patient outcomes. Thus, there is a need for dressings that help the wound heal, combined with early detection of wound infections to support prompt treatment. In this study, we demonstrate a novel, biocompatible wound dressing material, based on polyhydroxyalkanoates doped with graphene platelets, which can be used as an electrochemical sensing substrate for the detection of a common wound pathogen, Pseudomonas aeruginosa. Through the detection of the redox-active secondary metabolite pyocyanin, we demonstrate that a dressing can be produced that will detect the presence of pyocyanin across clinically relevant concentrations. Furthermore, we show that this sensor can be used to identify the presence of pyocyanin in a culture of P. aeruginosa. Overall, the sensor substrate presented in this paper represents the first step toward a new dressing with the capacity to promote wound healing, detect the presence of infection, and release antimicrobial drugs on demand to optimize healing.

Ward Andrew C, Dubey Prachi, Basnett Pooja, Lika Granit, Newman Gwenyth, Corrigan Damion K, Russell Christopher, Kim Jongrae, Chakrabarty Samit, Connolly Patricia, Roy Ipsita

2020

Polyhydroxyalkanoates, Pseudomonas aeruginosa, artificial intelligence, biopolymer, electrochemical, graphene, pyocyanin, wound dressing

General General

Automated Detection of Acute Lymphoblastic Leukemia From Microscopic Images Based on Human Visual Perception.

In Frontiers in bioengineering and biotechnology

Microscopic image analysis plays a significant role in initial leukemia screening and its efficient diagnostics. Since the present conventional methodologies partly rely on manual examination, which is time consuming and depends greatly on the experience of domain experts, automated leukemia detection opens up new possibilities to minimize human intervention and provide more accurate clinical information. This paper proposes a novel approach based on conventional digital image processing techniques and machine learning algorithms to automatically identify acute lymphoblastic leukemia from peripheral blood smear images. To overcome the greatest challenges in the segmentation phase, we implemented extensive pre-processing and introduced a three-phase filtration algorithm to achieve the best segmentation results. Moreover, sixteen robust features were extracted from the images in the way that hematological experts do, which significantly increased the capability of the classifiers to recognize leukemic cells in microscopic images. For classification, we applied two traditional machine learning classifiers, an artificial neural network and a support vector machine. Both methods reached a specificity of 95.31%, while the sensitivities of the support vector machine and the artificial neural network reached 98.25% and 100%, respectively.
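Sensitivity and specificity, the headline metrics above, come straight from the confusion matrix; the cell counts below are invented, but chosen so the resulting rates land close to those reported:

```python
# Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP).
# Counts are invented for illustration, not taken from the paper.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 112 of 114 leukemic cells caught, 122 of 128 healthy cells cleared
sensitivity, specificity = sens_spec(tp=112, fn=2, tn=122, fp=6)
print(round(100 * sensitivity, 2), round(100 * specificity, 2))
# 98.25 95.31
```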

Bodzas Alexandra, Kodytek Pavel, Zidek Jan

2020

acute leukemia, automated leukemia detection, blood smear image analysis, cell segmentation, image processing, leukemic cell identification, machine learning

General General

Exploring the Contribution of Proprioceptive Reflexes to Balance Control in Perturbed Standing.

In Frontiers in bioengineering and biotechnology

Humans control balance using different feedback loops involving the vestibular system, the visual system, and proprioception. In this article, we focus on proprioception and explore the contribution of reflexes based on force and length feedback to standing balance. In particular, we address the questions of how much proprioception alone could explain balance control, and whether one modality, force or length feedback, is more important than the other. A sagittal plane neuro-musculoskeletal model was developed with six degrees of freedom and nine muscles in each leg. A controller was designed using proprioceptive reflexes and a dead zone. No feedback control was applied inside the dead zone. Reflexes were active once the center of mass moved outside the dead zone. Controller parameters were found by solving an optimization problem, where effort was minimized while the neuro-musculoskeletal model should remain standing upright on a perturbed platform. The ground was perturbed with random square pulses in the sagittal plane with different amplitudes and durations. The optimization was solved for three controllers: using force and length feedback (base model), using only force feedback, and using only length feedback. Simulations were compared to human data from previous work, where an experiment with the same perturbation signal was performed. The optimized controller yielded a similar posture, since average joint angles were within 5 degrees of the experimental average joint angles. The joint angles of the base model, the length only model, and the force only model correlated weakly (ankle) to moderately with the experimental joint angles. The ankle moment correlated weakly to moderately with the experimental ankle moment, while the hip and knee moment were only weakly correlated, or not at all. The time series of the joint angles showed that the length feedback model was better able to explain the experimental joint angles than the force feedback model. 
Changes in time delay affected the correlation of the joint angles and joint moments. The objective of effort minimization yielded lower joint moments than in the experiment, suggesting that other objectives, which increase effort and thus produce larger joint moments, are also important in balance control.
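The dead-zone reflex logic described above can be sketched as follows; the one-dimensional form, gains, and dead-zone width are illustrative assumptions, not the paper's six-degree-of-freedom, nine-muscle model:

```python
def reflex_torque(com_offset, muscle_stretch, muscle_force,
                  dead_zone=0.02, k_length=800.0, k_force=0.5):
    """Proprioceptive reflex torque with a dead zone.

    No feedback is applied while the center of mass stays within the
    dead zone; outside it, length and force feedback are summed.
    All gains and the dead-zone width are illustrative values.
    """
    if abs(com_offset) < dead_zone:
        return 0.0  # inside the dead zone: reflexes stay silent
    return k_length * muscle_stretch + k_force * muscle_force

# Inside the dead zone the controller produces no torque:
assert reflex_torque(0.01, 0.05, 100.0) == 0.0
```

In the paper, the corresponding controller parameters were found by optimization under a perturbed-standing task rather than fixed by hand as here.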

Koelewijn Anne D, Ijspeert Auke J

2020

balance control, neuromusculoskeletal simulation, perturbed standing, proprioception, reflexes

General General

Multi-Channel Fetal ECG Denoising With Deep Convolutional Neural Networks.

In Frontiers in pediatrics

Non-invasive fetal electrocardiography represents a valuable alternative continuous fetal monitoring method that has recently received considerable attention in assessing fetal health. However, the non-invasive fetal electrocardiogram (ECG) is typically severely contaminated by various noise sources, rendering fetal ECG denoising a very challenging task. This work employs a deep learning approach for removing the residual noise from multi-channel fetal ECG after the maternal ECG has been suppressed. We propose a deep convolutional encoder-decoder network with symmetric skip-layer connections, learning end-to-end mappings from noise-corrupted fetal ECG signals to clean ones. Experiments on simulated data show an average signal-to-noise ratio (SNR) improvement of 9.5 dB for fetal ECG signals with input SNR ranging between -20 and 20 dB. The method is additionally evaluated on a large set of real signals, demonstrating that it can provide significant quality improvement of the noisy fetal ECG signals. We further show that the use of multi-channel signal information by the network provides superior and more reliable performance than its single-channel counterpart. The presented method is able to preserve beat-to-beat morphological variations and does not require any prior information on the power spectra of the noise or the pulse location.
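The SNR improvement reported above can be computed as follows; this is the generic dB definition, not the authors' exact evaluation code, and the toy signal is a stand-in for a fetal ECG:

```python
import numpy as np

def snr_db(clean, signal):
    """Signal-to-noise ratio of `signal` relative to `clean`, in dB."""
    clean = np.asarray(clean, float)
    noise = np.asarray(signal, float) - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def snr_improvement(clean, noisy, denoised):
    """SNR gain (dB) achieved by a denoiser, as reported in the paper."""
    return snr_db(clean, denoised) - snr_db(clean, noisy)

# Hypothetical stand-in for a clean fetal ECG channel:
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(t.size)
print(f"input SNR: {snr_db(clean, noisy):.1f} dB")
```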

Fotiadou Eleni, Vullings Rik

2020

convolutional neural networks, encoder-decoder network, fetal ECG denoising, fetal ECG enhancement, fetal electrocardiography

Radiology Radiology

Deep Learning-Based Radiomics of B-Mode Ultrasonography and Shear-Wave Elastography: Improved Performance in Breast Mass Classification.

In Frontiers in oncology

Objective : Shear-wave elastography (SWE) can improve the diagnostic specificity of B-mode ultrasonography (US) in breast cancer. However, whether deep learning-based radiomics signatures based on B-mode US (B-US-RS) or SWE (SWE-RS) could further improve the diagnostic performance remains to be investigated. We aimed to develop the B-US-RS and SWE-RS and determine their performances in classifying breast masses.

Materials and Methods : This retrospective study included 291 women (mean age ± standard deviation, 40.9 ± 12.3 years) from two centers who had US-visible solid breast masses and underwent biopsy and/or surgical resection between June 2015 and July 2017. B-mode US and SWE images of the 198 masses in 198 patients (training cohort) from center 1 were segmented, respectively, to construct B-US-RS and SWE-RS using least absolute shrinkage and selection operator (LASSO) regression, and tested in an independent validation cohort of 65 masses in 65 patients from center 1 and in an external validation cohort of 28 masses in 28 patients from center 2. The performances of B-US-RS and SWE-RS were assessed using receiver operating characteristic (ROC) analysis and compared with those of radiologist assessment [Breast Imaging Reporting and Data System (BI-RADS)] and quantitative SWE parameters [maximum elasticity (Emax), mean elasticity (Emean), elasticity ratio (Eratio), and elastic modulus standard deviation (ESD)] using the McNemar test.

Results : The single best-performing quantitative SWE parameter, Emax, had a higher specificity than BI-RADS assessment in the training and independent validation cohorts (P < 0.001 for both). The areas under the ROC curves (AUCs) of B-US-RS and SWE-RS both were 0.99 (95% CI = 0.99-1.00) in the training cohort, 1.00 (95% CI = 1.00-1.00) in the independent validation cohort, and 1.00 (95% CI = 1.00-1.00) in the external validation cohort. The specificities of B-US-RS and SWE-RS were higher than that of Emax in the training (P < 0.001 for both) and independent validation cohorts (P = 0.02 for both).

Conclusion : The B-US-RS and SWE-RS outperformed the quantitative SWE parameters and BI-RADS assessment for classifying breast masses. The integration of the deep learning-based radiomics approach would help improve the classification ability of B-mode US and SWE for breast masses.
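As a rough illustration of the LASSO step named in Materials and Methods, the sketch below selects a sparse "signature" from synthetic stand-in features; the data, feature count, and penalty strength are all illustrative assumptions, not the study's deep radiomics features:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
n_masses, n_features = 200, 50  # stand-ins for deep radiomics features
X = rng.standard_normal((n_masses, n_features))
# A benign/malignant label driven by only 3 of the 50 features:
y = (X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]
     + 0.3 * rng.standard_normal(n_masses) > 0).astype(float)

# LASSO drives most coefficients to exactly zero, keeping a sparse
# signature of informative features (alpha is an illustrative choice).
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} of {n_features} features kept:", selected)
```

In the study, the retained coefficients would weight the selected features into a single radiomics score per mass.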

Zhang Xiang, Liang Ming, Yang Zehong, Zheng Chushan, Wu Jiayi, Ou Bing, Li Haojiang, Wu Xiaoyan, Luo Baoming, Shen Jun

2020

breast neoplasms, deep learning, radiomics, shear-wave elastography, ultrasonography

General General

Artificial Intelligence and Computational Approaches for Epilepsy.

In Journal of epilepsy research

Studies on treatment of epilepsy have been actively conducted in multiple avenues, but there are limitations in improving its efficacy due to between-subject variability, in which treatment outcomes vary from patient to patient. Accordingly, there is a growing interest in precision medicine that provides accurate diagnosis of seizure types and optimal treatment for an individual epilepsy patient. Among these approaches, computational studies making this feasible are progressing rapidly and have been widely applied in epilepsy. These computational studies are being conducted in two main streams: 1) artificial intelligence-based studies implementing computational machines with specific functions, such as automatic diagnosis and prognosis prediction for an individual patient, using machine learning techniques based on large amounts of data obtained from multiple patients, and 2) patient-specific modeling-based studies implementing biophysical in-silico platforms to understand pathological mechanisms and derive the optimal treatment for each patient by reproducing the brain network dynamics of that particular patient based on the individual patient's data. These computational approaches are important because they can integrate multiple types of patient data and analysis results into a single platform. If such methods are operated efficiently, they could offer a novel paradigm for precision medicine.

An Sora, Kang Chaewon, Lee Hyang Woon

2020-Jun

Artificial intelligence, Epilepsy, Patient-specific modeling, Precision medicine, Seizures

General General

Machine Learning and Image Analysis Applications in the Fight against COVID-19 Pandemic: Datasets, Research Directions, Challenges and Opportunities.

In Materials today. Proceedings

The COVID-19 pandemic has become the most devastating disease of the current century, spreading across 216 countries around the world. The disease continues to spread through outbreaks despite the availability of modern, sophisticated medical treatment. Machine learning and image analysis research has been making great progress in many directions in the healthcare field, providing support for medical diagnosis. In this paper, we propose three research directions, with methodologies, for the fight against the pandemic: chest X-ray (CXR) image classification using deep convolutional neural networks with transfer learning to assist diagnosis; patient risk prediction based on risk factors such as patient characteristics, comorbidities, initial symptoms, and vital signs for disease prognosis; and forecasting of disease spread and case fatality rate using deep neural networks. Further, some of the challenges, open datasets, and opportunities are discussed for researchers.

Somasekar J, Pavan Kumar Visulaization P, Sharma Avinash, Ramesh G

2020-Sep-22

COVID-19, Chest X-Ray Images, Classification, Diagnosis, Machine Learning, medical image analysis

General General

Prophet forecasting model: a machine learning approach to predict the concentration of air pollutants (PM2.5, PM10, O3, NO2, SO2, CO) in Seoul, South Korea.

In PeerJ

Amidst recent industrialization in South Korea, Seoul has experienced high levels of air pollution, an issue magnified by a lack of effective air pollution prediction techniques. In this study, the Prophet forecasting model (PFM) was used to predict both short-term and long-term air pollution in Seoul. The air pollutants forecasted in this study were PM2.5, PM10, O3, NO2, SO2, and CO, pollutants responsible for numerous health conditions upon long-term exposure. Current chemical models for predicting air pollution require complex source lists, making them difficult to use. Machine learning models have also been implemented; however, their requirement of meteorological parameters renders them ineffective, as additional models and infrastructure need to be in place to model meteorology. To address this, a model is needed that can accurately predict pollution based on time alone. A dataset containing three years' worth of hourly air quality measurements in Seoul was sourced from the Seoul Open Data Plaza. To optimize the model, PFM has the following parameters: model type, changepoints, seasonality, holidays, and error. Cross validation was performed on the 2017-18 data; then, the model predicted 2019 values. To compare the predicted and actual values and determine the accuracy of the model, the statistical indicators mean squared error (MSE), mean absolute error (MAE), root mean squared error (RMSE), and coverage were used. PFM predicted PM2.5 and PM10 with MAE values of 12.6 µg/m3 and 19.6 µg/m3, respectively. PFM also predicted SO2 and CO with MAE values of 0.00124 ppm and 0.207 ppm, respectively. PFM's predictions of PM2.5 and PM10 had MAEs approximately 2 times and 4 times lower, respectively, than comparable models. PFM's predictions of SO2 and CO had MAEs approximately 5 times and 50 times lower, respectively, than comparable models.
In most cases, PFM's ability to accurately forecast the concentration of air pollutants in Seoul up to one year in advance outperformed similar models proposed in the literature. This study addresses the limitations of the two prior PFM studies by expanding the modelled air pollutants from three to six while increasing the prediction horizon from 3 days to 1 year. It is also the first study to use PFM in Seoul, South Korea. To achieve more accurate results, a larger air pollution dataset should be used with PFM. In the future, PFM should be used to predict and model air pollution in other regions, especially those without the advanced infrastructure to model meteorology alongside air pollution. Seoul's government can use PFM to accurately predict air pollution concentrations and plan accordingly.
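The statistical indicators used above (MSE, MAE, RMSE) can be computed as follows; the pollutant values are hypothetical, not data from the study:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """MSE, MAE, and RMSE, the indicators used to score the forecasts."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    err = predicted - actual
    mse = np.mean(err ** 2)
    return {"MSE": mse, "MAE": np.mean(np.abs(err)), "RMSE": np.sqrt(mse)}

# Hypothetical hourly PM2.5 values (µg/m³), purely illustrative:
actual = [35.0, 40.0, 38.0, 50.0]
predicted = [33.0, 42.0, 39.0, 46.0]
print(forecast_metrics(actual, predicted))
```

Prophet's built-in cross-validation utilities report these same indicators over rolling forecast horizons; coverage additionally checks how often the actual value falls inside the predicted uncertainty interval.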

Shen Justin, Valagolam Davesh, McCalla Serena

2020

Air pollution, Carbon monoxide, Nitrogen dioxide, Particulate matter, Prediction model, Prophet forecasting model, Seoul, South Korea, Sulfur dioxide, Tropospheric ozone

Public Health Public Health

Artificial intelligence as an analytic approximation to evaluate associations between Parental Feeding Behaviors and Excess Weight in Colombian Preschoolers.

In The British journal of nutrition

Parental practices can affect children's weight and body mass index and may even be related to a high prevalence of obesity. Therefore, the aim of this study was to evaluate the relationship between parents' practices related to feeding their children and excess weight in preschoolers in Bucaramanga, Colombia, using artificial intelligence as an analytical and novel approximation. A cross-sectional study was carried out between September and December 2017. The sample included preschoolers who attended child development institutions belonging to the Colombian Institute for Family Wellbeing (Instituto Colombiano de Bienestar Familiar (ICBF, Spanish acronym)) in Bucaramanga and the metropolitan area (sample size n = 384). The outcome variable was excess weight, defined by body mass index for age. The main independent variable was parental feeding practices. Confounding variables that were analyzed included sociodemographic characteristics, food consumption, and the physical activity of the children. All equipment used for the anthropometric measurements was calibrated. Logistic regression was used to predict the effect of parental practices on the excess weight of the children, and the area under the curve (AUC) was used to measure performance. The parental practices with the greatest association with excess weight in the children involved using food to control their behavior and restricting the amount of food offered (use of food to control emotions (OR: 1.77; CI 95%: 1.45-1.83, p = 0.034) and encouraging children to eat less (OR: 1.22; CI 95%: 1.14-1.89; p = 0.045)). There were no significant differences between fathers and mothers in terms of the use of food to control the behavior of their children or restricting their children's food consumption. Childrearing practices related to feeding were found to be an important predictor of excess weight in children.
The results of this study have implications for public health, serving as a baseline for the design of nutrition education interventions aimed at parents of vulnerable preschool children.

Gamboa-Delgado Edna Magaly, Amaya-Castellanos Claudia Isabel, Bahamonde Antonio

2020-Sep-28

Child Care, Health Behavior, Health Education, Nutrition, Obesity

General General

Predicting Psychological State Among Chinese Undergraduate Students in the COVID-19 Epidemic: A Longitudinal Study Using Machine Learning.

In Neuropsychiatric disease and treatment

Background : The outbreak of the 2019 novel coronavirus disease (COVID-19) not only caused physical abnormalities, but also caused psychological distress, especially for undergraduate students who are facing the pressure of academic study and work. We aimed to explore the prevalence rate of probable anxiety and probable insomnia and to find the risk factors among a longitudinal study of undergraduate students using the approach of machine learning.

Methods : The baseline data (T1) were collected from freshmen who underwent psychological evaluation at two months after entering the university. At T2 stage (February 10th to 13th, 2020), we used a convenience cluster sampling to assess psychological state (probable anxiety was assessed by general anxiety disorder-7 and probable insomnia was assessed by insomnia severity index-7) based on a web survey. We integrated information attained at T1 stage to predict probable anxiety and probable insomnia at T2 stage using a machine learning algorithm (XGBoost).

Results : Finally, we included 2009 students (response rate: 80.36%). The prevalence rate of probable anxiety and probable insomnia was 12.49% and 16.87%, respectively. The XGBoost algorithm predicted 1954 out of 2009 students (translated into 97.3% accuracy) and 1932 out of 2009 students (translated into 96.2% accuracy) who suffered anxiety and insomnia symptoms, respectively. The most relevant variables in predicting probable anxiety included romantic relationship, suicidal ideation, sleep symptoms, and a history of anxiety symptoms. The most relevant variables in predicting probable insomnia included aggression, psychotic experiences, suicidal ideation, and romantic relationship.

Conclusion : Risks for probable anxiety and probable insomnia among undergraduate students can be identified at an individual level by baseline data. Thus, timely psychological intervention for anxiety and insomnia symptoms among undergraduate students is needed considering the above factors.
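As a loose sketch of the workflow above, the snippet below trains a boosted tree classifier on synthetic stand-in data and reports held-out accuracy; scikit-learn's GradientBoostingClassifier is used here as a stand-in for the xgboost package, and none of the features or numbers come from the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Stand-ins for baseline (T1) features such as sleep symptoms,
# suicidal ideation, or romantic relationship status:
X = rng.standard_normal((n, 8))
y = (X[:, 0] + 0.7 * X[:, 1] + 0.4 * rng.standard_normal(n) > 1.1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Gradient boosting as a stand-in for XGBoost; both fit additive
# tree ensembles to a differentiable loss.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.3f}")
```

Feature importances from such a model (`clf.feature_importances_`) are one common way to surface the "most relevant variables" reported in the Results.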

Ge Fenfen, Zhang Di, Wu Lianhai, Mu Hongwei

2020

COVID-19, anxiety, cohort, insomnia, machine learning

Pathology Pathology

A Comprehensive Review for MRF and CRF Approaches in Pathology Image Analysis

ArXiv Preprint

Pathology image analysis is an essential procedure for the clinical diagnosis of many diseases. To boost the accuracy and objectivity of detection, an increasing number of computer-aided diagnosis (CAD) systems have been proposed. Among these methods, random field models play an indispensable role in improving analysis performance. In this review, we present a comprehensive overview of pathology image analysis based on Markov random fields (MRFs) and conditional random fields (CRFs), two popular random field models. Firstly, we introduce the background of the two random fields and of pathology images. Secondly, we summarize the basic mathematical knowledge of MRFs and CRFs, from modelling to optimization. Then, a thorough review of recent research on MRFs and CRFs in pathology image analysis is presented. Finally, we investigate the popular methodologies in the related works and discuss method migration across the CAD field.

Chen Li, Yixin Li, Changhao Sun, Hao Chen, Hong Zhang

2020-09-29

General General

Applications of Machine Learning in Cardiac Electrophysiology.

In Arrhythmia & electrophysiology review

Artificial intelligence through machine learning (ML) methods is becoming prevalent throughout the world, with increasing adoption in healthcare. Improvements in technology have allowed early applications of machine learning to assist physician efficiency and diagnostic accuracy. In electrophysiology, ML has applications for use in every stage of patient care. However, its use is still in its infancy. This article introduces the potential of ML before discussing the concept of big data and its pitfalls. The authors review some common ML methods, including supervised and unsupervised learning, then examine applications in cardiac electrophysiology, focusing on surface electrocardiography, intracardiac mapping, and cardiac implantable electronic devices. Finally, the article concludes with an overview of how ML may impact electrophysiology in the future.

Muthalaly Rahul G, Evans Robert M

2020-Aug

Machine learning, ablation, artificial intelligence, big data, cardiac devices, neural network, surface electrocardiography

General General

Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning.

In Frontiers in plant science

Machine learning and computer vision technologies based on high-resolution imagery acquired using unmanned aerial systems (UAS) provide a potential for accurate and efficient high-throughput plant phenotyping. In this study, we developed a sorghum panicle detection and counting pipeline using UAS images based on an integration of image segmentation and a convolutional neural network (CNN) model. A UAS with an RGB camera was used to acquire images (2.7 mm resolution) at 10-m height in a research field with 120 small plots. A set of 1,000 images were randomly selected, and a mask was developed for each by manually delineating sorghum panicles. These images and their corresponding masks were randomly divided into 10 training datasets, each with a different number of images and masks, ranging from 100 to 1,000 with an interval of 100. A U-Net CNN model was built using these training datasets. The sorghum panicles were detected and counted from the predicted mask by the algorithm. The algorithm was implemented in Python, using the TensorFlow library for the deep learning procedure and the OpenCV library for the sorghum panicle counting step. Results showed that accuracy generally increased with the number of training images. The algorithm performed best with 1,000 training images, with an accuracy of 95.5% and a root mean square error (RMSE) of 2.5. The results indicate that the integration of image segmentation and the U-Net CNN model is an accurate and robust method for sorghum panicle counting and offers an opportunity for enhanced sorghum breeding efficiency and accurate yield estimation.
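The counting step, performed with OpenCV in the paper, amounts to counting connected components in the predicted binary mask; a pure-NumPy stand-in might look like this (the toy mask is illustrative):

```python
import numpy as np

def count_blobs(mask):
    """Count connected foreground regions (4-connectivity) in a binary mask.

    A pure-NumPy stand-in for the OpenCV step that counts panicles
    in the U-Net's predicted segmentation mask.
    """
    mask = np.asarray(mask, bool).copy()
    count = 0
    for r, c in zip(*np.nonzero(mask)):
        if not mask[r, c]:
            continue  # pixel already absorbed into an earlier blob
        count += 1
        stack = [(r, c)]  # iterative flood fill erases the whole blob
        while stack:
            i, j = stack.pop()
            if 0 <= i < mask.shape[0] and 0 <= j < mask.shape[1] and mask[i, j]:
                mask[i, j] = False
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return count

# Three separate "panicles" in a toy 5x7 mask:
toy = np.array([[1, 1, 0, 0, 0, 1, 0],
                [1, 0, 0, 0, 0, 1, 0],
                [0, 0, 0, 0, 0, 0, 0],
                [0, 0, 1, 1, 0, 0, 0],
                [0, 0, 1, 1, 0, 0, 0]])
print(count_blobs(toy))  # → 3
```

OpenCV's `cv2.connectedComponents` performs the same operation far faster on full-resolution masks.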

Lin Zhe, Guo Wenxuan

2020

TensorFlow, computer vision, convolutional neural networks, deep learning, image segmentation, python, sorghum panicle, unmanned aerial systems

General General

Volumetric Segmentation of Cell Cycle Markers in Confocal Images Using Machine Learning and Deep Learning.

In Frontiers in plant science

Understanding plant growth processes is important for many aspects of biology and food security. Automating the observations of plant development, a process referred to as plant phenotyping, is increasingly important in the plant sciences, and is often a bottleneck. Automated tools are required to analyze the data in microscopy images depicting plant growth, either locating or counting regions of cellular features in images. In this paper, we present to the plant community an introduction to and exploration of two machine learning approaches to address the problem of marker localization in confocal microscopy. First, a comparative study is conducted on the classification accuracy of common conventional machine learning algorithms, as a means to highlight challenges with these methods. Second, a 3D (volumetric) deep learning approach is developed and presented, including consideration of appropriate loss functions and training data. A qualitative and quantitative analysis of all the results produced is performed. Evaluation of all approaches is performed on an unseen time-series sequence comprising several individual 3D volumes, capturing plant growth. The comparative analysis shows that the deep learning approach produces more accurate and robust results than traditional machine learning. To accompany the paper, we are releasing the 4D point annotation tool used to generate the annotations, in the form of a plugin for the popular ImageJ (FIJI) software. Network models and example datasets will also be available online.

Khan Faraz Ahmad, Voß Ute, Pound Michael P, French Andrew P

2020

annotation, deep learning, machine learning, phenotyping, plant analysis procedures, software

General General

From Microbiome to Traits: Designing Synthetic Microbial Communities for Improved Crop Resiliency.

In Frontiers in plant science

Plants teem with microorganisms, whose tremendous diversity and role in plant-microbe interactions are being increasingly explored. Microbial communities create a functional bond with their hosts and express beneficial traits capable of enhancing plant performance. Therefore, a significant task of microbiome research has been identifying novel beneficial microbial traits that can contribute to crop productivity, particularly under adverse environmental conditions. However, although knowledge has exponentially accumulated in recent years, few novel methods regarding the process of designing inoculants for agriculture have been presented. A recently introduced approach is the use of synthetic microbial communities (SynComs), which involves applying concepts from both microbial ecology and genetics to design inoculants. Here, we discuss how to translate this rationale for delivering stable and effective inoculants for agriculture by tailoring SynComs with microorganisms possessing traits for robust colonization, prevalence throughout plant development and specific beneficial functions for plants. Computational methods, including machine learning and artificial intelligence, will leverage the approaches of screening and identifying beneficial microbes while improving the process of determining the best combination of microbes for a desired plant phenotype. We focus on recent advances that deepen our knowledge of plant-microbe interactions and critically discuss the prospect of using microbes to create SynComs capable of enhancing crop resiliency against stressful conditions.

de Souza Rafael Soares Correa, Armanhi Jaderson Silveira Leite, Arruda Paulo

2020

inoculants, metagenomics, plant growth-promoting (PGP), plant microbiome, synthetic microbial community (SynCom)

General General

Biomarkers of the Response to Immune Checkpoint Inhibitors in Metastatic Urothelial Carcinoma.

In Frontiers in immunology ; h5-index 100.0

The mechanisms underlying resistance to immune checkpoint inhibitor (ICI) therapy in metastatic urothelial carcinoma (mUC) patients are not clear. It is of great significance to identify mUC patients who could benefit from ICI therapy in clinical practice. In this study, we applied a machine learning method to select 10 prognostic genes for constructing an immunotherapy response nomogram for mUC patients. The calibration plot suggested that the nomogram had an optimal agreement with actual observations when predicting the 1- and 1.5-year survival probabilities. The prognostic nomogram had a favorable discrimination of overall survival of mUC patients, with area under the curve values of 0.815, 0.752, and 0.805 for ICI response (ICIR) prediction in the training cohort, testing cohort, and combined cohort, respectively. A further decision curve analysis showed that the prognostic nomogram was superior to either mutation burden or neoantigen burden for overall survival prediction when the threshold probability was >0.35. The immune infiltrate analysis indicated that low ICIR-Score values in mUC patients were significantly related to CD8+ T cell infiltration and immune checkpoint-associated signatures. We also identified differentially mutated genes, which could act as driver genes and regulate the response to ICI therapy. In conclusion, we developed and validated an immunotherapy-responsive nomogram for mUC patients, which could be conveniently used to estimate ICI response and predict overall survival probability for mUC patients.

Chen Siteng, Zhang Ning, Wang Tao, Zhang Encheng, Wang Xiang, Zheng Junhua

2020

PD-L1, machine learning, metastatic urothelial carcinoma, nomogram, response

General General

Prediction of Specific TCR-Peptide Binding From Large Dictionaries of TCR-Peptide Pairs.

In Frontiers in immunology ; h5-index 100.0

Current sequencing methods allow for detailed sampling of T cell receptor (TCR) repertoires. To determine from a repertoire whether its host had been exposed to a target, computational tools that predict TCR-epitope binding are required. Current tools are based on conserved motifs and are applied to peptides with many known binding TCRs. We employ new Natural Language Processing (NLP) based methods to predict whether any TCR and peptide bind. We combined large-scale TCR-peptide dictionaries with deep learning methods to produce ERGO (pEptide tcR matchinG predictiOn), a highly specific and generic TCR-peptide binding predictor. A set of standard tests is defined for the performance of peptide-TCR binding prediction, including detecting TCRs that bind a given peptide/antigen, choosing among a set of candidate peptides for a given TCR, and determining whether any TCR-peptide pair binds. ERGO reaches results similar to state-of-the-art methods in these tests even when not trained specifically for each test. The software implementation and data sets are available at https://github.com/louzounlab/ERGO. ERGO is also available through a webserver at: http://tcr.cs.biu.ac.il/.

Springer Ido, Besser Hanan, Tickotsky-Moskovitz Nili, Dvorkin Shirit, Louzoun Yoram

2020

TCR repertoire analysis, autoencoder (AE), deep learning, epitope specificity, evaluation methods, long short-term memory (LSTM), machine learning

Radiology Radiology

Automatic Vertebral Body Segmentation Based on Deep Learning of Dixon Images for Bone Marrow Fat Fraction Quantification.

In Frontiers in endocrinology ; h5-index 55.0

Background: Bone marrow fat (BMF) fraction quantification in vertebral bodies is used as a novel imaging biomarker to assess and characterize chronic lower back pain. However, manual segmentation of vertebral bodies is time consuming and laborious. Purpose: (1) Develop a deep learning pipeline for segmentation of vertebral bodies using quantitative water-fat MRI. (2) Compare BMF measurements between manual and automatic segmentation methods to assess performance. Materials and Methods: In this retrospective study, MR images using a 3D spoiled gradient-recalled echo (SPGR) sequence with Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) reconstruction algorithm were obtained in 57 subjects (28 women, 29 men, mean age, 47.2 ± 12.6 years). An artificial neural network was trained for 100 epochs on a total of 165 lumbar vertebrae manually segmented from 31 subjects. Performance was assessed by analyzing the receiver operating characteristic curve, precision-recall, F1 scores, specificity, sensitivity, and similarity metrics. Bland-Altman analysis was used to assess performance of BMF fraction quantification using the predicted segmentations. Results: The deep learning segmentation method achieved an AUC of 0.92 (CI 95%: 0.9186, 0.9195) on a testing dataset (n = 24 subjects) on classification of pixels as vertebrae. A sensitivity of 0.99 and specificity of 0.80 were achieved for the testing dataset, and a mean Dice similarity coefficient of 0.849 ± 0.091. Comparing manual and automatic segmentations on fat fraction maps of lumbar vertebrae (n = 124 vertebral bodies) using Bland-Altman analysis resulted in a bias of only -0.605% (CI 95% = -0.847 to -0.363%) and agreement limits of -3.275% and +2.065%. Automatic segmentation was also feasible in 16 ± 1 s.
Conclusion: Our results demonstrate the feasibility of automated segmentation of vertebral bodies using deep learning models on water-fat MR (Dixon) images to define vertebral regions of interest with high specificity. These regions of interest can then be used to quantify BMF with results comparable to manual segmentation, providing a framework for fully automated investigation of vertebral changes in chronic lower back pain (CLBP).
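The Dice similarity coefficient used above to compare manual and automatic segmentations can be computed as follows (the toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * overlap / denom if denom else 1.0

# Toy 3x3 masks: prediction misses one of the four true pixels.
truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
pred  = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3 / (3+4) ≈ 0.857
```

The same overlap counts also yield the sensitivity and specificity figures reported in the Results.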

Zhou Jiamin, Damasceno Pablo F, Chachad Ravi, Cheung Justin R, Ballatori Alexander, Lotz Jeffrey C, Lazar Ann A, Link Thomas M, Fields Aaron J, Krug Roland

2020

biomarkers, bone marrow fat, deep learning, magnetic resonance imaging, segmentation, spine imaging

Radiology Radiology

Functional Outcome Prediction in Ischemic Stroke: A Comparison of Machine Learning Algorithms and Regression Models.

In Frontiers in neurology

Background and Purpose: Stroke-related functional risk scores are used to predict patients' functional outcomes following a stroke event. We evaluate the predictive accuracy of machine-learning algorithms for predicting functional outcomes in acute ischemic stroke patients after endovascular treatment. Methods: Data were from the Precise and Rapid Assessment of Collaterals with Multi-phase CT Angiography (PROVE-IT), an observational study of 614 ischemic stroke patients. Regression and machine learning models, including random forest (RF), classification and regression tree (CART), C5.0 decision tree (DT), support vector machine (SVM), adaptive boost machine (ABM), least absolute shrinkage and selection operator (LASSO) logistic regression, and logistic regression models were used to train and predict the 90-day functional impairment risk, which is measured by the modified Rankin scale (mRS) score > 2. The models were internally validated using split-sample cross-validation and externally validated in the INTERRSeCT cohort study. The accuracy of these models was evaluated using the area under the receiver operating characteristic curve (AUC), Matthews Correlation Coefficient (MCC), and Brier score. Results: Of the 614 patients included in the training data, 249 (40.5%) had 90-day functional impairment (i.e., mRS > 2). The median and interquartile range (IQR) of age and baseline NIHSS scores were 77 years (IQR = 69-83) and 17 (IQR = 11-22), respectively. Both logistic regression and machine learning models had comparable predictive accuracy when validated internally (AUC range = [0.65-0.72]; MCC range = [0.29-0.42]) and externally (AUC range = [0.66-0.71]; MCC range = [0.34-0.42]). Conclusions: Machine learning algorithms and logistic regression had comparable predictive accuracy for predicting stroke-related functional impairment in stroke patients.
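As a schematic of the comparison above, the sketch below fits logistic regression and a random forest on synthetic stand-in data and scores each by AUC; the dataset is illustrative, not the PROVE-IT cohort:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's predictors (age, NIHSS, ...):
X, y = make_classification(n_samples=600, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

The study additionally reports Matthews correlation coefficient and Brier score, both available in `sklearn.metrics`, and validates externally on a separate cohort rather than a single random split.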

Alaka Shakiru A, Menon Bijoy K, Brobbey Anita, Williamson Tyler, Goyal Mayank, Demchuk Andrew M, Hill Michael D, Sajobi Tolulope T

2020

acute ischemic stroke, clinical risk prediction, discrimination calibration, functional outcome, machine learning

General General

Mitigating the Impact of the Novel Coronavirus Pandemic on Neuroscience and Music Research Protocols in Clinical Populations.

In Frontiers in psychology ; h5-index 92.0

COVID-19 and the systemic responses to it have impacted lives, routines and procedures at an unprecedented level. While medical care and emergency response present immediate needs, the implications of this pandemic will likely be far-reaching. Most of the practices that clinical research in the neuroscience and music field relies on take place in hospitals or closely connected clinical settings, which have been hit hard both by the contagion and by its preventive and treatment measures. This means that clinical research protocols may have been altered, postponed or put in complete jeopardy. In this context, we present and discuss the problems arising under the current crisis. We do so by critically approaching an online discussion facilitated by an expert panel in the field of music and neuroscience. We hope this effort provides an efficient basis for orienting ourselves as we begin to map the needs and elements of this field of research, and as we propose ideas and solutions for overcoming, or at least easing, the problems we encounter or will encounter. Among other things, we hope to answer questions on the technical and social problems that can be expected, possible solutions and preparatory steps that would improve or ease research implementation, ethical implications, and funding considerations. Finally, we hope to facilitate the creation of new protocols that minimize the impact of this crisis on essential research with the potential to relieve health systems.

Papatzikis Efthymios, Zeba Fathima, Särkämö Teppo, Ramirez Rafael, Grau-Sánchez Jennifer, Tervaniemi Mari, Loewy Joanne

2020

COVID-19, music and neuroscience, music and neuroscience research protocols, music therapy, research crisis response

General General

A Brain-Inspired Model of Theory of Mind.

In Frontiers in neurorobotics

Theory of mind (ToM) is the ability to attribute mental states to oneself and others, and to understand that others may hold beliefs different from one's own. Although functional neuroimaging techniques have been widely used to establish the neural correlates implicated in ToM, the specific mechanisms are still not clear. We integrate existing biological findings on ToM and bridge this gap through computational modeling, proposing a Brain-inspired Model of Theory of Mind (Brain-ToM model). The model is applied to a humanoid robot performing false-belief tasks, two classical tasks from cognitive psychology designed to probe the mechanisms of ToM. With this model, the robot learns to understand object permanence and visual access from self-experience, then uses this learned experience to reason about another agent's beliefs. We computationally validate that self-experience, the maturation of correlated brain areas (e.g., calculation capability) and their connections (e.g., inhibitory control) are essential for ToM, and we show their influence on the robot's performance in the false-belief task. The theoretical modeling and experimental validations indicate that the model is biologically plausible and computationally feasible as a foundation for robot theory of mind.

Zeng Yi, Zhao Yuxuan, Zhang Tielin, Zhao Dongcheng, Zhao Feifei, Lu Enmeng

2020

brain inspired model, connection maturation, false-belief task, inhibitory control, self-experience, theory of mind

General General

Neural Networks based approaches for Major Depressive Disorder and Bipolar Disorder Diagnosis using EEG signals: A review

ArXiv Preprint

Mental disorders represent critical public health challenges, as they are leading contributors to the global burden of disease and strongly affect the social and financial welfare of individuals. This comprehensive review concentrates on two mental disorders, Major Depressive Disorder (MDD) and Bipolar Disorder (BD), covering noteworthy publications from the last ten years. There is a pressing need for phenotypic characterization of psychiatric disorders with biomarkers. Electroencephalography (EEG) signals could offer a rich signature for MDD and BD and could thereby improve understanding of the pathophysiological mechanisms underlying these mental disorders. In this work, we focus on studies that feed EEG signals to neural networks. Among these studies, we discuss a variety of EEG-based protocols, biomarkers and public datasets for depression and bipolar disorder detection. We conclude with a discussion and recommendations to help improve the reliability of developed models and lead to more accurate and more deterministic computational-intelligence-based systems in psychiatry. This review is intended as a structured and valuable starting point for researchers working on recognition of depression and bipolar disorder from EEG signals.

Sana Yasin, Syed Asad Hussain, Sinem Aslan, Imran Raza, Muhammad Muzammel, Alice Othmani

2020-09-28

General General

Morphological Cell Profiling of SARS-CoV-2 Infection Identifies Drug Repurposing Candidates for COVID-19

bioRxiv Preprint

The global spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and the associated disease COVID-19, requires therapeutic interventions that can be rapidly translated to clinical care. Unfortunately, traditional drug discovery methods have a >90% failure rate and can take 10-15 years from target identification to clinical use. In contrast, drug repurposing can significantly accelerate translation. We developed a quantitative high-throughput screen to identify efficacious single agents and combination therapies against SARS-CoV-2. Quantitative high-content morphological profiling was coupled with an AI-based machine learning strategy to classify features of cells for infection and stress. This assay detected multiple antiviral mechanisms of action (MOA), including inhibition of viral entry, propagation, and modulation of host cellular responses. From a library of 1,425 FDA-approved compounds and clinical candidates, we identified 16 dose-responsive compounds with antiviral effects. In particular, we discovered that lactoferrin is an effective inhibitor of SARS-CoV-2 infection with an IC50 of 308 nM and that it potentiates the efficacy of both remdesivir and hydroxychloroquine. Lactoferrin also stimulates an antiviral host cell response and retains inhibitory activity in iPSC-derived alveolar epithelial cells, a model for the primary site of infection. Given its safety profile in humans, these data suggest that lactoferrin is a readily translatable therapeutic adjunct for COVID-19. Additionally, several commonly prescribed drugs were found to exacerbate viral infection and warrant clinical investigation. We conclude that morphological profiling for drug repurposing is an effective strategy for the selection and optimization of drugs and drug combinations as viable therapeutic options for the COVID-19 pandemic and other emerging infectious diseases.

Mirabelli, C.; Wotring, J. W.; Zhang, C. J.; McCarty, S. M.; Fursmidt, R.; Frum, T.; Kadambi, N. S.; Amin, A. T.; O’Meara, T. R.; Pretto-Kernahan, C. D.; Spence, J. R.; Huang, J.; Alysandratos, K. D.; Kotton, D. N.; Handelman, S. K.; Wobus, C. E.; Weatherwax, K. J.; Mashour, G. A.; O’Meara, M. J.; Sexton, J. Z.

2020-09-28

Cardiology Cardiology

ECG Classification with a Convolutional Recurrent Neural Network

ArXiv Preprint

We developed a convolutional recurrent neural network to classify 12-lead ECG signals for the PhysioNet/Computing in Cardiology Challenge 2020 as team Pink Irish Hat. The model combines convolutional and recurrent layers, takes sliding windows of ECG signals as input and yields the probability of each class as output. The convolutional part extracts features from each sliding window. A bi-directional gated recurrent unit (GRU) layer and an attention layer aggregate these features from all windows into a single feature vector. Finally, a dense layer outputs class probabilities. The final decision is made using test-time augmentation (TTA) and an optimized decision threshold. Several hyperparameters of our architecture were optimized, the most important of which turned out to be the choice of optimizer and the number of filters per convolutional layer. Our network achieved a challenge score of 0.511 on the hidden validation set and 0.167 on the full hidden test set, ranking us 23rd out of 41 in the official ranking.
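A minimal numpy sketch of the attention-style aggregation step described above, in which per-window feature vectors (standing in here for the bi-directional GRU outputs) are pooled into a single feature vector. All shapes and the attention vector are hypothetical, not the team's actual parameters:

```python
# Attention pooling over sliding-window features (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
windows, features = 6, 8                       # hypothetical: 6 windows
H = rng.standard_normal((windows, features))   # per-window feature vectors
w = rng.standard_normal(features)              # learned attention vector (toy)

scores = H @ w                                 # one relevance score per window
weights = np.exp(scores - scores.max())        # numerically stable softmax
weights /= weights.sum()
pooled = weights @ H                           # weighted sum -> single vector

assert pooled.shape == (features,)
assert np.isclose(weights.sum(), 1.0)
```

The pooled vector would then feed the dense output layer; the softmax weights let the network emphasize the windows that carry the most diagnostic signal.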

Halla Sigurthorsdottir, Jérôme Van Zaen, Ricard Delgado-Gonzalo, Mathieu Lemay

2020-09-28

General General

Diagnosis of Rare Diseases: a scoping review of clinical decision support systems.

In Orphanet journal of rare diseases

BACKGROUND : Rare Diseases (RDs), which are defined as diseases affecting no more than 5 out of 10,000 people, are often severe, chronic and life-threatening. A major problem is the delay in diagnosing RDs. Clinical decision support systems (CDSSs) for RDs are software systems that support clinicians in diagnosing patients with RDs. Due to their clinical importance, we conducted a scoping review to determine which CDSSs are available to support the diagnosis of RD patients, whether these CDSSs are available for use by clinicians, and which functionalities and data are used to provide decision support.

METHODS : We searched PubMed for CDSSs in RDs published between December 16, 2008 and December 16, 2018. Only English articles, original peer reviewed journals and conference papers describing a clinical prototype or a routine use of CDSSs were included. For data charting, we used the data items "Objective and background of the publication/project", "System or project name", "Functionality", "Type of clinical data", "Rare Diseases covered", "Development status", "System availability", "Data entry and integration", "Last software update" and "Clinical usage".

RESULTS : The search identified 636 articles. After title and abstract screening, and after assessing the eligibility criteria in full-text screening, 22 articles describing 19 different CDSSs were identified. The CDSSs were classified into three types: "analysis or comparison of genetic and phenotypic data," "machine learning" and "information retrieval". Twelve of the nineteen CDSSs use phenotypic and genetic data, followed by clinical data, literature databases and patient questionnaires. Fourteen of the nineteen CDSSs are fully developed systems and therefore publicly available. Data can be entered or uploaded manually in six CDSSs, whereas for four CDSSs no information on data integration was available. Only seven CDSSs allow further ways of data integration. Thirteen CDSSs do not provide information about clinical usage.

CONCLUSIONS : Different CDSSs for various purposes are available, yet clinicians have to determine which is best for their patient. To enable more precise usage, future research has to focus on data integration, clinical usage and the updating of clinical knowledge in CDSSs for RDs. It remains to be seen which of these CDSSs will be used and maintained in the future.

Schaaf Jannik, Sedlmayr Martin, Schaefer Johanna, Storf Holger

2020-Sep-24

Clinical decision support systems, Computer-assisted diagnosis, Rare diseases

Public Health Public Health

Objective scoring of streetscape walkability related to leisure walking: Statistical modeling approach with semantic segmentation of Google Street View images.

In Health & place

Although the pedestrian-friendly qualities of streetscapes promote walking, quantitative understanding of streetscape functionality remains insufficient. This study proposed a novel automated method to assess streetscape walkability (SW) by applying semantic segmentation and statistical modeling to Google Street View images. Using the compositions of segmented streetscape elements, such as buildings and street trees, a regression-style model was built to predict SW as scored by a human-based auditing method. Among older adults living in Bunkyo Ward, Tokyo, the model-estimated SW score was associated with active leisure walking in women (OR = 3.783; 95% CI = 1.459 to 10.409) but not in men.
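As background on the reported association, a minimal sketch of how an odds ratio and its 95% CI follow from a logistic-regression coefficient and its standard error. The beta and SE below are hypothetical values chosen only to land near the reported OR, not the study's fitted parameters:

```python
# Odds ratio and Wald 95% CI from a logistic coefficient (toy values).
import math

beta, se = 1.33, 0.50                 # hypothetical coefficient and SE

or_point = math.exp(beta)             # odds ratio for a one-unit SW increase
ci = (math.exp(beta - 1.96 * se),     # lower 95% bound
      math.exp(beta + 1.96 * se))     # upper 95% bound
```

An OR above 1 with a CI excluding 1, as reported for the women in this study, indicates a statistically significant positive association between the SW score and leisure walking.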

Nagata Shohei, Nakaya Tomoki, Hanibuchi Tomoya, Amagasa Shiho, Kikuchi Hiroyuki, Inoue Shigeru

2020-Sep-22

Deep learning, Google street view, Neighborhood walkability, Semantic segmentation, Walking behavior

General General

High-content image generation for drug discovery using generative adversarial networks.

In Neural networks : the official journal of the International Neural Network Society

The immense amount of high-content image data generated in drug-discovery screening requires computationally driven automated analysis. The emergence of advanced machine learning algorithms, such as deep learning models, has transformed the interpretation and analysis of imaging data. However, deep learning methods generally require a large number of high-quality data samples, which can be limited during preclinical investigations. To address this issue, we propose a generative-modeling-based computational framework to synthesize images that can be used for phenotypic profiling of perturbations induced by drug compounds. We investigated the use of three variants of Generative Adversarial Network (GAN) in our framework, viz., a basic Vanilla GAN, Deep Convolutional GAN (DCGAN) and Progressive GAN (ProGAN), and found DCGAN to be the most efficient at generating realistic synthetic images. A pre-trained convolutional neural network (CNN) was used to extract features of both real and synthetic images, followed by a classification model trained on real and synthetic images. The quality of the synthesized images was evaluated by comparing their feature distributions with those of real images. The DCGAN-based framework was applied to high-content image data from a drug screen to synthesize high-quality cellular images, which were used to augment the real image data. The augmented dataset was shown to yield better classification performance than that obtained using only real images. We also demonstrated the application of the proposed method to the generation of bacterial images and computed feature distributions for bacterial images specific to different drug treatments. In summary, our results showed that the proposed DCGAN-based framework can be used to generate realistic synthetic high-content images, enabling the study of drug-induced effects on cells and bacteria.
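A simplified sketch of the kind of feature-distribution comparison described above. The paper's exact metric is not specified here, so this uses a basic mean/variance gap over CNN-feature columns as a stand-in, with synthetic stand-in feature matrices:

```python
# Comparing real vs. synthetic image-feature distributions (illustrative).
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(500, 16))    # CNN features, "real" images
synth = rng.normal(0.1, 1.0, size=(500, 16))   # CNN features, "synthetic"

def feature_gap(a, b):
    """Squared mean difference plus squared variance difference per feature."""
    mu_gap = np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)
    var_gap = np.sum((a.var(axis=0) - b.var(axis=0)) ** 2)
    return float(mu_gap + var_gap)

gap_cross = feature_gap(real, synth)   # > 0: distributions differ slightly
gap_self = feature_gap(real, real)     # 0: identical distributions
```

A small gap between real and synthetic features is the kind of evidence used to argue that a GAN's outputs are realistic enough to augment training data.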

Hussain Shaista, Anees Ayesha, Das Ankit, Nguyen Binh P, Marzuki Mardiana, Lin Shuping, Wright Graham, Singhal Amit

2020-Sep-20

Deep learning, Drug discovery, Generative modeling, High-content imaging

Surgery Surgery

Multi-modal infusion pump real-time monitoring technique for improvement in safety of intravenous-administration patients.

In Proceedings of the Institution of Mechanical Engineers. Part H, Journal of engineering in medicine

Intravenous (IV) medication administration is considered a high-risk process, because accidents during IV administration can lead to serious adverse effects that diminish the therapeutic effect or threaten the patient's life. In this study, we propose a multi-modal infusion pump (IP) monitoring technique that can detect, in real time, mismatches between the IP setting and the actual infusion state and between the IP setting and the doctor's prescription, using a thin membrane potentiometer and a convolutional-neural-network-based deep learning technique. During performance evaluation, the percentage errors between the reference infusion rate (IR) and the average estimated IR were in the range of 0.50-2.55%, while those between the average actual IR and the average estimated IR were in the range of 0.22-2.90%. In addition, the training, validation, and test accuracies of the implemented deep learning model after training were 98.3%, 97.7%, and 98.5%, respectively. The training and validation losses were 0.33 and 0.36, respectively. These experimental results indicate that the proposed technique could provide improved protection for IV-administration patients.

Hwang Young Jun, Kim Gun Ho, Sung Eui Suk, Nam Kyoung Won

2020-Sep-25

Infusion pump, convolutional neural network, monitoring, patient safety, real-time

General General

Hybrid tensor decomposition in neural network compression.

In Neural networks : the official journal of the International Neural Network Society

Deep neural networks (DNNs) have recently enabled impressive breakthroughs in various artificial intelligence (AI) applications thanks to their capability of learning high-level features from big data. However, the demand of DNNs for computational resources, especially storage, keeps growing because increasingly large models are required for more and more complicated applications. To address this problem, several tensor decomposition methods, including tensor-train (TT) and tensor-ring (TR), have been applied to compress DNNs with considerable effectiveness. In this work, we introduce the hierarchical Tucker (HT) format, a classical but rarely used tensor decomposition method, and investigate its capability in neural network compression. We convert weight matrices and convolutional kernels to both HT and TT formats for a comparative study, since TT is the most widely used decomposition method and a variant of HT. We further find, both theoretically and experimentally, that the HT format performs better at compressing weight matrices, whereas the TT format is more suited to compressing convolutional kernels. Based on this observation, we propose a hybrid tensor decomposition strategy that combines TT and HT to compress the convolutional and fully connected parts separately, attaining better accuracy than using the TT or HT format alone on convolutional neural networks (CNNs). Our work illuminates the prospects of hybrid tensor decomposition for neural network compression.
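As a concrete illustration of the tensor-train side of this comparison, here is a minimal TT-SVD sketch in numpy that factorizes a small weight matrix into TT cores via sequential truncated SVDs and counts the resulting parameters. The matrix size, mode shapes and ranks are illustrative choices, not the paper's:

```python
# TT-SVD factorization of a small "weight matrix" (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))      # toy fully-connected weight matrix
modes, rank = (4, 4, 4, 4), 4          # view the 16x16 matrix as a 4D tensor

def tt_svd(T, modes, max_rank):
    """Sequential truncated SVDs yielding TT cores of shape (r_prev, n, r)."""
    cores, r_prev = [], 1
    C = np.asarray(T).reshape(modes[0], -1)
    for k in range(len(modes) - 1):
        C = C.reshape(r_prev * modes[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, modes[k], r))
        C = S[:r, None] * Vt[:r]       # carry the remainder to the next mode
        r_prev = r
    cores.append(C.reshape(r_prev, modes[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

cores = tt_svd(W, modes, rank)
params = sum(c.size for c in cores)    # 160 parameters vs. 256 in W
```

Truncating the ranks (here to 4) trades reconstruction accuracy for storage, which is exactly the compression/accuracy trade-off the paper studies for TT versus HT formats.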

Wu Bijiao, Wang Dingheng, Zhao Guangshe, Deng Lei, Li Guoqi

2020-Sep-19

Balanced structure, Hierarchical Tucker, Hybrid tensor decomposition, Neural network compression, Tensor-train

General General

Real-time gun detection in CCTV: An open problem.

In Neural networks : the official journal of the International Neural Network Society

Object detectors have improved in recent years, achieving better results and faster inference times. However, small-object detection is still a problem without a definitive solution. Autonomous weapon detection on closed-circuit television (CCTV) has been studied recently and is extremely useful for security, counter-terrorism and risk mitigation. This article presents a new dataset obtained from real CCTV installed in a university, together with generated synthetic images. Faster R-CNN with a Feature Pyramid Network and a ResNet-50 backbone was trained on these data in two stages, resulting in a weapon detection model usable on quasi-real-time CCTV (90 ms of inference time with an NVIDIA GeForce GTX-1080Ti card) that improves the state of the art in weapon detection. An exhaustive experimental study of the detector on these datasets was performed, showing the impact of synthetic datasets on the training of weapon detection systems, as well as the main limitations such systems present today. The generated synthetic dataset and the real CCTV dataset are available to the whole research community.

Salazar González Jose L, Zaccaro Carlos, Álvarez-García Juan A, Soria Morillo Luis M, Sancho Caparrini Fernando

2020-Sep-17

Convolutional neural network, Data augmentation, Deep learning, Feature Pyramid Network, Synthetic data, Weapon detection

General General

Modeling adult skeletal stem cell response to laser-machined topographies through deep learning.

In Tissue & cell

The response of adult human bone marrow stromal stem cells to surface topographies generated through femtosecond laser machining can be predicted by a deep neural network. The network is capable of predicting cell response to a statistically significant level, including positioning predictions with a probability P < 0.001, and therefore can be used as a model to determine the minimum line separation required for cell alignment, with implications for tissue structure development and tissue engineering. The application of a deep neural network, as a model, reduces the amount of experimental cell culture required to develop an enhanced understanding of cell behavior to topographical cues and, critically, provides rapid prediction of the effects of novel surface structures on tissue fabrication and cell signaling.

Mackay Benita S, Praeger Matthew, Grant-Jacob James A, Kanczler Janos, Eason Robert W, Oreffo Richard O C, Mills Ben

2020-Sep-15

Deep learning, modeling technique, stem cell behavior, topographical cues

General General

An application of generalized matrix learning vector quantization in neuroimaging.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE : Neurodegenerative diseases like Parkinson's disease often take several years before they can be diagnosed reliably based on clinical grounds. Imaging techniques such as MRI are used to detect anatomical (structural) pathological changes. However, these kinds of changes are usually seen only late in the development. The measurement of functional brain activity by means of [18F]fluorodeoxyglucose positron emission tomography (FDG-PET) can provide useful information, but its interpretation is more difficult. The scaled sub-profile model principal component analysis (SSM/PCA) was shown to provide more useful information than other statistical techniques. Our objective is to improve the performance further by combining SSM/PCA and prototype-based generalized matrix learning vector quantization (GMLVQ).

METHODS : We apply a combination of SSM/PCA and GMLVQ as a classifier. To demonstrate the combination's validity, we analyze FDG-PET data of Parkinson's disease (PD) patients collected at three different neuroimaging centers in Europe. We determine the diagnostic performance by performing ten-times-repeated ten-fold cross-validation. Additionally, discriminant visualizations of the data are included. The prototypes and relevance matrix of GMLVQ are transformed back to the original voxel space by exploiting the linearity of SSM/PCA. The resulting prototypes and relevance profiles were then assessed by three neurologists.

RESULTS : One important finding is that discriminative visualization can help to identify disease-related properties as well as differences due to center-specific factors. Secondly, the neurologists assessed the interpretability of the method and confirmed that the prototypes are similar to known activity profiles of PD patients.

CONCLUSION : We have shown that the presented combination of SSM/PCA and GMLVQ provides a useful means to assess and better understand characteristic differences in FDG-PET data from PD patients and healthy controls (HCs). Based on the assessments by medical experts and the results of our computational analysis, we conclude that the first steps towards a diagnostic support system have been taken successfully.
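A toy sketch of the GMLVQ classification rule underlying the method: distances are measured as d(x, w) = (x - w)^T Omega^T Omega (x - w), so the learned relevance matrix Lambda = Omega^T Omega weights feature combinations, and a sample is assigned the class of its nearest prototype. All values below are toy numbers, not SSM/PCA components from FDG-PET data:

```python
# GMLVQ adaptive-distance nearest-prototype classification (toy sketch).
import numpy as np

Omega = np.array([[1.0, 0.0],
                  [0.5, 0.5]])                  # learned linear map (toy)
prototypes = {"PD": np.array([1.0, 0.0]),       # one prototype per class
              "HC": np.array([0.0, 1.0])}

def gmlvq_distance(x, w, Omega):
    """Squared distance in the learned metric Lambda = Omega^T Omega."""
    diff = Omega @ (x - w)
    return float(diff @ diff)

def classify(x):
    """Assign the class of the nearest prototype under the learned metric."""
    return min(prototypes, key=lambda c: gmlvq_distance(x, prototypes[c], Omega))

label = classify(np.array([0.9, 0.1]))
```

In training, both the prototypes and Omega are adapted by gradient descent; the linearity noted in the abstract is what lets the learned prototypes and relevances be mapped back to voxel space for interpretation.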

van Veen Rick, Gurvits Vita, Kogan Rosalie V, Meles Sanne K, de Vries Gert-Jan, Renken Remco J, Rodriguez-Oroz Maria C, Rodriguez-Rojas Rafael, Arnaldi Dario, Raffa Stefano, de Jong Bauke M, Leenders Klaus L, Biehl Michael

2020-Aug-22

Parkinson’s disease (PD), Scaled sub-profile scaling model principal component analysis (SSM/PCA), [(18)F]Fluorodeoxyglucose positron emission tomography (FDG-PET), generalized matrix learning vector quantization (GMLVQ)

General General

Data-driven ICU management: Using Big Data and algorithms to improve outcomes.

In Journal of critical care ; h5-index 48.0

The digitalization of the Intensive Care Unit (ICU) has led to an increasing amount of clinical data being collected at the bedside. The term "Big Data" refers to the analysis of these datasets, which collect enormous amounts of data of different origins and formats. Complexity and variety define the value of Big Data. In fact, the retrospective analysis of these datasets makes it possible to generate new knowledge, with consequent potential improvements in clinical practice. Despite the promising start of Big Data analysis in medical research, reflected in a rising number of peer-reviewed articles, very few applications have reached ICU clinical practice. A concerted effort should now be made to validate the knowledge extracted from clinical Big Data and implement it in the clinic. In this article, we provide an introduction to Big Data in the ICU, from data collection and data analysis to the main successful examples of prognostic, predictive and classification models based on ICU data. In addition, we focus on the main challenges these models face in reaching the bedside and effectively improving ICU care.

Carra Giorgia, Salluh Jorge I F, da Silva Ramos Fernando José, Meyfroidt Geert

2020-Sep-09

Big data, Data mining, Intensive care unit, Machine learning, Predictive modeling

Surgery Surgery

Safety and Accuracy of Robot-Assisted Placement of Pedicle Screws Compared to Conventional Free-Hand Technique: A Systematic Review and Meta-Analysis.

In The spine journal : official journal of the North American Spine Society

BACKGROUND CONTEXT : The introduction and integration of robot technology into modern spine surgery provides surgeons with millimeter accuracy for pedicle screw placement. Coupled with computer-based navigation platforms, robot-assisted spine surgery utilizes augmented reality to potentially improve the safety profile of instrumentation.

PURPOSE : In this study, the authors seek to determine the safety and efficacy of robotic-assisted pedicle screw placement compared to conventional free-hand (FH) technique.

STUDY DESIGN/SETTING : We conducted a systematic review of the electronic databases using different MeSH terms from 1980 to 2020.

OUTCOME MEASURES : The present study measures pedicle screw accuracy, complication rates, proximal-facet joint violation, intra-operative radiation time, radiation dosage, and length of surgery.

RESULTS : A total of 1,525 patients (7,379 pedicle screws) from 19 studies were included, with 777 patients (51.0%; 3,684 pedicle screws) in the robotic-assisted group. Perfect pedicle screw accuracy, categorized as Gertzbein-Robbins Grade A, was significantly superior with robotic-assisted surgery compared to the FH technique (OR: 1.68, 95%CI: 1.20-2.35; p=0.003). Similarly, clinically acceptable pedicle screw accuracy (Grade A+B) was significantly higher with robotic-assisted surgery versus the FH technique (OR: 1.54, 95%CI: 1.01-2.37; p=0.05). Furthermore, complications and proximal-facet joint violations were 69% (OR: 0.31, 95%CI: 0.20-0.48; p<0.00001) and 92% (OR: 0.08, 95%CI: 0.03-0.20; p<0.00001) less likely, respectively, with robotic-assisted surgery versus the FH group. Robotic-assisted pedicle screw implantation significantly reduced intra-operative radiation time (MD: -5.30, 95%CI: -6.83 to -3.76; p<0.00001) and radiation dosage (MD: -3.70, 95%CI: -4.80 to -2.60; p<0.00001) compared to the conventional FH group. However, the length of surgery was significantly higher with robotic-assisted surgery (MD: 22.70, 95%CI: 6.57-38.83; p=0.006) compared to the FH group.

CONCLUSION : This meta-analysis corroborates the accuracy of robot-assisted pedicle screw placement.
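For readers unfamiliar with how summary estimates like the ORs and MDs above are obtained, here is a minimal fixed-effect inverse-variance pooling sketch over hypothetical per-study values (not the meta-analysis data):

```python
# Fixed-effect inverse-variance pooling of log odds ratios (toy values).
import math

studies = [(0.40, 0.20), (0.55, 0.30), (0.47, 0.25)]   # (log OR, SE) per study

weights = [1 / se ** 2 for _, se in studies]           # precision weights
pooled_log_or = sum(w * lo for (lo, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
      math.exp(pooled_log_or + 1.96 * pooled_se))
```

More precise studies (smaller SEs) dominate the summary; random-effects models, often used when studies are heterogeneous, additionally inflate the weights' denominator with a between-study variance term.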

Fatima Nida, Massaad Elie, Hadzipasic Muhamed, Shankar Ganesh M, Shin John H

2020-Sep-22

artificial intelligence, augmented reality, efficacy, pedicle screw, robotics, safety, spine fusion

General General

The use of drones and a machine-learning model for recognition of simulated drowning victims-A feasibility study.

In Resuscitation ; h5-index 66.0

BACKGROUND : Submersion time is a strong predictor of death in drowning; already 10 minutes after submersion, survival is poor. Traditional search efforts are time-consuming and demand a large number of rescuers and resources. We aim to investigate the feasibility and effectiveness of using drones combined with an online machine learning (ML) model for automated recognition of simulated drowning victims.

METHODS : This feasibility study used photos taken by a drone hovering at 40 m altitude over an estimated 3,000 m2 surf area in which individuals simulated drowning. Photos from two ocean beaches in the south of Sweden were used to (a) train an online ML model and (b) test the model's recognition of a drowning victim.

RESULTS : The model was tested for recognition on n = 100 photos with one victim and n = 100 photos with no victims. In drone photos containing one victim (n = 100), the ML model's sensitivity for drowning-victim recognition was 91% (95%CI 84.9%-96.2%), with a median probability score of 66% (IQR 52-71) that the finding was human. In photos with no victim (n = 100), the ML model's specificity was 90% (95%CI 83.9%-95.6%). False positives were present in 17.5% of all n = 200 photos but could all be ruled out manually as false objects.

CONCLUSIONS : The use of a drone and an ML model was feasible and showed satisfactory effectiveness in identifying a submerged static human simulating drowning in open water under favorable environmental conditions. The ML algorithm and methodology should be further optimized, then tested and validated in a real-life clinical study.
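A small sketch of how proportions like the reported sensitivity and specificity are computed from counts. The paper's exact interval method is not stated, so a normal (Wald) approximation is used here, which gives similar but not identical bounds to those reported:

```python
# Sensitivity/specificity with normal-approximation 95% CIs from counts.
import math

def proportion_ci(k, n, z=1.96):
    """Point estimate and Wald 95% CI for a binomial proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

sens, sens_lo, sens_hi = proportion_ci(91, 100)   # 91 of 100 victims found
spec, spec_lo, spec_hi = proportion_ci(90, 100)   # 90 of 100 empty photos
```

For proportions near 0 or 1 or small n, a Wilson or exact (Clopper-Pearson) interval is usually preferred over this simple approximation.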

Claesson A, Schierbeck S, Hollenberg J, Forsberg S, Nordberg P, Ringh M, Olausson M, Jansson A, Nord A

2020-Sep-22

Drone, Drowning, Machine-learning, OHCA

General General

Next-generation metabolic engineering approaches towards development of plant cell suspension cultures as specialized metabolite producing biofactories.

In Biotechnology advances

Plant cell suspension culture (PCSC) has emerged as a viable technology to produce plant specialized metabolites (PSM). While Taxol® and ginsenoside are two examples of successfully commercialized PCSC-derived PSM, widespread utilization of the PCSC platform has yet to be realized primarily due to a lack of understanding of the molecular genetics of PSM biosynthesis. Recent advances in computational, molecular and synthetic biology tools provide the opportunity to rapidly characterize and harness the specialized metabolic potential of plants. Here, we discuss the prospects of integrating computational modeling, artificial intelligence, and precision genome editing (CRISPR/Cas and its variants) toolboxes to discover the genetic regulators of PSM. We also explore how synthetic biology can be applied to develop metabolically optimized PSM-producing native and heterologous PCSC systems. Taken together, this review provides an interdisciplinary approach to realize and link the potential of next-generation computational and molecular tools to convert PCSC into commercially viable PSM-producing biofactories.

Arya Sagar S, Rookes James, Cahill David, Lenka Sangram K

2020-Sep-22

Artificial gene cluster, Computational modeling, Metabolic engineering, Plant cell suspension culture, Plant gene clusters, Plant specialized metabolites

Oncology Oncology

Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy.

In Radiotherapy and oncology : journal of the European Society for Therapeutic Radiology and Oncology

Background and Purpose: To enable accurate magnetic resonance imaging (MRI)-based dose calculations, synthetic computed tomography (sCT) images need to be generated. We aim to assess the feasibility of dose calculations from MRI acquired with a heterogeneous set of imaging protocols for paediatric patients affected by brain tumours. Materials and Methods: Sixty paediatric patients undergoing brain radiotherapy were included. MR imaging protocols varied among patients, and data heterogeneity was maintained in the train/validation/test sets. Three 2D conditional generative adversarial networks (cGANs) were trained to generate sCT from T1-weighted MRI, considering the three orthogonal planes and their combination (multi-plane sCT). For each patient, the median and standard deviation (σ) of the three views were calculated, yielding a combined sCT and a proxy uncertainty map, respectively. The sCTs were evaluated against the planning CT in terms of image similarity and accuracy for photon and proton dose calculations. Results: A mean absolute error of 61±14 HU (mean±1σ) was obtained in the intersection of the body contours between CT and sCT. The combined multi-plane sCTs performed better than sCTs from any single plane. Uncertainty maps highlighted that multi-plane sCTs differed at the body contours and air cavities. A dose difference of -0.1±0.3% and 0.1±0.4% was obtained on the D>90% of the prescribed dose, with mean γ2%,2mm pass-rates of 99.5±0.8% and 99.2±1.1% for photon and proton planning, respectively. Conclusion: Accurate MR-based dose calculation using a combination of three orthogonal planes for sCT generation is feasible for paediatric brain cancer patients, even when training on a heterogeneous dataset.
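The multi-plane combination described (voxelwise median as the combined sCT, voxelwise σ as the uncertainty proxy) can be sketched with synthetic stand-in volumes; shapes and HU values below are toy choices, not patient data:

```python
# Combining per-plane sCT predictions into a median sCT + uncertainty map.
import numpy as np

rng = np.random.default_rng(2)
shape = (8, 8, 8)                               # toy volume (HU values)
planes = np.stack([rng.normal(40, 10, shape)    # one sCT per orthogonal plane
                   for _ in range(3)])          # axial / coronal / sagittal

combined_sct = np.median(planes, axis=0)        # multi-plane sCT
uncertainty = np.std(planes, axis=0)            # high where the views disagree
```

The voxelwise median is robust to a single plane's outlier prediction, and the σ map flags regions (body contours, air cavities in the paper's findings) where the three views disagree and the sCT should be trusted less.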

Maspero Matteo, Bentvelzen Laura G, Savenije Mark Hf, Guerreiro Filipa, Seravalli Enrica, Janssens Geert O, van den Berg Cornelis At, Philippens Marielle Ep

2020-Sep-22

Artificial intelligence, Brain tumors, Deep learning, Image-to-image translation, Machine learning, Pediatric oncology, Radiotherapy, Synthetic CT

General General

Real-Time Assembly of Coordination Patterns in Human Infants.

In Current biology : CB

Flexibility and generativity are fundamental aspects of functional behavior that begin in infancy and improve with experience. How do infants learn to tailor their real-time solutions to variations in local conditions? On a nativist view, the developmental process begins with innate prescribed solutions, and experience elaborates on those solutions to suit variations in the body and the environment. On an emergentist view, infants begin by generating a variety of strategies indiscriminately, and experience teaches them to select solutions tailored to the current relations between their body and the environment. To disentangle these accounts, we observed coordination patterns in 11-month-old pre-walking infants with a range of cruising (moving sideways in an upright posture while holding onto a support) and crawling experience as they cruised over variable distances between two handrails they held for support. We identified infants' coordination patterns using a novel combination of computer-vision, machine-learning, and time-series analyses. As predicted by the emergentist view, the least experienced infants generated multiple coordination patterns inconsistently regardless of body size and handrail distance, whereas the most experienced infants tailored their coordination patterns to body-environment relations and switched solutions only when necessary. Moreover, the beneficial effects of experience were specific to cruising and not crawling, although both skills involve anti-phase coordination among the four limbs. Thus, findings support an emergentist view and suggest that everyday experience with the target skill may promote "learning to learn," where infants learn to assemble the appropriate solution for new problems on the fly.

Ossmy Ori, Adolph Karen E

2020-Sep-22

artificial intelligence, behavioral flexibility, computer vision, cruising, infants, limb coordination, locomotion, machine learning, motor development, problem solving

General General

Targeting thermoTRP ion channels: in silico preclinical approaches and opportunities.

In Expert opinion on therapeutic targets

INTRODUCTION : A myriad of cellular pathophysiological responses are mediated by polymodal ion channels, such as thermoTRP channels, that respond to chemical and physical stimuli. Intriguingly, these channels are pivotal therapeutic targets with limited clinical pharmacology. In silico methods offer an unprecedented opportunity to discover new lead compounds targeting thermoTRP channels with improved pharmacological activity and therapeutic index.

AREAS COVERED : This article reviews progress in thermoTRP channel pharmacology driven by (i) advances in solving their atomic structure using cryo-electron microscopy and (ii) progress in computational techniques, including homology modeling, molecular docking, virtual screening, molecular dynamics, ADME/Tox and artificial intelligence. Together, these have increased the number of lead compounds with clinical potential to treat a variety of pathologies. We used original and review articles from PubMed (1997-2020), as well as the clinicaltrials.gov database, containing the terms thermoTRP, artificial intelligence, docking, and molecular dynamics.

EXPERT OPINION : The atomic structure of thermoTRP channels along with computational methods constitute a realistic first line strategy for designing drug candidates with improved pharmacology and clinical translation. In silico approaches can also help predict potential side-effects that can limit clinical development of drug candidates. Together, they should provide drug candidates with upgraded therapeutic properties.

Fernández-Ballester Gregorio, Fernández-Carvajal Asia, Ferrer-Montiel Antonio

2020-Sep-24

ADME, artificial intelligence, docking, ion channel, molecular dynamics, thermoTRP channels, virtual screening

General General

Next-Generation Analytics for Omics Data.

In Cancer cell ; h5-index 124.0

The increasing volume of omics data presents a daunting informatics challenge. DrBioRight, a natural language-oriented and artificial intelligence-driven analytics platform, enables the broad research community to perform analyses in an intuitive, efficient, transparent, and collaborative way. The emerging next-generation analytics will maximize the utility of omics data and lead to a new paradigm for biomedical research.

Li Jun, Chen Hu, Wang Yumeng, Chen Mei-Ju May, Liang Han

2020-Sep-23

General General

Artificial Intelligence-based detection of lymph node metastases by PET/CT predicts prostate cancer-specific survival.

In Clinical physiology and functional imaging

INTRODUCTION : Lymph node metastases are a key prognostic factor in prostate cancer (PCa), but detecting lymph node lesions from PET/CT images is a subjective process resulting in inter-reader variability. Artificial intelligence (AI)-based methods can provide an objective image analysis. We aimed to develop and validate an AI-based tool for the detection of lymph node lesions.

METHODS : A group of 399 patients with biopsy-proven PCa who had undergone 18F-choline PET/CT for staging prior to treatment was used to train (n=319) and test (n=80) the AI-based tool. The tool consisted of convolutional neural networks using complete PET/CT scans as inputs. In the test set, the AI-based lymph node detections were compared to those of two independent readers. The association with PCa-specific survival was investigated.

RESULTS : The AI-based tool detected more lymph node lesions than Reader B (98 vs 87/117; p=0.045) using Reader A as reference. AI-based tool and Reader A showed similar performance (90 vs 87/111; p=0.63) using Reader B as reference. The number of lymph node lesions detected by the AI-based tool, PSA, and curative treatment were significantly associated with PCa-specific survival.

CONCLUSION : This study shows the feasibility of using an AI-based tool for automated and objective interpretation of PET/CT images that can provide assessments of lymph node lesions comparable with that of experienced readers, and prognostic information in PCa patients.

Borrelli Pablo, Larsson Måns, Ulén Johannes, Enqvist Olof, Trägårdh Elin, Hvid Poulsen Mads, Mortensen Mike Allan, Kjölhede Henrik, Høilund-Carlsen Poul Flemming, Edenbrandt Lars

2020-Sep-25

Artificial intelligence, Fluorocholine, Lymph node metastases, PCa, PET

General General

Spage2vec: Unsupervised representation of localized spatial gene expression signatures.

In The FEBS journal

Investigations of the spatial cellular composition of tissue architectures revealed by multiplexed in situ RNA detection often rely on inaccurate cell segmentation or prior biological knowledge from complementary single-cell sequencing experiments. Here we present spage2vec, an unsupervised, segmentation-free approach for decrypting the spatial transcriptomic heterogeneity of complex tissues at subcellular resolution. Spage2vec represents the spatial transcriptomic landscape of tissue samples as a graph and leverages a powerful machine learning graph representation technique to create a lower-dimensional representation of local spatial gene expression. We apply spage2vec to mouse brain data from three different in situ transcriptomic assays and to a spatial gene expression dataset consisting of hundreds of individual cells. We show that the learned representations encode meaningful biological spatial information of recurring localized gene expression signatures involved in cellular and subcellular processes.

Partel Gabriele, Wählby Carolina

2020-Sep-25

RNA profiling, Spatial transcriptomics, gene expression, graph representation learning, tissue analysis

General General

Neighbourhoods in the Yeast Regulatory Network in Different Physiological States.

In Bioinformatics (Oxford, England)

MOTIVATION : The gene expression regulatory network in yeast controls the selective implementation of the information contained in the genome sequence. We seek to understand how, in different physiological states, the network reconfigures itself to produce a different proteome.

RESULTS : This article analyses this reconfiguration, focusing on changes in the local structure of the network. In particular, we define, extract and compare the 1-neighbourhoods of each transcription factor, where a 1-neighbourhood of a node in a network is the minimal subgraph of the network containing all nodes connected to the central node by an edge. We report the similarities and differences in the topologies and connectivities of these neighbourhoods in five physiological states for which data are available: cell cycle, DNA damage, stress response, diauxic shift, and sporulation. Based on our analysis, it seems apt to regard the components of the regulatory network as 'software', and the responses to changes in state, 'reprogramming'.

Lesk Arthur M, Konagurthu Arun S

2020-Sep-25

General General

BERTMeSH: Deep Contextual Representation Learning for Large-scale High-performance MeSH Indexing with Full Text.

In Bioinformatics (Oxford, England)

MOTIVATION : With the rapid increase in biomedical articles, large-scale automatic Medical Subject Headings (MeSH) indexing has become increasingly important. FullMeSH, the only method for large-scale MeSH indexing with full text, suffers from three major drawbacks: FullMeSH 1) uses Learning To Rank (LTR), which is time-consuming, 2) can capture only some pre-defined sections of the full text, and 3) ignores the whole MEDLINE database.

RESULTS : We propose a computationally lighter, full-text and deep learning-based MeSH indexing method, BERTMeSH, which is flexible with respect to section organization in full text. BERTMeSH combines two technologies: 1) the state-of-the-art pre-trained deep contextual representation BERT (Bidirectional Encoder Representations from Transformers), which enables BERTMeSH to capture the deep semantics of full text, and 2) a transfer learning strategy that uses both full text from PubMed Central (PMC) and titles and abstracts (without full text) from MEDLINE, to take advantage of both. In our experiments, BERTMeSH was pre-trained on 3 million MEDLINE citations and trained on approximately 1.5 million full-text articles from PMC. BERTMeSH outperformed various cutting-edge baselines. For example, on 20K test articles from PMC, BERTMeSH achieved a micro F-measure of 69.2%, 6.3% higher than FullMeSH, with the difference being statistically significant. Moreover, predicting the 20K test articles took BERTMeSH 5 minutes, compared with more than 10 hours for FullMeSH, demonstrating the computational efficiency of BERTMeSH.

SUPPLEMENTARY INFORMATION : Supplementary data are available at Bioinformatics online.

You Ronghui, Liu Yuxuan, Mamitsuka Hiroshi, Zhu Shanfeng

2020-Sep-25

General General

Programmable cross-ribosome-binding sites to fine-tune the dynamic range of transcription factor-based biosensor.

In Nucleic acids research ; h5-index 217.0

Currently, predictive translation tuning of regulatory elements to the desired output of transcription factor (TF)-based biosensors remains a challenge. The gene expression of a biosensor system must exhibit appropriate translation intensity, which is controlled by the ribosome-binding site (RBS), to achieve fine-tuning of its dynamic range (i.e. the fold change in gene expression between the presence and absence of inducer) by adjusting the translation level of the TF and reporter. However, existing TF-based biosensors generally suffer from unpredictable dynamic ranges. Here, we elucidated the connections and partial mechanisms between RBS, translation level, protein folding and dynamic range, and present a design platform that predictably tunes the dynamic range of biosensors based on deep learning of large datasets of cross-RBSs (cRBSs). To this end, a library containing 7053 designed cRBSs was divided into five sub-libraries through fluorescence-activated cell sorting to establish a classification model based on a convolutional neural network. Finally, the present work provides a powerful platform to enable predictable translation tuning of the RBS to the desired dynamic range of biosensors.

Ding Nana, Yuan Zhenqi, Zhang Xiaojuan, Chen Jing, Zhou Shenghu, Deng Yu

2020-Sep-25

General General

PBSIM2: a simulator for long read sequencers with a novel generative model of quality scores.

In Bioinformatics (Oxford, England)

MOTIVATION : Recent advances in high-throughput long-read sequencers, such as PacBio and Oxford Nanopore sequencers, produce longer reads with more errors than short-read sequencers. In addition to the high error rates of reads, non-uniformity of errors leads to difficulties in various downstream analyses using long reads. Many useful simulators, which characterize long read error patterns and simulate them, have been developed. However, there is still room for improvement in the simulation of the non-uniformity of errors.

RESULTS : To capture the characteristics of errors in reads from long-read sequencers, we introduce a generative model for quality scores, in which a hidden Markov model with a recent model selection method, called factorized information criteria, is utilized. We evaluated our simulator from various perspectives, showing that it successfully simulates reads that are consistent with real reads.
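
As a rough illustration of the generative idea (not PBSIM2's actual model, whose structure is selected with factorized information criteria), a two-state hidden Markov model can emit per-base quality scores from state-dependent distributions:

```python
import random

def sample_quality_scores(n, transition, emission, seed=0):
    """Sample n per-base quality scores from a 2-state HMM, e.g. a
    'high-quality' and a 'low-quality' region state, so that errors
    cluster non-uniformly along the read. Illustrative sketch only;
    state/score values below are invented."""
    rng = random.Random(seed)
    state = 0
    scores = []
    for _ in range(n):
        # emit a quality score from the current hidden state
        r, acc = rng.random(), 0.0
        for q, p in emission[state]:
            acc += p
            if r < acc:
                scores.append(q)
                break
        else:
            scores.append(emission[state][-1][0])
        # move to the next hidden state
        state = 0 if rng.random() < transition[state][0] else 1
    return scores
```

Because state transitions are sticky, runs of low quality scores appear in bursts, mimicking the non-uniformity of long-read errors the abstract emphasises.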

AVAILABILITY : The source codes of PBSIM2 are freely available from https://github.com/yukiteruono/pbsim2.

SUPPLEMENTARY INFORMATION : Supplementary data are available at Bioinformatics online.

Ono Yukiteru, Asai Kiyoshi, Hamada Michiaki

2020-Sep-25

General General

The path to international medals: A supervised machine learning approach to explore the impact of coach-led sport-specific and non-specific practice.

In PloS one ; h5-index 176.0

Research investigating the nature and scope of developmental participation patterns leading to international senior-level success has mainly been exploratory to date. One criticism of earlier research was its typical multiple testing of many individual participation variables using bivariate, linear analyses. Here, we applied state-of-the-art supervised machine learning to investigate potential non-linear and multivariate effects of coach-led practice in the athlete's respective main sport and in other sports on the achievement of international medals. Participants were matched pairs (sport, sex, age) of adult international medallists and non-medallists (n = 166). Comparison of several non-ensemble and tree-based ensemble binary classification algorithms identified "eXtreme gradient boosting" as the best-performing algorithm for our classification problem. The model showed fair discrimination power between the international medallists and non-medallists. The results indicate that coach-led other-sports practice until age 14 years was the most important feature. Furthermore, both main-sport and other-sports practice were non-linearly related to international success: the amount of main-sport practice displayed a parabolic pattern, while the amount of other-sports practice displayed a saturation pattern. The findings question excess involvement in specialised coach-led main-sport practice at an early age and call for childhood/adolescent engagement in coach-led practice in various sports. In data analyses, combining traditional statistics with advanced supervised machine learning may improve both tests of the robustness of findings and the discovery of new patterns among multivariate relationships of variables, and thereby of new hypotheses.
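
The study's "eXtreme gradient boosting" pipeline is not reproduced in the abstract; a minimal scikit-learn gradient-boosting sketch on synthetic data (the features and label rule below are invented to mimic the reported parabolic and saturation patterns) illustrates the general approach:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400
# hypothetical features: hours of main-sport and other-sports practice
main_sport = rng.uniform(0, 20, n)
other_sports = rng.uniform(0, 10, n)
# synthetic label: parabolic effect of main-sport practice plus a
# saturating effect of other-sports practice, with label noise
score = -((main_sport - 10) ** 2) / 25 + np.minimum(other_sports, 5)
y = (score + rng.normal(0, 0.5, n) > 2).astype(int)
X = np.column_stack([main_sport, other_sports])

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
accuracy = clf.score(X, y)
```

Tree ensembles of this kind capture such non-monotonic effects without them being specified in advance, which is why they suit the exploratory setting the authors describe.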

Barth Michael, Güllich Arne, Raschner Christian, Emrich Eike

2020

General General

Analyzing the effects of free water modeling by deep learning on diffusion MRI structural connectivity estimates in glioma patients.

In PloS one ; h5-index 176.0

Diffusion-weighted MRI makes it possible to quantify subvoxel brain microstructure and to reconstruct white matter fiber trajectories with which structural connectomes can be created. However, at the border between cerebrospinal fluid and white matter, or in the presence of edema, the obtained MRI signal originates from both the cerebrospinal fluid as well as from the white matter partial volume. Diffusion tractography can be strongly influenced by these free water partial volume effects. Thus, including a free water model can improve diffusion tractography in glioma patients. Here, we analyze how including a free water model influences structural connectivity estimates in healthy subjects as well as in brain tumor patients. During a clinical study, we acquired diffusion MRI data of 35 glioma patients and 28 age- and sex-matched controls, on which we applied an open-source deep learning based free water model. We performed deterministic as well as probabilistic tractography before and after free water modeling, and utilized the tractograms to create structural connectomes. Finally, we performed a quantitative analysis of the connectivity matrices. In our experiments, the number of tracked diffusion streamlines increased by 13% for high grade glioma patients, 9.25% for low grade glioma, and 7.65% for healthy controls. Intra-subject similarity of hemispheres increased significantly for the patient as well as for the control group, with larger effects observed in the patient group. Furthermore, inter-subject differences in connectivity between brain tumor patients and healthy subjects were reduced when including free water modeling. Our results indicate that free water modeling increases the similarity of connectivity matrices in brain tumor patients, while the observed effects are less pronounced in healthy subjects. 
As the similarity between brain tumor patients and healthy controls also increased, connectivity changes in brain tumor patients may have been overestimated in studies that did not perform free water modeling.

Weninger Leon, Na Chuh-Hyoun, Jütten Kerstin, Merhof Dorit

2020

General General

Public discourse and sentiment during the COVID 19 pandemic: Using Latent Dirichlet Allocation for topic modeling on Twitter.

In PloS one ; h5-index 176.0

The study aims to understand Twitter users' discourse and psychological reactions to COVID-19. We use machine learning techniques to analyze about 1.9 million tweets (written in English) related to coronavirus collected from January 23 to March 7, 2020. A total of 11 salient topics are identified and then categorized into ten themes, including "updates about confirmed cases," "COVID-19 related death," "cases outside China (worldwide)," "COVID-19 outbreak in South Korea," "early signs of the outbreak in New York," "Diamond Princess cruise," "economic impact," "preventive measures," "authorities," and "supply chain." The results do not reveal treatment- and symptom-related messages as prevalent topics on Twitter. Sentiment analysis shows that fear of the unknown nature of the coronavirus is dominant in all topics. Implications and limitations of the study are also discussed.
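
A hedged sketch of the Latent Dirichlet Allocation workflow, using scikit-learn with a toy corpus standing in for the 1.9 million tweets (texts invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy tweets echoing the reported themes (hypothetical examples)
tweets = [
    "new confirmed cases reported today",
    "death toll rises as outbreak spreads",
    "supply chain disruption hits economy",
    "economic impact of the outbreak grows",
    "preventive measures and hand washing",
    "authorities announce new measures",
]

# bag-of-words counts, then fit LDA with a chosen number of topics
X = CountVectorizer(stop_words="english").fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-tweet topic distribution
```

At scale, the number of components is tuned (e.g. via perplexity or coherence), and the top-weighted words per topic are inspected and manually grouped into themes, as the authors did.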

Xue Jia, Chen Junxiang, Chen Chen, Zheng Chengda, Li Sijia, Zhu Tingshao

2020

Radiology Radiology

Large-Scale Functional Brain Network Architecture Changes Associated With Trauma-Related Dissociation.

In The American journal of psychiatry

OBJECTIVE : Dissociative experiences commonly occur in response to trauma, and while their presence strongly affects treatment approaches in posttraumatic spectrum disorders, their etiology remains poorly understood and their phenomenology incompletely characterized. Methods to reliably assess the severity of dissociation symptoms, without relying solely on self-report, would have tremendous clinical utility. Brain-based measures have the potential to augment symptom reports, although it remains unclear whether brain-based measures of dissociation are sufficiently sensitive and robust to enable individual-level estimation of dissociation severity based on brain function. The authors sought to test the robustness and sensitivity of a brain-based measure of dissociation severity.

METHODS : An intrinsic network connectivity analysis was applied to functional MRI scans obtained from 65 women with histories of childhood abuse and current posttraumatic stress disorder (PTSD). The authors tested for continuous measures of trauma-related dissociation using the Multidimensional Inventory of Dissociation. Connectivity estimates were derived with a novel machine learning technique using individually defined homologous functional regions for each participant.

RESULTS : The models achieved moderate ability to estimate dissociation, after controlling for childhood trauma and PTSD severity. Connections that contributed the most to the estimation mainly involved the default mode and frontoparietal control networks. By contrast, all models performed at chance levels when using a conventional group-based network parcellation.

CONCLUSIONS : Trauma-related dissociative symptoms, distinct from PTSD and childhood trauma, can be estimated on the basis of network connectivity. Furthermore, between-network brain connectivity may provide an unbiased estimate of symptom severity, paving the way for more objective, clinically useful biomarkers of dissociation and advancing our understanding of its neural mechanisms.

Lebois Lauren A M, Li Meiling, Baker Justin T, Wolff Jonathan D, Wang Danhong, Lambros Ashley M, Grinspoon Elizabeth, Winternitz Sherry, Ren Jianxun, Gönenç Atilla, Gruber Staci A, Ressler Kerry J, Liu Hesheng, Kaufman Milissa L

2020-Sep-25

Radiology Radiology

Sociodemographic data and APOE-ε4 augmentation for MRI-based detection of amnestic mild cognitive impairment using deep learning systems.

In PloS one ; h5-index 176.0

Detection and diagnosis of early and subclinical stages of Alzheimer's Disease (AD) play an essential role in the implementation of intervention and prevention strategies. Neuroimaging techniques predominantly provide insight into anatomic structure changes associated with AD. Deep learning methods have been extensively applied towards creating and evaluating models capable of differentiating between cognitively unimpaired individuals, patients with Mild Cognitive Impairment (MCI), and patients with AD dementia. Several published approaches apply information fusion techniques, which provide ways of combining several input sources in the medical domain and contribute knowledge of broader and enriched quality. The aim of this paper is to fuse sociodemographic data such as age, marital status, education and gender, and genetic data (presence of an apolipoprotein E (APOE)-ε4 allele) with Magnetic Resonance Imaging (MRI) scans. This enables enriched multi-modal features that adequately represent the MRI scan visually and are adopted for creating and modeling classification systems capable of detecting amnestic MCI (aMCI). To fully utilize the potential of deep convolutional neural networks, two extra color layers denoting contrast-intensified and blurred image adaptations are virtually augmented to each MRI scan, completing the Red-Green-Blue (RGB) color channels. Deep convolutional activation features (DeCAF) are extracted from the average pooling layer of the deep learning system Inception_v3. These features from the fused MRI scans are used as the visual representation for the Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) classification model. The proposed approach is evaluated on a sub-study containing 120 participants (aMCI = 61 and cognitively unimpaired = 59) of the Heinz Nixdorf Recall (HNR) Study with a baseline model accuracy of 76%.
Further evaluation was conducted on the ADNI Phase 1 dataset with 624 participants (aMCI = 397 and cognitively unimpaired = 227) with a baseline model accuracy of 66.27%. Experimental results show that the proposed approach achieves 90% accuracy and 0.90 F1-Score at classification of aMCI vs. cognitively unimpaired participants on the HNR Study dataset, and 77% accuracy and 0.83 F1-Score on the ADNI dataset.

Pelka Obioma, Friedrich Christoph M, Nensa Felix, Mönninghoff Christoph, Bloch Louise, Jöckel Karl-Heinz, Schramm Sara, Sanchez Hoffmann Sarah, Winkler Angela, Weimar Christian, Jokisch Martha

2020

General General

A greedy classifier optimization strategy to assess ion channel blocking activity and pro-arrhythmia in hiPSC-cardiomyocytes.

In PLoS computational biology

Novel studies conducting cardiac safety assessment using human-induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) are promising but might be limited by their specificity and predictivity. It is often challenging to correctly classify ion channel blockers or to sufficiently predict the risk for Torsade de Pointes (TdP). In this study, we developed a method combining in vitro and in silico experiments to improve machine learning approaches in delivering fast and reliable prediction of drug-induced ion-channel blockade and proarrhythmic behaviour. The algorithm is based on the construction of a dictionary and a greedy optimization, leading to the definition of optimal classifiers. Finally, we present a numerical tool that can accurately predict compound-induced pro-arrhythmic risk and involvement of sodium, calcium and potassium channels, based on hiPSC-CM field potential data.
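
The greedy optimization over a dictionary of candidate classifiers can be sketched generically; the scoring function and dictionary below are placeholders, not the study's actual construction:

```python
def greedy_select(dictionary, evaluate, k):
    """Greedily pick up to k classifiers from `dictionary` such that
    the combined subset maximises `evaluate(subset)`, e.g. accuracy on
    hiPSC-CM field-potential features. Stops early when no remaining
    candidate improves the score. Illustrative sketch only."""
    selected = []
    best = evaluate(selected)
    while len(selected) < k:
        gains = [(evaluate(selected + [c]), c)
                 for c in dictionary if c not in selected]
        if not gains:
            break
        score, cand = max(gains)
        if score <= best:  # no candidate improves the score: stop
            break
        selected.append(cand)
        best = score
    return selected, best
```

Greedy selection of this kind is a standard way to build a compact, near-optimal classifier from a large dictionary without an exhaustive combinatorial search.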

Raphel Fabien, De Korte Tessa, Lombardi Damiano, Braam Stefan, Gerbeau Jean-Frederic

2020-Sep-25

General General

Convolutional neural network applied for nanoparticle classification using coherent scatterometry data.

In Applied optics

The analysis of 2D scattering maps generated in scatterometry experiments for the detection and classification of nanoparticles on surfaces is a cumbersome and slow process. Recently, deep learning techniques have been adopted to avoid manual feature extraction and classification in many research and application areas, including optics. In the present work, we collected experimental datasets of nanoparticles deposited on wafers for four different classes of polystyrene particles (with diameters of 40, 50, 60, and 80 nm) plus a background (no particles) class. We trained a convolutional neural network, including optimization of its architecture, and achieved 95% accurate results. We compared the performance of this network to an existing method based on line-by-line search and thresholding, demonstrating up to twofold enhanced performance in particle classification. The network is extended by a supervisor layer that can reject up to 80% of fooling images at the cost of rejecting only 10% of the original data. The developed Python and PyTorch code, as well as the dataset, are available online.
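
A common way to implement such rejection (an assumption for illustration, not necessarily the paper's exact supervisor-layer mechanism) is to threshold the classifier's top softmax probability:

```python
def supervisor_reject(probabilities, threshold=0.9):
    """Return (label, confidence), or (None, confidence) when the top
    softmax probability falls below `threshold`. Sketch of rejecting
    fooling images at the cost of also rejecting some genuine
    low-confidence scattering maps; the threshold value is arbitrary."""
    best = max(probabilities)
    label = probabilities.index(best)
    return (None, best) if best < threshold else (label, best)
```

Sweeping the threshold trades off the fraction of fooling images rejected against the fraction of genuine data lost, matching the 80%-vs-10% trade-off reported above.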

Kolenov D, Davidse D, Le Cam J, Pereira S F

2020-Sep-20

Radiology Radiology

The Evaluation of Radiomic Models in Distinguishing Pilocytic Astrocytoma From Cystic Oligodendroglioma With Multiparametric MRI.

In Journal of computer assisted tomography

PURPOSE : To assess whether a machine-learning model based on texture features extracted from multiparametric magnetic resonance imaging could yield an accurate diagnosis in differentiating pilocytic astrocytoma from cystic oligodendrogliomas.

MATERIALS AND METHODS : The preoperative images from multiple sequences were used for tumor segmentation. Radiomic features were extracted and selected for the machine-learning models. Models based on semantic features and selected radiomic features were built from the training data set, and the performance of each model was evaluated by the receiver operating characteristic curve and accuracy on an isolated testing data set.

RESULTS : Among the different sequences, the best classifier was built from radiomic features extracted from enhanced T1WI. The best model in our study turned out to be the gradient boosted trees classifier, with an area under the curve value of 0.99.

CONCLUSION : Our study showed that gradient boosted trees based on texture features extracted from enhanced T1WI could become an additional tool for improving diagnostic accuracy to differentiate pilocytic astrocytoma from cystic oligodendroglioma.

Zhao Yajing, Lu Yiping, Li Xuanxuan, Zheng Yingyan, Yin Bo

2020-Sep-23

General General

Truncation compensation and metallic dental implant artefact reduction in PET/MRI attenuation correction using deep learning-based object completion.

In Physics in medicine and biology

The susceptibility of MRI to metallic objects leads to void MR signal and missing information around metallic implants. In addition, body truncation occurs in MR imaging for large patients who exceed the transaxial field-of-view of the scanner. Body truncation and metal artefacts translate to incomplete MRI-derived attenuation correction (AC) maps, consequently resulting in large quantification errors in PET imaging. In this work, we propose a deep learning-based approach to predict the missing information/regions in MR images affected by metallic artefacts and/or body truncation aiming at reducing quantification errors in PET/MRI. Twenty-five whole-body (WB) co-registered PET, CT, and MR images were used for training and evaluation of the object completion approach. CT-based attenuation corrected PET images were considered as reference for the quantitative evaluation of the proposed approach. Its performance was compared to the 3-class segmentation-based AC approach (containing background air, soft-tissue and lung) obtained from MR images. The metal-induced artefacts affected 8.1 ± 1.8% of the volume of the head region when using the 3-class AC maps. This error reduced to 0.9 ± 0.5% after application of object completion on MR images. Consequently, quantification errors in PET images reduced from -57.5 ± 11% to -18.5 ± 5% in the head region after metal artefact correction. The percentage of the torso volume affected by body truncation in the 3-class AC maps reduced from 9.8 ± 1.9% to 0.6 ± 0.3% after truncation compensation. PET quantification errors in the affected regions were also reduced from -45.5 ± 10% to -9.5 ± 3% after truncation compensation. The quantitative results demonstrated promising performance of the proposed approach towards the completion of MR images corrupted by metal artefacts and/or body truncation in the context of WB PET/MR imaging.

Arabi Hossein, Zaidi Habib

2020-Sep-25

General General

predCOVID-19: A Systematic Study of Clinical Predictive Models for Coronavirus Disease 2019.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : Coronavirus Disease 2019 (COVID-19) is a rapidly emerging respiratory disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Due to the rapid human-to-human transmission of SARS-CoV-2, many healthcare systems are at risk of exceeding their healthcare capacities, in particular in terms of SARS-CoV-2 tests, hospital and intensive care unit (ICU) beds and mechanical ventilators. Predictive algorithms could potentially ease the strain on healthcare systems by identifying those who are most likely to receive a positive SARS-CoV-2 test, be hospitalised or admitted to the ICU.

OBJECTIVE : To develop, study and evaluate clinical predictive models that estimate, using machine learning and based on routinely collected clinical data, which patients are likely to receive a positive SARS-CoV-2 test, require hospitalisation or intensive care.

METHODS : Using a systematic approach to model development and optimisation, we train and compare various types of machine learning models, including logistic regression, neural networks, support vector machines, random forests, and gradient boosting. To evaluate the developed models, we perform a retrospective evaluation on demographic, clinical and blood analysis data from a cohort of 5644 patients. In addition, we determine which clinical features are predictive to what degree for each of the aforementioned clinical tasks using causal explanations.

RESULTS : Our experimental results indicate that our predictive models identify (i) patients that test positive for SARS-CoV-2 a priori at a sensitivity of 75% (95% confidence interval [CI]: 67%, 81%) and a specificity of 49% (95% CI: 46%, 51%), (ii) SARS-CoV-2 positive patients that require hospitalisation with 0.92 area under the receiver operator characteristic curve [AUC] (95% CI: 0.81, 0.98), and (iii) SARS-CoV-2 positive patients that require critical care with 0.98 AUC (95% CI: 0.95, 1.00).
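The reported operating point for test positivity (sensitivity 75%, specificity 49%) follows directly from the confusion-matrix definitions. A minimal sketch with toy counts chosen to reproduce those rates; the counts themselves are illustrative, not the study's:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a SARS-CoV-2 test-positivity classifier.
sensitivity, specificity = sens_spec(tp=75, fn=25, tn=49, fp=51)
```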

CONCLUSIONS : Our results indicate that predictive models trained on routinely collected clinical data could be used to predict clinical pathways for COVID-19, and therefore help inform care and prioritise resources.

Schwab Patrick, Schütte DuMont August, Dietz Benedikt, Bauer Stefan

2020-Sep-14

General General

Learning useful representations of DNA sequences from ChIP-seq datasets for exploring transcription factor binding specificities.

In IEEE/ACM transactions on computational biology and bioinformatics

Deep learning has been successfully applied to surprisingly different domains. Researchers and practitioners are employing trained deep learning models to enrich our knowledge. Transcription factors (TFs) are essential for regulating gene expression in all organisms by binding to specific DNA sequences. Here, we designed a deep learning model named SemanticCS (Semantic ChIP-seq) to predict TF binding specificities. We trained our learning model on an ensemble of ChIP-seq datasets (Multi-TF-cell) to learn useful intermediate features across multiple TFs and cells. To interpret these feature vectors, visualization analysis was used. Our results indicate that these learned representations can be used to train shallow machines for other tasks. Using diverse experimental data and evaluation metrics, we show that SemanticCS outperforms other popular methods. In addition, from experimental data, SemanticCS can help to identify the substitutions that cause regulatory abnormalities and to evaluate the effect of substitutions on the binding affinity for the RXR transcription factor. The online server for SemanticCS is freely available at http://qianglab.scst.suda.edu.cn/semanticCS/.

Quan Lijun, Sun Xiaoyu, Wu Jian, Mei Jie, Huang Liqun, He Ruji, Nie Liangpeng, Chen Yu, Lyu Qiang

2020-Sep-25

General General

Online Alternate Generator against Adversarial Attacks.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

The field of computer vision has witnessed phenomenal progress in recent years, partially due to the development of deep convolutional neural networks. However, deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noise to real images. Some existing defense methods require re-training the attacked target networks and augmenting the training set via known adversarial attacks, which is inefficient and may fail against unknown attack types. To overcome these issues, we propose a portable defense method, the online alternate generator, which does not need to access or modify the parameters of the target networks. The proposed method works by synthesizing another image from scratch online for a given input image, instead of removing or destroying adversarial noise. To prevent attackers from exploiting pretrained parameters, we alternately update the generator and the synthesized image at the inference stage. Experimental results demonstrate that the proposed defense outperforms a series of state-of-the-art defense models against gray-box adversarial attacks.

Li Haofeng, Zeng Yirui, Li Guanbin, Lin Liang, Yu Yizhou

2020-Sep-25

Surgery Surgery

Tongue tumor detection in hyperspectral images using deep learning semantic segmentation.

In IEEE transactions on bio-medical engineering

OBJECTIVE : The utilization of hyperspectral imaging (HSI) for real-time tumor segmentation during surgery has recently received much attention, but it remains a very challenging task.

METHODS : In this work, we propose semantic segmentation methods and compare them with other relevant deep learning algorithms for tongue tumor segmentation. To the best of our knowledge, this is the first work using deep learning semantic segmentation for tumor detection in HSI data using channel selection and accounting for more spatial tissue context and global comparison between the prediction map and the annotation per sample.

RESULTS AND CONCLUSION : On a clinical data set with tongue squamous cell carcinoma, our best method obtains very strong results, with an average Dice coefficient and area under the ROC curve of 0.891 +/- 0.053 and 0.924 +/- 0.036, respectively, at the original spatial image size. The results show that very good performance can be achieved even with a limited amount of data. We demonstrate that important information regarding the tumor decision is encoded in various channels, but some channel selection and filtering is beneficial over using the full spectra. Moreover, we use both the visible (VIS) and near-infrared (NIR) spectra, rather than only the commonly used VIS spectrum; although the VIS spectrum is generally of higher significance, we demonstrate that the NIR spectrum is crucial for capturing the tumor in some cases.
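The average Dice coefficient reported above measures overlap between the predicted and annotated tumor masks. A minimal sketch of the metric on toy binary masks:

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

# Toy 2x3 predicted and annotated masks with 2 overlapping pixels.
pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, target)  # 2*2 / (3+3) = 0.666...
```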

SIGNIFICANCE : HSI technology, augmented with accurate deep learning algorithms, has huge potential to become a promising alternative to digital pathology or a supportive tool for doctors during real-time surgery.

Trajanovski Stojan, Shan Caifeng, Weijtmans Pim J C, Brouwer de Koning Susan G, Ruers Theo J M

2020-Sep-25

Radiology Radiology

One Algorithm May Not Fit All: How Selection Bias Affects Machine Learning Performance.

In Radiographics : a review publication of the Radiological Society of North America, Inc

Machine learning (ML) algorithms have demonstrated high diagnostic accuracy in identifying and categorizing disease on radiologic images. Despite the results of initial research studies that report ML algorithm diagnostic accuracy similar to or exceeding that of radiologists, the results are less impressive when the algorithms are installed at new hospitals and are presented with new images. This phenomenon is potentially the result of selection bias in the data that were used to develop the ML algorithm. Selection bias has long been described by clinical epidemiologists as a key consideration when designing a clinical research study, but this concept has largely been unaddressed in the medical imaging ML literature. The authors discuss the importance of selection bias and its relevance to ML algorithm development to prepare the radiologist to critically evaluate ML literature for potential selection bias and understand how it might affect the applicability of ML algorithms in real clinical environments. ©RSNA, 2020.

Yu Alice C, Eng John

2020-Sep-25

Public Health Public Health

An Introduction to Probabilistic Record Linkage with a Focus on Linkage Processing for WTC Registries.

In International journal of environmental research and public health ; h5-index 73.0

Since its post-World War II inception, the science of record linkage has grown exponentially and is used across industrial, governmental, and academic agencies. The academic fields that rely on record linkage are diverse, ranging from history to public health to demography. In this paper, we introduce the different types of data linkage and give a historical context to their development. We then introduce the three types of underlying models for probabilistic record linkage: Fellegi-Sunter-based methods, machine learning methods, and Bayesian methods. Practical considerations, such as data standardization and privacy concerns, are then discussed. Finally, recommendations are given for organizations developing or maintaining record linkage programs, with an emphasis on organizations measuring long-term complications of disasters, such as 9/11.
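The Fellegi-Sunter approach mentioned above scores candidate record pairs by summing per-field log-odds weights derived from match (m) and non-match (u) probabilities, then comparing the total to a threshold. A minimal sketch; the field names, m/u values, and threshold are all hypothetical:

```python
import math

def fs_weight(agree, m, u):
    """Fellegi-Sunter log2 agreement/disagreement weight for one field."""
    return math.log2(m / u) if agree else math.log2((1 - m) / (1 - u))

# Hypothetical m/u probabilities for three comparison fields of one record pair.
fields = [
    # (field, fields agree?, m = P(agree | match), u = P(agree | non-match))
    ("surname",    True,  0.95, 0.01),
    ("birth_year", True,  0.90, 0.05),
    ("zip_code",   False, 0.85, 0.10),
]
total = sum(fs_weight(agree, m, u) for _, agree, m, u in fields)
is_match = total > 3.0  # classification threshold chosen for illustration
```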

Asher Jana, Resnick Dean, Brite Jennifer, Brackbill Robert, Cone James

2020-Sep-22

9/11 health, data matching, disaster epidemiology, epidemiology, interagency cooperation, probabilistic record linkage, record linkage

General General

Public discourse and sentiment during the COVID 19 pandemic: Using Latent Dirichlet Allocation for topic modeling on Twitter.

In PloS one ; h5-index 176.0

The study aims to understand Twitter users' discourse and psychological reactions to COVID-19. We use machine learning techniques to analyze about 1.9 million tweets (written in English) related to coronavirus collected from January 23 to March 7, 2020. A total of 11 salient topics are identified and then categorized into ten themes, including "updates about confirmed cases," "COVID-19 related death," "cases outside China (worldwide)," "COVID-19 outbreak in South Korea," "early signs of the outbreak in New York," "Diamond Princess cruise," "economic impact," "preventive measures," "authorities," and "supply chain." Results do not reveal treatment- or symptom-related messages as prevalent topics on Twitter. Sentiment analysis shows that fear of the unknown nature of the coronavirus is dominant in all topics. Implications and limitations of the study are also discussed.
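Topic extraction of the kind described can be sketched with an off-the-shelf Latent Dirichlet Allocation implementation. The toy corpus, topic count, and minimal preprocessing below are illustrative stand-ins for the study's 1.9 million tweets and its actual pipeline:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny toy corpus standing in for the COVID-19 tweet collection.
tweets = [
    "confirmed cases update china outbreak",
    "new confirmed cases reported today",
    "economic impact supply chain disruption",
    "supply chain economic losses worsen",
    "cruise ship quarantine passengers",
    "cruise passengers quarantined on ship",
]

# Bag-of-words counts, then LDA with a fixed seed for reproducibility.
X = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-tweet topic distribution, rows sum to 1
```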

Xue Jia, Chen Junxiang, Chen Chen, Zheng Chengda, Li Sijia, Zhu Tingshao

2020

General General

Continuous Positive Airway Pressure Therapy in Obstructive Sleep Apnea Patients with Erectile Dysfunction-A meta-analysis.

In The clinical respiratory journal

BACKGROUND : Erectile dysfunction (ED) with obstructive sleep apnea (OSA) is a relatively common issue for men. A number of clinical studies have demonstrated that continuous positive airway pressure (CPAP) therapy may effectively alleviate ED symptoms in patients with OSA.

METHODS : The PubMed, MEDLINE, EMBASE and Cochrane Library databases were searched for relevant studies up to September 2, 2019. International Index of Erectile Function 5 (IIEF-5) scores from patients before and after CPAP therapy were collected according to strict inclusion and exclusion criteria. RevMan 5.3 software was used for the meta-analysis.

RESULTS : A total of seven publications comprising 206 ED patients with OSA were included in the study. IIEF-5 scores of ED patients with OSA improved significantly after CPAP treatment [weighted mean difference (WMD) = 1.14, 95% confidence interval (CI) = 0.89-1.38, z = 9.09, p < 0.0001]. The high heterogeneity was mainly due to Zhang's study, whose patients had a much higher AHI than those in the other studies; after removing Zhang's study, only moderate heterogeneity (I2 = 54%, P = 0.05) remained.
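The pooled WMD and heterogeneity statistics reported here come from standard inverse-variance meta-analysis formulas (Cochran's Q and I²). A minimal fixed-effect sketch; the per-study effects and standard errors are hypothetical, not the seven included studies:

```python
import math

def pool_fixed(effects, ses):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2 (%)."""
    w = [1.0 / se**2 for se in ses]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

# Hypothetical per-study mean IIEF-5 differences and their standard errors.
pooled, ci, i2 = pool_fixed([1.0, 1.3, 0.9], [0.2, 0.25, 0.3])
```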

CONCLUSION : The results suggest that continuous positive airway pressure therapy improves erectile dysfunction in patients with obstructive sleep apnea. However, further evidence is needed due to the insufficient number of included patients and the high heterogeneity.

Yang Zhihao, Du Guodong, Ma Lei, Lv Yunhui, Zhao Yang, Yau Tung On

2020-Sep-25

Continuous positive airway pressure, Meta-analysis, erectile dysfunction, obstructive sleep apnea

General General

Reducing HbA1c in Type 2 Diabetes Using Digital Twin Technology-Enabled Precision Nutrition: A Retrospective Analysis.

In Diabetes therapy : research, treatment and education of diabetes and related disorders

INTRODUCTION : The objective of this study was to examine changes in hemoglobin A1c (HbA1c), anti-diabetic medication use, insulin resistance, and other ambulatory glucose profile metrics between baseline and after 90 days of participation in the Twin Precision Nutrition (TPN) Program enabled by Digital Twin Technology.

METHODS : This was a retrospective study of patients with type 2 diabetes who participated in the TPN Program and had at least 3 months of follow-up. The TPN machine learning algorithm used daily continuous glucose monitor (CGM) and food intake data to provide guidelines that would enable individual patients to avoid foods that cause blood glucose spikes and to replace them with foods that do not produce spikes. Physicians with access to daily CGM data titrated medications and monitored patient conditions.

RESULTS : Of the 89 patients who initially enrolled in the TPN Program, 64 remained in the program and adhered to it for at least 90 days; all analyses were performed on these 64 patients. At the 90-day follow-up assessment, mean (± standard deviation) HbA1c had decreased by 1.9 percentage points, from 8.8 ± 2.2% at baseline to 6.9 ± 1.1%; mean weight had decreased from 79.0 ± 16.2 kg at baseline to 74.2 ± 14.7 kg; and mean fasting blood glucose had fallen from 151.2 ± 45.0 mg/dl at baseline to 129.1 ± 36.7 mg/dl. Homeostatic model assessment of insulin resistance (HOMA-IR) had decreased by 56.9%, from 7.4 ± 3.5 to 3.2 ± 2.8. At the 90-day follow-up assessment, all 12 patients who were on insulin had stopped taking this medication; 38 of the 56 patients taking metformin had stopped metformin; 26 of the 28 patients on dipeptidyl peptidase-4 (DPP-4) inhibitors had discontinued them; all 13 patients on alpha-glucosidase inhibitors had discontinued these inhibitors; all 34 patients on sulfonylureas were able to stop taking these medications; two patients stopped taking pioglitazone; all ten patients on sodium-glucose cotransporter-2 (SGLT2) inhibitors stopped taking them; and one patient stopped taking glucagon-like peptide-1 analogues.
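The HOMA-IR values above follow the standard formula, fasting glucose (mg/dl) × fasting insulin (µU/ml) / 405. A minimal sketch; the insulin values are hypothetical, chosen only so the toy numbers land near the reported figures:

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR = fasting glucose (mg/dl) x fasting insulin (uU/ml) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Glucose values from the abstract; insulin values are illustrative.
baseline  = homa_ir(151.2, 19.8)   # ~7.4
follow_up = homa_ir(129.1, 10.0)   # ~3.2
reduction_pct = (baseline - follow_up) / baseline * 100.0  # ~56.9%
```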

CONCLUSION : The results provide evidence that daily precision nutrition guidance based on CGM, food intake data, and machine learning algorithms can benefit patients with type 2 diabetes. Adherence for 3 months to the TPN Program resulted in patients achieving a 1.9 percentage point decrease in HbA1c, a 6.1% drop in weight, a 56.9% reduction in HOMA-IR, a significant decline in glucose time below range, and, in most patients, the elimination of diabetes medication use.

Shamanna Paramesh, Saboo Banshi, Damodharan Suresh, Mohammed Jahangir, Mohamed Maluk, Poon Terrence, Kleinman Nathan, Thajudeen Mohamed

2020-Sep-25

Artificial intelligence, Continuous glucose monitoring, Diabetes medication elimination, Digital twin technology, HbA1c reduction, Precision nutrition, Type 2 diabetes

General General

Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images.

In Medical & biological engineering & computing ; h5-index 32.0

Measurement of anatomical structures from ultrasound images requires the expertise of experienced clinicians. Moreover, artificial factors make automatic measurement complicated. In this paper, we present a novel end-to-end deep learning network to automatically measure the fetal head circumference (HC), biparietal diameter (BPD), and occipitofrontal diameter (OFD) from 2D ultrasound images. Fully convolutional neural networks (FCNNs) have shown significant improvement in natural image segmentation. Therefore, to overcome the potential difficulties in automated segmentation, we present a novel FCNN and add a regression branch for predicting OFD and BPD in parallel. In the segmentation branch, a feature pyramid is built inside our network from low-level feature layers to handle the variety of fetal head appearances in ultrasound images, which differs from traditional feature pyramid construction methods. To select the most useful scale and reduce scale noise, an attention mechanism is used to filter the features. In the regression branch, for accurate estimation of OFD and BPD length, a new region of interest (ROI) pooling layer is proposed to extract the elliptic feature map. We also evaluate the performance of our method on a large dataset, HC18. Our experimental results show that our method achieves better performance than existing fetal head measurement methods. Graphical Abstract Deep Neural Network for Fetal Head Measurement.

Li Peixuan, Zhao Huaici, Liu Pengfei, Cao Feidao

2020-Sep-25

Feature pyramid, Fetal head measurement, Fully convolutional networks, ROI pooling, Ultrasound image segmentation

General General

Using Item Response Theory for Explainable Machine Learning in Predicting Mortality in the Intensive Care Unit: Case-Based Approach.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : Supervised machine learning (ML) is being featured in the health care literature, with study results frequently reported using metrics such as accuracy, sensitivity, specificity, recall, or F1 score. Although each metric provides a different perspective on performance, they remain overall measures for the whole sample, discounting the uniqueness of each case or patient. Intuitively, we know that all cases are not equal, but present evaluative approaches do not take case difficulty into account.

OBJECTIVE : A more case-based, comprehensive approach is warranted to assess supervised ML outcomes and forms the rationale for this study. This study aims to demonstrate how the item response theory (IRT) can be used to stratify the data based on how difficult each case is to classify, independent of the outcome measure of interest (eg, accuracy). This stratification allows the evaluation of ML classifiers to take the form of a distribution rather than a single scalar value.

METHODS : Two large, public intensive care unit data sets, Medical Information Mart for Intensive Care III and electronic intensive care unit, were used to showcase this method in predicting mortality. For each data set, a balanced sample (n=8078 and n=21,940, respectively) and an imbalanced sample (n=12,117 and n=32,910, respectively) were drawn. A 2-parameter logistic model was used to provide scores for each case. Several ML algorithms were used in the demonstration to classify cases based on their health-related features: logistic regression, linear discriminant analysis, K-nearest neighbors, decision tree, naive Bayes, and a neural network. Generalized linear mixed model analyses were used to assess the effects of case difficulty strata, ML algorithm, and the interaction between them in predicting accuracy.
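The 2-parameter logistic (2PL) IRT model used to score case difficulty gives the probability of a correct classification as a logistic function of ability θ, discrimination a, and difficulty b. A minimal sketch with illustrative parameter values:

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT model: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An "easy" case (low difficulty b) vs a "hard" one, same discrimination a.
easy = p_correct_2pl(theta=0.0, a=1.5, b=-2.0)
hard = p_correct_2pl(theta=0.0, a=1.5, b=2.0)
```

Stratifying cases by their estimated b then lets classifier accuracy be reported as a distribution across difficulty strata rather than a single scalar, as the study proposes.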

RESULTS : The results showed significant effects (P<.001) for case difficulty strata, ML algorithm, and their interaction in predicting accuracy, and illustrated that all classifiers performed better with easier-to-classify cases and that, overall, the neural network performed best. Significant interactions suggest that cases falling in the most difficult strata should be handled by logistic regression, linear discriminant analysis, decision tree, or neural network, but not by naive Bayes or K-nearest neighbors. Conventional metrics for ML classification are also reported for methodological comparison.

CONCLUSIONS : This demonstration shows that using the IRT is a viable method for understanding the data that are provided to ML algorithms, independent of outcome measures, and highlights how well classifiers differentiate cases of varying difficulty. This method explains which features are indicative of healthy states and why. It enables end users to tailor the classifier that is appropriate to the difficulty level of the patient for personalized medicine.

Kline Adrienne, Kline Theresa, Shakeri Hossein Abad Zahra, Lee Joon

2020-Sep-25

item response theory, machine learning, mortality, statistical model

General General

Predicting the Vulnerability of Women to Intimate Partner Violence in South Africa: Evidence from Tree-based Machine Learning Techniques.

In Journal of interpersonal violence

Intimate partner violence (IPV) is a pervasive social challenge with severe health and demographic consequences. Global statistics indicate that more than a third of women have experienced IPV at some point in their lives. In South Africa, IPV is considered a significant contributor to the country's broader problem with violence and a leading cause of femicide. Consequently, IPV has been the major focus of legislation and research across different disciplines. The present article aims to contribute to the growing scholarly literature by predicting factors that are associated with the risk of experiencing IPV. We used the 2016 South African Demographic and Health Survey dataset and restricted our analysis to 1,816 ever-married women who had complete information on the variables that were used to generate IPV. Prior research has mainly used regression analysis to identify correlates of IPV; however, while regression analysis can test a priori specified effects, it cannot capture unspecified inter-relationships across factors. To address this limitation, we opted for machine learning methods, which identify hidden and complex patterns and relationships in the data. Our results indicate that fear of the husband is the most critical factor in determining the experience of IPV. In other words, the risk of IPV in South Africa is associated more with the husband or partner's characteristics than the woman's. The models developed in this study can be used to develop interventions by different stakeholders such as social workers, policymakers, and other interested parties.
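Tree-based models expose the kind of factor ranking described above through feature importances. A minimal sketch on synthetic data, where a stand-in "fear_of_husband" feature is deliberately constructed to drive the toy outcome; all variable names and data here are hypothetical, not the survey's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical binary features; only the first one influences the toy outcome.
fear_of_husband = rng.integers(0, 2, n)
partner_drinks = rng.integers(0, 2, n)
woman_employed = rng.integers(0, 2, n)
X = np.column_stack([fear_of_husband, partner_drinks, woman_employed])
y = (fear_of_husband & (rng.random(n) < 0.8)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_  # the first feature should dominate
```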

Amusa Lateef B, Bengesai Annah V, Khan Hafiz T A

2020-Sep-25

South Africa, decision tree, intimate partner violence, machine learning

General General

Rab7D small GTPase is involved in phago-, trogocytosis and cytoskeletal reorganization in the enteric protozoan Entamoeba histolytica.

In Cellular microbiology

Rab small GTPases regulate membrane traffic between distinct cellular compartments of all eukaryotes in a tempo-spatially specific fashion. Rab small GTPases are also involved in the regulation of the cytoskeleton and signaling. Membrane traffic and cytoskeletal regulation play a pivotal role in the pathogenesis of Entamoeba histolytica, a protozoan parasite responsible for human amebiasis. E. histolytica is unique in that its genome encodes over one hundred Rab proteins, containing multiple isotypes of conserved members (e.g., Rab7) and Entamoeba-specific subgroups (e.g., RabA, B, and X). Among them, E. histolytica Rab7 is the most diversified group, consisting of nine isotypes. While it was previously demonstrated that EhRab7A and EhRab7B are involved in lysosome and phagosome biogenesis, the individual roles of the other Rab7 members and their coordination remain elusive. In this study, we characterized the third member of Rab7, Rab7D, to better understand the significance of the multiplicity of Rab7 isotypes in E. histolytica. Overexpression of EhRab7D caused a reduction in phagocytosis of erythrocytes, trogocytosis (meaning nibbling or chewing of a portion) of live mammalian cells, and phagosome acidification and maturation. Conversely, transcriptional gene silencing of the EhRab7D gene caused the opposite phenotypes in phago/trogocytosis and phagosome maturation. Furthermore, EhRab7D gene silencing caused a reduction in attachment to and motility on collagen-coated surfaces. Image analysis showed that EhRab7D was occasionally associated with lysosomes and prephagosomal vacuoles, but not with mature phagosomes and trogosomes. Finally, in silico prediction of the structural organization of EhRab7 isotypes identified unique amino acid changes on the effector-binding surface of EhRab7D. Taken together, our data suggest that EhRab7D plays coordinated counteracting roles: an inhibitory role in phago/trogocytosis and lyso/phago/trogosome biogenesis, and a stimulatory role in adherence and motility, presumably via interaction with unique effectors. Finally, we propose a model in which the three EhRab7 isotypes are sequentially involved in phago/trogocytosis.

Saito-Nakano Yumiko, Wahyuni Ratna, Nakada-Tsukui Kumiko, Tomii Kentaro, Nozaki Tomoyoshi

2020-Sep-25

Cytoskeleton, Entamoeba histolytica, Lysosome, Pathogenesis, Phagocytosis, Rab7D, Trogocytosis, Vesicular traffic

General General

Enhancing scientific discoveries in molecular biology with deep generative models.

In Molecular systems biology

Generative models provide a well-established statistical framework for evaluating uncertainty and deriving conclusions from large data sets especially in the presence of noise, sparsity, and bias. Initially developed for computer vision and natural language processing, these models have been shown to effectively summarize the complexity that underlies many types of data and enable a range of applications including supervised learning tasks, such as assigning labels to images; unsupervised learning tasks, such as dimensionality reduction; and out-of-sample generation, such as de novo image synthesis. With this early success, the power of generative models is now being increasingly leveraged in molecular biology, with applications ranging from designing new molecules with properties of interest to identifying deleterious mutations in our genomes and to dissecting transcriptional variability between single cells. In this review, we provide a brief overview of the technical notions behind generative models and their implementation with deep learning techniques. We then describe several different ways in which these models can be utilized in practice, using several recent applications in molecular biology as examples.

Lopez Romain, Gayoso Adam, Yosef Nir

2020-Sep

deep generative models, molecular biology, neural networks

Radiology Radiology

Squamous Cell Carcinoma and Lymphoma of the Oropharynx: Differentiation Using a Radiomics Approach.

In Yonsei medical journal

The purpose of this study was to evaluate the diagnostic performance of magnetic resonance (MR) radiomics-based machine learning algorithms in differentiating squamous cell carcinoma (SCC) from lymphoma in the oropharynx. MR images from 87 patients with oropharyngeal SCC (n=68) and lymphoma (n=19) were reviewed retrospectively. Tumors were semi-automatically segmented on contrast-enhanced T1-weighted images registered to T2-weighted images, and radiomic features (n=202) were extracted from contrast-enhanced T1- and T2-weighted images. The radiomics classifier was built using elastic-net regularized generalized linear model analyses with nested five-fold cross-validation. The diagnostic abilities of the radiomics classifier and visual assessment by two head and neck radiologists were evaluated using receiver operating characteristic (ROC) analyses for distinguishing SCC from lymphoma. Nineteen radiomics features were selected at least twice during the five-fold cross-validation. The mean area under the ROC curve (AUC) of the radiomics classifier was 0.750 [95% confidence interval (CI), 0.613-0.887], with a sensitivity of 84.2%, specificity of 60.3%, and an accuracy of 65.5%. Two human readers yielded AUCs of 0.613 (95% CI, 0.467-0.759) and 0.663 (95% CI, 0.531-0.795), respectively. The radiomics-based machine learning model can be useful for differentiating SCC from lymphoma of the oropharynx.
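An elastic-net regularized classifier with cross-validated AUC, as used for the radiomics model, can be sketched as follows. The synthetic features, penalty settings, and plain (non-nested) five-fold CV are simplifying assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n, p = 87, 202  # 87 patients, 202 radiomic features, as in the study
X = rng.normal(size=(n, p))
# Synthetic binary label (SCC vs lymphoma stand-in) driven by two features.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Standardize features, then fit an elastic-net penalized logistic model.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
mean_auc = auc_scores.mean()
```

Nested CV, as in the paper, would additionally tune `l1_ratio` and `C` inside each outer fold so the reported AUC is not biased by hyperparameter selection.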

Bae Sohi, Choi Yoon Seong, Sohn Beomseok, Ahn Sung Soo, Lee Seung Koo, Yang Jaemoon, Kim Jinna

2020-Oct

Radiomics, lymphoma, magnetic resonance imaging, oropharynx, squamous cell carcinoma

General General

Elective Caesarean Section and Bronchiolitis Hospitalization: A Retrospective Cohort Study.

In Pediatric allergy and immunology : official publication of the European Society of Pediatric Allergy and Immunology

BACKGROUND : We sought to evaluate whether elective Caesarean section is associated with subsequent hospitalization for bronchiolitis.

METHODS : This is a retrospective cohort study which used the electronic medical record database of Clalit Health Services, the largest healthcare fund in Israel, serving over 4.5 million members and over half of the total population. The primary outcome was bronchiolitis admission in the first two years of life. We performed logistic regression analyses to identify independent associations. We repeated the analysis using boosted decision tree machine learning techniques to confirm our findings.

RESULTS : There were 124,553 infants enrolled between 2008-2010, and 5,168 (4.1%) were hospitalized for bronchiolitis in the first two years of life. In logistic regression models stratified by season, elective Caesarean section birth was associated with 15% increased odds (95% CI: 1.02-1.30) for infants born in the fall, 28% increased odds (95% CI: 1.11-1.47) in the winter, 35% increased odds (95% CI: 1.12-1.62) in the spring, and 37% increased odds (95% CI: 1.18-1.60) in the summer. In the gradient-boosted decision tree analysis, the area under the curve for risk of bronchiolitis admission was 0.663 (95% CI: 0.652-0.674), with timing of birth as the most important feature.
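The seasonal odds ratios above are obtained by exponentiating logistic-regression coefficients and their confidence bounds. A minimal sketch, with a coefficient and standard error back-calculated to illustrate the reported winter estimate; both input values are hypothetical:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its SE to an OR with 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical winter-season coefficient reproducing OR ~1.28 (1.11-1.47).
or_, lo, hi = odds_ratio_ci(beta=math.log(1.28), se=0.0718)
```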

CONCLUSION : Elective Caesarean section, a potentially modifiable risk factor, is associated with increased odds of hospitalization for bronchiolitis in the first two years of life. These data should be considered when scheduling elective Caesarean sections especially for infants born in spring and summer months.

Douglas Lindsey C, Leventer-Roberts Maya, Levinkron Ohad, Wilson Karen M

2020-Sep-24

Bronchiolitis, Caesarean section, Risk Factors

General General

Genome-wide association study-based deep learning for survival prediction.

In Statistics in medicine

Informative and accurate survival prediction with individualized dynamic risk profiles over time is critical for personalized disease prevention and clinical management. Massive genetic data, such as SNPs from genome-wide association studies (GWAS), together with well-characterized time-to-event phenotypes, provide unprecedented opportunities for developing effective survival prediction models. Recent advances in deep learning have made extraordinary achievements in establishing powerful prediction models in the biomedical field. However, the applications of deep learning approaches in survival prediction are limited, especially in utilizing the wealth of GWAS data. Motivated by developing powerful prediction models for the progression of an eye disease, age-related macular degeneration (AMD), we develop and implement a multilayer deep neural network (DNN) survival model to effectively extract features and make accurate and interpretable predictions. Various simulation studies are performed to compare the prediction performance of the DNN survival model with several other machine learning-based survival models. Finally, using the GWAS data from two large-scale randomized clinical trials in AMD with over 7800 observations, we show that the DNN survival model not only outperforms several existing survival prediction models in terms of prediction accuracy (e.g., c-index = 0.76), but also successfully detects clinically meaningful risk subgroups by effectively learning the complex structures among genetic variants. Moreover, we obtain a subject-specific importance measure for each predictor from the DNN survival model, which provides valuable insights into personalized early prevention and clinical management for this disease.
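The c-index reported for the DNN survival model measures the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed event times. A minimal sketch (a simple O(n²) version that ignores ties in event time):

```python
def c_index(times, events, risk_scores):
    """Concordance index: among comparable pairs, the fraction where the
    higher-risk subject fails earlier (ties in risk score count 0.5)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an observed event
            # strictly before subject j's time (censored i is skipped).
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: events at t=2, 4, 8; one censored subject at t=6.
times  = [2, 4, 6, 8]
events = [1, 1, 0, 1]
risk   = [0.9, 0.7, 0.2, 0.4]
score = c_index(times, events, risk)  # perfectly concordant -> 1.0
```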

Sun Tao, Wei Yue, Chen Wei, Ding Ying

2020-Sep-24

AMD progression, GWAS, deep learning, predictor importance, survival prediction

General General

Positron Emission Tomography for Response Evaluation in Microenvironment-Targeted Anti-Cancer Therapy.

In Biomedicines

Therapeutic response is evaluated using the diameter of tumors and quantitative parameters of 2-[18F] fluoro-2-deoxy-d-glucose positron emission tomography (FDG-PET). Tumor response to molecular-targeted drugs and immune checkpoint inhibitors differs from that to conventional chemotherapy in terms of temporal metabolic alteration and morphological change after therapy. Cancer stem cells, immunologically competent cells, and the metabolism of cancer are considered targets of novel therapy. Accumulation of FDG reflects the glucose metabolism of cancer cells as well as of immune cells in the tumor microenvironment, which differs among patients according to individual immune function; however, FDG-PET can evaluate the viability of the tumor as a whole. On the other hand, specific imaging and cell tracking of cancer cells or immune cell subsets does not elucidate the tumor response within the complex interactions of the tumor microenvironment. Considering tumor heterogeneity and individual variation in therapeutic response, a radiomics approach with quantitative features of multimodal images and deep learning algorithms, with reference to pathologic and genetic data, has the potential to improve response assessment for emerging cancer therapy.

Oriuchi Noboru, Sugawara Shigeyasu, Shiga Tohru

2020-Sep-22

FDG-PET/CT, artificial intelligence, cancer stem cell, immunotherapy, radiomics, theranostics, therapeutic evaluation, tumor microenvironment

General General

predCOVID-19: A Systematic Study of Clinical Predictive Models for Coronavirus Disease 2019.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : Coronavirus Disease 2019 (COVID-19) is a rapidly emerging respiratory disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Due to the rapid human-to-human transmission of SARS-CoV-2, many healthcare systems are at risk of exceeding their healthcare capacities, in particular in terms of SARS-CoV-2 tests, hospital and intensive care unit (ICU) beds and mechanical ventilators. Predictive algorithms could potentially ease the strain on healthcare systems by identifying those who are most likely to receive a positive SARS-CoV-2 test, be hospitalised or admitted to the ICU.

OBJECTIVE : To develop, study and evaluate clinical predictive models that estimate, using machine learning and based on routinely collected clinical data, which patients are likely to receive a positive SARS-CoV-2 test, require hospitalisation or intensive care.

METHODS : Using a systematic approach to model development and optimisation, we train and compare various types of machine learning models, including logistic regression, neural networks, support vector machines, random forests, and gradient boosting. To evaluate the developed models, we perform a retrospective evaluation on demographic, clinical and blood analysis data from a cohort of 5644 patients. In addition, we determine which clinical features are predictive to what degree for each of the aforementioned clinical tasks using causal explanations.

RESULTS : Our experimental results indicate that our predictive models identify (i) patients that test positive for SARS-CoV-2 a priori at a sensitivity of 75% (95% confidence interval [CI]: 67%, 81%) and a specificity of 49% (95% CI: 46%, 51%), (ii) SARS-CoV-2 positive patients that require hospitalisation with 0.92 area under the receiver operating characteristic curve [AUC] (95% CI: 0.81, 0.98), and (iii) SARS-CoV-2 positive patients that require critical care with 0.98 AUC (95% CI: 0.95, 1.00).
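The sensitivity and specificity above are simple proportions of a confusion matrix, and the bracketed CIs are interval estimates around them. The abstract does not state how the intervals were computed; a normal-approximation (Wald) interval is one common choice, sketched below purely for illustration with hypothetical counts chosen to reproduce the 75%/49% point estimates (not the study's actual data):

```python
import math

def prop_ci(successes, n, z=1.96):
    """Point estimate and Wald 95% CI for a proportion, clipped to [0, 1].
    Illustrative only; the study's exact interval method is unstated."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), each with a CI."""
    return prop_ci(tp, tp + fn), prop_ci(tn, tn + fp)

# Hypothetical counts: 75/100 positives detected, 49/100 negatives ruled out.
sens, spec = sensitivity_specificity(75, 25, 49, 51)
```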

CONCLUSIONS : Our results indicate that predictive models trained on routinely collected clinical data could be used to predict clinical pathways for COVID-19, and therefore help inform care and prioritise resources.

Schwab Patrick, Schütte DuMont August, Dietz Benedikt, Bauer Stefan

2020-Sep-14

Dermatology Dermatology

Applications and Future Directions for Optical Coherence Tomography in Dermatology.

In The British journal of dermatology

Optical coherence tomography (OCT) is a non-invasive optical imaging method that can generate high-resolution en-face and cross-sectional images of the skin in vivo to a maximum depth of 2 mm. Whilst OCT holds considerable potential for non-invasive diagnosis and disease monitoring, it is poorly understood by many dermatologists. Here, we aim to equip the practicing dermatologist with an understanding of the principles of skin OCT and the potential clinical indications. We begin with an introduction to the technology and discuss the different modalities of OCT, including angiographic (dynamic) OCT, which can image cutaneous blood vessels at high resolution. Next, we review clinical applications. OCT has been most extensively investigated in the diagnosis of keratinocyte carcinomas, particularly basal cell carcinoma (BCC). To date, OCT has not proven sufficiently accurate for the robust diagnosis of malignant melanoma; however, the evaluation of abnormal vasculature with angiographic OCT is an area of active investigation. OCT, and in particular angiographic OCT, also shows promise in monitoring the response of inflammatory dermatoses, such as psoriasis and connective tissue diseases, to therapy. We additionally discuss a potential role for artificial intelligence in improving the accuracy of interpretation of OCT imaging data.

Wan B, Ganier C, Du-Harpur X, Harun N, Watt F M, Patalay R, Lynch M D

2020-Sep-24

oncology Oncology

Patient generated health data and electronic health record integration in oncologic surgery: A call for artificial intelligence and machine learning.

In Journal of surgical oncology ; h5-index 45.0

In this review, we aim to assess the current state of science in relation to the integration of patient-generated health data (PGHD) and patient-reported outcomes (PROs) into routine clinical care with a focus on surgical oncology populations. We will also describe the critical role of artificial intelligence and machine-learning methodology in the efficient translation of PGHD, PROs, and traditional outcome measures into meaningful patient care models.

Melstrom Laleh G, Rodin Andrei S, Rossi Lorenzo A, Fu Paul, Fong Yuman, Sun Virginia

2020-Sep-24

artificial intelligence, biometrics, machine learning in surgery, patient-reported outcomes, telehealth

General General

Correcting and Reweighting False Label Masks in Brain Tumor Segmentation.

In Medical physics ; h5-index 59.0

PURPOSE : Brain tumor segmentation has recently made important progress. However, the quality of manual labels plays an important role in performance; in practice, label quality can vary greatly, which can substantially mislead the learning process and decrease accuracy. A mechanism is therefore needed that combines label correction and sample reweighting to improve the effectiveness of brain tumor segmentation.

METHODS : We propose a novel sample reweighting and label refinement method, and introduce a novel 3D generative adversarial network (GAN) to combine these two models into a unified framework.

RESULTS : Extensive experiments on the BraTS19 dataset have demonstrated that our approach obtains competitive results when compared with other state-of-the-art approaches when handling the false labels in brain tumor segmentation.

CONCLUSIONS : The 3D GAN-based approach is an effective approach to handle false label masks by simultaneously applying label correction and sample reweighting. Our method is robust to variations in tumor shape and background clutter.

Cheng Guohua, Ji Hongli, He Linyang

2020-Sep-24

brain tumor segmentation, deep learning, generative adversarial network, volume segmentation

Surgery Surgery

Deep Learning Model for the Automated Detection and Histopathological Prediction of Meningioma.

In Neuroinformatics

The volumetric assessment and accurate grading of meningiomas before surgery are highly relevant for therapy planning and prognosis prediction. This study aimed to design a deep learning algorithm and evaluate its performance in detecting meningioma lesions and classifying their grade. In total, 5088 patients with histopathologically confirmed meningioma were retrospectively included. The pyramid scene parsing network (PSPNet) was trained to automatically detect and delineate the meningiomas. The results were compared to manual segmentations by evaluating the mean intersection over union (mIoU). The performance of grade classification was evaluated by accuracy. For the automated detection and segmentation of meningiomas, the mean pixel accuracy, tumor accuracy, background accuracy, and mIoU were 99.68%, 81.36%, 99.88% and 81.36% for all patients; 99.52%, 84.86%, 99.93% and 84.86% for grade I meningiomas; 99.57%, 80.11%, 99.92% and 80.12% for grade II meningiomas; and 99.75%, 78.40%, 99.99% and 78.40% for grade III meningiomas, respectively. For grade classification, the accuracy values of the training and test datasets were 99.93% and 81.52% for all patients; 99.98% and 98.51% for grade I meningiomas; 99.91% and 66.67% for grade II meningiomas; and 99.88% and 73.91% for grade III meningiomas, respectively. The automated detection, segmentation, and grade classification of meningiomas based on deep learning were accurate and reliable and may improve the monitoring and treatment of this frequently occurring tumor entity. Furthermore, the method could function as a useful tool for preassessment and preselection for radiologists, offering auxiliary information for clinical decision making in presurgical evaluation.

Zhang Hua, Mo Jiajie, Jiang Han, Li Zhuyun, Hu Wenhan, Zhang Chao, Wang Yao, Wang Xiu, Liu Chang, Zhao Baotian, Zhang Jianguo, Zhang Kai

2020-Sep-25

Deep learning, Delineation, Grade classification, Meningiomas, PSPNet

Radiology Radiology

"Renal emergencies: a comprehensive pictorial review with MR imaging".

In Emergency radiology

Superior soft-tissue contrast and the high sensitivity of magnetic resonance imaging (MRI) for detecting and characterizing disease may provide an expanded role in acute abdominal and pelvic imaging. Although MRI has traditionally seen limited use in acute care settings, beyond biliary obstruction and imaging during pregnancy, there are several conditions in which MRI can go beyond other modalities in diagnosis, characterization, and the provision of functional and prognostic information. In this manuscript, we highlight how MRI can help in the further assessment and characterization of acute renal emergencies. Currently, renal emergencies are predominantly evaluated with ultrasound (US) or computed tomography (CT). US may be limited by various patient factors and technologist experience, while CT with intravenous contrast administration can further compromise renal function. With the advent of rapid, robust non-contrast MRI and magnetic resonance angiography (MRA) studies with short scan times, free-breathing techniques, and a lack of ionizing radiation, MRI may be superior to CT for renal evaluation, not only in diagnosing an emergent renal process but also in providing functional and prognostic information. This review outlines the clinical manifestations and key imaging findings of acute renal processes, including acute renal infarction, hemorrhage, and renal obstruction, among other entities, to highlight the added value of MRI in evaluating the finer nuances of acute renal emergencies.

Gopireddy Dheeraj Reddy, Mahmoud Hagar, Baig Saif, Le Rebecca, Bhosale Priya, Lall Chandana

2020-Sep-25

Emergency, Kidney, Magnetic resonance imaging, Renal hemorrhage

Radiology Radiology

Fully automated detection of primary sclerosing cholangitis (PSC)-compatible bile duct changes based on 3D magnetic resonance cholangiopancreatography using machine learning.

In European radiology ; h5-index 62.0

OBJECTIVES : To develop and evaluate a deep learning algorithm for fully automated detection of primary sclerosing cholangitis (PSC)-compatible cholangiographic changes on three-dimensional magnetic resonance cholangiopancreatography (3D-MRCP) images.

METHODS : The datasets of 428 patients (n = 205 with confirmed diagnosis of PSC; n = 223 non-PSC patients) referred for MRI including MRCP were included in this retrospective IRB-approved study. Datasets were randomly assigned to a training (n = 386) and a validation group (n = 42). For each case, 20 uniformly distributed axial MRCP rotations and a subsequent maximum intensity projection (MIP) were calculated, resulting in a training database of 7720 images and a validation database of 840 images. A pre-trained Inception ResNet was then implemented and subsequently fine-tuned (learning rate 10^-3).
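The projection/ensemble pipeline is easy to picture: each 3D MRCP volume yields 20 rotated maximum intensity projections (MIPs), and the per-projection network outputs are then pooled per case. A toy pure-Python sketch of the two steps (the rotation itself is omitted, and pooling by simple averaging is only one plausible reading of the paper's "binning" ensemble):

```python
def mip(volume):
    """Maximum intensity projection: per-pixel maximum across a stack
    of 2D slices (volume is a list of equally sized 2D lists)."""
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(sl[r][c] for sl in volume) for c in range(cols)]
            for r in range(rows)]

def ensemble_probability(per_projection_probs):
    """Pool the classifier's PSC probability over the rotated projections
    of one case by averaging (an assumption, not the paper's exact rule)."""
    return sum(per_projection_probs) / len(per_projection_probs)
```

A case would then be called PSC-compatible when the pooled probability crosses a chosen threshold, which is how a per-projection error rate can drop once projections are combined.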

RESULTS : Applying an ensemble strategy (by binning the 20 axial projections), the mean absolute error (MAE) of the developed deep learning algorithm for detection of PSC-compatible cholangiographic changes was lowered from 21% to 7.1%. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for detection of these changes were 95.0%, 90.9%, 90.5%, and 95.2%, respectively.

CONCLUSIONS : The results of this study demonstrate the feasibility of transfer learning in combination with extensive image augmentation to detect PSC-compatible cholangiographic changes on 3D-MRCP images with a high sensitivity and a low MAE. Further validation with more and multicentric data is now desirable, as it is known that neural networks tend to overfit the characteristics of the dataset.

KEY POINTS : • The described machine learning algorithm is able to detect PSC-compatible cholangiographic changes on 3D-MRCP images with high accuracy. • The generation of 2D projections from 3D datasets enabled the implementation of an ensemble strategy to boost inference performance.

Ringe Kristina I, Vo Chieu Van Dai, Wacker Frank, Lenzen Henrike, Manns Michael P, Hundt Christian, Schmidt Bertil, Winther Hinrich B

2020-Sep-24

Cholangiography, Deep learning, Machine learning, Sclerosing cholangitis

Ophthalmology Ophthalmology

Machine learning helps improve diagnostic ability of subclinical keratoconus using Scheimpflug and OCT imaging modalities.

In Eye and vision (London, England)

Purpose : To develop an automated classification system using a machine learning classifier to distinguish clinically unaffected eyes in patients with keratoconus from a normal control population based on a combination of Scheimpflug camera images and ultra-high-resolution optical coherence tomography (UHR-OCT) imaging data.

Methods : A total of 121 eyes from 121 participants were classified by 2 cornea experts into 3 groups: normal (50 eyes), with keratoconus (38 eyes) or with subclinical keratoconus (33 eyes). All eyes were imaged with a Scheimpflug camera and UHR-OCT. Corneal morphological features were extracted from the imaging data. A neural network was used to train a model based on these features to distinguish the eyes with subclinical keratoconus from normal eyes. Fisher's score was used to rank the differentiable power of each feature. The receiver operating characteristic (ROC) curves were calculated to obtain the area under the ROC curves (AUCs).
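Fisher's score, used above to rank the corneal features, measures how well a single feature separates two groups: squared difference of the group means over the sum of the group variances. A small two-class sketch of the standard formula (the paper's exact variant is not given in the abstract):

```python
def fisher_score(values_class0, values_class1):
    """Two-class Fisher score for one feature:
    (mean0 - mean1)^2 / (var0 + var1); larger = more separable."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # population variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    m0, m1 = mean(values_class0), mean(values_class1)
    return (m0 - m1) ** 2 / (var(values_class0) + var(values_class1))
```

Ranking features by this score, as the study describes, simply sorts them by how cleanly their distributions separate subclinical keratoconus from normal eyes.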

Results : The developed classification model used to combine all features from the Scheimpflug camera and UHR-OCT dramatically improved the differentiable power to discriminate between normal eyes and eyes with subclinical keratoconus (AUC = 0.93). The variation in the thickness profile within each individual in the corneal epithelium extracted from UHR-OCT imaging ranked the highest in differentiating eyes with subclinical keratoconus from normal eyes.

Conclusion : The automated classification system using machine learning based on the combination of Scheimpflug camera data and UHR-OCT imaging data showed excellent performance in discriminating eyes with subclinical keratoconus from normal eyes. The epithelial features extracted from the OCT images were the most valuable in the discrimination process. This classification system has the potential to improve the differentiable power of subclinical keratoconus and the efficiency of keratoconus screening.

Shi Ce, Wang Mengyi, Zhu Tiantian, Zhang Ying, Ye Yufeng, Jiang Jun, Chen Sisi, Lu Fan, Shen Meixiao

2020

Combined-devices, Machine learning, Scheimpflug camera, Subclinical keratoconus, Ultra-high resolution optical coherence tomography

Surgery Surgery

The Role of Machine Learning in Spine Surgery: The Future Is Now.

In Frontiers in surgery

The recent influx of machine learning centered investigations in the spine surgery literature has led to increased enthusiasm as to the prospect of using artificial intelligence to create clinical decision support tools, optimize postoperative outcomes, and improve technologies used in the operating room. However, the methodology underlying machine learning in spine research is often overlooked as the subject matter is quite novel and may be foreign to practicing spine surgeons. Improper application of machine learning is a significant bioethics challenge, given the potential consequences of over- or underestimating the results of such studies for clinical decision-making processes. Proper peer review of these publications requires a baseline familiarity of the language associated with machine learning, and how it differs from classical statistical analyses. This narrative review first introduces the overall field of machine learning and its role in artificial intelligence, and defines basic terminology. In addition, common modalities for applying machine learning, including classification and regression decision trees, support vector machines, and artificial neural networks are examined in the context of examples gathered from the spine literature. Lastly, the ethical challenges associated with adapting machine learning for research related to patient care, as well as future perspectives on the potential use of machine learning in spine surgery, are discussed specifically.

Chang Michael, Canseco Jose A, Nicholson Kristen J, Patel Neil, Vaccaro Alexander R

2020

artificial intelligence, deep learning, machine learning, orthopedic surgery, spine surgery

Surgery Surgery

Lung Mechanics of Mechanically Ventilated Patients With COVID-19: Analytics With High-Granularity Ventilator Waveform Data.

In Frontiers in medicine

Background: Lung mechanics during invasive mechanical ventilation (IMV) have both prognostic and therapeutic implications; however, the full trajectory of lung mechanics has never been described for novel coronavirus disease 2019 (COVID-19) patients requiring IMV. This study aimed to describe the full trajectory of lung mechanics of mechanically ventilated COVID-19 patients. The clinical and ventilator settings that can influence patient-ventilator asynchrony (PVA) and compliance were explored. Post-extubation spirometry tests were performed to assess pulmonary function after COVID-19-induced ARDS. Methods: This was a retrospective study conducted in a tertiary care hospital. All patients on IMV due to COVID-19-induced ARDS were included. High-granularity ventilator waveforms were analyzed with a deep learning algorithm to obtain PVAs. The asynchrony index (AI) was calculated as the number of asynchronous events divided by the sum of ventilator cycles and wasted efforts. Mortality was recorded as the vital status on hospital discharge. Results: A total of 3,923,450 respiratory cycles over 2,778 h were analyzed (average: 24 cycles/min) for seven patients. Higher plateau pressure (coefficient: -0.90; 95% CI: -1.02 to -0.78) and neuromuscular blockade (coefficient: -6.54; 95% CI: -9.92 to -3.16) were associated with a lower AI. Survivors showed increasing compliance over time, whereas non-survivors showed persistently low compliance. Recruitment maneuvers were not able to improve lung compliance. Patients were in the supine position for 1,422 h (51%), followed by prone positioning (499 h, 18%), left lateral positioning (453 h, 16%), and right lateral positioning (404 h, 15%). Compared with supine positioning, prone positioning was associated with a 2.31 ml/cmH2O (95% CI: 1.75 to 2.86; p < 0.001) increase in lung compliance. Spirometry tests showed that pulmonary function was reduced to one third of predicted values after extubation.
Conclusions: This study described, for the first time, the full trajectory of lung mechanics of patients with COVID-19. The results showed that prone positioning was associated with improved compliance, and that higher plateau pressure and use of neuromuscular blockade were associated with a lower AI.
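The asynchrony index defined in the Methods is a simple ratio; a one-function sketch, with hypothetical event counts, makes the denominator explicit:

```python
def asynchrony_index(n_asynchronous, n_ventilator_cycles, n_wasted_efforts):
    """Asynchrony index (AI) as defined in the abstract: asynchronous
    events divided by (ventilator cycles + wasted efforts), as a percent."""
    return 100.0 * n_asynchronous / (n_ventilator_cycles + n_wasted_efforts)

# Hypothetical hour of ventilation: 50 asynchronous events over
# 900 delivered cycles plus 100 wasted efforts -> AI of 5%.
ai = asynchrony_index(50, 900, 100)
```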

Ge Huiqing, Pan Qing, Zhou Yong, Xu Peifeng, Zhang Lingwei, Zhang Junli, Yi Jun, Yang Changming, Zhou Yuhan, Liu Limin, Zhang Zhongheng

2020

COVID-19, asynchonized, asynchrony, lung mechanics, mechanical ventilation, prone positioning

oncology Oncology

Corrigendum: Deep Learning for Whole Slide Image Analysis: An Overview.

In Frontiers in medicine

[This corrects the article on p. 264 in vol. 6, PMID: 31824952.].

Dimitriou Neofytos, Arandjelović Ognjen, Caie Peter D

2020

cancer, computer vision, digital pathology, image analysis, machine learning, oncology, personalized pathology

oncology Oncology

Unveiling COVID-19 from CHEST X-Ray with Deep Learning: A Hurdles Race with Small Data.

In International journal of environmental research and public health ; h5-index 73.0

The possibility of using widespread and simple chest X-ray (CXR) imaging for early screening of COVID-19 patients is attracting much interest from both the clinical and the AI communities. In this study we provide insights, and also raise warnings, on what it is reasonable to expect when applying deep learning to COVID classification of CXR images. We provide a methodological guide and critical reading of an extensive set of statistical results that can be obtained using currently available datasets. In particular, we take on the challenge posed by the current small size of COVID datasets and show how significant the bias introduced by transfer learning from larger public non-COVID CXR datasets can be. We also contribute results on a medium-sized COVID CXR dataset, recently collected by one of the major emergency hospitals in Northern Italy during the peak of the COVID pandemic. These novel data allow us to help validate the generalization capacity of preliminary results circulating in the scientific community. Our conclusions shed some light on the possibility of effectively discriminating COVID using CXR.

Tartaglione Enzo, Barbano Carlo Alberto, Berzovini Claudio, Calandri Marco, Grangetto Marco

2020-Sep-22

COVID-19, chest X-ray, classification, deep learning

General General

Infections in the Era of Targeted Therapies: Mapping the Road Ahead.

In Frontiers in medicine

Immunosuppressive treatment strategies for autoimmune diseases have changed drastically with the development of targeted therapies. While targeted therapies have changed the way we manage immune mediated diseases, their use has been attended by a variety of infectious complications-some expected, others unexpected. This perspective examines lessons learned from the use of different targeted therapies over the past several decades, and reviews existing strategies to minimize infectious risk. Several of these infectious complications were predictable in the light of preclinical models and early clinical trials (i.e., tuberculosis and TNF inhibitors; meningococcus; and eculizumab). While these scenarios can potentially help us in terms of enhancing our predictive powers (higher vigilance, earlier detection, and risk mitigation), targeted therapies have also revealed unpredictable toxicities (i.e., natalizumab and progressive multifocal leukoencephalopathy). Severe infectious complications, even if rare, can derail a promising therapeutic and highlight the need for increased awareness and meticulous adjudication. Tools are available to help mitigate infectious risks. The first step is to ensure that infection safety is adequately studied at every level of drug development prior to regulatory approval, with adequate post-marketing surveillance including registries that collect real-world adverse events in a collaborative effort. The second step is to identify high risk patients (using risk calculators such as the RABBIT risk score; big data analyses; artificial intelligence). Finally, the most underutilized interventions to prevent severe infections in patients receiving targeted therapies across the spectrum of immune mediated inflammatory diseases are vaccinations.

Calabrese Leonard H, Calabrese Cassandra, Lenfant Tiphaine, Kirchner Elizabeth, Strand Vibeke

2020

PML, TNF inhibitors, infection, natalizumab, targeted therapies, tuberculosis, vaccine

General General

A multi-class classification model for supporting the diagnosis of type II diabetes mellitus.

In PeerJ

Background : Numerous studies have utilized machine-learning techniques to predict the early onset of type 2 diabetes mellitus. However, fewer studies have been conducted to predict an appropriate diagnosis code for the type 2 diabetes mellitus condition. Further, ensemble techniques such as bagging and boosting have been utilized to an even lesser extent. The present study aims to identify appropriate diagnosis codes for type 2 diabetes mellitus patients by building a multi-class prediction model that is parsimonious and uses a minimal set of features. In addition, the importance of the features for predicting the diagnosis code is reported.

Methods : This study included 149 patients with type 2 diabetes mellitus. The sample was collected from a large hospital in Taiwan from November 2017 to May 2018. Machine learning algorithms, including instance-based learners, decision trees, deep neural networks, and ensemble algorithms, were used to build the predictive models utilized in this study. Average accuracy, area under the receiver operating characteristic curve, Matthews correlation coefficient, macro-precision, recall, weighted average of precision and recall, and model processing time were subsequently used to assess the performance of the built models. Information gain and gain ratio were used to quantify feature importance.
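Information gain and gain ratio, used here for feature importance, are standard decision-tree quantities: the reduction in class entropy from splitting on a feature, optionally normalized by the split's intrinsic information (which penalizes many-valued features). A stdlib sketch of both definitions:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Entropy of the labels minus the weighted entropy after
    partitioning the samples by the feature's value."""
    n = len(labels)
    split = {}
    for f, y in zip(feature_values, labels):
        split.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

def gain_ratio(labels, feature_values):
    """Information gain normalized by the split's intrinsic information."""
    return information_gain(labels, feature_values) / entropy(feature_values)
```

A feature that perfectly partitions the classes has information gain equal to the label entropy; an uninformative feature scores zero.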

Results : The results showed that most algorithms, except for deep neural network, performed well in terms of all performance indices regardless of either the training or testing dataset that were used. Ten features and their importance to determine the diagnosis code of type 2 diabetes mellitus were identified. Our proposed predictive model can be further developed into a clinical diagnosis support system or integrated into existing healthcare information systems. Both methods of application can effectively support physicians whenever they are diagnosing type 2 diabetes mellitus patients in order to foster better patient-care planning.

Kuo Kuang-Ming, Talley Paul, Kao YuHsi, Huang Chi Hsien

2020

Diagnosis, Machine-learning techniques, Predictive models, Type 2 diabetes mellitus

Ophthalmology Ophthalmology

Development of Deep Learning Models to Predict Best-Corrected Visual Acuity from Optical Coherence Tomography.

In Translational vision science & technology

Purpose : To develop deep learning (DL) models to predict best-corrected visual acuity (BCVA) from optical coherence tomography (OCT) images from patients with neovascular age-related macular degeneration (nAMD).

Methods : Retrospective analysis of OCT images and associated BCVA measurements from the phase 3 HARBOR trial (NCT00891735). DL regression models were developed to predict BCVA at the concurrent visit and 12 months from baseline using OCT images. Binary classification models were developed to predict BCVA of Snellen equivalent of <20/40, <20/60, and ≤20/200 at the concurrent visit and 12 months from baseline.

Results : The regression model to predict BCVA at the concurrent visit had R2 = 0.67 (root-mean-square error [RMSE] = 8.60) in study eyes and R2 = 0.84 (RMSE = 9.01) in fellow eyes. The best classification model to predict BCVA at the concurrent visit had an area under the receiver operating characteristic curve (AUC) of 0.92 in study eyes and 0.98 in fellow eyes. The regression model to predict BCVA at month 12 using baseline OCT had R2 = 0.33 (RMSE = 14.16) in study eyes and R2 = 0.75 (RMSE = 11.27) in fellow eyes. The best classification model to predict BCVA at month 12 had AUC = 0.84 in study eyes and AUC = 0.96 in fellow eyes.
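The R2/RMSE pairs reported above are the usual regression metrics: proportion of variance explained, and the root of the mean squared error in the outcome's own units (here, BCVA letters). As a quick reference, a minimal sketch of both:

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root-mean-square error.
    R^2 = 1 - SS_res/SS_tot; RMSE = sqrt(mean squared residual)."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)
```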

Conclusions : DL shows promise in predicting BCVA from OCTs in nAMD. Further research should elucidate the utility of models in clinical settings.

Translational Relevance : DL models predicting BCVA could be used to enhance understanding of structure-function relationships and develop more efficient clinical trials.

Kawczynski Michael G, Bengtsson Thomas, Dai Jian, Hopkins J Jill, Gao Simon S, Willis Jeffrey R

2020-Sep

deep learning, neovascular age-related macular degeneration, ocular imaging, public health ophthalmology, tele-ophthalmology

General General

Web-based Fully Automated Cephalometric Analysis: Comparisons between App-aided, Computerized, and Manual Tracings.

In Turkish journal of orthodontics

Objective : To compare the accuracy of cephalometric analyses made with fully automated tracings, computerized tracing, and app-aided tracings with equivalent hand-traced measurements, and to evaluate the tracing time for each cephalometric analysis method.

Methods : Pre-treatment lateral cephalometric radiographs of 40 patients were randomly selected. Eight angular and 4 linear parameters were measured by 1 operator using 3 methods: computerized tracing with the software Dolphin Imaging 13.01 (Dolphin Imaging and Management Solutions, Chatsworth, Calif, USA), app-aided tracing using the CephNinja 3.51 app (Cyncronus LLC, WA, USA), and web-based fully automated tracing with CephX (ORCA Dental AI, Las Vegas, NV). Correction of CephX landmarks was also made. Manual tracings were performed by 3 operators. Remeasurement of 15 radiographs was carried out to determine the intra-examiner and inter-examiner (manual tracings) correlation coefficients (ICC). Inter-group comparisons were made with one-way analysis of variance. The Tukey test was used for post hoc testing.

Results : Overall, greater variability was found with CephX compared with the other methods. Differences in GoGn-SN (°), I-NA (°), I-NB (°), I-NA (mm), and I-NB (mm) were statistically (p<0.05) and clinically significant using CephX, whereas CephNinja and Dolphin were comparable to manual tracings. Correction of CephX landmarks gave similar results to CephNinja and Dolphin. All the ICCs exceeded 0.85, except for I-NA (°), I-NB (°), and I-NB (mm), which were traced with CephX. The shortest analyzing time was obtained with CephX.

Conclusion : Fully automated analysis with CephX is not yet sufficiently reliable on its own. However, CephX analysis with manual correction is promising for use in clinical practice, because it is comparable to CephNinja and Dolphin and its analyzing time is significantly shorter.

Meriç Pamir, Naoumova Julia

2020-Sep

Apps, artificial intelligence, automated identification, automatic tracing, cephalometric, computerized tracing, web-based

General General

Application of non-mydriatic fundus examination and artificial intelligence to promote the screening of diabetic retinopathy in the endocrine clinic: an observational study of T2DM patients in Tianjin, China.

In Therapeutic advances in chronic disease

Background : We aimed to determine the role of non-mydriatic fundus examination and artificial intelligence (AI) in screening diabetic retinopathy (DR) in patients with diabetes in the Metabolic Disease Management Center (MMC) in Tianjin, China.

Methods : Adult patients with type 2 diabetes mellitus first treated by the MMC in Tianjin First Central Hospital and Tianjin 4th Center Hospital were divided into two groups according to whether they were enrolled before (control group) or after (observation group) the MMC was equipped with a non-mydriatic ophthalmoscope and AI system and could complete fundus examinations independently. The observation indices were as follows: the incidence of DR, the fundus screening rate of the two groups, and the fundus screening of diabetic patients with different disease durations.

Results : A total of 5039 patients were enrolled in this study. The incidence rate of DR was 18.6%, 29.8%, and 49.6% in patients with diabetes duration of ⩽1 year, 1-5 years, and >5 years, respectively. The fundus screening rate in the observation group was significantly higher than in the control group (81.3% versus 28.4%, χ2 = 1430.918, p < 0.001). The DR screening rate of the observation group was also significantly higher than that of the control group in patients with diabetes duration of ⩽1 year (77.3% versus 20.6%; χ2 = 797.534, p < 0.001), 1-5 years (82.5% versus 31.0%; χ2 = 197.124, p < 0.001), and >5 years (86.9% versus 37.1%; χ2 = 475.609, p < 0.001).
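The χ² comparisons above are standard Pearson tests on 2×2 contingency tables; a minimal sketch of the statistic (the counts below are hypothetical reconstructions consistent with the reported percentages, not the study's actual counts):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]
    of observed counts (no continuity correction)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    n = sum(row_sums)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: [screened, not screened] per group
observation = [1800, 414]   # ~81.3% screened
control     = [802, 2023]   # ~28.4% screened
print(round(chi_square_2x2([observation, control]), 1))
```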

Conclusions : Where medical resources are limited, the MMC can carry out one-stop examination, treatment, and management of DR through non-mydriatic fundus examination with AI assistance, thus incorporating the DR screening process into the endocrine clinic so as to facilitate early diagnosis.

Hao Zhaohu, Cui Shanshan, Zhu Yanjuan, Shao Hailin, Huang Xiao, Jiang Xia, Xu Rong, Chang Baocheng, Li Huanming

2020

artificial intelligence, diabetic retinopathy, fundus screening, non-mydriatic fundus examination

General General

Infrared Spectrometry as a High-Throughput Phenotyping Technology to Predict Complex Traits in Livestock Systems.

In Frontiers in genetics ; h5-index 62.0

High-throughput phenotyping technologies are growing in importance in livestock systems due to their ability to generate real-time, non-invasive, and accurate animal-level information. Collecting such individual-level information can generate novel traits and potentially improve animal selection and management decisions in livestock operations. One of the most relevant tools used in the dairy and beef industry to predict complex traits is infrared spectrometry, which is based on the analysis of the interaction between electromagnetic radiation and matter. Infrared electromagnetic radiation spans an enormous range of wavelengths and frequencies known as the electromagnetic spectrum. The spectrum is divided into different regions, with the near- and mid-infrared regions being the main spectral regions used in livestock applications. The advantages of infrared spectrometry include speed, non-destructive measurement, and great potential for on-line analysis. This paper aims to review the use of mid- and near-infrared spectrometry techniques as tools to predict complex dairy and beef phenotypes, such as milk composition, feed efficiency, methane emission, fertility, energy balance, health status, and meat quality traits. Although several research studies have used these technologies to predict a wide range of phenotypes, most of them are based on partial least squares (PLS) regression and did not consider other machine learning (ML) techniques that might improve prediction quality. Therefore, we will discuss the role of analytical methods employed on spectral data to improve the predictive ability for complex traits in livestock operations. Furthermore, we will discuss different approaches to reduce data dimensionality and the impact of validation strategies on predictive quality.

Bresolin Tiago, Dórea João R R

2020

beef cattle, dairy cattle, mid-infrared, near-infrared, novel phenotypes, spectral information

General General

The Molecular Basis of JAZ-MYC Coupling, a Protein-Protein Interface Essential for Plant Response to Stressors.

In Frontiers in plant science

The jasmonic acid (JA) signaling pathway is one of the primary mechanisms that allow plants to respond to a variety of biotic and abiotic stressors. Within this pathway, the JAZ repressor proteins and the basic helix-loop-helix (bHLH) transcription factor MYC3 play a critical role. JA is a volatile organic compound with an essential role in plant immunity. The increase in the concentration of JA leads to the decoupling of the JAZ repressor proteins and the bHLH transcription factor MYC3, causing the induction of genes of interest. The primary goal of this study was to identify the molecular basis of JAZ-MYC coupling. For this purpose, we modeled and validated 12 JAZ-MYC3 3D in silico structures and developed a molecular dynamics/machine learning pipeline to obtain two outcomes. First, we calculated the average free binding energy of JAZ-MYC3 complexes, which was predicted to be -10.94 ± 2.67 kJ/mol. Second, we predicted which interface residues make the predominant contribution to the free energy of binding (molecular hotspots). The predicted protein hotspots matched a conserved linear motif SL••FL•••R, which may have a crucial role during MYC3 recognition of JAZ proteins. As a proof of concept, we tested, both in silico and in vitro, the importance of this motif on PEAPOD (PPD) proteins, which also belong to the TIFY protein family, like the JAZ proteins, but cannot bind to MYC3. By mutating these proteins to match the SL••FL•••R motif, we could force PPDs to bind the MYC3 transcription factor. Taken together, modeling protein-protein interactions and using machine learning will help to find essential motifs and molecular mechanisms in the JA pathway.
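The linear motif SL••FL•••R reported above, where • stands for any residue, maps naturally onto a regular expression; a minimal sketch of scanning candidate interface sequences for it (the peptide sequences below are invented for illustration):

```python
import re

# SL..FL...R -- "." is any single residue, matching "•" in the motif
MOTIF = re.compile(r"SL..FL...R")

def has_jaz_motif(sequence):
    """Return True if the peptide sequence contains the SL..FL...R motif."""
    return MOTIF.search(sequence) is not None

print(has_jaz_motif("MKSLQAFLDEVRGA"))   # contains SLQAFLDEVR -> True
print(has_jaz_motif("MKSAQAFADEVRGA"))   # motif absent -> False
```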

Oña Chuquimarca Samara, Ayala-Ruano Sebastián, Goossens Jonas, Pauwels Laurens, Goossens Alain, Leon-Reyes Antonio, Ángel Méndez Miguel

2020

JAZ, MYC, computer, hotspots, machine learning, modeling, plant defense

General General

Contrasting Classical and Machine Learning Approaches in the Estimation of Value-Added Scores in Large-Scale Educational Data.

In Frontiers in psychology ; h5-index 92.0

There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data analytical approaches. Such techniques have already evolved in fields with a long tradition in crunching big data (e.g., gene technology). The objective of the present paper is to competently apply these "imported" techniques to education data, more precisely VA scores, and assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as different machine learning models. However, it could be observed that across all schools, school VA scores from different model types correlated highly. Yet, the percentage of disagreements as compared to multilevel models was not trivial, and real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
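In the linear-regression approach described above, a school's VA score is, in essence, the mean residual of its students' observed minus expected achievement; a minimal sketch with simulated data (the scores and school labels below are invented, not from the Luxembourg program):

```python
# Value-added (VA) sketch: regress grade-3 scores on grade-1 scores,
# then average each school's residuals (observed minus expected).

def fit_ols(x, y):
    """Simple least-squares line y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# (school, grade-1 score, grade-3 score) -- fabricated example data
students = [
    ("A", 48, 55), ("A", 52, 60), ("A", 60, 68),
    ("B", 45, 47), ("B", 55, 57), ("B", 65, 66),
]
x = [s[1] for s in students]
y = [s[2] for s in students]
a, b = fit_ols(x, y)

va = {}
for school, g1, g3 in students:
    va.setdefault(school, []).append(g3 - (a + b * g1))  # residual
va_scores = {s: sum(r) / len(r) for s, r in va.items()}
print(va_scores)  # positive = school above expectation
```

Multilevel models refine this by letting intercepts (and possibly slopes) vary by school instead of averaging raw residuals.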

Levy Jessica, Mussack Dominic, Brunner Martin, Keller Ulrich, Cardoso-Leite Pedro, Fischbach Antoine

2020

longitudinal data, machine learning, model comparison, school effectiveness, value-added modeling

General General

A Four-Step Method for the Development of an ADHD-VR Digital Game Diagnostic Tool Prototype for Children Using a DL Model.

In Frontiers in psychiatry

Attention-deficit/hyperactivity disorder (ADHD) is a common neurodevelopmental disorder among children resulting in disturbances in their daily functioning. Virtual reality (VR) and machine learning technologies, such as deep learning (DL) applications, are promising diagnostic tools for ADHD in the near future because VR provides stimuli to replace real stimuli and recreates experiences with high realism. It also creates a playful virtual environment and reduces stress in children. The DL model is a subset of machine learning that can transform input and output data into diagnostic values using convolutional neural network systems. By using a sensitive and specific ADHD-VR diagnostic tool prototype for children with a DL model, ADHD can be diagnosed more easily and accurately, especially in places with few mental health resources or where tele-consultation is possible. To date, several virtual reality-continuous performance test (VR-CPT) diagnostic tools have been developed for ADHD; however, they do not include a machine learning or deep learning application. A diagnostic tool development study needs a trustworthy and applicable design and conduct to ensure complete and transparent reporting of the tool's accuracy. The proposed four-step method is a mixed-method research design that combines qualitative and quantitative approaches to reduce bias and collect essential information to ensure the trustworthiness and relevance of the study findings. Therefore, this study aimed to present a brief review of an ADHD-VR digital game diagnostic tool prototype with a DL model for children and the proposed four-step method for its development.

Wiguna Tjhin, Wigantara Ngurah Agung, Ismail Raden Irawati, Kaligis Fransiska, Minayati Kusuma, Bahana Raymond, Dirgantoro Bayu

2020

Indonesia, attention-deficit/hyperactivity disorder, diagnostic tool, digital game, machine learning, neuropsychological test, virtual reality

General General

Erratum: Addendum: Molecular Generation for Desired Transcriptome Changes With Adversarial Autoencoders.

In Frontiers in pharmacology

[This corrects the article.]

Shayakhmetov Rim, Kuznetsov Maksim, Zhebrak Alexander, Kadurin Artur, Nikolenko Sergey, Aliper Alexander, Polykovskiy Daniil

2020

adversarial autoencoders, conditional generation, deep learning, drug discovery, gene expression, generative models, representation learning

General General

Application of Artificial Intelligence in Early Diagnosis of Spontaneous Preterm Labor and Birth.

In Diagnostics (Basel, Switzerland)

This study reviews the current status and future prospects of knowledge on the use of artificial intelligence for the prediction of spontaneous preterm labor and birth ("preterm birth" hereafter). The summary of the review suggests that different machine learning approaches would be optimal for different types of data regarding the prediction of preterm birth: the artificial neural network, logistic regression, and/or the random forest for numeric data; the support vector machine for electrohysterogram data; the recurrent neural network for text data; and the convolutional neural network for image data. The ranges of performance measures were 0.79-0.94 for accuracy, 0.22-0.97 for sensitivity, 0.86-1.00 for specificity, and 0.54-0.83 for the area under the receiver operating characteristic curve. The following maternal variables were reported to be major determinants of preterm birth: delivery and pregestational body mass index, age, parity, predelivery systolic and diastolic blood pressure, twins, below high school graduation, infant sex, prior preterm birth, progesterone medication history, upper gastrointestinal tract symptom, gastroesophageal reflux disease, Helicobacter pylori, urban region, calcium channel blocker medication history, gestational diabetes mellitus, prior cone biopsy, cervical length, myomas and adenomyosis, insurance, marriage, religion, systemic lupus erythematosus, hydroxychloroquine sulfate, and increased cerebrospinal fluid and reduced cortical folding due to impaired brain growth.
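The area under the receiver operating characteristic curve quoted above (0.54-0.83) can be computed directly as the probability that a randomly chosen positive case is ranked above a randomly chosen negative one; a minimal pairwise sketch with invented risk scores:

```python
def auc(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly;
    ties in score count as half-correct."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = preterm birth, 0 = term; scores are hypothetical model outputs
labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(auc(labels, scores))
```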

Lee Kwang-Sig, Ahn Ki Hoon

2020-Sep-22

artificial intelligence, early diagnosis, preterm birth

oncology Oncology

Unveiling COVID-19 from CHEST X-Ray with Deep Learning: A Hurdles Race with Small Data.

In International journal of environmental research and public health ; h5-index 73.0

The possibility of using widespread and simple chest X-ray (CXR) imaging for early screening of COVID-19 patients is attracting much interest from both the clinical and the AI communities. In this study we provide insights, and also raise warnings, on what is reasonable to expect when applying deep learning to COVID classification of CXR images. We provide a methodological guide and a critical reading of an extensive set of statistical results that can be obtained using currently available datasets. In particular, we take up the challenge posed by the current small size of COVID datasets and show how significant the bias introduced by transfer learning from larger public non-COVID CXR datasets can be. We also contribute results on a medium-sized COVID CXR dataset, collected by one of the major emergency hospitals in Northern Italy during the peak of the COVID pandemic. These novel data allow us to help validate the generalization capacity of preliminary results circulating in the scientific community. Our conclusions shed some light on the possibility of effectively discriminating COVID using CXR.

Tartaglione Enzo, Barbano Carlo Alberto, Berzovini Claudio, Calandri Marco, Grangetto Marco

2020-Sep-22

COVID-19, chest X-ray, classification, deep learning

General General

Neurophysiological and Genetic Findings in Patients With Juvenile Myoclonic Epilepsy.

In Frontiers in integrative neuroscience

Objective : Transcranial magnetic stimulation (TMS), a non-invasive procedure, stimulates the cortex evaluating the central motor pathways. The response is called motor evoked potential (MEP). Polyphasia results when the response crosses the baseline more than twice (zero crossing). Recent research shows MEP polyphasia in patients with generalized genetic epilepsy (GGE) and their first-degree relatives compared with controls. Juvenile Myoclonic Epilepsy (JME), a GGE type, is not well studied regarding polyphasia. In our study, we assessed polyphasia appearance probability with TMS in JME patients, their healthy first-degree relatives and controls. Two genetic approaches were applied to uncover genetic association with polyphasia.

Methods : Twenty JME patients, 23 first-degree relatives, and 30 controls underwent TMS, with 10-15 MEPs obtained per participant. We evaluated the mean number of MEP phases, the proportion of MEP trials displaying polyphasia for each subject, and the variability between groups. Participants underwent whole exome sequencing (WES) via trio-based analysis and a two-case scenario. Extensive bioinformatics analysis was applied.
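Counting MEP zero crossings, as described above, reduces to counting sign changes across the baseline in the sampled waveform; a minimal sketch on fabricated traces (here a response is flagged polyphasic when it crosses the baseline more than twice, per the definition in the abstract):

```python
def zero_crossings(samples):
    """Count sign changes across the baseline (zero), skipping exact zeros."""
    signs = [1 if s > 0 else -1 for s in samples if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def is_polyphasic(samples):
    """An MEP is polyphasic if it crosses the baseline more than twice."""
    return zero_crossings(samples) > 2

# Fabricated MEP traces (arbitrary amplitude units)
simple = [0, 3, 6, 3, -2, -5, -1]      # crosses the baseline once
poly   = [0, 3, -2, 4, -3, 2, -1]      # crosses the baseline five times
print(is_polyphasic(simple), is_polyphasic(poly))   # False True
```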

Results : We identified increased polyphasia in patients (85%) and relatives (70%) compared to controls (47%) and a significantly higher mean number of zero crossings (i.e., occurrence of phases) (patients 1.49, relatives 1.46, controls 1.22; p < 0.05). Trio-based analysis revealed a candidate polymorphism, p.Glu270del, in SYT14 (Synaptotagmin 14) in JME patients and their relatives presenting polyphasia. Sanger sequencing analysis in the remaining participants showed no significant association. In the two-case scenario, a machine learning approach was applied to variants identified from odds ratio analysis, and risk prediction scores were obtained for polyphasia. The results revealed 61 variants, none of which was associated with polyphasia. Risk prediction scores did, however, show a lower probability of polyphasia for non-polyphasic subjects and a higher probability for polyphasic subjects.

Conclusion : Polyphasia was present in JME patients and relatives, in contrast to controls. Although no known clinical symptoms are linked to polyphasia, this neurophysiological phenomenon is likely due to a common cerebral electrophysiological abnormality. We did not discover a direct association between the genetic variants obtained and polyphasia. It is likely that these genetic traits alone cannot provoke polyphasia; however, this predisposition, combined with disturbed brain-electrical activity and a tendency to generate seizures, may increase the risk of developing polyphasia, mainly in patients and relatives.

Stefani Stefani, Kousiappa Ioanna, Nicolaou Nicoletta, Papathanasiou Eleftherios S, Oulas Anastasis, Fanis Pavlos, Neocleous Vassos, Phylactou Leonidas A, Spyrou George M, Papacostas Savvas S

2020

genetics, juvenile myoclonic epilepsy, neurophysiology, polymorphism, polyphasia, transcranial magnetic stimulation, whole exome sequencing

General General

General Distributed Neural Control and Sensory Adaptation for Self-Organized Locomotion and Fast Adaptation to Damage of Walking Robots.

In Frontiers in neural circuits

Walking animals such as invertebrates can effectively perform self-organized and robust locomotion. They can also quickly adapt their gait to deal with injury or damage. Such a complex achievement is mainly performed via coordination between the legs, commonly known as interlimb coordination. Several components underlying the interlimb coordination process (like distributed neural control circuits, local sensory feedback, and body-environment interactions during movement) have been recently identified and applied to the control systems of walking robots. However, while the sensory pathways of biological systems are plastic and can be continuously readjusted (referred to as sensory adaptation), those implemented on robots are typically static. They first need to be manually adjusted or optimized offline to obtain stable locomotion. In this study, we introduce a fast learning mechanism for online sensory adaptation. It can continuously adjust the strength of sensory pathways, thereby introducing flexible plasticity into the connections between sensory feedback and neural control circuits. We combine the sensory adaptation mechanism with distributed neural control circuits to acquire the adaptive and robust interlimb coordination of walking robots. This novel approach is also general and flexible. It can automatically adapt to different walking robots and allow them to perform stable self-organized locomotion as well as quickly deal with damage within a few walking steps. The adaptation of plasticity after damage or injury is considered here as lesion-induced plasticity. We validated our adaptive interlimb coordination approach with continuous online sensory adaptation on simulated 4-, 6-, 8-, and 20-legged robots. This study not only proposes an adaptive neural control system for artificial walking systems but also offers a possible account of how invertebrate nervous systems with flexible plasticity achieve locomotion and adaptation to injury.

Miguel-Blanco Aitor, Manoonpong Poramate

2020

central pattern generator, forward model, legged robot control, lesion-induced plasticity, neural circuits, serotonin, synaptic plasticity, walking machines

Pathology Pathology

MicroRNA signatures associated with lymph node metastasis in intramucosal gastric cancer.

In Modern pathology : an official journal of the United States and Canadian Academy of Pathology, Inc

Although a certain proportion of intramucosal carcinomas (IMCs) of the stomach does metastasize, the majority of patients are currently treated with endoscopic resection without lymph node dissection, and this potentially veils any existing metastasis and may put some patients in danger. In this regard, biological markers from the resected IMC that can predict metastasis are warranted. Here, we discovered unique miRNA expression profiles that consist of 21 distinct miRNAs that are specifically upregulated (miR-628-5p, miR-1587, miR-3175, miR-3620-5p, miR-4459, miR-4505, miR-4507, miR-4720-5p, miR-4742-5p, and miR-6779-5p) or downregulated (miR-106b-3p, miR-125a-5p, miR-151b, miR-181d-5p, miR-486-5p, miR-500a-3p, miR-502-3p, miR-1231, miR-3609, and miR-6831-5p) in metastatic (M)-IMC compared to nonmetastatic (N)-IMC, or nonneoplastic gastric mucosa. Intriguingly, most of these selected miRNAs showed stepwise increased or decreased expression from nonneoplastic tissue to N-IMC to M-IMC. This suggests that common oncogenic mechanisms are gradually intensified during the metastatic process. Using a machine-learning algorithm, we demonstrated that such miRNA signatures could distinguish M-IMC from N-IMC. Gene ontology and pathway analysis revealed that TGF-β signaling was enriched from upregulated miRNAs, whereas E2F targets, apoptosis-related, hypoxia-related, and PI3K/AKT/mTOR signaling pathways, were enriched from downregulated miRNAs. Immunohistochemical staining of samples from multiple institutions indicated that PI3K/AKT/mTOR pathway components, MAPK1, phospho-p44/42 MAPK, and pS6 were highly expressed and the expression of SMAD7, a TGF-β pathway component, was decreased in M-IMC, which could aid in distinguishing M-IMC from N-IMC. The miRNA signature discovered in this study is a valuable biological marker for identifying metastatic potential of IMCs, and provides novel insights regarding the metastatic progression of IMC.

Kim Seokhwi, Bae Won Jung, Ahn Ji Mi, Heo Jin-Hyung, Kim Kyoung-Mee, Choi Kyeong Woon, Sung Chang Ohk, Lee Dakeun

2020-Sep-24

oncology Oncology

2D and 3D convolutional neural networks for outcome modelling of locally advanced head and neck squamous cell carcinoma.

In Scientific reports ; h5-index 158.0

For treatment individualisation of patients with locally advanced head and neck squamous cell carcinoma (HNSCC) treated with primary radiochemotherapy, we explored the capabilities of different deep learning approaches for predicting loco-regional tumour control (LRC) from treatment-planning computed tomography images. Based on multicentre cohorts for exploration (206 patients) and independent validation (85 patients), multiple deep learning strategies including training of 3D- and 2D-convolutional neural networks (CNN) from scratch, transfer learning and extraction of deep autoencoder features were assessed and compared to a clinical model. Analyses were based on Cox proportional hazards regression and model performances were assessed by the concordance index (C-index) and the model's ability to stratify patients based on predicted hazards of LRC. Among all models, an ensemble of 3D-CNNs achieved the best performance (C-index 0.31) with a significant association to LRC on the independent validation cohort. It performed better than the clinical model including the tumour volume (C-index 0.39). Significant differences in LRC were observed between patient groups at low or high risk of tumour recurrence as predicted by the model ([Formula: see text]). This 3D-CNN ensemble will be further evaluated in a currently ongoing prospective validation study once follow-up is complete.
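The concordance index used above measures how often, among comparable patient pairs, the patient who experiences the event earlier is assigned the higher predicted hazard; a minimal pairwise sketch (Harrell's C-index) with invented survival data:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of usable pairs ordered correctly.
    A pair (i, j) is usable when t_i < t_j and patient i had the event;
    ties in predicted risk count as half-concordant."""
    concordant, usable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if i != j and events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / usable

# Hypothetical recurrence times (months), event indicators, model hazards
times  = [6, 12, 18, 24, 30]
events = [1, 1, 0, 1, 0]
risks  = [0.9, 0.6, 0.5, 0.7, 0.1]
print(concordance_index(times, events, risks))
```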

Starke Sebastian, Leger Stefan, Zwanenburg Alex, Leger Karoline, Lohaus Fabian, Linge Annett, Schreiber Andreas, Kalinauskaite Goda, Tinhofer Inge, Guberina Nika, Guberina Maja, Balermpas Panagiotis, von der Grün Jens, Ganswindt Ute, Belka Claus, Peeken Jan C, Combs Stephanie E, Boeke Simon, Zips Daniel, Richter Christian, Troost Esther G C, Krause Mechthild, Baumann Michael, Löck Steffen

2020-Sep-24

Surgery Surgery

Automated rotator cuff tear classification using 3D convolutional neural network.

In Scientific reports ; h5-index 158.0

Rotator cuff tear (RCT) is one of the most common shoulder injuries. When diagnosing RCT, skilled orthopedists visually interpret magnetic resonance imaging (MRI) scan data. For automated and accurate diagnosis of RCT, we propose a full 3D convolutional neural network (CNN) based method using deep learning. This 3D CNN automatically diagnoses the presence or absence of an RCT, classifies the tear size, and provides 3D visualization of the tear location. To train the 3D CNN, the Voxception-ResNet (VRN) structure was used. This architecture uses 3D convolution filters, so it is advantageous in extracting information from 3D data compared with 2D-based CNNs or traditional diagnosis methods. MRI data from 2,124 patients were used to train and test the VRN-based 3D CNN. The network is trained to classify RCT into five classes (None, Partial, Small, Medium, Large-to-Massive). A 3D class activation map (CAM) was visualized by volume rendering to show the localization and size information of RCT in 3D. A comparative experiment between the proposed method and clinical experts was performed using 200 randomly selected test cases, which had been kept separate from the training set. The VRN-based 3D CNN outperformed orthopedists specialized in shoulder and general orthopedists in binary accuracy (92.5% vs. 76.4% and 68.2%), top-1 accuracy (69.0% vs. 45.8% and 30.5%), top-1±1 accuracy (87.5% vs. 79.8% and 71.0%), sensitivity (0.94 vs. 0.86 and 0.90), and specificity (0.90 vs. 0.58 and 0.29). The generated 3D CAM provided effective information regarding the 3D location and size of the tear. Given these results, the proposed method demonstrates the feasibility of artificial intelligence that can assist in clinical RCT diagnosis.
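The top-1±1 accuracy reported above credits a prediction that lands within one ordinal class of the truth; a minimal sketch over the five tear classes (the predicted and true labels below are invented):

```python
# Ordinal tear classes; CLASSES[i] is the name of ordinal index i
CLASSES = ["None", "Partial", "Small", "Medium", "Large-to-Massive"]

def accuracies(true_idx, pred_idx):
    """Return (binary, top-1, top-1±1) accuracy over ordinal class indices.
    Binary = tear present (index > 0) vs absent (index 0)."""
    n = len(true_idx)
    binary = sum((t > 0) == (p > 0) for t, p in zip(true_idx, pred_idx)) / n
    top1   = sum(t == p for t, p in zip(true_idx, pred_idx)) / n
    within = sum(abs(t - p) <= 1 for t, p in zip(true_idx, pred_idx)) / n
    return binary, top1, within

true_idx = [0, 1, 2, 3, 4, 2]   # fabricated ground-truth classes
pred_idx = [0, 2, 2, 4, 4, 0]   # fabricated model predictions
print(accuracies(true_idx, pred_idx))
```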

Shim Eungjune, Kim Joon Yub, Yoon Jong Pil, Ki Se-Young, Lho Taewoo, Kim Youngjun, Chung Seok Won

2020-Sep-24

General General

Machine learning identifies scale-free properties in disordered materials.

In Nature communications ; h5-index 260.0

The vast amount of design freedom in disordered systems expands the parameter space for signal processing. However, this large degree of freedom has hindered the deterministic design of disordered systems for target functionalities. Here, we employ a machine learning approach for predicting and designing wave-matter interactions in disordered structures, thereby identifying scale-free properties for waves. To abstract and map the features of wave behaviors and disordered structures, we develop disorder-to-localization and localization-to-disorder convolutional neural networks, each of which enables the instantaneous prediction of wave localization in disordered structures and the instantaneous generation of disordered structures from given localizations. We demonstrate that the structural properties of the network architectures lead to the identification of scale-free disordered structures having heavy-tailed distributions, thus achieving multiple orders of magnitude improvement in robustness to accidental defects. Our results verify the critical role of neural network structures in determining machine-learning-generated real-space structures and their defect immunity.

Yu Sunkyu, Piao Xianji, Park Namkyoo

2020-09-24

Radiology Radiology

Rapid vessel segmentation and reconstruction of head and neck angiograms using 3D convolutional neural network.

In Nature communications ; h5-index 260.0

The computed tomography angiography (CTA) postprocessing performed manually by technologists is extremely labor-intensive and error-prone. We propose an artificial intelligence reconstruction system, supported by an optimized physiological anatomy-based 3D convolutional neural network, that can automatically achieve CTA reconstruction in healthcare services. This system was trained and tested with 18,766 head and neck CTA scans from 5 tertiary hospitals in China collected between June 2017 and November 2018. The overall reconstruction accuracy on the independent testing dataset is 0.931. The system is clinically applicable due to its consistency with manually processed images, achieving a qualification rate of 92.1%. After five months of application, it reduced the time consumed from 14.22 ± 3.64 min to 4.94 ± 0.36 min, the number of clicks from 115.87 ± 25.9 to 4, and the required labor from 3 technologists to 1. Thus, the system facilitates clinical workflows and provides an opportunity for clinical technologists to improve humanistic patient care.

Fu Fan, Wei Jianyong, Zhang Miao, Yu Fan, Xiao Yueting, Rong Dongdong, Shan Yi, Li Yan, Zhao Cheng, Liao Fangzhou, Yang Zhenghan, Li Yuehua, Chen Yingmin, Wang Ximing, Lu Jie

2020-09-24

General General

Identification of determinants of differential chromatin accessibility through a massively parallel genome-integrated reporter assay.

In Genome research ; h5-index 99.0

A key mechanism in cellular regulation is the ability of the transcriptional machinery to physically access DNA. Transcription factors interact with DNA to alter the accessibility of chromatin, which enables changes to gene expression during development or disease or as a response to environmental stimuli. However, the regulation of DNA accessibility via the recruitment of transcription factors is difficult to study in the context of the native genome because every genomic site is distinct in multiple ways. Here we introduce the multiplexed integrated accessibility assay (MIAA), an assay that measures chromatin accessibility of synthetic oligonucleotide sequence libraries integrated into a controlled genomic context with low native accessibility. We apply MIAA to measure the effects of sequence motifs on cell type-specific accessibility between mouse embryonic stem cells and embryonic stem cell-derived definitive endoderm cells, screening 7905 distinct DNA sequences. MIAA recapitulates differential accessibility patterns of 100-nt sequences derived from natively differential genomic regions, identifying E-box motifs common to epithelial-mesenchymal transition driver transcription factors in stem cell-specific accessible regions that become repressed in endoderm. We show that a single binding motif for a key regulatory transcription factor is sufficient to open chromatin, and classify sets of stem cell-specific, endoderm-specific, and shared accessibility-modifying transcription factor motifs. We also show that overexpression of two definitive endoderm transcription factors, T and Foxa2, results in changes to accessibility in DNA sequences containing their respective DNA-binding motifs and identify preferential motif arrangements that influence accessibility.

Hammelman Jennifer, Krismer Konstantin, Banerjee Budhaditya, Gifford David K, Sherwood Richard I

2020-Sep-24

Radiology Radiology

Musculoskeletal trauma imaging in the era of novel molecular methods and artificial intelligence.

In Injury ; h5-index 49.0

Over the past decade, rapid advancements in molecular imaging (MI) and artificial intelligence (AI) have revolutionized traditional musculoskeletal radiology. Molecular imaging refers to the ability of various methods to characterize and quantify biological processes in vivo, at the molecular level. The extracted information provides the tools to understand the pathophysiology of diseases and thus to detect them early, accurately evaluate their extent, and apply and evaluate targeted treatments. At present, molecular imaging mainly involves CT, MRI, radionuclide, US, and optical imaging and has been reported in many clinical and preclinical studies. Although MI techniques originally targeted central nervous system disorders, their value in musculoskeletal disorders was later studied in depth. Meaningful exploitation of the large volume of imaging data generated by molecular and conventional imaging techniques requires state-of-the-art computational methods that enable rapid handling of large volumes of information. AI allows end-to-end training of computer algorithms to perform tasks encountered in everyday clinical practice, including diagnosis, disease severity classification, and image optimization. Notably, the development of deep learning algorithms has offered novel methods that enable intelligent processing of large imaging datasets in an attempt to automate decision-making in a wide variety of settings related to musculoskeletal trauma. Current applications of AI include the diagnosis of bone and soft tissue injuries, monitoring of the healing process, and prediction of injuries in the professional sports setting. This review presents the current applications of novel MI techniques and methods and the emerging role of AI in the diagnosis and evaluation of musculoskeletal trauma.

Klontzas Michail E, Papadakis Georgios Z, Marias Kostas, Karantanas Apostolos H

2020-Sep-16

Artificial intelligence, Deep learning, Hybrid positron-emission Tomography/MR imaging, Magnetic resonance imaging, Musculoskeletal system/injuries, Neural networks

Surgery Surgery

Rule-based automatic diagnosis of thyroid nodules from intraoperative frozen sections using deep learning.

In Artificial intelligence in medicine ; h5-index 34.0

Frozen sections provide a basis for rapid intraoperative diagnosis that can guide surgery, but the diagnoses often challenge pathologists. Here we propose a rule-based system to differentiate thyroid nodules from intraoperative frozen sections using deep learning techniques. The proposed system consists of three components: (1) automatically locating tissue regions in the whole slide images (WSIs), (2) splitting located tissue regions into patches and classifying each patch into predefined categories using convolutional neural networks (CNN), and (3) integrating the predictions of all patches to form the final diagnosis with a rule-based system. Specifically, we fine-tune the InceptionV3 model for thyroid patch classification by replacing the last fully connected layer with three outputs representing the patch's probabilities of being benign, uncertain, or malignant. Moreover, we design a rule-based protocol to integrate the patches' predictions into the final diagnosis, which provides interpretability for the proposed system. On 259 testing slides, the system correctly predicts 95.3% (61/64) of benign nodules and 96.7% (148/153) of malignant nodules, and classifies 16.2% (42/259) of slides as uncertain, including 19 benign and 16 malignant slides, a sufficiently small number to be manually examined by pathologists or fully processed through permanent sections. In addition, the system allows localization of suspicious regions along with the diagnosis. A typical whole slide image, with 80,000 × 60,000 pixels, can be diagnosed within 1 min, thus satisfying the time requirement for intraoperative diagnosis. To the best of our knowledge, this is the first study to apply deep learning to diagnose thyroid nodules from intraoperative frozen sections. The code is released at https://github.com/PingjunChen/ThyroidRule.
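
The paper's actual rule-based protocol is defined in its text; purely to illustrate the idea of integrating per-patch (benign, uncertain, malignant) probabilities into a slide-level diagnosis, a hypothetical aggregation rule might look like the sketch below (the function name, thresholds, and rules are invented for illustration):

```python
# Illustrative sketch of a rule-based slide-level aggregation step.
# The thresholds and rules here are hypothetical stand-ins for the
# protocol described in the paper.

def aggregate_slide(patch_probs, malignant_thr=0.5, benign_thr=0.8):
    """Combine per-patch (benign, uncertain, malignant) probabilities
    into a slide-level diagnosis."""
    labels = []
    for p_benign, p_uncertain, p_malignant in patch_probs:
        # Assign each patch the class with the highest probability.
        probs = {"benign": p_benign,
                 "uncertain": p_uncertain,
                 "malignant": p_malignant}
        labels.append(max(probs, key=probs.get))

    n = len(labels)
    frac_malignant = labels.count("malignant") / n
    frac_benign = labels.count("benign") / n

    # Rule 1: a substantial malignant component flags the slide.
    if frac_malignant >= malignant_thr:
        return "malignant"
    # Rule 2: require a clear benign majority to call the slide benign.
    if frac_benign >= benign_thr:
        return "benign"
    # Otherwise defer to the pathologist (or permanent sections).
    return "uncertain"
```

The "uncertain" fallback is what gives such a protocol its interpretability: ambiguous slides are explicitly routed to manual review rather than forced into a binary call.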

Li Yuan, Chen Pingjun, Li Zhiyuan, Su Hai, Yang Lin, Zhong Dingrong

2020-Aug

Deep learning, Frozen section, Rule-based protocol, Thyroid nodule, Whole slide image

Radiology Radiology

Imaging Diagnostics and Pathology in SARS-CoV-2-Related Diseases.

In International journal of molecular sciences ; h5-index 102.0

In December 2019, physicians reported numerous patients showing pneumonia of unknown origin in the Chinese region of Wuhan. Following the spread of the infection around the world, the World Health Organization (WHO) declared the novel severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) outbreak a global pandemic on 11 March 2020. The scientific community is exerting an extraordinary effort to elucidate all aspects of SARS-CoV-2, such as its structure, ultrastructure, invasion mechanisms, replication mechanisms, or drugs for treatment, mainly through in vitro studies. Thus, clinical in vivo data can provide a test bench for new discoveries in the field of SARS-CoV-2, finding new solutions to fight the current pandemic. During this dramatic situation, the normal scientific protocols for the development of new diagnostic procedures or drugs are frequently not fully applied in order to speed up these processes. In this context, interdisciplinarity is fundamental. Specifically, a great contribution can be provided by the association and interpretation of data derived from medical disciplines based on the study of images, such as radiology, nuclear medicine, and pathology. Therefore, here, we highlight the most recent histopathological and imaging data concerning SARS-CoV-2 infection in the lung and other human organs such as the kidney, heart, and vascular system. In addition, we evaluate possible matches among data from radiology, nuclear medicine, and pathology departments in order to support the intense scientific work to address the SARS-CoV-2 pandemic. In this regard, the development of artificial intelligence algorithms capable of correlating these clinical data with new scientific discoveries concerning SARS-CoV-2 might be the keystone to emerging from the pandemic.

Scimeca Manuel, Urbano Nicoletta, Bonfiglio Rita, Montanaro Manuela, Bonanno Elena, Schillaci Orazio, Mauriello Alessandro

2020-Sep-22

SARS-CoV-2, artificial intelligence, imaging diagnostic, pandemic, pathology

General General

A supervised machine learning-based methodology for analyzing dysregulation in splicing machinery: An application in cancer diagnosis.

In Artificial intelligence in medicine ; h5-index 34.0

Deregulated splicing machinery components have been shown to be associated with the development of several types of cancer; therefore, determining such alterations can help the development of tumor-specific molecular targets for early prognosis and therapy. Determining such splicing components, however, is not a straightforward task, mainly due to the heterogeneity of tumors, the variability across samples, and the "fat-short" nature of genomic datasets (many features, few samples). In this work, a supervised machine learning-based methodology is proposed that allows the determination of subsets of relevant splicing components that best discriminate samples. The methodology comprises three main phases: first, a ranking of features is determined by applying feature weighting algorithms that compute the importance of each splicing component; second, the best subset of features that allows the induction of an accurate classifier is determined through an effective heuristic search; third, the confidence in the induced classifier is assessed by explaining its individual predictions and its global behavior. Finally, an extensive experimental study was conducted on a large collection of transcript-based datasets, illustrating the utility and benefit of the proposed methodology for analyzing dysregulation in splicing machinery.
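
The rank-then-search idea behind the first two phases can be sketched in a few lines. This is a deliberately simple stand-in, not the paper's method: mean-difference feature weights and a nearest-centroid classifier replace the feature-weighting algorithms and heuristic search the authors evaluate, and all function names are illustrative.

```python
# Phase 1 stand-in: score each feature by the absolute difference of
# its class means (binary labels 0/1 assumed).
def feature_weights(X, y):
    n_feat = len(X[0])
    weights = []
    for j in range(n_feat):
        m0 = [x[j] for x, lab in zip(X, y) if lab == 0]
        m1 = [x[j] for x, lab in zip(X, y) if lab == 1]
        weights.append(abs(sum(m1) / len(m1) - sum(m0) / len(m0)))
    return weights

# Helper: training accuracy of a nearest-centroid classifier
# restricted to the feature indices in `subset`.
def centroid_accuracy(X, y, subset):
    def centroid(label):
        rows = [[x[j] for j in subset] for x, lab in zip(X, y) if lab == label]
        return [sum(col) / len(rows) for col in zip(*rows)]
    c0, c1 = centroid(0), centroid(1)
    correct = 0
    for x, label in zip(X, y):
        v = [x[j] for j in subset]
        d0 = sum((a - b) ** 2 for a, b in zip(v, c0))
        d1 = sum((a - b) ** 2 for a, b in zip(v, c1))
        correct += int((d1 < d0) == (label == 1))
    return correct / len(y)

# Phase 2 stand-in: greedy forward selection over the weight-ranked
# features, keeping a feature only if it improves training accuracy.
def greedy_select(X, y, max_features=3):
    w = feature_weights(X, y)
    ranked = sorted(range(len(w)), key=lambda j: -w[j])
    subset, best_acc = [], 0.0
    for j in ranked:
        acc = centroid_accuracy(X, y, subset + [j])
        if acc > best_acc:
            subset, best_acc = subset + [j], acc
        if len(subset) >= max_features:
            break
    return subset, best_acc
```

The point of the two-phase design is that the cheap weighting step prunes the search space before the (comparatively expensive) wrapper search runs, which matters precisely in the fat-short regime the abstract describes.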

Reyes Oscar, Pérez Eduardo, Luque Raúl M, Castaño Justo, Ventura Sebastián

2020-Aug

Alternative Splicing, Classification methods, Explaining classifier’s predictions, Feature weighting methods, Transcript-based analysis

General General

EBM+: Advancing Evidence-Based Medicine via two level automatic identification of Populations, Interventions, Outcomes in medical literature.

In Artificial intelligence in medicine ; h5-index 34.0

Evidence-Based Medicine (EBM) has been an important practice for medical practitioners. However, as the number of medical publications increases dramatically, it is becoming extremely difficult for medical experts to review all the available content and make an informative treatment plan for their patients. A variety of frameworks, including the PICO framework, which is named after its elements (Population, Intervention, Comparison, Outcome), have been developed to enable fine-grained searches as the first step toward faster decision making. In this work, we propose a novel entity recognition system that identifies PICO entities within medical publications and achieves state-of-the-art performance on the task. This is achieved by the combination of four 2D Convolutional Neural Networks (CNNs) for character feature extraction and a Highway Residual connection to facilitate deep neural network architectures. We further introduce a PICO Statement classifier that identifies sentences that not only contain all PICO entities but also answer questions stated in PICO. To facilitate this task we also introduce a high-quality dataset, manually annotated by medical practitioners. With the combination of our proposed PICO Entity Recognizer and PICO Statement classifier, we aim to advance EBM and enable its faster and more accurate practice.

Stylianou Nikolaos, Razis Gerasimos, Goulis Dimitrios G, Vlahavas Ioannis

2020-Aug

Evidence Based Medicine, Machine learning, Natural Language Processing, Neural networks, PICO

Radiology Radiology

Handling imbalanced medical image data: A deep-learning-based one-class classification approach.

In Artificial intelligence in medicine ; h5-index 34.0

In clinical settings, many medical image datasets suffer from an imbalance problem that hampers the detection of outliers (rare health care events), as most classification methods assume an equal occurrence of classes. Identifying outliers in imbalanced datasets has thus become a crucial issue. To help address this challenge, one-class classification, which focuses on learning a model using samples from only a single given class, has attracted increasing attention. Previous one-class modeling usually uses feature mapping or feature fitting to enforce the feature learning process. However, these methods are limited for medical images, which usually have complex features. In this paper, a novel method is proposed to enable deep learning models to optimally learn single-class-relevant inherent imaging features by leveraging the concept of imaging complexity. We investigate and compare the effects of simple but effective perturbing operations applied to images to capture imaging complexity and to enhance feature learning. Extensive experiments are performed on four clinical datasets to show that the proposed method outperforms four state-of-the-art methods.

Gao Long, Zhang Lei, Liu Chang, Wu Shandong

2020-Aug

Data imbalance, Deep learning, Image complexity, Medical image classification

General General

Variable step dynamic threshold local binary pattern for classification of atrial fibrillation.

In Artificial intelligence in medicine ; h5-index 34.0

OBJECTIVE : In this paper, we proposed new methods for feature extraction in machine learning-based classification of atrial fibrillation from ECG signal.

METHODS : Our proposed methods improved conventional 1-dimensional local binary pattern method in two ways. First, we proposed a dynamic threshold LBP code generation method for use with 1-dimensional signals, enabling the generated LBP codes to have a more detailed representation of the signal morphological pattern. Second, we introduced a variable step value into the LBP code generation algorithm to better cope with a high sampling frequency input signal without a downsampling process. The proposed methods do not employ computationally expensive processes such as filtering, wavelet transform, up/downsampling, or beat detection, and can be implemented using only simple addition, division, and compare operations.
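
As a rough illustration of the mechanics described above, the sketch below generates 1-D LBP codes with a data-driven threshold and a variable neighbor step. The exact threshold rule and bit layout in the paper differ; here the threshold is an assumed illustrative choice (a fraction of the local amplitude range), and the function name and parameters (`lbp_codes`, `p`, `step`, `alpha`) are invented for this sketch.

```python
# Hypothetical 1-D LBP code generator with a dynamic threshold and a
# variable step. Each center sample is compared against 2*p neighbors
# spaced `step` samples apart; the comparisons are packed into a
# 2*p-bit code. Only additions, divisions, and comparisons are used,
# in keeping with the method's low computational cost.

def lbp_codes(signal, p=4, step=2, alpha=0.1):
    """signal : sequence of samples
    p      : neighbors on each side (code has 2*p bits)
    step   : spacing between compared samples, so high-sampling-rate
             signals can be handled without downsampling
    alpha  : scales the per-window dynamic threshold (illustrative)"""
    codes = []
    half = p * step
    for i in range(half, len(signal) - half):
        window = signal[i - half : i + half + 1]
        # Dynamic threshold: a fraction of the local amplitude range.
        thr = alpha * (max(window) - min(window))
        center = signal[i]
        code = 0
        for k in range(1, p + 1):
            # One bit per left neighbor, one per right neighbor.
            code = (code << 1) | (signal[i - k * step] >= center + thr)
            code = (code << 1) | (signal[i + k * step] >= center + thr)
        codes.append(code)
    return codes
```

A histogram of these codes over a signal window would then serve as the feature vector fed to the SVM classifier.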

RESULTS : Combining these two approaches, our proposed variable step dynamic threshold local binary pattern method achieved 99.11% sensitivity and 99.29% specificity when used as a feature generation algorithm in support vector machine classification of atrial fibrillation from MIT-BIH Atrial Fibrillation Database dataset. When applied on signals from MIT-BIH Arrhythmia Database, our proposed method achieved similarly good 99.38% sensitivity and 98.97% specificity.

CONCLUSION : Our proposed methods achieved one of the best results among published works in atrial fibrillation classification using the same dataset while using less computationally expensive calculations, without significant performance degradation when applied on signals from multiple databases with different sampling frequencies.

Yazid Muhammad, Abdur Rahman Mahrus

2020-Aug

AFDB, Atrial fibrillation, Dynamic threshold, Local binary pattern, MITDB, Variable step

General General

Indoor location identification of patients for directing virtual care: An AI approach using machine learning and knowledge-based methods.

In Artificial intelligence in medicine ; h5-index 34.0

In a digitally enabled healthcare setting, we posit that an individual's current location is pivotal for supporting many virtual care services, such as tailoring educational content to an individual's current location and, hence, current stage in an acute care process; improving activity recognition for supporting self-management in a home-based setting; and guiding individuals with cognitive decline through daily activities in their home. However, unobtrusively estimating an individual's indoor location in real-world care settings is still a challenging problem. Moreover, the needs of location-specific care interventions go beyond absolute coordinates and require the individual's discrete semantic location; i.e., it is the concrete type of an individual's location (e.g., exam vs. waiting room; bathroom vs. kitchen) that will drive the tailoring of educational content or the recognition of activities. We utilized machine learning methods to accurately identify an individual's discrete location, together with knowledge-based models and tools to supply the associated semantics of identified locations. We considered clustering solutions to improve localization accuracy at the expense of granularity, and investigated sensor fusion-based heuristics to rule out false location estimates. We present an AI-driven indoor localization approach that integrates both data-driven and knowledge-based processes and artifacts. We illustrate the application of our approach in two compelling healthcare use cases, and empirically validate our localization approach at the emergency unit of a large Canadian pediatric hospital.

Van Woensel William, Roy Patrice C, Abidi Syed Sibte Raza, Abidi Samina Raza

2020-Aug

Activities of daily living, Ambient assisted living, Ambient sensors, Data fusion, Indoor localization, Machine learning, Self-management, Semantic web, Virtual care, eHealth platform

Pathology Pathology

Using topological data analysis and pseudo time series to infer temporal phenotypes from electronic health records.

In Artificial intelligence in medicine ; h5-index 34.0

Temporal phenotyping enables clinicians to better understand observable characteristics of a disease as it progresses. Modelling disease progression that captures interactions between phenotypes is inherently challenging. Temporal models that capture change in disease over time can identify the key features that characterize disease subtypes that underpin these trajectories. These models will enable clinicians to identify early warning signs of progression in specific sub-types and therefore to make informed decisions tailored to individual patients. In this paper, we explore two approaches to building temporal phenotypes based on the topology of data: topological data analysis and pseudo time-series. Using type 2 diabetes data, we show that the topological data analysis approach is able to identify disease trajectories and that pseudo time-series can infer a state space model characterized by transitions between hidden states that represent distinct temporal phenotypes. Both approaches highlight lipid profiles as key factors in distinguishing the phenotypes.

Dagliati Arianna, Geifman Nophar, Peek Niels, Holmes John H, Sacchi Lucia, Bellazzi Riccardo, Sajjadi Seyed Erfan, Tucker Allan

2020-Aug

Electronic phenotyping, Longitudinal studies, Type 2 diabetes, Unsupervised machine learning

Surgery Surgery

Deep learning to find colorectal polyps in colonoscopy: A systematic literature review.

In Artificial intelligence in medicine ; h5-index 34.0

Colorectal cancer has a high incidence rate worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold standard procedure for diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization, and segmentation. Through a systematic search, 35 works were retrieved. The current systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; comments on seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. For detection and localization tasks, the most used metric for reporting is recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Even despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated, and publicly available database, which also includes the most convenient metrics to report results. Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate.
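
For reference, the two metrics the review highlights, recall for detection/localization and Intersection over Union for segmentation, reduce to simple pixel counts on binary masks. A minimal sketch (function names are illustrative; real pipelines use array libraries rather than nested lists):

```python
def iou(mask_a, mask_b):
    """Pixel-wise Intersection over Union between two binary masks
    (equal-shape nested lists of 0/1). Returns 1.0 for two empty masks."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b   # pixel set in both masks
            union += a or b    # pixel set in either mask
    return inter / union if union else 1.0

def recall(pred, truth):
    """Fraction of ground-truth positive pixels recovered by the
    prediction: TP / (TP + FN)."""
    tp = fn = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            tp += p and t          # predicted positive, truly positive
            fn += (not p) and t    # missed positive
    return tp / (tp + fn) if (tp + fn) else 1.0
```

The review's reproducibility concern follows directly from such definitions: reported numbers are only comparable when computed at the same granularity (pixel vs. polyp vs. frame), which is one motivation for a common validation framework.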

Sánchez-Peralta Luisa F, Bote-Curiel Luis, Picón Artzai, Sánchez-Margallo Francisco M, Pagador J Blas

2020-Aug

Colorectal cancer, Deep learning, Detection, Localization, Segmentation

General General

Continuous blood pressure measurement from one-channel electrocardiogram signal using deep-learning techniques.

In Artificial intelligence in medicine ; h5-index 34.0

Continuous blood pressure (BP) measurement is crucial for reliable and timely hypertension detection. State-of-the-art continuous BP measurement methods based on pulse transit time or multiple parameters require simultaneous electrocardiogram (ECG) and photoplethysmogram (PPG) signals. Compared with PPG signals, ECG signals are easy to collect using wearable devices. This study examined a novel continuous BP estimation approach using one-channel ECG signals for unobtrusive BP monitoring. A BP model is developed based on the fusion of a residual network and long short-term memory to obtain the spatial-temporal information of ECG signals. The public multiparameter intelligent monitoring waveform database, which contains ECG, PPG, and invasive BP data of patients in intensive care units, is used to develop and verify the model. Experimental results demonstrated that the proposed approach exhibited an estimation error of 0.07 ± 7.77 mmHg for mean arterial pressure (MAP) and 0.01 ± 6.29 mmHg for diastolic BP (DBP), which comply with the Association for the Advancement of Medical Instrumentation standard. According to the British Hypertension Society standards, the results achieved grade A for MAP and DBP estimation and grade B for systolic BP (SBP) estimation. Furthermore, we verified the model with an independent dataset of arrhythmia patients. The experimental results exhibited estimation errors of -0.22 ± 5.82 mmHg, -0.57 ± 4.39 mmHg, and -0.75 ± 5.62 mmHg for SBP, MAP, and DBP measurements, respectively. These results indicate the feasibility of estimating BP using a one-channel ECG signal, thus enabling continuous BP measurement for ubiquitous health care applications.

Miao Fen, Wen Bo, Hu Zhejing, Fortino Giancarlo, Wang Xi-Ping, Liu Zeng-Ding, Tang Min, Li Ye

2020-Aug

Blood pressure, ECG, Long short-term memory, Residual network

General General

Enhancing Quality of Patients Care and Improving Patient Experience in China with Assistance of Artificial Intelligence.

In Chinese medical sciences journal = Chung-kuo i hsueh k'o hsueh tsa chih

Improving the health of the Chinese people has become a national strategy according to Healthy China 2030. Patient experience evaluation examines health care services from the perspective of patients and is important for improving health care quality. Applying artificial intelligence (AI) to patient experience is an innovative approach that assists the continuous improvement of the quality of patient care. A nursing quality platform based on patient experience data and empowered by AI technologies has been established in China for the surveillance and analysis of the quality of patient care. It contains data from nearly 1300 healthcare facilities, based on which portraits of nursing service quality can be drawn. The patient experience big data platform has shown potential for helping healthcare facilities improve patient care quality. More efforts are needed to achieve the goal of enhancing people's sense of health gain.

Wang Zheng, Zhao Qing Hua, Yang Jing Lin, Zhou Feng

2020-Sep-30

Radiology Radiology

Publisher Correction to: Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images.

In Diagnostic pathology ; h5-index 35.0

An amendment to this paper has been published and can be accessed via the original article.

Fassler Danielle J, Abousamra Shahira, Gupta Rajarsi, Chen Chao, Zhao Maozheng, Paredes David, Batool Syeda Areeha, Knudsen Beatrice S, Escobar-Hoyos Luisa, Shroyer Kenneth R, Samaras Dimitris, Kurc Tahsin, Saltz Joel

2020-Sep-24

General General

Hourly 5-km surface total and diffuse solar radiation in China, 2007-2018.

In Scientific data

Surface solar radiation is an indispensable parameter for numerical models, and its diffuse component contributes to carbon uptake in ecosystems. We generated a 12-year (2007-2018) hourly dataset from Multi-functional Transport Satellite (MTSAT) observations, including surface total solar radiation (Rs) and diffuse radiation (Rdif), with 5-km spatial resolution, through deep learning techniques. The deep network tackles the integration of spatial patterns and the simulation of complex radiative transfer by combining a convolutional neural network with a multi-layer perceptron. Validation against ground measurements shows that the correlation coefficient, mean bias error, and root mean square error are 0.94, 2.48 W/m2, and 89.75 W/m2 for hourly Rs and 0.85, 8.63 W/m2, and 66.14 W/m2 for hourly Rdif, respectively. The correlation coefficients of Rs and Rdif increase to 0.94 (0.96) and 0.89 (0.92) at daily (monthly) scales, respectively. The spatially continuous hourly maps accurately reflect regional differences and restore the diurnal cycles of solar radiation at fine resolution. This dataset can be valuable for studies on regional climate change, terrestrial ecosystem simulations, and photovoltaic applications.

Jiang Hou, Lu Ning, Qin Jun, Yao Ling

2020-09-23

General General

Preparing to adapt is key for Olympic curling robots.

In Science robotics

Continued advances in machine learning could enable robots to solve tasks on a human level and adapt to changing conditions.

Stork Johannes A

2020-Sep-23

General General

Pandemic number five - Latest insights into the COVID-19 crisis.

In Biomedical journal

About nine months after the emergence of SARS-CoV-2, this special issue of the Biomedical Journal takes stock of its evolution into a pandemic. We gain a comprehensive overview of the history and virology of SARS-CoV-2, the epidemiology of COVID-19, and the development of therapies and vaccines, based on useful tools such as a pseudovirus system, artificial intelligence, and the repurposing of existing drugs. Moreover, we learn about a potential link between COVID-19 and oral health, as well as some of the strategies that allowed Taiwan to handle the outbreak exceptionally well, including the establishment of a COVID-19 biobank, online tools for contact tracing, and the efficient management of emergency departments.

Häfner Sophia Julia

2020-Aug-27

COVID-19, Contact tracing, Pseudovirus system, Repurposing drugs, SARS-CoV-2

General General

Deep learning-enabled analysis reveals distinct neuronal phenotypes induced by aging and cold-shock.

In BMC biology

BACKGROUND : Access to quantitative information is crucial to obtaining a deeper understanding of biological systems. In addition to being low-throughput, traditional image-based analysis is mostly limited to error-prone qualitative or semi-quantitative assessment of phenotypes, particularly for complex subcellular morphologies. The PVD neuron in Caenorhabditis elegans, which is responsible for harsh touch and thermosensation, undergoes structural degeneration as nematodes age, characterized by the appearance of dendritic protrusions. Analysis of these neurodegenerative patterns is labor-intensive and limited to qualitative assessment.

RESULTS : In this work, we apply deep learning to perform quantitative image-based analysis of complex neurodegeneration patterns exhibited by the PVD neuron in C. elegans. We apply a convolutional neural network algorithm (Mask R-CNN) to identify neurodegenerative subcellular protrusions that appear after cold-shock or as a result of aging. A multiparametric phenotypic profile captures the unique morphological changes induced by each perturbation. We identify that acute cold-shock-induced neurodegeneration is reversible and depends on rearing temperature and, importantly, that aging and cold-shock induce distinct neuronal beading patterns.

CONCLUSION : The results of this work indicate that implementing deep learning for challenging image segmentation of PVD neurodegeneration enables quantitatively tracking subtle morphological changes in an unbiased manner. This analysis revealed that distinct patterns of morphological alteration are induced by aging and cold-shock, suggesting different mechanisms at play. This approach can be used to identify the molecular components involved in orchestrating neurodegeneration and to characterize the effect of other stressors on PVD degeneration.

Saberi-Bosari Sahand, Flores Kevin B, San-Miguel Adriana

2020-Sep-23

Aging, C. elegans, Convolutional neural networks, Deep learning, Machine learning, Neurodegeneration, Neuronal beading, Phenotyping

Radiology Radiology

Differential diagnosis and mutation stratification of desmoid-type fibromatosis on MRI using radiomics.

In European journal of radiology ; h5-index 47.0

PURPOSE : Diagnosing desmoid-type fibromatosis (DTF) requires an invasive tissue biopsy with β-catenin staining and CTNNB1 mutational analysis, and is challenging due to its rarity. The aim of this study was to evaluate radiomics for distinguishing DTF from soft tissue sarcomas (STS), and in DTF, for predicting the CTNNB1 mutation types.

METHODS : Patients with histologically confirmed extremity STS (non-DTF) or DTF and at least a pretreatment T1-weighted (T1w) MRI scan were retrospectively included. Tumors were semi-automatically annotated on the T1w scans, from which 411 features were extracted. Prediction models were created using a combination of various machine learning approaches. Evaluation was performed through a 100x random-split cross-validation. The model for DTF vs. non-DTF was compared to classification by two radiologists on a location matched subset.

RESULTS : The data included 203 patients (72 DTF, 131 STS). The T1w radiomics model showed a mean AUC of 0.79 on the full dataset. Addition of T2w or T1w post-contrast scans did not improve the performance. On the location-matched cohort, the T1w model had a mean AUC of 0.88, while the radiologists had AUCs of 0.80 and 0.88, respectively. For the prediction of the CTNNB1 mutation types (S45F, T41A, and wild-type), the T1w model showed AUCs of 0.61, 0.56, and 0.74.

CONCLUSIONS : Our radiomics model was able to distinguish DTF from STS with high accuracy similar to two radiologists, but was not able to predict the CTNNB1 mutation status.

Timbergen Milea J M, Starmans Martijn P A, Padmos Guillaume A, Grünhagen Dirk J, van Leenders Geert J L H, Hanff D F, Verhoef Cornelis, Niessen Wiro J, Sleijfer Stefan, Klein Stefan, Visser Jacob J

2020-Sep-08

Aggressive, Beta catenin, Fibromatosis, Machine learning, Magnetic resonance imaging, Radiomics
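
The 100x random-split cross-validation used for evaluation can be sketched generically with scikit-learn. This is only an illustration of the evaluation scheme: the synthetic features, the logistic classifier, and the function name `repeated_random_split_auc` are stand-ins for the study's 411 radiomics features and its ensemble of machine learning approaches.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def repeated_random_split_auc(X, y, n_splits=100, test_size=0.3, seed=0):
    """Mean AUC over repeated stratified random train/test splits."""
    rng = np.random.RandomState(seed)
    aucs = []
    for _ in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y,
            random_state=rng.randint(2**31 - 1))
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return float(np.mean(aucs))

# Synthetic stand-in data: only the first feature column carries signal
rng = np.random.RandomState(1)
X = rng.randn(200, 20)
y = (X[:, 0] + 0.5 * rng.randn(200) > 0).astype(int)
mean_auc = repeated_random_split_auc(X, y, n_splits=20)
print(f"mean AUC over 20 random splits: {mean_auc:.2f}")
```

Averaging over many random splits, rather than a single hold-out set, gives a more stable performance estimate for small cohorts such as rare-tumour studies.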

General General

Causal conflicts produce domino effects.

In Quarterly journal of experimental psychology (2006)

Inconsistent beliefs call for revision-but which of them should individuals revise? A long-standing view is that they should make minimal changes that restore consistency. An alternative view is that their primary task is to explain how the inconsistency arose. Hence, they are likely to violate minimalism in two ways: they should infer more information than is strictly necessary to establish consistency and they should reject more information than is strictly necessary to establish consistency. Previous studies corroborated the first effect: reasoners use causal simulations to build explanations that resolve inconsistencies. Here, we show that the second effect is true too: they use causal simulations to reject more information than is strictly necessary to establish consistency. When they abandon a cause, the effects of the cause topple like dominos: Reasoners tend to deny the occurrence of each subsequent event in the chain. Four studies corroborated this prediction.

Khemlani Sangeet, Johnson-Laird P N

2020-Sep-23

Inconsistency, bridging inferences, causal reasoning, domino effects, mental models, minimalism

General General

Is Fidgety Philip's ground truth also ours? The creation and application of a machine learning algorithm.

In Journal of psychiatric research ; h5-index 59.0

BACKGROUND : Behavioral observations support clinical in-depth phenotyping, but phenotyping and pattern recognition are affected by training background. As Attention Deficit Hyperactivity Disorder, Restless Legs Syndrome/Willis-Ekbom Disease, and medication-induced activation syndromes (including increased irritability and/or akathisia) all present with hyperactive behaviors featuring hyper-arousability and/or hypermotor restlessness (H-behaviors), we first developed a non-interpretative, neutral pictogram-guided phenotyping language (PG-PL) for describing body-segment movements during sitting (Data in Brief).

METHODOLOGY & RESULTS : The PG-PL was applied to annotate 12 one-minute sitting videos (inter-observer agreements >85%->97%), and these manual annotations were used as ground truth to develop an automated algorithm using OpenPose, which locates skeletal landmarks in 2D video. We evaluated the algorithm's performance against the ground truth by computing the area under the receiver operating characteristic curve (>0.79 for the legs, arms, and feet, but 0.65 for the head). While our pixel displacement algorithm performed well for the legs, arms, and feet, it predicted head motion less well, indicating the need for further investigation.

CONCLUSION : This first automated analysis algorithm allows us to begin discussing distinct phenotypical characteristics of H-behaviors during structured behavioral observations, and may support differential diagnostic considerations via in-depth phenotyping of sitting behaviors and, in consequence, inform better treatment concepts.

Beyzaei Nadia, Bao Seraph, Bu Yanyun, Hung Linus, Hussaina Hebah, Maher Khaola Safia, Chan Melvin, Garn Heinrich, Kloesch Gerhard, Kohn Bernhard, Kuzeljevic Boris, McWilliams Scout, Spruyt Karen, Tse Emmanuel, Machiel Van der Loos Hendrik F, Kuo Calvin, Ipsiroglu Osman S

2020-Aug-29

Adverse drug reactions, Misdiagnosis, Movement disorders, Over-medication, Sleep-related movement disorders
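
A minimal sketch of the pixel-displacement idea, assuming OpenPose-style 2D keypoints per frame. The thresholding step and function names below are illustrative assumptions; the paper's exact algorithm may differ.

```python
import numpy as np

def segment_motion(keypoints):
    """Per-transition motion magnitude for one body segment.

    keypoints: (n_frames, n_points, 2) array of 2D landmark coordinates
    (e.g. as produced by a pose estimator such as OpenPose for the legs).
    Returns the mean Euclidean displacement between consecutive frames.
    """
    diffs = np.diff(keypoints, axis=0)           # (n_frames-1, n_points, 2)
    return np.linalg.norm(diffs, axis=2).mean(axis=1)

def moving_frames(keypoints, threshold):
    """Binary movement labels: 1 where displacement exceeds the threshold."""
    return (segment_motion(keypoints) > threshold).astype(int)

# Toy example: 5 frames, 2 landmarks; the segment jumps at frame 3
kp = np.zeros((5, 2, 2))
kp[3:] += 10.0
print(moving_frames(kp, threshold=1.0))  # → [0 0 1 0]
```

Such per-segment binary labels can then be compared against manual annotations to compute the areas under the ROC curve reported above.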

Radiology Radiology

Approximating anatomically-guided PET reconstruction in image space using a convolutional neural network.

In NeuroImage ; h5-index 117.0

In the last two decades, it has been shown that anatomically-guided PET reconstruction can lead to improved bias-noise characteristics in brain PET imaging. However, despite promising results in simulations and first studies, anatomically-guided PET reconstructions are not yet available for routine clinical use, for several reasons. In light of this, we investigate whether the improvements of anatomically-guided PET reconstruction methods can be achieved entirely in the image domain with a convolutional neural network (CNN). An entirely image-based CNN post-reconstruction approach has the advantage that no access to PET raw data is needed; moreover, the prediction times of trained CNNs are extremely fast on state-of-the-art GPUs, which will substantially facilitate the evaluation, fine-tuning, and application of anatomically-guided PET reconstruction in real-world clinical settings. In this work, we demonstrate that anatomically-guided PET reconstruction using the asymmetric Bowsher prior can be well-approximated by a purely shift-invariant convolutional neural network in image space, allowing the generation of anatomically-guided PET images in almost real-time. We show that applying dedicated data augmentation techniques in the training phase, in which 16 [18F]FDG and 10 [18F]PE2I data sets were used, leads to a CNN that is robust against the PET tracer used, the noise level of the input PET images, and the input MRI contrast. A detailed analysis of our CNN in 36 [18F]FDG, 18 [18F]PE2I, and 7 [18F]FET test data sets demonstrates that the image quality of our trained CNN is very close to that of the target reconstructions in terms of regional mean recovery and regional structural similarity.

Schramm Georg, Rigie David, Vahle Thomas, Rezaei Ahmadreza, Laere Koen van, Shepherd Timothy, Nuyts Johan, Boada Fernando

2020-Sep-21

Image reconstruction, Machine learning, Magnetic Resonance Imaging, Molecular Imaging, Quantification

Public Health Public Health

A research agenda for ageing in China in the 21st century (2nd edition): Focusing on basic and translational research, long-term care, policy and social networks.

In Ageing research reviews ; h5-index 66.0

One of the key issues facing public healthcare is the global trend of an increasingly ageing society which continues to present policy makers and caregivers with formidable healthcare and socio-economic challenges. Ageing is the primary contributor to a broad spectrum of chronic disorders all associated with a lower quality of life in the elderly. In 2019, the Chinese population constituted 18% of the world population, with 164.5 million Chinese citizens aged 65 and above (65+), and 26 million aged 80 or above (80+). China has become an ageing society, and as it continues to age it will continue to exacerbate the burden borne by current family and public healthcare systems. Major healthcare challenges involved with caring for the elderly in China include the management of chronic non-communicable diseases (CNCDs), physical frailty, neurodegenerative diseases, and cardiovascular diseases, along with emerging challenges such as providing sufficient dental care, combating the rising prevalence of sexually transmitted diseases among nursing home communities, providing support for increased incidences of immune diseases, and the growing necessity to provide palliative care for the elderly. At the governmental level, it is necessary to make long-term strategic plans to respond to the pressures of an ageing society, especially to establish a nationwide, affordable, annual health check system to facilitate early diagnosis and provide access to affordable treatments. China has begun work on several activities to address these issues including the recent completion of the Ten-year Health-Care Reform project, the implementation of the Healthy China 2030 Action Plan, and the opening of the National Clinical Research Center for Geriatric Disorders.
There are also societal challenges, namely the shift from an extended family system in which the younger provide home care for their elderly family members, to the current trend in which young people are increasingly migrating towards major cities for work, increasing reliance on nursing homes to compensate, especially following the outcomes of the 'one child policy' and the 'empty-nest elderly' phenomenon. At the individual level, it is important to provide avenues for people to seek and improve their own knowledge of health and disease, to encourage them to seek medical check-ups to prevent/manage illness, and to find ways to promote modifiable health-related behaviors (social activity, exercise, healthy diets, reasonable diet supplements) to enable healthier, happier, longer, and more productive lives in the elderly. Finally, at the technological or treatment level, there is a focus on modern technologies to counteract the negative effects of ageing. Researchers are striving to produce drugs that can mimic the effects of 'exercising more, eating less', while other anti-ageing molecules from molecular gerontologists could help to improve 'healthspan' in the elderly. Machine learning, 'Big Data', and other novel technologies can also be used to monitor disease patterns at the population level and may be used to inform policy design in the future. Collectively, synergies across disciplines on policies, geriatric care, drug development, personal awareness, the use of big data, machine learning and personalized medicine will transform China into a country that enables the most for its elderly, maximizing and celebrating their longevity in the coming decades. This is the 2nd edition of the review paper (Fang EF et al., Ageing Re. Rev. 2015).

Fang Evandro F, Xie Chenglong, Schenkel Joseph A, Wu Chenkai, Long Qian, Cui Honghua, Aman Yahyah, Frank Johannes, Liao Jing, Zou Huachun, Wang Ninie Y, Wu Jing, Liu Xiaoting, Li Tao, Fang Yuan, Niu Zhangming, Yang Guang, Hong Jiangshui, Wang Qian, Chen Guobing, Li Jun, Chen Hou-Zao, Kang Lin, Su Huanxing, Gilmour Brian C, Zhu Xinqiang, Jiang Hong, He Na, Tao Jun, Leng Sean Xiao, Tong Tanjun, Woo Jean

2020-Sep-21

Ageing policy, Dementia, Inflammageing, Oral ageing, Sexually transmitted diseases, Square dancing

General General

Model-Based Autoencoders for Imputing Discrete single-cell RNA-seq Data.

In Methods (San Diego, Calif.)

Deep neural networks have been widely applied for missing data imputation. However, most existing studies have been focused on imputing continuous data, while discrete data imputation is under-explored. Discrete data is common in the real world, especially in research areas of bioinformatics, genetics, and biochemistry. In particular, large amounts of recent genomic data are discrete count data generated from single-cell RNA sequencing (scRNA-seq) technology. Most scRNA-seq studies produce a discrete matrix with prevailing 'false' zero count observations (missing values). To make downstream analyses more effective, imputation, which recovers the missing values, is often conducted as the first step in pre-processing scRNA-seq data. In this paper, we propose a novel Zero-Inflated Negative Binomial (ZINB) model-based autoencoder for imputing discrete scRNA-seq data. The novelties of our method are twofold. First, in addition to optimizing the ZINB likelihood, we propose to explicitly model the dropout events that cause missing values by using the Gumbel-Softmax distribution. Second, the zero-inflated reconstruction is further optimized with respect to the raw count matrix. Extensive experiments on simulation datasets demonstrate that the zero-inflated reconstruction significantly improves imputation accuracy. Real data experiments show that the proposed imputation can enhance separating different cell types and improve the accuracy of differential expression analysis.

Tian Tian, Min Martin Renqiang, Wei Zhi

2020-Sep-21

Deep learning, Imputation, ScRNA-seq
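
The ZINB likelihood at the core of such models can be sketched in NumPy/SciPy. This is a generic ZINB negative log-likelihood, not the authors' implementation, which additionally models dropout via the Gumbel-Softmax distribution; the function name and parameterization are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammaln

def zinb_nll(x, mu, theta, pi, eps=1e-8):
    """Negative log-likelihood of counts x under a zero-inflated negative
    binomial with mean mu, dispersion theta, and zero-inflation (dropout)
    probability pi, all broadcastable arrays."""
    x, mu, theta, pi = map(np.asarray, (x, mu, theta, pi))
    # log NB(x | mu, theta) in the mean/dispersion parameterization
    log_nb = (gammaln(x + theta) - gammaln(theta) - gammaln(x + 1)
              + theta * np.log(theta / (theta + mu) + eps)
              + x * np.log(mu / (theta + mu) + eps))
    # Zeros can come either from dropout or from the NB component itself
    nb_zero = np.exp(theta * np.log(theta / (theta + mu)))
    log_zero = np.log(pi + (1 - pi) * nb_zero + eps)
    log_nonzero = np.log(1 - pi + eps) + log_nb
    ll = np.where(x == 0, log_zero, log_nonzero)
    return -ll.sum()

# An observed zero is far more probable when the dropout probability is high
high_pi = zinb_nll(np.array([0]), np.array([5.0]), np.array([2.0]), np.array([0.9]))
zero_pi = zinb_nll(np.array([0]), np.array([5.0]), np.array([2.0]), np.array([0.0]))
```

In an autoencoder, `mu`, `theta`, and `pi` would be decoder outputs and this quantity would serve as the reconstruction loss.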

General General

Fast and Flexible Protein Design Using Deep Graph Neural Networks.

In Cell systems

Protein structure and function are determined by the arrangement of the linear sequence of amino acids in 3D space. We show that a deep graph neural network, ProteinSolver, can precisely design sequences that fold into a predetermined shape by phrasing this challenge as a constraint satisfaction problem (CSP), akin to Sudoku puzzles. We trained ProteinSolver on over 70,000,000 real protein sequences corresponding to over 80,000 structures. We show that our method rapidly designs new protein sequences and benchmark them in silico using energy-based scores, molecular dynamics, and structure prediction methods. As a proof-of-principle validation, we use ProteinSolver to generate sequences that match the structure of serum albumin, then synthesize the top-scoring design and validate it in vitro using circular dichroism. ProteinSolver is freely available at http://design.proteinsolver.org and https://gitlab.com/ostrokach/proteinsolver. A record of this paper's transparent peer review process is included in the Supplemental Information.

Strokach Alexey, Becerra David, Corbi-Verge Carles, Perez-Riba Albert, Kim Philip M

2020-Sep-15

constraint satisfaction problem, deep learning, graph neural networks, inverse protein folding, protein design, protein optimization

General General

Artificial Neural Networks for Neuroscientists: A Primer.

In Neuron ; h5-index 148.0

Artificial neural networks (ANNs) are essential tools in machine learning that have drawn increasing attention in neuroscience. Besides offering powerful techniques for data analysis, ANNs provide a new approach for neuroscientists to build models for complex behaviors, heterogeneous neural activity, and circuit connectivity, as well as to explore optimization in neural systems, in ways that traditional models are not designed for. In this pedagogical Primer, we introduce ANNs and demonstrate how they have been fruitfully deployed to study neuroscientific questions. We first discuss basic concepts and methods of ANNs. Then, with a focus on bringing this mathematical framework closer to neurobiology, we detail how to customize the analysis, structure, and learning of ANNs to better address a wide range of challenges in brain research. To help readers garner hands-on experience, this Primer is accompanied with tutorial-style code in PyTorch and Jupyter Notebook, covering major topics.

Yang Guangyu Robert, Wang Xiao-Jing

2020-Sep-23

General General

Existence and possible roles of independent non-CpG methylation in the mammalian brain.

In DNA research : an international journal for rapid publication of reports on genes and genomes

Methylated non-CpGs (mCpHs) in mammalian cells yield weak enrichment signals and colocalize with methylated CpGs (mCpGs), and have thus been considered byproducts of hyperactive methyltransferases. However, mCpHs are cell type-specific and associated with epigenetic regulation, although their dependency on mCpGs remains to be elucidated. In this study, we demonstrated that mCpHs colocalize with mCpGs in pluripotent stem cells, but not in brain cells. In addition, profiling genome-wide methylation patterns using a hidden Markov model revealed abundant genomic regions in which CpGs and CpHs are differentially methylated in brain. These regions were frequently located in putative enhancers, and mCpHs within the enhancers increased in correlation with brain age. The enhancers with hypermethylated CpHs were associated with genes functionally enriched in immune responses, and some of the genes were related to neuroinflammation and degeneration. This study provides insight into the roles of non-CpG methylation as an epigenetic code in the mammalian brain genome.

Lee Jong-Hun, Saito Yutaka, Park Sung-Joon, Nakai Kenta

2020-Sep-24

Hidden Markov model, Neuro-epigenetics, Non-CpG methylation

Radiology Radiology

A Prognostic Predictive System Based on Deep Learning for Locoregionally Advanced Nasopharyngeal Carcinoma.

In Journal of the National Cancer Institute

BACKGROUND : Magnetic resonance imaging (MRI) images are crucial unstructured data for prognostic evaluation in nasopharyngeal carcinoma (NPC). We developed and validated a prognostic system based on the MRI features and clinical data of locoregionally advanced NPC (LA-NPC) patients to distinguish low-risk patients with LA-NPC, for whom concurrent chemoradiotherapy (CCRT) is sufficient.

METHODS : This multicenter, retrospective study included 3444 patients with LA-NPC from January 1, 2010, to January 31, 2017. A three-dimensional convolutional neural network was used to learn the image features from pretreatment MRI images. An eXtreme Gradient Boosting model was trained with the MRI features and clinical data to assign an overall score to each patient. Comprehensive evaluations were implemented to assess the performance of the predictive system. We applied the overall score to distinguish high-risk patients from low-risk patients. The clinical benefit of induction chemotherapy (IC) was analyzed in each risk group by survival curves.

RESULTS : We constructed a prognostic system displaying a concordance index of 0.776 (95% CI = 0.746-0.806) for the internal validation cohort and 0.757 (95% CI = 0.695-0.819), 0.719 (95% CI = 0.650-0.789) and 0.746 (95% CI = 0.699-0.793) for the three external validation cohorts, which presented a statistically significant improvement compared to the conventional tumor-node-metastasis (TNM) staging system. In the high-risk group, patients who received IC plus CCRT had better outcomes than patients who received CCRT alone, while there was no statistically significant difference in the low-risk group.

CONCLUSIONS : The proposed framework can capture more complex and heterogeneous information to predict the prognosis of patients with LA-NPC and potentially contribute to clinical decision making.

Qiang Mengyun, Li Chaofeng, Sun Yuyao, Sun Ying, Ke Liangru, Xie Chuanmiao, Zhang Tao, Zou Yujian, Qiu Wenze, Gao Mingyong, Li Yingxue, Li Xiang, Zhan Zejiang, Liu Kuiyuan, Chen Xi, Liang Chixiong, Chen Qiuyan, Mai Haiqiang, Xie Guotong, Guo Xiang, Lv Xing

2020-Sep-24

General General

Association of violence with urban points of interest.

In PloS one ; h5-index 176.0

The association between alcohol outlets and violence has long been recognised, and is commonly used to inform policing and licensing policies (such as staggered closing times and zoning). Less investigated, however, is the association between violent crime and other urban points of interest, which, while associated with the city centre alcohol consumption economy, are not explicitly alcohol outlets. Here, machine learning (specifically, LASSO regression) is used to model the distribution of violent crime for the central 9 km2 of ten large UK cities. Densities of 620 different Point of Interest types (sourced from Ordnance Survey) are used as predictors, with the 10 most explanatory variables being automatically selected for each city. Cross validation is used to test the generalisability of each model. Results show that the inclusion of additional point of interest types produces a more accurate model, with significant increases in performance over a baseline univariate alcohol-outlet-only model. Analysis of chosen variables for city-specific models shows potential candidates for new strategies on a per-city basis, with combined-model variables showing the general trend in POI/violence association across the UK. Although alcohol outlets remain the best individual predictor of violence, other points of interest should also be considered when modelling the distribution of violence in city centres. The presented method could be used to develop targeted, city-specific initiatives that go beyond alcohol outlets and also consider other locations.

Redfern Joseph, Sidorov Kirill, Rosin Paul L, Corcoran Padraig, Moore Simon C, Marshall David

2020
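
The automatic variable selection described above can be illustrated with scikit-learn's `Lasso` on synthetic data. The POI names, coefficients, and penalty strength below are invented for illustration and are not from the study.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in: densities of a few POI types per grid cell, with
# crime counts driven mainly by the first two types (assumed for this demo)
rng = np.random.RandomState(0)
poi_names = ["alcohol_outlet", "nightclub", "library", "park", "pharmacy"]
X = rng.poisson(3, size=(500, len(poi_names))).astype(float)
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + rng.randn(500)

# The L1 penalty shrinks uninformative coefficients exactly to zero,
# so the surviving nonzero coefficients are the selected predictors
model = Lasso(alpha=0.5).fit(StandardScaler().fit_transform(X), y)
selected = [n for n, c in zip(poi_names, model.coef_) if abs(c) > 1e-6]
print(selected)  # typically only the two truly associated POI types
```

Ranking the surviving coefficients by magnitude yields the "10 most explanatory" POI types per city described in the abstract.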

General General

A non-parametric effect-size measure capturing changes in central tendency and data distribution shape.

In PloS one ; h5-index 176.0

MOTIVATION : Calculating the magnitude of treatment effects or of differences between two groups is a common task in quantitative science. Standard effect size measures based on differences, such as the commonly used Cohen's d, fail to capture treatment-related effects on the data if the effects are not reflected in the central tendency. The present work aims at (i) developing a non-parametric alternative to Cohen's d, which (ii) circumvents some of its numerical limitations and (iii) captures obvious changes in the data that do not affect the group means and are therefore missed by Cohen's d.

RESULTS : We propose "Impact" as a novel non-parametric measure of effect size, obtained as the sum of two separate components: (i) a difference-based effect size measure implemented as the change in the central tendency of the group-specific data normalized to pooled variability, and (ii) a data distribution shape-based effect size measure implemented as the difference in probability density of the group-specific data. Results obtained on artificial and empirical data showed that "Impact" is superior to Cohen's d because its additional second component detects clearly visible effects not reflected in central tendencies. The proposed effect size measure is invariant to the scaling of the data, reflects changes in the central tendency in cases where differences in the shape of probability distributions between subgroups are negligible, but captures changes in probability distributions as effects, and is numerically stable even if the variances of the data set or its subgroups vanish.

CONCLUSIONS : The proposed effect size measure shares the ability to observe such an effect with machine learning algorithms. Therefore, the proposed effect size measure is particularly well suited for data science and artificial intelligence-based knowledge discovery from big and heterogeneous data.

Lötsch Jörn, Ultsch Alfred

2020
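
The abstract does not give the exact formulas, so the sketch below only illustrates the two-component idea. The function name `impact_sketch`, the use of medians, and the total-variation shape term are assumptions, not the published definition of "Impact".

```python
import numpy as np

def impact_sketch(a, b, bins=20):
    """Illustrative two-component effect size in the spirit of "Impact".
    Component 1: shift in central tendency scaled by pooled variability.
    Component 2: overlap-based difference in empirical distribution shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    ct = (np.median(b) - np.median(a)) / (pooled_sd + 1e-12)
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = (hi - lo) / bins
    shape = 0.5 * np.abs(pa - pb).sum() * width  # total variation distance
    return ct, shape

rng = np.random.RandomState(0)
a = rng.randn(2000)                       # unimodal control group
b = np.concatenate([rng.randn(1000) - 2,  # bimodal "treated" group with an
                    rng.randn(1000) + 2])  # unchanged central tendency
ct, shape = impact_sketch(a, b)
```

Here the difference-based component is near zero while the shape component is clearly positive: exactly the kind of effect a purely mean-based measure such as Cohen's d would miss.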

oncology Oncology

Computer Extracted Features from Initial H&E Tissue Biopsies Predict Disease Progression for Prostate Cancer Patients on Active Surveillance.

In Cancers

In this work, we assessed the ability of computerized features of nuclear morphology from diagnostic biopsy images to predict prostate cancer (CaP) progression in active surveillance (AS) patients. Improved risk characterization of AS patients could reduce over-testing of low-risk patients while directing high-risk patients to therapy. A total of 191 (125 progressors, 66 non-progressors) AS patients from a single site were identified using The Johns Hopkins University's (JHU) AS-eligibility criteria. Progression was determined by pathologists at JHU. Thirty progressors and 30 non-progressors were randomly selected to create the training cohort D1 (n = 60). The remaining patients comprised the validation cohort D2 (n = 131). Digitized Hematoxylin & Eosin (H&E) biopsies were annotated by a pathologist for CaP regions. Nuclei within the cancer regions were segmented using a watershed method and 216 nuclear features describing position, shape, orientation, and clustering were extracted. Six features associated with disease progression were identified using D1 and then used to train a machine learning classifier. The classifier was validated on D2. The classifier was further compared on a subset of D2 (n = 47) against pro-PSA, an isoform of prostate specific antigen (PSA) more linked with CaP, in predicting progression. Performance was evaluated with area under the curve (AUC). A combination of nuclear spatial arrangement, shape, and disorder features were associated with progression. The classifier using these features yielded an AUC of 0.75 in D2. On the 47 patient subset with pro-PSA measurements, the classifier yielded an AUC of 0.79 compared to an AUC of 0.42 for pro-PSA. Nuclear morphometric features from digitized H&E biopsies predicted progression in AS patients. This may be useful for identifying AS-eligible patients who could benefit from immediate curative therapy. However, additional multi-site validation is needed.

Chandramouli Sacheth, Leo Patrick, Lee George, Elliott Robin, Davis Christine, Zhu Guangjing, Fu Pingfu, Epstein Jonathan I, Veltri Robert, Madabhushi Anant

2020-Sep-21

active surveillance, machine learning, pathology, prostate cancer

General General

Deep neural network models for identifying incident dementia using claims and EHR datasets.

In PloS one ; h5-index 176.0

This study investigates the use of deep learning methods to improve the accuracy of a predictive model for dementia, and compares the performance to a traditional machine learning model. With sufficient accuracy the model can be deployed as a first-round screening tool for clinical follow-up including neurological examination, neuropsychological testing, imaging, and recruitment to clinical trials. Seven cohorts with two years of data, three to eight years prior to the index date, and an incident cohort were created. Four models (boosted trees, a feed-forward network, a recurrent neural network, and a recurrent neural network with pre-trained weights) were trained for each cohort, and their performance was compared using validation and test data. The incident model had an AUC of 94.4% and an F1 score of 54.1%. Eight years removed from the index date, the AUC and F1 scores were 80.7% and 25.6%, respectively. The results for the remaining cohorts fell between these ranges. Deep learning models can yield significant improvements in performance but come at a cost in run times and hardware requirements. The results of the model at the index date indicate that this modeling can be effective at stratifying patients at risk of dementia. At this time, the inability to sustain this quality at longer lead times is more an issue of data availability and quality than of algorithm choice.

Nori Vijay S, Hane Christopher A, Sun Yezhou, Crown William H, Bleicher Paul A

2020

Public Health Public Health

Machine learning and dengue forecasting: Comparing random forests and artificial neural networks for predicting dengue burden at national and sub-national scales in Colombia.

In PLoS neglected tropical diseases ; h5-index 79.0

The robust estimation and forecast capability of random forests (RF) has been widely recognized; however, this ensemble machine learning method has not been widely used in mosquito-borne disease forecasting. In this study, two sets of RF models were developed at the national (pooled department-level data) and department level in Colombia to predict weekly dengue cases up to 12 weeks ahead. A pooled national model based on artificial neural networks (ANN) was also developed and used as a comparator to the RF models. The various predictors included historic dengue cases, satellite-derived estimates for vegetation, precipitation, and air temperature, as well as population counts, income inequality, and education. Our RF model trained on the pooled national data was more accurate for department-specific weekly dengue case estimation than a local model trained only on the department's data. Additionally, the forecast errors of the national RF model were smaller than those of the national pooled ANN model and increased with the forecast horizon, from one week ahead (mean absolute error, MAE: 9.32) to 12 weeks ahead (MAE: 24.56). There was considerable variation in the relative importance of predictors depending on the forecast horizon. The environmental and meteorological predictors were relatively important for short-term dengue forecast horizons, while socio-demographic predictors were relevant for longer-term forecast horizons. This study demonstrates the potential of RF in dengue forecasting with a feasible approach of using a national pooled model to forecast at finer spatial scales. Furthermore, including sociodemographic predictors is likely to be helpful in capturing longer-term dengue trends.

Zhao Naizhuo, Charland Katia, Carabali Mabel, Nsoesie Elaine O, Maheu-Giroux Mathieu, Rees Erin, Yuan Mengru, Garcia Balaguera Cesar, Jaramillo Ramirez Gloria, Zinszer Kate

2020-Sep-24

General General

Epidemiological models for predicting Ross River virus in Australia: A systematic review.

In PLoS neglected tropical diseases ; h5-index 79.0

Ross River virus (RRV) is the most common and widespread arbovirus in Australia. Epidemiological models of RRV increase understanding of RRV transmission and help provide early warning of outbreaks to reduce incidence. However, RRV predictive models have not been systematically reviewed, analysed, and compared. The hypothesis of this systematic review was that summarising the epidemiological models applied to predict RRV disease and analysing model performance could elucidate drivers of RRV incidence and transmission patterns. We performed a systematic literature search in PubMed, EMBASE, Web of Science, Cochrane Library, and Scopus for studies of RRV using population-based data, incorporating at least one epidemiological model and analysing the association between exposures and RRV disease. Forty-three articles, all of high or medium quality, were included. Twenty-two (51.2%) used generalised linear models and 11 (25.6%) used time-series models. Climate and weather data were used in 27 (62.8%) and mosquito abundance or related data were used in 14 (32.6%) articles as model covariates. A total of 140 models were included across the articles. Rainfall (69 models, 49.3%), temperature (66, 47.1%) and tide height (45, 32.1%) were the three most commonly used exposures. Ten (23.3%) studies published data related to model performance. This review summarises current knowledge of RRV modelling and reveals a research gap in comparing predictive methods. To improve predictive accuracy, new methods for forecasting, such as non-linear mixed models and machine learning approaches, warrant investigation.

Qian Wei, Viennet Elvina, Glass Kathryn, Harley David

2020-Sep-24

General General

Classification of estrogenic compounds by coupling high content analysis and machine learning algorithms.

In PLoS computational biology

Environmental toxicants affect human health in various ways. Of the thousands of chemicals present in the environment, those with adverse effects on the endocrine system are referred to as endocrine-disrupting chemicals (EDCs). Here, we focused on a subclass of EDCs that impacts the estrogen receptor (ER), a pivotal transcriptional regulator in health and disease. Estrogenic activity of compounds can be measured by many in vitro or cell-based high throughput assays that record various endpoints from large pools of cells, and increasingly at the single-cell level. To simultaneously capture multiple mechanistic ER endpoints in individual cells that are affected by EDCs, we previously developed a sensitive high throughput/high content imaging assay that is based upon a stable cell line harboring a visible multicopy ER responsive transcription unit and expressing a green fluorescent protein (GFP) fusion of ER. High content analysis generates voluminous multiplex data comprised of minable features that describe numerous mechanistic endpoints. In this study, we present a machine learning pipeline for rapid, accurate, and sensitive assessment of the endocrine-disrupting potential of benchmark chemicals based on data generated from high content analysis. The multidimensional imaging data was used to train a classification model to ultimately predict the impact of unknown compounds on the ER, either as agonists or antagonists. To this end, both linear logistic regression and nonlinear Random Forest classifiers were benchmarked and evaluated for predicting the estrogenic activity of unknown compounds. Furthermore, through feature selection, data visualization, and model discrimination, the most informative features were identified for the classification of ER agonists/antagonists. 
The results of this data-driven study showed that highly accurate and generalized classification models with a minimum number of features can be constructed without loss of generality, where these machine learning models serve as a means for rapid mechanistic/phenotypic evaluation of the estrogenic potential of many chemicals.

Mukherjee Rajib, Beykal Burcu, Szafran Adam T, Onel Melis, Stossi Fabio, Mancini Maureen G, Lloyd Dillon, Wright Fred A, Zhou Lan, Mancini Michael A, Pistikopoulos Efstratios N

2020-Sep-24

General General

Cytomegalovirus viral load kinetics as surrogate endpoints after allogeneic transplantation.

In The Journal of clinical investigation ; h5-index 129.0

BACKGROUND : Viral load surrogate endpoints transformed development of HIV and hepatitis C therapeutics. Surrogate endpoints for cytomegalovirus (CMV)-related morbidity and mortality could advance development of antiviral treatments. While observational data support using CMV viral load (VL) as a trial endpoint, randomized controlled trials (RCTs) demonstrating direct associations between virologic markers and clinical endpoints are lacking.

METHODS : We performed CMV DNA polymerase chain reaction (PCR) on frozen serum samples from the only placebo-controlled RCT of ganciclovir for early treatment of CMV after hematopoietic cell transplantation (HCT). We used established criteria to assess VL kinetics as surrogates for CMV disease or death by weeks 8, 24, and 48 after randomization and quantified antiviral effects captured by each marker. We used ensemble-based machine learning to assess the predictive ability of VL kinetics and performed this analysis on a ganciclovir prophylaxis RCT for validation.

RESULTS : VL suppression with ganciclovir reduced cumulative incidence of CMV disease and death for 20 years after HCT. Mean VL, peak VL, and change in VL during the first five weeks of treatment fulfilled the Prentice definition for surrogacy, capturing > 95% of ganciclovir's effect, and yielded highly sensitive and specific predictions by week 48. In the prophylaxis trial, viral shedding rate satisfied the Prentice definition for CMV disease by week 24.
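The "proportion of treatment effect captured" idea behind the Prentice criteria can be sketched as follows: compare the treatment coefficient in an outcome model before and after adjusting for the candidate surrogate. This is a generic illustration on synthetic data, not the trial's analysis; the effect sizes and sample size are invented.

```python
# Hypothetical sketch: if treatment acts on disease only through viral load,
# adjusting for viral load should absorb (nearly) the whole treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
treat = rng.integers(0, 2, n)                         # 1 = antiviral, 0 = placebo
viral_load = 3.0 - 2.0 * treat + rng.normal(0, 1, n)  # treatment suppresses VL
# disease risk driven entirely through viral load in this toy model
p = 1 / (1 + np.exp(-(-3.0 + 1.2 * viral_load)))
disease = rng.binomial(1, p)

X_unadj = treat.reshape(-1, 1)
X_adj = np.column_stack([treat, viral_load])
b_unadj = LogisticRegression().fit(X_unadj, disease).coef_[0, 0]
b_adj = LogisticRegression().fit(X_adj, disease).coef_[0, 0]
pte = 1 - b_adj / b_unadj   # fraction of the treatment effect captured by VL
```

Here `pte` should land near 1, mirroring the abstract's finding that VL kinetics captured > 95% of the drug's effect.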

CONCLUSION : Our results support using CMV VL kinetics as surrogates for CMV disease, provide a framework for developing CMV preventative and therapeutic agents, and support reductions in viral load as the mechanism through which antivirals reduce CMV disease.

Duke Elizabeth R, Williamson Brian D, Borate Bhavesh, Golob Jonathan L, Wychera Chiara, Stevens-Ayers Terry, Huang Meei-Li, Cossrow Nicole, Wan Hong, Mast T Christopher, Marks Morgan A, Flowers Mary, Jerome Keith R, Corey Lawrence, Gilbert Peter B, Schiffer Joshua T, Boeckh Michael

2020-Sep-24

Clinical Trials, Drug therapy, Infectious disease, Stem cell transplantation

Surgery Surgery

Single-cell transcriptomics of mouse kidney transplants reveals a myeloid cell pathway for transplant rejection.

In JCI insight

Myeloid cells are increasingly recognized as a major player in transplant rejection. Here, we used a murine kidney transplantation model and single-cell transcriptomics to dissect the contribution of myeloid cell subsets and their potential signaling pathways to kidney transplant rejection. Using a variety of bioinformatic techniques including machine learning, we demonstrated that kidney allograft-infiltrating myeloid cells followed a trajectory of differentiation from monocytes to pro-inflammatory macrophages, and exhibited distinct interactions with kidney allograft parenchymal cells. This process correlated with a unique pattern of myeloid cell transcripts, among which a top gene identified was Axl, a member of the TAM (Tyro3/Axl/Mertk) family of receptor tyrosine kinases. Using kidney transplant recipients with Axl gene deficiency, we further demonstrated that Axl augmented intragraft differentiation of pro-inflammatory macrophages, likely via its effect on the transcription factor Cebpb. This in turn promoted intragraft recruitment, differentiation, and proliferation of donor-specific T cells, and enhanced early allograft inflammation, as evidenced by histology. We conclude that myeloid cell Axl expression, identified here by single-cell transcriptomics of kidney allografts, plays a major role in promoting intragraft myeloid cell and T cell differentiation, and presents a novel therapeutic target for controlling kidney allograft rejection and improving kidney allograft survival.

Dangi Anil, Natesh Naveen R, Husain Irma, Ji Zhicheng, Barisoni Laura, Kwun Jean, Shen Xiling, Thorp Edward B, Luo Xunrong

2020-Sep-24

Bioinformatics, Immunology, Macrophages, Organ transplantation, Transplantation

General General

Projective Double Reconstructions Based Dictionary Learning Algorithm for Cross-Domain Recognition.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

Dictionary learning plays a significant role in the field of machine learning. Existing works mainly focus on learning a dictionary from a single domain. In this paper, we propose a novel projective double reconstructions (PDR) based dictionary learning algorithm for cross-domain recognition. Owing to the distribution discrepancy between domains, label information is difficult to fully exploit for improving the discriminability of the dictionary. We therefore propose a more flexible label-consistency term and associate it with each dictionary item, making the reconstruction coefficients as discriminative as possible. Because cross-domain data are intrinsically correlated, the data from each domain should also be reconstructable from the other. Based on this consideration, we further propose a projective double reconstructions scheme that guarantees the learned dictionary can both reconstruct data within a domain and cross-reconstruct data between domains. This also ensures that data from different domains mutually reinforce each other to obtain a good alignment, making the learned dictionary more transferable. We integrate the double reconstructions, the label-consistency constraint, and classifier learning into a unified objective whose solution is obtained by a proposed optimization algorithm that is more efficient than conventional l1-optimization-based dictionary learning methods. Experiments show that the proposed PDR not only greatly reduces the time complexity of both training and testing, but also outperforms state-of-the-art methods.
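For readers unfamiliar with the baseline this builds on, here is a minimal sketch of plain single-domain dictionary learning with an l1 sparsity penalty; the PDR objective itself (double reconstructions plus label consistency) is not reproduced, and all sizes here are arbitrary.

```python
# Single-domain dictionary learning: factor X ~ codes @ D with sparse codes.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                 # 60 samples, 8 features

dl = DictionaryLearning(n_components=6, alpha=0.1, max_iter=200,
                        random_state=0)
codes = dl.fit_transform(X)                  # sparse coefficients, (60, 6)
D = dl.components_                           # learned dictionary, (6, 8)
recon_err = np.linalg.norm(X - codes @ D) / np.linalg.norm(X)
```

The PDR method extends this single-domain factorization so that samples from one domain can also be reconstructed through the other domain's representation.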

Han Na, Wu Jigang, Fang Xiaozhao, Teng Shaohua, Zhou Guoxu, Xie Shengli, Li Xuelong

2020-Sep-24

oncology Oncology

Effect of an Artificial Intelligence Clinical Decision Support System on Treatment Decisions for Complex Breast Cancer.

In JCO clinical cancer informatics

PURPOSE : To examine the impact of a clinical decision support system (CDSS) on breast cancer treatment decisions and adherence to National Comprehensive Cancer Network (NCCN) guidelines.

PATIENTS AND METHODS : A cross-sectional observational study was conducted involving 1,977 patients at high risk for recurrent or metastatic breast cancer from the Chinese Society of Clinical Oncology. Ten oncologists provided blinded treatment recommendations for an average of 198 patients before and after viewing therapeutic options offered by the CDSS. Univariable and bivariable analyses of treatment changes were performed, and multivariable logistic regressions were estimated to examine the effects of physician experience (years), patient age, and receptor subtype/TNM stage.

RESULTS : Treatment decisions changed in 105 (5%) of 1,977 patients and were concentrated in those with hormone receptor (HR)-positive disease or stage IV disease in the first-line therapy setting (73% and 58%, respectively). Logistic regressions showed that decision changes were more likely in those with HR-positive cancer (odds ratio [OR], 1.58; P < .05) and less likely in those with stage IIA (OR, 0.29; P < .05) or IIIA cancer (OR, 0.08; P < .01). Reasons cited for changes included consideration of the CDSS therapeutic options (63% of patients), patient factors highlighted by the tool (23%), and the decision logic of the tool (13%). Patient age and oncologist experience were not associated with decision changes. Adherence to NCCN treatment guidelines increased slightly after using the CDSS (0.5%; P = .003).

CONCLUSION : Use of an artificial intelligence-based CDSS had a significant impact on treatment decisions and NCCN guideline adherence in HR-positive breast cancers. Although cases of stage IV disease in the first-line therapy setting were also more likely to be changed, the effect was not statistically significant (P = .22). Additional research on decision impact, patient-physician communication, learning, and clinical outcomes is needed to establish the overall value of the technology.

Xu Fengrui, Sepúlveda Martín-J, Jiang Zefei, Wang Haibo, Li Jianbin, Liu Zhenzhen, Yin Yongmei, Roebuck M Christopher, Shortliffe Edward H, Yan Min, Song Yuhua, Geng Cuizhi, Tang Jinhai, Purcell Jackson Gretchen, Preininger Anita M, Rhee Kyu

2020-Sep

General General

ML Models of Vibrating H2CO: Comparing Reproducing Kernels, FCHL and PhysNet.

In The journal of physical chemistry. A

Machine Learning (ML) has become a promising tool for improving the quality of atomistic simulations. Using formaldehyde as a benchmark system for intramolecular interactions, a comparative assessment of ML models based on state-of-the-art variants of deep neural networks (NN), reproducing kernel Hilbert space (RKHS+F), and kernel ridge regression (KRR) is presented. Learning curves for energies and atomic forces indicate rapid convergence towards excellent predictions for B3LYP, MP2, and CCSD(T)-F12 reference results for modestly sized (in the hundreds) training sets. Typically, learning curve offsets decay as one goes from NN (PhysNet) to RKHS+F to KRR (FCHL). Conversely, the predictive power for extrapolating energies to new geometries increases in the same order, with RKHS+F and FCHL performing almost equally. For harmonic vibrational frequencies, the picture is less clear: PhysNet and FCHL yield flat learning curves at ∼1 and ∼0.2 cm-1, respectively, no matter which reference method, while RKHS+F models level off for B3LYP and exhibit continued improvements for MP2 and CCSD(T)-F12. Finite-temperature molecular dynamics (MD) simulations with the same initial conditions yield indistinguishable infrared spectra, in good agreement with experiment except for the high-frequency modes involving hydrogen stretch motion, a known limitation of MD for vibrational spectroscopy. For sufficiently large training set sizes, all three models can detect insufficient convergence ("noise") of the reference electronic structure calculations, in that the learning curves level off. Transfer learning (TL) from B3LYP to CCSD(T)-F12 with PhysNet indicates that additional improvements in data efficiency can be achieved.
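The learning-curve methodology at the heart of this comparison can be sketched in a few lines: fit a kernel model on growing training sets and watch the held-out error fall. The 1-D target below is a stand-in for a potential energy surface, not the formaldehyde data, and the kernel hyperparameters are arbitrary.

```python
# Toy learning curve for kernel ridge regression on a smooth 1-D function.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X_test = np.linspace(0, 1, 200).reshape(-1, 1)
f = lambda x: np.sin(6 * x).ravel()          # surrogate "energy surface"

errors = []
for n in (10, 40, 160):                      # growing training-set sizes
    X = rng.uniform(0, 1, n).reshape(-1, 1)
    model = KernelRidge(kernel="rbf", gamma=10.0, alpha=1e-6).fit(X, f(X))
    errors.append(np.abs(model.predict(X_test) - f(X_test)).mean())
# errors should decrease with n -- the learning curve the abstract discusses
```

A curve that levels off instead of continuing to fall is exactly the "noise-detection" signature the abstract describes for unconverged reference data.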

Käser Silvan, Koner Debasish, Christensen Anders S, von Lilienfeld O Anatole, Meuwly Markus

2020-Sep-24

General General

Deciphering the Allosteric Process of Phaeodactylum tricornutum Aureochrome 1a LOV Domain.

In The journal of physical chemistry. B

The conformation-driven allosteric protein aureochrome 1a from the diatom Phaeodactylum tricornutum (PtAu1a) differs from other light-oxygen-voltage (LOV) proteins in its uncommon structural topology. Because of this dissimilarity, the mechanism of signal transduction in the PtAu1a LOV domain (AuLOV), including the flanking helices, remains unclear, which hinders the study of PtAu1a as an optogenetic tool. To clarify this mechanism, we employed a combination of tree-based machine learning models, Markov state models, machine-learning-based community analysis, and transition path theory to quantitatively analyze the allosteric process. Our results are in good agreement with the reported experimental findings and reveal a previously overlooked Cα helix and protein linkers as important in promoting the protein conformational changes. This integrated approach can be considered a general workflow and applied to other allosteric proteins to provide detailed information about their allosteric mechanisms.
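Of the methods listed, the Markov state model is the most self-contained to illustrate: count transitions in a discretized trajectory, row-normalize to get a transition matrix, and extract its stationary distribution. This toy chain is a stand-in, not the AuLOV analysis, and transition path theory is not covered here.

```python
# Minimal Markov-state-model estimation from a synthetic state trajectory.
import numpy as np

rng = np.random.default_rng(0)
n_states = 3
P_true = np.array([[0.9, 0.1, 0.0],          # known chain used to generate data
                   [0.1, 0.8, 0.1],
                   [0.0, 0.2, 0.8]])
traj = [0]
for _ in range(20000):
    traj.append(rng.choice(n_states, p=P_true[traj[-1]]))

counts = np.zeros((n_states, n_states))
for a, b in zip(traj[:-1], traj[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)  # estimated transition matrix

# stationary distribution = left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

In an MD application, the states would come from clustering conformations, and committor/flux analysis (transition path theory) would then be run on `P`.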

Tian Hao, Trozzi Francesco, Zoltowski Brian D, Tao Peng

2020-Sep-24

Radiology Radiology

Machine Learning Model to Predict Pseudoprogression Versus Progression in Glioblastoma Using MRI: A Multi-Institutional Study (KROG 18-07).

In Cancers

Some patients with glioblastoma show worsening on imaging after concurrent chemoradiation, even when they have received gross total resection. Previously, we showed the feasibility of a machine learning model to predict pseudoprogression (PsPD) versus progressive disease (PD) in glioblastoma patients. The previous model was based on a dataset from two institutions (termed the Seoul National University Hospital (SNUH) dataset, N = 78). To test this model in a larger dataset, we collected cases from multiple institutions in which the PsPD vs. PD distinction had posed a problem in the clinic (the Korean Radiation Oncology Group (KROG) dataset, N = 104). The dataset was composed of brain MR images and clinical information. We tested the previous model on the KROG dataset; however, it showed limited performance. After hyperparameter optimization, we developed a deep learning model based on the whole dataset (N = 182). Ten-fold cross-validation revealed a micro-average area under the precision-recall curve (AUPRC) of 0.86. A calibration model was constructed to estimate an interpretable probability directly from the model output. After calibration, the final model reports a clinical probability through a web user interface.
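The two evaluation ingredients named here, area under the precision-recall curve and probability calibration, can be sketched generically; the synthetic binary labels below stand in for PsPD vs. PD, and none of the numbers correspond to the KROG data.

```python
# Illustrative AUPRC evaluation of a calibrated classifier.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = LogisticRegression(max_iter=1000)
# isotonic calibration maps raw scores to interpretable probabilities
clf = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
auprc = average_precision_score(y_te, proba)
```

Calibration is what lets a web interface present the model output as a clinically meaningful probability rather than an arbitrary score.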

Jang Bum-Sup, Park Andrew J, Jeon Seung Hyuck, Kim Il Han, Lim Do Hoon, Park Shin-Hyung, Lee Ju Hye, Chang Ji Hyun, Cho Kwan Ho, Kim Jin Hee, Sunwoo Leonard, Choi Seung Hong, Kim In Ah

2020-Sep-21

glioblastoma, machine learning, pseudoprogression, radiotherapy

Radiology Radiology

Detection of COVID-19 Using Deep Learning Algorithms on Chest Radiographs.

In Journal of thoracic imaging

PURPOSE : To evaluate the performance of a deep learning (DL) algorithm for the detection of COVID-19 on chest radiographs (CXR).

MATERIALS AND METHODS : In this retrospective study, a DL model was trained on 112,120 CXR images with 14 labeled classifiers (ChestX-ray14) and fine-tuned using the initial CXR on hospital admission of 509 patients who had undergone COVID-19 reverse transcriptase-polymerase chain reaction (RT-PCR) testing. The test set consisted of CXRs on presentation of 248 individuals suspected of COVID-19 pneumonia between February 16 and March 3, 2020 from 4 centers (72 RT-PCR positives and 176 RT-PCR negatives). The CXRs were independently reviewed by 3 radiologists and by the DL algorithm. Diagnostic performance was compared with the radiologists' performance and assessed by area under the receiver operating characteristic curve (AUC).

RESULTS : The median age of the subjects in the test set was 61 (interquartile range: 39 to 79) years (51% male). The DL algorithm achieved an AUC of 0.81, sensitivity of 0.85, and specificity of 0.72 in detecting COVID-19 using RT-PCR as the reference standard. On subgroup analyses, the model achieved an AUC of 0.79, sensitivity of 0.80, and specificity of 0.74 in detecting COVID-19 in patients presenting with fever or respiratory symptoms, and an AUC of 0.87, sensitivity of 0.85, and specificity of 0.81 in distinguishing COVID-19 from other forms of pneumonia. The algorithm significantly outperformed human readers (P<0.001 using the DeLong test), with higher sensitivity (P=0.01 using the McNemar test).
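The McNemar test used here compares paired classifiers through their discordant cases only. A minimal sketch with invented counts (not the paper's data) shows the continuity-corrected statistic:

```python
# McNemar test on paired predictions, built from discordant-pair counts.
from scipy.stats import chi2

# b = cases the model got right and the reader missed; c = the reverse
b, c = 25, 9
stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected statistic
p_value = chi2.sf(stat, df=1)            # reject if p < 0.05
```

Pairing matters: both readers score the same radiographs, so concordant cases carry no information about which reader is better.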

CONCLUSIONS : A DL algorithm (COV19NET) for the detection of COVID-19 on chest radiographs can potentially be an effective tool in triaging patients, particularly in resource-stretched health-care systems.

Chiu Wan Hang Keith, Vardhanabhuti Varut, Poplavskiy Dmytro, Yu Philip Leung Ho, Du Richard, Yap Alistair Yun Hee, Zhang Sailong, Fong Ambrose Ho-Tung, Chin Thomas Wing-Yan, Lee Jonan Chun Yin, Leung Siu Ting, Lo Christine Shing Yen, Lui Macy Mei-Sze, Fang Benjamin Xin Hao, Ng Ming-Yen, Kuo Michael D

2020-Sep-22

General General

Virtual molecular projections and convolutional neural networks for end-to-end modeling of nanoparticle activities and properties.

In Analytical chemistry

Digitalizing complex nanostructures into data structures suitable for machine learning modeling without losing nanostructure information has been a major challenge. Deep learning frameworks, particularly convolutional neural networks (CNNs), are especially adept at handling multidimensional and complex inputs. In this study, CNNs were applied for modeling of nanoparticle activities exclusively from nanostructures. The nanostructures were represented by virtual molecular projections, a multidimensional digitalization of nanostructures, and used as input data to train CNNs. To this end, 77 nanoparticles with various activity and/or physicochemical property results were used for modeling. The resulting CNN model predictions show high correlations with the experimental results. An analysis of a trained CNN quantitatively showed neurons were able to recognize distinct nanostructure features critical to activities and physicochemical properties. This "end-to-end" deep learning approach is well-suited to digitalize complex nanostructures for data-driven machine learning modeling and can be broadly applied to rationally design nanoparticles with desired activities.
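The input representation described above, multiple 2-D projections of one nanostructure treated as channels of a CNN input, can be illustrated with a single convolution written out in plain NumPy; real work would use a deep learning framework, and every shape here is an assumption.

```python
# One convolutional filter applied to a stack of "virtual projections".
import numpy as np

rng = np.random.default_rng(0)
projections = rng.normal(size=(3, 16, 16))   # 3 projection channels, 16x16 each
kernel = rng.normal(size=(3, 3, 3))          # one filter spanning all 3 channels

h = projections.shape[1] - 2                 # valid convolution output size
out = np.zeros((h, h))
for i in range(h):
    for j in range(h):
        out[i, j] = np.sum(projections[:, i:i+3, j:j+3] * kernel)
out = np.maximum(out, 0)                     # ReLU feature map
```

Stacking many such filters and layers is what lets the network learn which nanostructure features drive activity, as the trained-neuron analysis in the abstract suggests.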

Russo Daniel P, Yan Xiliang, Shende Sunil, Huang Heng, Yan Bing, Zhu Hao

2020-Sep-24

General General

[Keratoconus detection and classification from parameters of the Corvis®ST : A study based on algorithms of machine learning].

In Der Ophthalmologe : Zeitschrift der Deutschen Ophthalmologischen Gesellschaft

BACKGROUND AND OBJECTIVE : In recent decades, increasingly many artificial intelligence systems have been established in medicine that identify diseases or pathologies or discriminate them from related diseases. Up to now, the Corvis®ST (Corneal Visualization Scheimpflug Technology, Corvis®ST, Oculus, Wetzlar, Germany) has yielded a binary index for classifying keratoconus but has not enabled staging. The purpose of this study was to develop a prediction model that mimics the topographic keratoconus classification index (TKC) of the Pentacam high resolution (HR, Oculus) using measurement parameters extracted from the Corvis®ST.

PATIENTS AND METHODS : In this study, 60 measurements from normal subjects (TKC 0) and 379 eyes with keratoconus (TKC 1-4) were recruited. After measurement with the Pentacam HR (target parameter TKC), a measurement with the Corvis®ST device was performed. From this device, 6 dynamic response parameters were extracted, which are included in the Corvis biomechanical index (CBI) provided by the Corvis®ST (ARTh, SP-A1, DA ratio 1 mm, DA ratio 2 mm, A1 velocity, max. deformation amplitude). In addition to the TKC as the target, the binarized TKC (1: TKC 1-4, 0: TKC 0) was modelled. Model performance was validated with accuracy as an indicator of the proportion of correct classifications made by the algorithm. Misclassifications in the modelling were penalized by the number of stages of deviation between the modelled and measured TKC values.

RESULTS : A total of 24 different supervised machine learning models from 6 different families were tested. For modelling of TKC stages 0-4, the algorithm based on a support vector machine (SVM) with linear kernel showed the best performance, with an accuracy of 65.1% correct classifications. For modelling of the binarized TKC, a coarse decision tree showed superior performance, with an accuracy of 95.2% correct classifications, followed by the SVM with linear or quadratic kernel and a nearest neighbor classifier with cubic kernel (94.5% each).
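The best-performing family for the five-stage problem, a linear-kernel SVM scored by cross-validated accuracy, can be sketched as follows. The six synthetic features stand in for the dynamic response parameters (ARTh, SP-A1, etc.); the data and class counts are invented.

```python
# Linear-kernel SVM for a five-class (TKC 0-4 style) staging problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# 439 "eyes" x 6 biomechanical features, 5 ordinal stages as class labels
X, y = make_classification(n_samples=439, n_features=6, n_informative=4,
                           n_classes=5, n_clusters_per_class=1,
                           random_state=0)
svm = SVC(kernel="linear")
accuracy = cross_val_score(svm, X, y, cv=5).mean()
```

Note that plain accuracy treats all misclassifications equally; the stage-distance penalty described in the methods would require a custom scoring function.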

CONCLUSION : This study demonstrates the principle of supervised machine learning applied to the modelled classification of keratoconus staging. Preprocessed measurement data extracted from the Corvis®ST device were used to mimic the TKC provided by the Pentacam device with a series of different machine learning algorithms.

Langenbucher Achim, Häfner Larissa, Eppig Timo, Seitz Berthold, Szentmáry Nóra, Flockerzi Elias

2020-Sep-24

Artificial intelligence, Corvis, Keratoconus, Scheimpflug corneal tomography, Supervised machine learning

Radiology Radiology

Validation of a Deep Learning Algorithm for the Detection of Malignant Pulmonary Nodules in Chest Radiographs.

In JAMA network open

Importance : The improvement of pulmonary nodule detection, which is a challenging task when using chest radiographs, may help to elevate the role of chest radiographs for the diagnosis of lung cancer.

Objective : To assess the performance of a deep learning-based nodule detection algorithm for the detection of lung cancer on chest radiographs from participants in the National Lung Screening Trial (NLST).

Design, Setting, and Participants : This diagnostic study used data from participants in the NLST to assess the performance of a deep learning-based artificial intelligence (AI) algorithm for the detection of pulmonary nodules and lung cancer on chest radiographs using separate training (in-house) and validation (NLST) data sets. Baseline (T0) posteroanterior chest radiographs from 5485 participants (full T0 data set) were used to assess lung cancer detection performance, and a subset of 577 of these images (nodule data set) were used to assess nodule detection performance. Participants aged 55 to 74 years who currently or formerly (ie, quit within the past 15 years) smoked cigarettes for 30 pack-years or more were enrolled in the NLST at 23 US centers between August 2002 and April 2004. Information on lung cancer diagnoses was collected through December 31, 2009. Analyses were performed between August 20, 2019, and February 14, 2020.

Exposures : Abnormality scores produced by the AI algorithm.

Main Outcomes and Measures : The performance of an AI algorithm for the detection of lung nodules and lung cancer on radiographs, with lung cancer incidence and mortality as primary end points.

Results : A total of 5485 participants (mean [SD] age, 61.7 [5.0] years; 3030 men [55.2%]) were included, with a median follow-up duration of 6.5 years (interquartile range, 6.1-6.9 years). For the nodule data set, the sensitivity and specificity of the AI algorithm for the detection of pulmonary nodules were 86.2% (95% CI, 77.8%-94.6%) and 85.0% (95% CI, 81.9%-88.1%), respectively. For the detection of all cancers, the sensitivity was 75.0% (95% CI, 62.8%-87.2%), the specificity was 83.3% (95% CI, 82.3%-84.3%), the positive predictive value was 3.8% (95% CI, 2.6%-5.0%), and the negative predictive value was 99.8% (95% CI, 99.6%-99.9%). For the detection of malignant pulmonary nodules in all images of the full T0 data set, the sensitivity was 94.1% (95% CI, 86.2%-100.0%), the specificity was 83.3% (95% CI, 82.3%-84.3%), the positive predictive value was 3.4% (95% CI, 2.2%-4.5%), and the negative predictive value was 100.0% (95% CI, 99.9%-100.0%). In digital radiographs of the nodule data set, the AI algorithm had higher sensitivity (96.0% [95% CI, 88.3%-100.0%] vs 88.0% [95% CI, 75.3%-100.0%]; P = .32) and higher specificity (93.2% [95% CI, 89.9%-96.5%] vs 82.8% [95% CI, 77.8%-87.8%]; P = .001) for nodule detection compared with the NLST radiologists. For malignant pulmonary nodule detection on digital radiographs of the full T0 data set, the sensitivity of the AI algorithm was higher (100.0% [95% CI, 100.0%-100.0%] vs 94.1% [95% CI, 82.9%-100.0%]; P = .32) compared with the NLST radiologists, and the specificity (90.9% [95% CI, 89.6%-92.1%] vs 91.0% [95% CI, 89.7%-92.2%]; P = .91), positive predictive value (8.2% [95% CI, 4.4%-11.9%] vs 7.8% [95% CI, 4.1%-11.5%]; P = .65), and negative predictive value (100.0% [95% CI, 100.0%-100.0%] vs 99.9% [95% CI, 99.8%-100.0%]; P = .32) were similar to those of NLST radiologists.
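The four metrics reported throughout these results follow directly from a 2x2 confusion matrix; the sketch below computes them with normal-approximation 95% CIs. The counts are invented for illustration and are not the NLST figures.

```python
# Sensitivity, specificity, PPV, NPV from a 2x2 confusion matrix.
import numpy as np

tp, fn, fp, tn = 48, 3, 900, 4500   # hypothetical screening-scale counts

def ci95(p, n):
    """Normal-approximation 95% confidence interval for a proportion."""
    half = 1.96 * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                # low when disease prevalence is low
npv = tn / (tn + fn)
```

The pattern in the study, high sensitivity and NPV but single-digit PPV, is the expected behavior of any screen applied to a low-prevalence cohort.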

Conclusions and Relevance : In this study, the AI algorithm performed better than NLST radiologists for the detection of pulmonary nodules on digital radiographs. When used as a second reader, the AI algorithm may help to detect lung cancer.

Yoo Hyunsuk, Kim Ki Hwan, Singh Ramandeep, Digumarthy Subba R, Kalra Mannudeep K

2020-Sep-01

oncology Oncology

Validation of a Machine Learning Algorithm to Predict 180-Day Mortality for Outpatients With Cancer.

In JAMA oncology ; h5-index 85.0

Importance : Machine learning (ML) algorithms can identify patients with cancer at risk of short-term mortality to inform treatment and advance care planning. However, no ML mortality risk prediction algorithm has been prospectively validated in oncology or compared with routinely used prognostic indices.

Objective : To validate an electronic health record-embedded ML algorithm that generated real-time predictions of 180-day mortality risk in a general oncology cohort.

Design, Setting, and Participants : This prognostic study comprised a prospective cohort of patients with outpatient oncology encounters between March 1, 2019, and April 30, 2019. An ML algorithm, trained on retrospective data from a subset of practices, predicted 180-day mortality risk between 4 and 8 days before a patient's encounter. Patient encounters took place in 18 medical or gynecologic oncology practices, including 1 tertiary practice and 17 general oncology practices, within a large US academic health care system. Patients aged 18 years or older with outpatient oncology or hematology and oncology encounters were included in the analysis. Patients were excluded if their appointment was scheduled after weekly predictions were generated and if they were only evaluated in benign hematology, palliative care, or rehabilitation practices.

Exposures : Gradient-boosting ML binary classifier.

Main Outcomes and Measures : The primary outcome was the patients' 180-day mortality from the index encounter. The primary performance metric was the area under the receiver operating characteristic curve (AUC).

Results : Among 24 582 patients, 1022 (4.2%) died within 180 days of their index encounter. Their median (interquartile range) age was 64.6 (53.6-73.2) years, 15 319 (62.3%) were women, 18 015 (76.0%) were White, and 10 658 (43.4%) were seen in the tertiary practice. The AUC was 0.89 (95% CI, 0.88-0.90) for the full cohort. The AUC varied across disease-specific groups within the tertiary practice (AUC ranging from 0.74 to 0.96) but was similar between the tertiary and general oncology practices. At a prespecified 40% mortality risk threshold used to differentiate high- vs low-risk patients, observed 180-day mortality was 45.2% (95% CI, 41.3%-49.1%) in the high-risk group vs 3.1% (95% CI, 2.9%-3.3%) in the low-risk group. Integrating the algorithm into the Eastern Cooperative Oncology Group and Elixhauser comorbidity index-based classifiers resulted in favorable reclassification (net reclassification index, 0.09 [95% CI, 0.04-0.14] and 0.23 [95% CI, 0.20-0.27], respectively).
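The model family and the risk-threshold readout described here can be sketched generically: train a gradient-boosting classifier, score it by AUC, then dichotomize predicted risk at the prespecified 40% cutoff. The synthetic cohort below mimics the low event rate but is not the study's EHR data.

```python
# Gradient boosting with an AUC readout and a 40% high-risk threshold.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~5% positive class, echoing the 4.2% 180-day mortality rate
X, y = make_classification(n_samples=4000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
risk = gb.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, risk)

high = risk >= 0.40                       # prespecified high-risk threshold
observed_high = y_te[high].mean() if high.any() else 0.0
observed_low = y_te[~high].mean()         # should sit well below base rate
```

Comparing observed event rates across the two risk groups is the same readout the study reports (45.2% vs 3.1%).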

Conclusions and Relevance : In this prognostic study, an ML algorithm was feasibly integrated into the electronic health record to generate real-time, accurate predictions of short-term mortality for patients with cancer and outperformed routinely used prognostic indices. This algorithm may be used to inform behavioral interventions and prompt earlier conversations about goals of care and end-of-life preferences among patients with cancer.

Manz Christopher R, Chen Jinbo, Liu Manqing, Chivers Corey, Regli Susan Harkness, Braun Jennifer, Draugelis Michael, Hanson C William, Shulman Lawrence N, Schuchter Lynn M, O’Connor Nina, Bekelman Justin E, Patel Mitesh S, Parikh Ravi B

2020-Sep-24

Surgery Surgery

Virtual Reality Anterior Cervical Discectomy and Fusion Simulation on the Novel Sim-Ortho Platform: Validation Studies.

In Operative neurosurgery (Hagerstown, Md.)

BACKGROUND : Virtual reality spine simulators are emerging as potential educational tools to assess and train surgical procedures in safe environments. Analysis of validity is important in determining the educational utility of these systems.

OBJECTIVE : To assess face, content, and construct validity of a C4-C5 anterior cervical discectomy and fusion simulation on the Sim-Ortho virtual reality platform, developed by OSSimTechTM (Montreal, Canada) and the AO Foundation (Davos, Switzerland).

METHODS : Spine surgeons, spine fellows, along with neurosurgical and orthopedic residents, performed a simulated C4-C5 anterior cervical discectomy and fusion on the Sim-Ortho system. Participants were separated into 3 categories: post-residents (spine surgeons and spine fellows), senior residents, and junior residents. A Likert scale was used to assess face and content validity. Construct validity was evaluated by investigating differences between the 3 groups on metrics derived from simulator data. The Kruskal-Wallis test was employed to compare groups and a post-hoc Dunn's test with a Bonferroni correction was utilized to investigate differences between groups on significant metrics.
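The statistical workflow above can be sketched as follows: an omnibus Kruskal-Wallis test across the three experience groups, then Bonferroni-corrected pairwise comparisons. SciPy has no built-in Dunn's test, so pairwise Mann-Whitney U is used here as a stand-in, and the metric values are invented.

```python
# Kruskal-Wallis across three groups, then Bonferroni-corrected pairwise tests.
import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
groups = {
    "post_resident": rng.normal(0.90, 0.05, 9),  # hypothetical simulator metric
    "senior":        rng.normal(0.75, 0.05, 5),
    "junior":        rng.normal(0.60, 0.05, 7),
}

h_stat, p_overall = kruskal(*groups.values())

pairwise = {}
n_pairs = 3
for a, b in combinations(groups, 2):
    _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    pairwise[(a, b)] = min(1.0, p * n_pairs)     # Bonferroni correction
```

The correction matters because three pairwise tests at alpha = .05 would otherwise inflate the family-wise error rate.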

RESULTS : A total of 21 individuals were included: 9 post-residents, 5 senior residents, and 7 junior residents. The post-resident group rated face and content validity, median ≥4, for the overall procedure and at least 1 tool in each of the 4 steps. Significant differences (P < .05) were found between the post-resident group and senior and/or junior residents on at least 1 metric for each component of the simulation.

CONCLUSION : The C4-C5 anterior cervical discectomy and fusion simulation on the Sim-Ortho platform demonstrated face, content, and construct validity suggesting its utility as a formative educational tool.

Ledwos Nicole, Mirchi Nykan, Bissonnette Vincent, Winkler-Schwartz Alexander, Yilmaz Recai, Del Maestro Rolando F

2020-Sep-24

Anterior cervical discectomy and fusion, Neurosurgical simulation, Surgical education, Surgical simulation, Validation, Virtual reality

General General

Rapid health data repository allocation using predictive machine learning.

In Health informatics journal ; h5-index 25.0

Health-related data is stored in a number of repositories that are managed and controlled by different entities. For instance, Electronic Health Records are usually administered by governments. Electronic Medical Records are typically controlled by health care providers, whereas Personal Health Records are managed directly by patients. Recently, Blockchain-based health record systems largely regulated by technology have emerged as another type of repository. Repositories for storing health data differ from one another based on cost, level of security and quality of performance. Not only have the types of repository increased in recent years; the quantity of health data to be stored has also grown. For instance, the advent of wearable sensors that capture physiological signs has resulted in an exponential growth in digital health data. The increase in the types of repository and amount of data has driven a need for intelligent processes to select appropriate repositories as data is collected. However, the storage allocation decision is complex and nuanced. The challenges are exacerbated when health data are continuously streamed, as is the case with wearable sensors. Although patients are not always solely responsible for determining which repository should be used, they typically have some input into this decision. Patients can be expected to have idiosyncratic preferences regarding storage decisions depending on their unique contexts. In this paper, we propose a predictive model for the storage of health data that can meet patient needs and make storage decisions rapidly, in real time, even with data streaming from wearable sensors. The model is built with a machine learning classifier that learns the mapping between characteristics of health data and features of storage repositories from a training set generated synthetically from correlations evident from small samples of experts.
Results from the evaluation demonstrate the viability of the machine learning technique used.
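As a loose illustration of the proposed mapping, the sketch below classifies a health-data item into a repository class with a 1-nearest-neighbour rule. The feature names, training rows, and repository labels are invented for illustration and are not taken from the paper, which trains a classifier on synthetically generated data.

```python
# Hypothetical sketch: map health-data characteristics to a repository class.
# (features: privacy_need, data_volume, stream_rate) -> repository label.
# All rows and labels below are invented for illustration.
TRAIN = [
    ((0.9, 0.2, 0.1), "EHR"),
    ((0.7, 0.3, 0.2), "EMR"),
    ((0.3, 0.1, 0.1), "PHR"),
    ((0.95, 0.5, 0.9), "blockchain"),
]

def classify(x):
    # 1-nearest-neighbour over squared Euclidean distance: a minimal
    # stand-in for the ML classifier trained in the paper
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(TRAIN, key=lambda row: dist(row[0], x))[1]

label = classify((0.85, 0.25, 0.15))  # a highly sensitive, low-volume record
```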

Uddin Md Ashraf, Stranieri Andrew, Gondal Iqbal, Balasubramanian Venki

2020-Sep-24

Big Health data, Blockchain, classifier, deep learning, digital health record storage, electronic health record, machine learning, quality of performance, security and privacy, stream data

Radiology Radiology

Cancer genotypes prediction and associations analysis from imaging phenotypes: a survey on radiogenomics.

In Biomarkers in medicine

In this paper, we present a survey of progress in radiogenomics research, which predicts cancer genotypes from imaging phenotypes and investigates the associations between them. First, we present an overview of the popular imaging modalities used to obtain diagnostic medical images. Second, we summarize recently used methodologies for radiogenomics analysis, including statistical analysis, radiomics and deep learning. Then, we survey recent research on several types of cancer. Finally, we discuss these studies and propose possible future research directions. In conclusion, we have identified strong correlations between cancer genotypes and imaging phenotypes. In addition, with the rapid growth of medical data, deep learning models show great application potential for radiogenomics.

Wang Yao, Wang Yan, Guo Chunjie, Xie Xuping, Liang Sen, Zhang Ruochi, Pang Wei, Huang Lan

2020-Aug

cancer genotypes, deep learning, imaging phenotype, prediction and associations analysis, radiogenomics, radiomics

General General

Current status and future perspective on artificial intelligence for lower endoscopy.

In Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society

The global incidence and mortality rate of colorectal cancer remains high. Colonoscopy is regarded as the gold standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for non-experts to perform optical biopsy adequately. Third, recording of some quality indicators (e.g. cecal intubation, bowel preparation, and withdrawal speed), which are related to the adenoma detection rate, is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep learning, a data-driven machine learning technique, has made the development of computer-aided systems easier than with conventional machine learning techniques, and deep learning is currently considered the standard artificial intelligence engine of computer-aided diagnosis by colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits, such as helping to educate poorly performing endoscopists and supporting real-time clinical decision making. In this review, we focus on computer-aided diagnosis during colonoscopy as reported by gastroenterologists and discuss its status, limitations, and future prospects.

Misawa Masashi, Kudo Shin-Ei, Mori Yuichi, Maeda Yasuharu, Ogawa Yushi, Ichimasa Katsuro, Kudo Toyoki, Wakamura Kunihiko, Hayashi Takemasa, Miyachi Hideyuki, Baba Toshiyuki, Ishida Fumio, Itoh Hayato, Oda Masahiro, Mori Kensaku

2020-Sep-23

Radiology Radiology

Automatic segmentation, classification and follow-up of optic pathway gliomas using deep learning and fuzzy c-means clustering based on MRI.

In Medical physics ; h5-index 59.0

PURPOSE : Optic pathway gliomas (OPG) are low-grade pilocytic astrocytomas accounting for 3-5% of pediatric intracranial tumors. Accurate and quantitative follow-up of OPG using MRI is crucial for therapeutic decision-making, yet is challenging due to the complex shape and heterogeneous tissue pattern which characterizes these tumors. The aim of this study was to implement automatic methods for segmentation and classification of OPG and its components, based on MRI.

METHODS : A total of 202 MRI scans from 29 patients with chiasmatic OPG scanned longitudinally were retrospectively collected and included in this study. Data included T2- and post-contrast T1-weighted images. The entire tumor volume and its components were manually annotated by a senior neuro-radiologist, and inter- and intra-rater variability of the entire tumor volume was assessed in a subset of scans. Automatic tumor segmentation was performed using a deep-learning method with a U-Net+ResNet architecture. A 5-fold cross-validation scheme was used to evaluate the automatic results relative to manual segmentation. Voxel-based classification of the tumor into enhanced, non-enhanced and cystic components was performed using fuzzy c-means clustering.

RESULTS : The results of the automatic tumor segmentation were: mean Dice score = 0.736 ± 0.025, precision = 0.918 ± 0.014, and recall = 0.635 ± 0.039 for the validation data, and Dice score = 0.761 ± 0.011, precision = 0.794 ± 0.028, and recall = 0.742 ± 0.012 for the test data. The accuracy of the voxel-based classification of tumor components was 0.94, with precision = 0.89, 0.97, 0.85 and recall = 1.00, 0.79, 0.94 for the non-enhanced, enhanced and cystic components, respectively.
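For readers unfamiliar with the metrics above, the sketch below computes Dice, precision, and recall between a predicted and a manual segmentation mask, using tiny toy voxel index sets in place of real 3-D masks.

```python
# Sketch: Dice, precision and recall between a predicted and a reference
# segmentation. Masks here are toy sets of voxel indices; real masks are
# 3-D arrays, but the set formulation is identical.

def dice_precision_recall(pred, truth):
    pred, truth = set(pred), set(truth)
    overlap = len(pred & truth)
    dice = 2 * overlap / (len(pred) + len(truth))
    precision = overlap / len(pred)
    recall = overlap / len(truth)
    return dice, precision, recall

pred = {1, 2, 3, 4}        # voxels flagged as tumor by the model
truth = {2, 3, 4, 5, 6}    # voxels annotated by the radiologist
d, p, r = dice_precision_recall(pred, truth)
```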

CONCLUSION : This study presents methods for automatic segmentation of chiasmatic OPG tumors and classification into the different components of the tumor, based on conventional MRI. Automatic quantitative longitudinal assessment of these tumors may improve radiological monitoring, facilitate early detection of disease progression and optimize therapy management.

Artzi Moran, Gershov Sapir, Ben-Sira Liat, Roth Jonathan, Kozyrev Danil, Shofty Ben, Gazit Tomer, Halag-Milo Tali, Constantini Shlomi, Ben Bashat Dafna

2020-Sep-23

Deep learning, Fuzzy c-means clustering, Optic pathway gliomas, Segmentation

General General

Applications of Genome-Wide Screening and Systems Biology Approaches in Drug Repositioning.

In Cancers

De novo drug discovery entails high financial costs, low success rates, and lengthy trial periods. Drug repositioning presents a suitable approach for overcoming these issues by re-evaluating the biological targets and modes of action of approved drugs. Coupling high-throughput technologies with genome-wide essentiality screens, network analysis, genome-scale metabolic modeling, and machine learning techniques enables the proposal of new drug-target signatures and uncovers unanticipated modes of action for available drugs. Here, we discuss the current issues associated with drug repositioning in light of curated high-throughput multi-omic databases, genome-wide screening technologies, and their application in systems biology/medicine approaches.

Mohammadi Elyas, Benfeitas Rui, Turkez Hasan, Boren Jan, Nielsen Jens, Uhlen Mathias, Mardinoglu Adil

2020-Sep-21

drug repositioning, genomic screens, machine learning, systems medicine, systems pharmacology

Radiology Radiology

Advancing COVID-19 differentiation with a robust preprocessing and integration of multi-institutional open-repository computer tomography datasets for deep learning analysis.

In Experimental and therapeutic medicine

The coronavirus pandemic and its unprecedented consequences globally have spurred the interest of the artificial intelligence research community. A plethora of published studies have investigated the role of imaging such as chest X-rays and computer tomography in coronavirus disease 2019 (COVID-19) automated diagnosis. Open repositories of medical imaging data can play a significant role by promoting cooperation among institutes on a worldwide scale. However, they may induce limitations related to variable data quality and intrinsic differences due to the wide variety of scanner vendors and imaging parameters. In this study, a state-of-the-art custom U-Net model is presented with a Dice similarity coefficient performance of 99.6%, along with a transfer-learning VGG-19-based model for COVID-19 versus pneumonia differentiation exhibiting an area under the curve of 96.1%. The above significantly improved on a baseline model trained with no segmentation in selected tomographic slices of the same dataset. The presented study highlights the importance of a robust preprocessing protocol for image analysis within a heterogeneous imaging dataset and assesses the potential diagnostic value of the presented COVID-19 model by comparing its performance to the state of the art.

Trivizakis Eleftherios, Tsiknakis Nikos, Vassalou Evangelia E, Papadakis Georgios Z, Spandidos Demetrios A, Sarigiannis Dimosthenis, Tsatsakis Aristidis, Papanikolaou Nikolaos, Karantanas Apostolos H, Marias Kostas

2020-Nov

COVID-19, artificial intelligence, deep learning analysis, multi-institutional data

General General

Predicting Parkinson's Disease with Multimodal Irregularly Collected Longitudinal Smartphone Data

ArXiv Preprint

Parkinson's disease is a neurological disorder prevalent in elderly people. Traditional ways to diagnose the disease rely on in-person, subjective clinical evaluations of the quality of a set of activity tests. The high-resolution longitudinal activity data collected by smartphone applications nowadays make it possible to conduct remote and convenient health assessment. However, out-of-lab tests often suffer from poor quality controls as well as irregularly collected observations, leading to noisy test results. To address these issues, we propose a novel time-series-based approach to predicting Parkinson's disease with raw activity test data collected by smartphones in the wild. The proposed method first synchronizes discrete activity tests into multimodal features at unified time points. Next, it distills and enriches local and global representations from noisy data across modalities and temporal observations using two attention modules. With the proposed mechanisms, our model is capable of handling noisy observations and at the same time extracting refined temporal features for improved prediction performance. Quantitative and qualitative results on a large public dataset demonstrate the effectiveness of the proposed approach.
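The attention-based pooling idea can be sketched as follows: a score per timestep reweights noisy feature vectors so that informative observations dominate the pooled representation. The dot-product query, toy features, and single module below are illustrative assumptions; the paper uses two learned attention modules over multimodal features.

```python
import math

# Sketch of attention pooling over noisy per-timestep features: softmax over
# <feature, query> scores, then a weighted average. Query and features are
# toy values; in the paper both are learned.

def attention_pool(features, query):
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in features]
    m = max(scores)                      # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    d = len(features[0])
    pooled = [sum(w * feat[k] for w, feat in zip(weights, features))
              for k in range(d)]
    return pooled, weights

features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy timesteps
query = [10.0, 10.0]                              # toy learned query
pooled, weights = attention_pool(features, query)
```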

Weijian Li, Wei Zhu, E. Ray Dorsey, Jiebo Luo

2020-09-25

Surgery Surgery

Empowering Caseworkers to Better Serve the Most Vulnerable with a Cloud-Based Care Management Solution.

In Applied clinical informatics ; h5-index 22.0

BACKGROUND :  Care-management tools are typically utilized for chronic disease management. Sonoma County government agencies employed advanced health information technologies, artificial intelligence (AI), and interagency process improvements to help transform health and health care for socially disadvantaged groups and other displaced individuals.

OBJECTIVES :  The objective of this case report is to describe how an integrated data hub and care-management solution streamlined care coordination of government services during a time of community-wide crisis.

METHODS :  This innovative application of care-management tools created a bridge between social and clinical determinants of health and used a three-step approach: access, collaboration, and innovation. The program Accessing Coordinated Care to Empower Self Sufficiency Sonoma was established to identify and match the most vulnerable residents with services to improve their well-being. Sonoma County created an Interdepartmental Multidisciplinary Team to deploy coordinated cross-departmental services (e.g., health and human services, housing services, probation) to support individuals experiencing housing insecurity. Implementation of a data integration hub (DIH) and care management and coordination system (CMCS) enabled integration of siloed data and services into a unified view of citizen status, identification of clinical and social determinants of health from structured and unstructured sources, and algorithms to match clients across systems.

RESULTS :  The integrated toolset helped 77 at-risk individuals in crisis through coordinated care plans and access to services in a time of need. Two case examples illustrate the specific care and services provided to individuals with complex needs after the 2017 Sonoma County wildfires.

CONCLUSION :  Unique application of a care-management solution transformed health and health care for individuals fleeing from their homes and socially disadvantaged groups displaced by the Sonoma County wildfires. Future directions include expanding the DIH and CMCS to neighboring counties to coordinate care regionally. Such solutions might enable innovative care-management solutions across a variety of public, private, and nonprofit services.

Snowdon Jane L, Robinson Barbie, Staats Carolyn, Wolsey Kenneth, Sands-Lincoln Megan, Strasheim Thomas, Brotman David, Keating Katie, Schnitter Elizabeth, Jackson Gretchen, Kassler William

2020-Aug

General General

Artificial Intelligence-Assisted Colonoscopy for Detection of Colon Polyps: a Prospective, Randomized Cohort Study.

In Journal of gastrointestinal surgery : official journal of the Society for Surgery of the Alimentary Tract

BACKGROUND AND AIMS : Improving the rate of polyp detection is an important measure to prevent colorectal cancer (CRC). Real-time automatic polyp detection systems, through deep learning methods, can learn and perform specific endoscopic tasks previously performed by endoscopists. The purpose of this study was to explore whether a high-performance, real-time automatic polyp detection system could improve the polyp detection rate (PDR) in the actual clinical environment.

METHODS : The selected patients underwent same-day, back-to-back colonoscopies in a random order, with either traditional colonoscopy or artificial intelligence (AI)-assisted colonoscopy performed first by different experienced endoscopists (> 3000 colonoscopies each). The primary outcome was the PDR. The trial was registered with clinicaltrials.gov (NCT047126265).

RESULTS : In this study, we randomized 150 patients. The AI system significantly increased the PDR (34.0% vs 38.7%, p < 0.001). In addition, AI-assisted colonoscopy increased the detection of polyps smaller than 6 mm (69 vs 91, p < 0.001), but no difference was found with regard to larger lesions.

CONCLUSIONS : A real-time automatic polyp detection system can increase the PDR, primarily for diminutive polyps. However, a larger sample size is still needed in the follow-up study to further verify this conclusion.

TRIAL REGISTRATION : clinicaltrials.gov Identifier: NCT047126265.

Luo Yuchen, Zhang Yi, Liu Ming, Lai Yihong, Liu Panpan, Wang Zhen, Xing Tongyin, Huang Ying, Li Yue, Li Aiming, Wang Yadong, Luo Xiaobei, Liu Side, Han Zelong

2020-Sep-23

Artificial intelligence, Colonoscopy, Computer-aided diagnose

General General

An immune-related gene signature for determining Ewing sarcoma prognosis based on machine learning.

In Journal of cancer research and clinical oncology

PURPOSE : Ewing sarcoma (ES) is one of the most common malignant bone tumors in children and adolescents. The immune microenvironment plays an important role in the development of ES. Here, we developed an optimal signature for determining ES patient prognosis based on immune-related genes (IRGs).

METHODS : We analyzed the ES gene expression profile dataset, GSE17679, from the GEO database and extracted differentially expressed IRGs (DEIRGs). Then, we conducted functional correlation and protein-protein interaction (PPI) analyses of the DEIRGs and used a machine learning algorithm, iterative Lasso Cox regression analysis, to build an optimal DEIRG signature. In addition, we applied ES samples from the ICGC database to test the optimal gene signature. We performed univariate and multivariate Cox regressions on clinicopathological characteristics and the optimal gene signature to evaluate whether the signature is an important prognostic factor. Finally, we calculated the infiltration of 24 immune cell types in ES using the ssGSEA algorithm and analyzed the correlation between the DEIRGs in the optimal gene signature and immune cells.
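A hedged sketch of the Lasso ingredient of the iterative Lasso Cox step: the soft-thresholding operator drives coefficients of weakly informative genes exactly to zero, and refitting on the survivors gives the iterative selection loop. The gene names and scores below are invented, and the Cox partial-likelihood fitting is omitted for brevity.

```python
# Sketch of Lasso-style gene selection via soft-thresholding. Gene names and
# score values are hypothetical; the paper embeds this inside an iterative
# Lasso Cox regression, whose survival-model fitting is not shown here.

def soft_threshold(z, lam):
    # the Lasso proximal operator: shrink toward zero, clip small values to 0
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def select_genes(scores, lam):
    # keep genes whose (standardized) score survives thresholding
    return {g: soft_threshold(s, lam) for g, s in scores.items()
            if soft_threshold(s, lam) != 0.0}

scores = {"GENE_A": 0.9, "GENE_B": -0.7, "GENE_C": 0.1}  # toy values
kept = select_genes(scores, lam=0.3)
```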

RESULTS : A total of 249 DEIRGs were screened, and an 11-gene signature with the strongest correlation with patient prognoses was identified using the machine learning algorithm. The 11-gene signature also had high prognostic value in the external ES verification set. Univariate and multivariate Cox regression analyses showed that the 11-gene signature is an independent prognostic factor. We found that macrophages and cytotoxic, CD8 T, NK, mast, B, NK CD56bright, TEM, TCM, and Th2 cells were significantly related to patient prognoses; the infiltration of cytotoxic and CD8 T cells in ES was significantly different. By correlating prognostic biomarkers with immune cell infiltration, we found that FABP4 and macrophages, and NDRG1 and Th2 cells, had the strongest correlations.

CONCLUSION : Overall, the IRG-related 11-gene signature can be used as a reliable ES prognostic biomarker and can provide guidance for personalized ES therapy.

Ren En-Hui, Deng Ya-Jun, Yuan Wen-Hua, Wu Zuo-Long, Zhang Guang-Zhi, Xie Qi-Qi

2020-Sep-23

Ewing sarcoma, Immune cell infiltration, Iterative Lasso regression, Machine learning, Prognosis analysis

Internal Medicine Internal Medicine

Risk prediction of delirium in hospitalized patients using machine learning: An implementation and prospective evaluation study.

In Journal of the American Medical Informatics Association : JAMIA

OBJECTIVE : Machine learning models trained on electronic health records have achieved high prognostic accuracy in test datasets, but little is known about their embedding into clinical workflows. We implemented a random forest-based algorithm to identify hospitalized patients at high risk for delirium, and evaluated its performance in a clinical setting.

MATERIALS AND METHODS : Delirium was predicted at admission and recalculated on the evening of admission. The defined prediction outcome was a delirium coded for the recent hospital stay. During 7 months of prospective evaluation, 5530 predictions were analyzed. In addition, 119 predictions for internal medicine patients were compared with ratings of clinical experts in a blinded and nonblinded setting.

RESULTS : During clinical application, the algorithm achieved a sensitivity of 74.1% and a specificity of 82.2%. Discrimination on prospective data (area under the receiver-operating characteristic curve = 0.86) was as good as in the test dataset, but calibration was poor. The predictions correlated strongly with delirium risk perceived by experts in the blinded (r = 0.81) and nonblinded (r = 0.62) settings. A major advantage of our setting was the timely prediction without additional data entry.
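The calibration problem noted above can be probed with a simple reliability table: group predictions into probability bins and compare the mean predicted risk with the observed delirium rate in each bin. The sketch below uses synthetic toy data, not the study's predictions.

```python
# Sketch of a reliability (calibration) table: per probability bin, compare
# mean predicted risk against the observed event rate. A well-calibrated
# model has the two columns close; toy data below.

def reliability_table(probs, outcomes, n_bins=2):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    table = []
    for b in bins:
        if b:
            mean_pred = sum(p for p, _ in b) / len(b)
            obs_rate = sum(y for _, y in b) / len(b)
            table.append((round(mean_pred, 3), round(obs_rate, 3)))
    return table

probs = [0.1, 0.2, 0.7, 0.9]   # toy predicted delirium risks
outcomes = [0, 0, 1, 1]        # toy coded delirium outcomes
table = reliability_table(probs, outcomes)
```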

DISCUSSION : The implemented machine learning algorithm achieved a stable performance predicting delirium in high agreement with expert ratings, but improvement of calibration is needed. Future research should evaluate the acceptance of implemented machine learning algorithms by health professionals.

CONCLUSIONS : Our study provides new insights into the implementation process of a machine learning algorithm into a clinical workflow and demonstrates its predictive power for delirium.

Jauk Stefanie, Kramer Diether, Großauer Birgit, Rienmüller Susanne, Avian Alexander, Berghold Andrea, Leodolter Werner, Schulz Stefan

2020-Sep-24

Machine learning, clinical decision support, delirium, electronic health records, prospective studies

General General

The 2019 National Natural language processing (NLP) Clinical Challenges (n2c2)/Open Health NLP (OHNLP) shared task on clinical concept normalization for clinical records.

In Journal of the American Medical Informatics Association : JAMIA

OBJECTIVE : The 2019 National Natural language processing (NLP) Clinical Challenges (n2c2)/Open Health NLP (OHNLP) shared task track 3, focused on medical concept normalization (MCN) in clinical records. This track aimed to assess the state of the art in identifying and matching salient medical concepts to a controlled vocabulary. In this paper, we describe the task, describe the data set used, compare the participating systems, present results, identify the strengths and limitations of the current state of the art, and identify directions for future research.

MATERIALS AND METHODS : Participating teams were provided with narrative discharge summaries in which text spans corresponding to medical concepts were identified. This paper refers to these text spans as mentions. Teams were tasked with normalizing these mentions to concepts, represented by concept unique identifiers, within the Unified Medical Language System. Submitted systems represented 4 broad categories of approaches: cascading dictionary matching, cosine distance, deep learning, and retrieve-and-rank systems. Disambiguation modules were common across all approaches.

RESULTS : A total of 33 teams participated in the MCN task. The best-performing team achieved an accuracy of 0.8526. The median and mean performances among all teams were 0.7733 and 0.7426, respectively.

CONCLUSIONS : Overall performance among the top 10 teams was high. However, several mention types were challenging for all teams. These included mentions requiring disambiguation of misspelled words, acronyms, abbreviations, and mentions with more than 1 possible semantic type. Also challenging were complex mentions of long, multi-word terms that may require new ways of extracting and representing mention meaning, the use of domain knowledge, parse trees, or hand-crafted rules.

Henry Sam, Wang Yanshan, Shen Feichen, Uzuner Ozlem

2020-Sep-24

clinical narratives, concept normalization, machine learning, natural language processing

General General

Graph-based regularization for regression problems with alignment and highly-correlated designs.

In SIAM journal on mathematics of data science

Sparse models for high-dimensional linear regression and machine learning have received substantial attention over the past two decades. Model selection, or determining which features or covariates are the best explanatory variables, is critical to the interpretability of a learned model. Much of the current literature assumes that covariates are only mildly correlated. However, in many modern applications covariates are highly correlated and do not exhibit key properties (such as the restricted eigenvalue condition, restricted isometry property, or other related assumptions). This work considers a high-dimensional regression setting in which a graph governs both correlations among the covariates and the similarity among regression coefficients, meaning there is alignment between the covariates and regression coefficients. Using side information about the strength of correlations among features, we form a graph with edge weights corresponding to pairwise covariances. This graph is used to define a graph total variation regularizer that promotes similar weights for correlated features. This work shows how the proposed graph-based regularization yields mean-squared error guarantees for a broad range of covariance graph structures. These guarantees are optimal for many specific covariance graphs, including block and lattice graphs. Our proposed approach outperforms other methods for highly-correlated design in a variety of experiments on synthetic data and real biochemistry data.
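The graph total variation regularizer described above can be written down directly: for a weighted feature graph, it sums w_ij * |beta_i - beta_j| over edges, so heavily weighted (highly correlated) pairs are pushed toward equal coefficients. A minimal sketch of the penalty and the regularized least-squares objective, with toy values:

```python
# Sketch of the graph total variation penalty and the regularized objective.
# edges: list of (i, j, weight) with weight ~ covariance of features i, j.
# All numeric values below are toy illustrations.

def graph_tv(beta, edges):
    return sum(w * abs(beta[i] - beta[j]) for i, j, w in edges)

def objective(beta, X, y, edges, lam):
    # least-squares loss (1/2n)||X beta - y||^2 plus lambda * graph TV penalty
    n = len(y)
    resid = [sum(xi * bi for xi, bi in zip(row, beta)) - yi
             for row, yi in zip(X, y)]
    return sum(r * r for r in resid) / (2 * n) + lam * graph_tv(beta, edges)

beta = [1.0, 1.0, 0.5]
edges = [(0, 1, 2.0), (1, 2, 1.0)]  # features 0 and 1 strongly correlated
penalty = graph_tv(beta, edges)     # only the weakly-linked pair contributes
```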

Li Yuan, Mark Benjamin, Raskutti Garvesh, Willett Rebecca, Song Hyebin, Neiman David

2020

General General

Gaussian Embedding for Large-scale Gene Set Analysis.

In Nature machine intelligence

Gene sets, including protein complexes and signaling pathways, have proliferated greatly, in large part as a result of high-throughput biological data. Leveraging gene sets to gain insight into biological discovery requires computational methods for converting them into a useful form for available machine learning models. Here, we study the problem of embedding gene sets as compact features that are compatible with available machine learning codes. We present Set2Gaussian, a novel network-based gene set embedding approach, which represents each gene set as a multivariate Gaussian distribution rather than a single point in the low-dimensional space, according to the proximity of these genes in a protein-protein interaction network. We demonstrate that Set2Gaussian improves gene set member identification, accurately stratifies tumors, and finds concise gene sets for gene set enrichment analysis. We further show how Set2Gaussian allows us to identify a previously unknown clinical prognostic and predictive subnetwork around NEFM in sarcoma, which we validate in independent cohorts.
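The core representational idea can be sketched by fitting a mean and diagonal variance to the embeddings of a set's member genes. Note that Set2Gaussian itself learns these Gaussian parameters jointly from a protein-protein interaction network rather than computing them post hoc as done in this toy sketch.

```python
# Sketch: represent a gene set as a Gaussian (mean, diagonal variance) over
# its members' embedding vectors, rather than as a single point. The 2-D
# gene embeddings below are toy values, not learned network embeddings.

def set_to_gaussian(embeddings):
    n, d = len(embeddings), len(embeddings[0])
    mean = [sum(e[k] for e in embeddings) / n for k in range(d)]
    var = [sum((e[k] - mean[k]) ** 2 for e in embeddings) / n
           for k in range(d)]
    return mean, var

gene_vectors = [[1.0, 2.0], [3.0, 2.0], [2.0, 5.0]]  # toy member embeddings
mean, var = set_to_gaussian(gene_vectors)
# var captures how dispersed the set is along each embedding dimension
```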

Wang Sheng, Flynn Emily R, Altman Russ B

2020-Jul

General General

Simulating realistic fetal neurosonography images with appearance and growth change using cycle-consistent adversarial networks and an evaluation.

In Journal of medical imaging (Bellingham, Wash.)

Purpose: We present an original method for simulating realistic fetal neurosonography images, specifically generating third-trimester pregnancy ultrasound images from second-trimester images. Our method was developed using unpaired data, as pairwise data were not available. We also report original insights on the general appearance differences between second- and third-trimester fetal head transventricular (TV) plane images. Approach: We design a cycle-consistent adversarial network (Cycle-GAN) to simulate visually realistic third-trimester images from unpaired second- and third-trimester ultrasound images. Simulation realism is evaluated qualitatively by experienced sonographers who blindly graded real and simulated images. A quantitative evaluation is also performed whereby a validated deep-learning-based image recognition algorithm (ScanNav®) acts as the expert reference, allowing hundreds of real and simulated images to be automatically analyzed and compared efficiently. Results: Qualitative evaluation shows that human experts largely cannot tell the difference between real and simulated third-trimester scan images: 84.2% of the simulated third-trimester images could not be distinguished from the real third-trimester images. As a quantitative baseline, on 3000 images, the visibility drop of the choroid, CSP, and mid-line falx between real second- and real third-trimester scans was computed by ScanNav® and found to be 72.5%, 61.5%, and 67%, respectively. The visibility drop of the same structures between real second-trimester and simulated third-trimester scans was found to be 77.5%, 57.7%, and 56.2%, respectively. Therefore, the real and simulated third-trimester images were considered to be visually similar to each other. Our evaluation also shows that third-trimester simulations from a conventional GAN are much easier to distinguish, and the visibility drop of the structures is smaller than with our proposed method.
Conclusions: The results confirm that it is possible to simulate realistic third-trimester images from second-trimester images using a modified Cycle-GAN, which may be useful for deep learning researchers with restricted availability of third-trimester scans but access to ample second-trimester images. We also show convincing simulation improvements, both qualitatively and quantitatively, using the Cycle-GAN method compared with a conventional GAN. Finally, the use of a machine-learning-based reference (in this case, ScanNav®) for large-scale quantitative image analysis evaluation is also, to our knowledge, a first.
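The cycle-consistency constraint at the heart of the method can be sketched in a few lines: two generators G (second to third trimester) and F (third to second) are trained so that F(G(x)) reconstructs x. Real Cycle-GANs use convolutional generators and adversarial losses; the stand-in functions and toy "images" below are purely illustrative.

```python
# Sketch of the cycle-consistency loss: L1 error between an input image and
# its round-trip reconstruction F(G(x)). G and F are toy stand-ins for the
# CNN generators of a real Cycle-GAN; "images" are lists of pixel values.

def cycle_loss(x, G, F):
    recon = F(G(x))
    return sum(abs(a - b) for a, b in zip(x, recon)) / len(x)

def G(img):        # toy forward mapping (2nd -> "3rd" trimester)
    return [p * 2.0 for p in img]

def F_good(img):   # exact inverse of G: perfect cycle, zero loss
    return [p / 2.0 for p in img]

def F_bad(img):    # not an inverse: incurs cycle loss
    return list(img)

x = [0.2, 0.4, 0.6]
loss_good = cycle_loss(x, G, F_good)
loss_bad = cycle_loss(x, G, F_bad)
# training drives the generators toward the low-cycle-loss regime
```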

Xu Yangdi, Lee Lok Hin, Drukker Lior, Yaqub Mohammad, Papageorghiou Aris T, Noble Alison J

2020-Sep

cycle-consistent adversarial network, quantitative evaluation, realistic simulation, second-trimester scan, third-trimester scan, transventricular plane

Radiology Radiology

Vessel wall MR imaging of intracranial atherosclerosis.

In Cardiovascular diagnosis and therapy

Intracranial atherosclerotic disease (ICAD) is one of the most common causes of ischemic stroke worldwide. Along with high recurrent stroke risk from ICAD, its association with cognitive decline and dementia leads to a substantial decrease in quality of life and a high economic burden. Atherosclerotic lesions can range from slight wall thickening with plaques that are angiographically occult to severely stenotic lesions. Recent advances in intracranial high resolution vessel wall MR (VW-MR) imaging have enabled imaging beyond the lumen to characterize the vessel wall and its pathology. This technique has opened new avenues of research for identifying vulnerable plaque in the setting of acute ischemic stroke as well as assessing ICAD burden and its associations with its sequela, such as dementia. We now understand more about the intracranial arterial wall, its ability to remodel with disease and how we can use VW-MR to identify angiographically occult lesions and assess medical treatment responses, for example, to statin therapy. Our growing understanding of ICAD with intracranial VW-MR imaging can profoundly impact diagnosis, therapy, and prognosis for ischemic stroke with the possibility of lesion-based risk models to tailor and personalize treatment. In this review, we discuss the advantages of intracranial VW-MR imaging for ICAD, the potential of bioimaging markers to identify vulnerable intracranial plaque, and future directions of artificial intelligence and its utility for lesion scoring and assessment.

Song Jae W, Wasserman Bruce A

2020-Aug

Black blood MR imaging, intracranial atherosclerosis, ischemic stroke, vessel wall MR imaging (VW-MR imaging)

Radiology Radiology

Cardiovascular/stroke risk predictive calculators: a comparison between statistical and machine learning models.

In Cardiovascular diagnosis and therapy

Background : Statistically derived cardiovascular risk calculators (CVRC) that use conventional risk factors, generally underestimate or overestimate the risk of cardiovascular disease (CVD) or stroke events primarily due to lack of integration of plaque burden. This study investigates the role of machine learning (ML)-based CVD/stroke risk calculators (CVRCML) and compares against statistically derived CVRC (CVRCStat) based on (I) conventional factors or (II) combined conventional with plaque burden (integrated factors).

Methods : The proposed study is divided into 3 parts: (I) statistical calculator: initially, the 10-year CVD/stroke risk was computed using 13 types of CVRCStat (without and with plaque burden) and binary risk stratification of the patients was performed using the predefined thresholds and risk classes; (II) ML calculator: using the same risk factors (without and with plaque burden), as adopted in 13 different CVRCStat, the patients were again risk-stratified using CVRCML based on support vector machine (SVM) and finally; (III) both types of calculators were evaluated using AUC based on ROC analysis, which was computed using combination of predicted class and endpoint equivalent to CVD/stroke events.

Results : An Institutional Review Board-approved study recruited 202 patients (156 males and 46 females) of Japanese ethnicity, with a mean age of 69±11 years. The AUCs for the 13 types of CVRCStat calculators were: AECRS2.0 (AUC 0.83, P<0.001), QRISK3 (AUC 0.72, P<0.001), WHO (AUC 0.70, P<0.001), ASCVD (AUC 0.67, P<0.001), FRScardio (AUC 0.67, P<0.01), FRSstroke (AUC 0.64, P<0.001), MSRC (AUC 0.63, P=0.03), UKPDS56 (AUC 0.63, P<0.001), NIPPON (AUC 0.63, P<0.001), PROCAM (AUC 0.59, P<0.001), RRS (AUC 0.57, P<0.001), UKPDS60 (AUC 0.53, P<0.001), and SCORE (AUC 0.45, P<0.001), while the CVRCML with integrated risk factors achieved an AUC of 0.88 (P<0.001), a 42% increase in performance. The overall risk-stratification accuracy of the CVRCML with integrated risk factors was 92.52%, higher than that of all the CVRCStat.

Conclusions : ML-based CVD/stroke risk calculator provided a higher predictive ability of 10-year CVD/stroke compared to the 13 different types of statistically derived risk calculators including integrated model AECRS 2.0.
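
The AUC used throughout this comparison has a simple rank-based definition; a minimal sketch (the risk scores and event labels below are hypothetical):

```python
def roc_auc(scores, labels):
    """Probability that a random positive scores above a random negative
    (ties count 0.5) -- the AUC used to compare the risk calculators."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 10-year risk scores and observed CVD/stroke events
scores = [0.9, 0.8, 0.3, 0.6, 0.2]
events = [1,   1,   0,   1,   0]
print(roc_auc(scores, events))  # 1.0: every event outranks every non-event
```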

Jamthikar Ankush, Gupta Deep, Saba Luca, Khanna Narendra N, Araki Tadashi, Viskovic Klaudija, Mavrogeni Sophie, Laird John R, Pareek Gyan, Miner Martin, Sfikakis Petros P, Protogerou Athanasios, Viswanathan Vijay, Sharma Aditya, Nicolaides Andrew, Kitas George D, Suri Jasjit S

2020-Aug

10-year risk, Atherosclerosis, cardiovascular disease (CVD), integrated models, machine learning-based calculator, statistical risk calculator, stroke

General General

Classifications of Neurodegenerative Disorders Using a Multiplex Blood Biomarkers-Based Machine Learning Model.

In International journal of molecular sciences ; h5-index 102.0

Easily accessible biomarkers for Alzheimer's disease (AD), Parkinson's disease (PD), frontotemporal dementia (FTD), and related neurodegenerative disorders are urgently needed in an aging society to assist early-stage diagnoses. In this study, we aimed to develop machine learning algorithms using the multiplex blood-based biomarkers to identify patients with different neurodegenerative diseases. Plasma samples (n = 377) were obtained from healthy controls, patients with AD spectrum (including mild cognitive impairment (MCI)), PD spectrum with variable cognitive severity (including PD with dementia (PDD)), and FTD. We measured plasma levels of amyloid-beta 42 (Aβ42), Aβ40, total Tau, p-Tau181, and α-synuclein using an immunomagnetic reduction-based immunoassay. We observed increased levels of all biomarkers except Aβ40 in the AD group when compared to the MCI and controls. The plasma α-synuclein levels increased in PDD when compared to PD with normal cognition. We applied machine learning-based frameworks, including a linear discriminant analysis (LDA), for feature extraction and several classifiers, using features from these blood-based biomarkers to classify these neurodegenerative disorders. We found that the random forest (RF) was the best classifier to separate different dementia syndromes. Using RF, the established LDA model had an average accuracy of 76% when classifying AD, PD spectrum, and FTD. Moreover, we found 83% and 63% accuracies when differentiating the individual disease severity of subgroups in the AD and PD spectrum, respectively. The developed LDA model with the RF classifier can assist clinicians in distinguishing variable neurodegenerative disorders.
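
The LDA-plus-random-forest framework described above can be sketched with scikit-learn on synthetic data; the group means, feature count, and class labels below are illustrative stand-ins, not the study's measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic 5-marker panel (standing in for Abeta42, Abeta40, t-Tau,
# p-Tau181 and alpha-synuclein) for three well-separated synthetic groups.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(40, 5))
               for m in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 40)  # 0=AD, 1=PD spectrum, 2=FTD (labels illustrative)

# LDA projects the panel onto discriminant axes; a random forest then
# classifies in the reduced space, mirroring the paper's pipeline.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, y)
print(model.score(X, y))  # training accuracy on well-separated synthetic data
```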

Lin Chin-Hsien, Chiu Shu-I, Chen Ta-Fu, Jang Jyh-Shing Roger, Chiu Ming-Jang

2020-Sep-21

Alzheimer’s disease, Parkinson’s disease, biomarkers, classification, deep learning model, frontotemporal dementia, linear discriminant analysis, multivariate imputation by chained equations, neurodegenerative disorders

General General

Pandemic number five - Latest insights into the COVID-19 crisis.

In Biomedical journal

About nine months after the emergence of SARS-CoV-2, this special issue of the Biomedical Journal takes stock of its evolution into a pandemic. We acquire an elaborate overview of the history and virology of SARS-CoV-2, the epidemiology of COVID-19, and the development of therapies and vaccines, based on useful tools such as a pseudovirus system, artificial intelligence, and repurposing of existing drugs. Moreover, we learn about a potential link between COVID-19 and oral health, and some of the strategies that allowed Taiwan to handle the outbreak exceptionally well, including a COVID-19 biobank establishment, online tools for contact tracing, and the efficient management of emergency departments.

Häfner Sophia Julia

2020-Aug-27

COVID-19, Contact tracing, Pseudovirus system, Repurposing drugs, SARS-CoV-2

Ophthalmology Ophthalmology

DPN: Detail-Preserving Network with High Resolution Representation for Efficient Segmentation of Retinal Vessels

ArXiv Preprint

Retinal vessels are important biomarkers for many ophthalmological and cardiovascular diseases. It is of great significance to develop an accurate and fast vessel segmentation model for computer-aided diagnosis. Existing methods such as U-Net follow the encoder-decoder pipeline, in which detailed information is lost in the encoder in order to achieve a large field of view. Although detailed information can be recovered in the decoder via multi-scale fusion, it still contains noise. In this paper, we propose a deep segmentation model, called the detail-preserving network (DPN), for efficient vessel segmentation. To preserve detailed spatial information and learn structural information at the same time, we designed the detail-preserving block (DP-Block). Further, we stacked eight DP-Blocks together to form the DPN. More importantly, there are no down-sampling operations among these blocks. As a result, the DPN maintains a high resolution throughout processing, which helps locate the boundaries of thin vessels. To illustrate the effectiveness of our method, we conducted experiments on three public datasets. Experimental results show that, compared to state-of-the-art methods, our method achieves competitive or better performance in terms of segmentation accuracy, segmentation speed, extensibility and the number of parameters. Specifically, 1) the AUC of our method ranks first/second/third on the STARE/CHASE_DB1/DRIVE datasets, respectively. 2) Our method requires only one forward pass to generate a vessel segmentation map, and its segmentation speed is 20-160x faster than other methods on the DRIVE dataset. 3) We conducted cross-training experiments to demonstrate the extensibility of our method, and the results revealed superior performance. 4) The number of parameters of our method is only around 96k, fewer than in all comparison methods.
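
The arithmetic behind the no-down-sampling design is easy to check: stride-1, padding-1, 3x3 convolutions preserve spatial resolution while each layer still enlarges the receptive field. A small sketch (the 64x64 input size is an arbitrary example, not a DPN specification):

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Standard convolution output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

h = w = 64                      # hypothetical input resolution
rf = 1                          # receptive field of the identity map
for _ in range(8):              # eight stride-1 blocks, as in the DPN stack
    h, w = conv_out(h), conv_out(w)
    rf += 2                     # each 3x3 stride-1 conv adds 2 to the RF
print(h, w, rf)                 # resolution preserved (64x64); RF grown to 17
```

This is why the stack can keep thin-vessel boundaries sharp: the feature maps never lose spatial resolution, yet context still accumulates layer by layer.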

Song Guo

2020-09-25

Cardiology Cardiology

A comparison of artificial intelligence-based algorithms for the identification of patients with depressed right ventricular function from 2-dimensional echocardiography parameters and clinical features.

In Cardiovascular diagnosis and therapy

Background : Recognizing low right ventricular (RV) function from 2-dimensional echocardiography (2D-ECHO) is challenging when parameters are contradictory. We aim to develop a model to predict low RV function that integrates the various 2D-ECHO parameters, with cardiac magnetic resonance (CMR) as the reference gold standard.

Methods : We retrospectively identified patients who underwent a 2D-ECHO and a CMR within 3 months of each other at our institution (American University of Beirut Medical Center). We extracted three parameters (TAPSE, S' and FACRV) that are classically used to assess RV function. We have assessed the ability of 2D-ECHO derived parameters and clinical features to predict RV function measured by the gold standard CMR. We compared outcomes from four machine learning algorithms, widely used in the biomedical community to solve classification problems.

Results : One hundred fifty-five patients were identified and included in our study. Average age was 43±17.1 years and 52/156 (33.3%) were females. According to CMR, 21 patients were identified as having RV dysfunction, with an RVEF of 34.7%±6.4%, as opposed to 54.7%±6.7% in the normal-RV population (P<0.0001). The Random Forest model was able to detect low RV function with an AUC of 0.80, while general linear regression performed poorly in our population with an AUC of 0.62.

Conclusions : In this study, we trained and validated an ML-based algorithm that could detect low RV function from clinical and 2D-ECHO parameters. The algorithm has two advantages: first, it performed better than general linear regression, and second, it integrated the various 2D-ECHO parameters.
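
The comparison workflow (a random forest versus a linear model, scored by cross-validated AUC) can be sketched with scikit-learn. The echo parameters, group means, and prevalence below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
low_rv = rng.random(n) < 0.15        # hypothetical prevalence of RV dysfunction
# Invented 2D-ECHO parameters, TAPSE (mm), S' (cm/s) and FAC (%), with lower
# values in the low-RV-function group.
tapse = rng.normal(np.where(low_rv, 14.0, 22.0), 3.0)
s_prime = rng.normal(np.where(low_rv, 8.0, 12.0), 2.0)
fac = rng.normal(np.where(low_rv, 28.0, 45.0), 8.0)
X = np.column_stack([tapse, s_prime, fac])

rf_auc = cross_val_score(RandomForestClassifier(random_state=0),
                         X, low_rv, cv=5, scoring="roc_auc").mean()
lr_auc = cross_val_score(LogisticRegression(max_iter=1000),
                         X, low_rv, cv=5, scoring="roc_auc").mean()
print(round(rf_auc, 2), round(lr_auc, 2))  # both well above chance on this toy data
```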

Ahmad Ali, Ibrahim Zahi, Sakr Georges, El-Bizri Abdallah, Masri Lara, Elhajj Imad H, El-Hachem Nehme, Isma’eel Hussain

2020-Aug

2D-ECHO, CMR, RV function, machine learning

Radiology Radiology

Machine learning-based CT fractional flow reserve assessment in acute chest pain: first experience.

In Cardiovascular diagnosis and therapy

Background : Computed tomography (CT)-derived fractional flow reserve (FFRCT) enables the non-invasive functional assessment of coronary artery stenosis. We evaluated the feasibility and potential clinical role of FFRCT in patients presenting to the emergency department with acute chest pain who underwent chest-pain CT (CPCT).

Methods : For this retrospective IRB-approved study, we included 56 patients (median age: 62 years, 14 females) with acute chest pain who underwent CPCT and who had at least a mild (≥25% diameter) coronary artery stenosis. CPCT was evaluated for the presence of acute plaque rupture and vulnerable plaque features. FFRCT measurements were performed using a machine learning-based software. We assessed the agreement between the results from FFRCT and patient outcome (including results from invasive catheter angiography and from any non-invasive cardiac imaging test, final clinical diagnosis and revascularization) for a follow-up of 3 months.

Results : FFRCT was technically feasible in 38/56 patients (68%). Eleven of the 38 patients (29%) showed acute plaque rupture in CPCT; all of them underwent immediate coronary revascularization. Of the remaining 27 patients (71%), 16 patients showed vulnerable plaque features (59%), of whom 11 (69%) were diagnosed with acute coronary syndrome (ACS) and 10 (63%) underwent coronary revascularization. In patients with vulnerable plaque features in CPCT, FFRCT had an agreement with outcome in 12/16 patients (75%). In patients without vulnerable plaque features (n=11), one patient showed myocardial ischemia (9%). In these patients, FFRCT and patient outcome showed an agreement in 10/11 patients (91%).

Conclusions : Our preliminary data show that FFRCT is feasible in patients with acute chest pain who undergo CPCT provided that image quality is sufficient. FFRCT has the potential to improve patient triage by reducing further downstream testing but appears of limited value in patients with CT signs of acute plaque rupture.

Eberhard Matthias, Nadarevic Tin, Cousin Andrej, von Spiczak Jochen, Hinzpeter Ricarda, Euler Andre, Morsbach Fabian, Manka Robert, Keller Dagmar I, Alkadhi Hatem

2020-Aug

Acute coronary syndrome (ACS), computed tomography angiography, fractional flow reserve, machine learning, myocardial

General General

Insight into glycogen synthase kinase-3β inhibitory activity of phyto-constituents from Melissa officinalis: in silico studies.

In In silico pharmacology

Overactivity of glycogen synthase kinase-3β (GSK-3β), a serine/threonine protein kinase, has been implicated in a number of diseases including stroke, type II diabetes and Alzheimer's disease (AD). This study aimed to find novel inhibitors of GSK-3β from phyto-constituents of Melissa officinalis with the aid of computational analysis. Molecular docking, induced-fit docking (IFD), calculation of binding free energy via the MM-GBSA approach and Lipinski's rule of five (RO5) were employed to filter the compounds and determine their druggability. Most importantly, the compounds' pIC50 values were predicted by a machine learning-based model generated by the AutoQSAR algorithm, which was validated to affirm its predictive ability. The best model obtained was Model kpls_desc_38 (R2 = 0.8467 and Q2 = 0.8069), and this externally validated model was utilized to predict the bioactivities of the lead compounds. While a number of characterized compounds from Melissa officinalis showed better docking scores, binding free energies and adherence to the RO5 than the co-crystallized ligand, only three compounds (salvianolic acid C, ellagic acid and naringenin) showed satisfactory pIC50 values. The results obtained in this study can be useful for designing potent inhibitors of GSK-3β.
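
Lipinski's RO5 filter used above is simple to implement. The descriptor values in the example are approximate published values for naringenin, given purely for illustration:

```python
def ro5_violations(mw, logp, hbd, hba):
    """Number of Lipinski rule-of-five violations (a druggability filter):
    MW <= 500 Da, logP <= 5, H-bond donors <= 5, H-bond acceptors <= 10."""
    return sum([mw > 500, logp > 5, hbd > 5, hba > 10])

# Approximate descriptors for naringenin (MW 272.25, logP ~2.5, 3 donors,
# 5 acceptors): zero violations, i.e. it passes the filter.
print(ro5_violations(272.25, 2.5, 3, 5))  # 0
```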

Iwaloye Opeyemi, Elekofehinti Olusola Olalekan, Oluwarotimi Emmanuel Ayo, Kikiowo Babatom Iwa, Fadipe Toyin Mary

2020

AutoQSAR, Glycogen synthase kinase-3β, Induced-fit docking (IFD), MM-GBSA, Melissa officinalis

General General

Listening forward: approaching marine biodiversity assessments using acoustic methods.

In Royal Society open science

Ecosystems and the communities they support are changing at alarmingly rapid rates. Tracking species diversity is vital to managing these stressed habitats. Yet, quantifying and monitoring biodiversity is often challenging, especially in ocean habitats. Given that many animals make sounds, these cues travel efficiently under water, and emerging technologies are increasingly cost-effective, passive acoustics (a long-standing ocean observation method) is now a potential means of quantifying and monitoring marine biodiversity. Properly applying acoustics for biodiversity assessments is vital. Our goal here is to provide a timely consideration of emerging methods using passive acoustics to measure marine biodiversity. We provide a summary of the brief history of using passive acoustics to assess marine biodiversity and community structure, a critical assessment of the challenges faced, and outline recommended practices and considerations for acoustic biodiversity measurements. We focused on temperate and tropical seas, where much of the acoustic biodiversity work has been conducted. Overall, we suggest a cautious approach to applying current acoustic indices to assess marine biodiversity. Key needs are preliminary data and sampling sufficiently to capture the patterns and variability of a habitat. Yet with new analytical tools including source separation and supervised machine learning, there is substantial promise in marine acoustic diversity assessment methods.
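
Many of the acoustic diversity indices discussed in this literature reduce to an entropy over the distribution of acoustic energy, for example across frequency bands. A minimal sketch with made-up band energies:

```python
import math

def shannon_diversity(energies):
    """Shannon index H' over relative acoustic energy per frequency band,
    the form underlying several acoustic diversity indices."""
    total = sum(energies)
    p = [e / total for e in energies if e > 0]
    return -sum(pi * math.log(pi) for pi in p)

even = [1.0, 1.0, 1.0, 1.0]     # energy spread across bands ("diverse")
peaky = [9.7, 0.1, 0.1, 0.1]    # energy concentrated in one band
print(round(shannon_diversity(even), 3), round(shannon_diversity(peaky), 3))
```

The caution urged in the review applies here too: such indices respond to any change in the energy distribution, not only to changes in species richness, which is why preliminary data and adequate sampling are essential.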

Mooney T Aran, Di Iorio Lucia, Lammers Marc, Lin Tzu-Hao, Nedelec Sophie L, Parsons Miles, Radford Craig, Urban Ed, Stanley Jenni

2020-Aug

bioacoustics, ecosystem health, richness, soundscape

Radiology Radiology

Advancing COVID-19 differentiation with a robust preprocessing and integration of multi-institutional open-repository computer tomography datasets for deep learning analysis.

In Experimental and therapeutic medicine

The coronavirus pandemic and its unprecedented global consequences have spurred the interest of the artificial intelligence research community. A plethora of published studies have investigated the role of imaging such as chest X-rays and computed tomography in coronavirus disease 2019 (COVID-19) automated diagnosis. Open repositories of medical imaging data can play a significant role by promoting cooperation among institutes on a worldwide scale. However, they may induce limitations related to variable data quality and intrinsic differences due to the wide variety of scanner vendors and imaging parameters. In this study, a state-of-the-art custom U-Net model is presented with a dice similarity coefficient performance of 99.6%, along with a transfer learning VGG-19 based model for COVID-19 versus pneumonia differentiation exhibiting an area under the curve of 96.1%. Both represent a significant improvement over the baseline model trained without segmentation on selected tomographic slices of the same dataset. The presented study highlights the importance of a robust preprocessing protocol for image analysis within a heterogeneous imaging dataset and assesses the potential diagnostic value of the presented COVID-19 model by comparing its performance to the state of the art.
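
The dice similarity coefficient used to report the U-Net's segmentation performance is twice the overlap divided by the total mask sizes; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks;
    eps guards against division by zero for empty masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
print(round(dice(a, b), 3))            # 2*2 / (3+3) ≈ 0.667
```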

Trivizakis Eleftherios, Tsiknakis Nikos, Vassalou Evangelia E, Papadakis Georgios Z, Spandidos Demetrios A, Sarigiannis Dimosthenis, Tsatsakis Aristidis, Papanikolaou Nikolaos, Karantanas Apostolos H, Marias Kostas

2020-Nov

COVID-19, artificial intelligence, deep learning analysis, multi-institutional data

General General

A Comparison of Random Forest Variable Selection Methods for Classification Prediction Modeling.

In Expert systems with applications

Random forest classification is a popular machine learning method for developing prediction models in many research settings. Often in prediction modeling, a goal is to reduce the number of variables needed to obtain a prediction in order to reduce the burden of data collection and improve efficiency. Several variable selection methods exist for the setting of random forest classification; however, there is a paucity of literature to guide users as to which method may be preferable for different types of datasets. Using 311 classification datasets freely available online, we evaluate the prediction error rates, number of variables, computation times and area under the receiver operating characteristic curve for many random forest variable selection methods. We compare random forest variable selection methods for different types of datasets (datasets with binary outcomes, datasets with many predictors, and datasets with imbalanced outcomes) and for different types of methods (standard random forest versus conditional random forest methods and test based versus performance based methods). Based on our study, the best variable selection methods for most datasets are Jiang's method and the method implemented in the VSURF R package. For datasets with many predictors, the methods implemented in the R packages varSelRF and Boruta are preferable due to computational efficiency. A significant contribution of this study is the ability to assess different variable selection techniques in the setting of random forest classification in order to identify preferable methods based on applications in expert and intelligent systems.
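
As one concrete example of a performance-based selection strategy of the kind compared in the study (a generic backward-elimination sketch, not the algorithm of any specific package), on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic dataset: 5 informative predictors buried among 20.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

# Repeatedly drop the least important feature while out-of-bag (OOB)
# accuracy does not degrade noticeably.
kept = list(range(X.shape[1]))
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X[:, kept], y)
best = rf.oob_score_
while len(kept) > 1:
    drop = kept[int(np.argmin(rf.feature_importances_))]
    trial = [f for f in kept if f != drop]
    rf.fit(X[:, trial], y)
    if rf.oob_score_ < best - 0.01:   # stop once accuracy starts to suffer
        break
    best, kept = max(best, rf.oob_score_), trial
print(len(kept), round(best, 3))      # number of predictors retained, OOB score
```

Test-based methods differ in spirit: they assess the statistical significance of each variable's importance rather than monitoring predictive performance while eliminating.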

Speiser Jaime Lynn, Miller Michael E, Tooze Janet, Ip Edward

2019-Nov-15

classification, feature reduction, random forest, variable selection

General General

Third-order nanocircuit elements for neuromorphic engineering.

In Nature ; h5-index 368.0

Current hardware approaches to biomimetic or neuromorphic artificial intelligence rely on elaborate transistor circuits to simulate biological functions. However, these can instead be more faithfully emulated by higher-order circuit elements that naturally express neuromorphic nonlinear dynamics [1-4]. Generating neuromorphic action potentials in a circuit element theoretically requires a minimum of third-order complexity (for example, three dynamical electrophysical processes) [5], but there have been few examples of second-order neuromorphic elements, and no previous demonstration of any isolated third-order element [6-8]. Using both experiments and modelling, here we show how multiple electrophysical processes, including Mott transition dynamics, form a nanoscale third-order circuit element. We demonstrate simple transistorless networks of third-order elements that perform Boolean operations and find analogue solutions to a computationally hard graph-partitioning problem. This work paves a way towards very compact and densely functional neuromorphic computing primitives, and energy-efficient validation of neuroscientific models.
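
A classical software analogue of a third-order element is the Hindmarsh-Rose neuron model: three coupled state variables suffice to produce action-potential-like spiking. The sketch below uses standard textbook parameters and simple Euler integration, not the paper's device physics:

```python
# Hindmarsh-Rose neuron: a canonical third-order dynamical system (three
# coupled state variables), illustrating the order of complexity the paper
# argues is needed for neuromorphic action potentials.
def hindmarsh_rose(steps=20000, dt=0.01, I=3.0):
    x, y, z = -1.6, -10.0, 2.0          # membrane, fast and slow recovery vars
    xs = []
    for _ in range(steps):
        dx = y + 3.0 * x**2 - x**3 - z + I
        dy = 1.0 - 5.0 * x**2 - y
        dz = 0.006 * (4.0 * (x + 1.6) - z)   # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs

trace = hindmarsh_rose()
print(max(trace) > 1.0)   # spiking: the membrane variable repeatedly exceeds 1
```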

Kumar Suhas, Williams R Stanley, Wang Ziwen

2020-Sep

Radiology Radiology

Intensity harmonization techniques influence radiomics features and radiomics-based predictions in sarcoma patients.

In Scientific reports ; h5-index 158.0

Intensity harmonization techniques (IHT) are mandatory to homogenize multicentric MRIs before any quantitative analysis because signal intensities (SI) do not have standardized units. Radiomics combines quantification of tumors' radiological phenotypes with machine learning to improve predictive models, such as metastatic-relapse-free survival (MFS) for sarcoma patients. We post-processed the initial T2-weighted imaging of 70 sarcoma patients by using 5 IHTs and extracting 45 radiomics features (RFs), namely: classical standardization (IHTstd), standardization per adipose tissue SIs (IHTfat), histogram-matching with a patient histogram (IHTHM.1), with the average histogram of the population (IHTHM.All), and with the ComBat method added (IHTHM.All.C), which provided 5 radiomics datasets in addition to the original radiomics dataset without IHT (No-IHT). We found that using IHTs significantly influenced all RF values (p-values: < 0.0001-0.02). Unsupervised clustering performed on each radiomics dataset showed that only clusters from the No-IHT, IHTstd, IHTHM.All, and IHTHM.All.C datasets significantly correlated with MFS in multivariate Cox models (p = 0.02, 0.007, 0.004 and 0.02, respectively). We built radiomics-based supervised models to predict metastatic relapse at 2 years with a training set of 50 patients. The models' performances varied markedly depending on the IHT in the validation set (range of AUROC from 0.688 with IHTstd to 0.823 with IHTHM.1). Hence, the use of intensity harmonization and the related technique should be carefully detailed in radiomics post-processing pipelines as it can profoundly affect the reproducibility of analyses.
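
Histogram matching, the basis of the IHTHM variants above, maps source intensities through the reference's inverse CDF. A compact numpy sketch on synthetic intensities (the normal distributions are arbitrary stand-ins for scanner-specific SI scales):

```python
import numpy as np

def histogram_match(source, reference):
    """Map source intensities so their empirical CDF matches the reference's,
    the core operation of histogram-matching harmonization."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
src = rng.normal(100.0, 20.0, size=(32, 32))   # arbitrary scanner units
ref = rng.normal(500.0, 50.0, size=(32, 32))   # target intensity distribution
out = histogram_match(src, ref)
print(round(out.mean(), 1))  # the mean moves close to the reference's (~500)
```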

Crombé Amandine, Kind Michèle, Fadli David, Le Loarer François, Italiano Antoine, Buy Xavier, Saut Olivier

2020-Sep-23

Pathology Pathology

Automated thermal imaging for the detection of fatty liver disease.

In Scientific reports ; h5-index 158.0

Non-alcoholic fatty liver disease (NAFLD) comprises a spectrum of progressive liver pathologies, ranging from simple steatosis to non-alcoholic steatohepatitis (NASH), fibrosis and cirrhosis. A liver biopsy is currently required to stratify high-risk patients, and predicting the degree of liver inflammation and fibrosis using non-invasive tests remains challenging. Here, we sought to develop a novel, cost-effective screening tool for NAFLD based on thermal imaging. We used a commercially available and non-invasive thermal camera and developed a new image processing algorithm to automatically predict disease status in a small animal model of fatty liver disease. To induce liver steatosis and inflammation, we fed C57/black female mice (8 weeks old) a methionine-choline deficient diet (MCD diet) for 6 weeks. We evaluated structural and functional liver changes by serial ultrasound studies, histopathological analysis, blood tests for liver enzymes and lipids, and measured liver inflammatory cell infiltration by flow cytometry. We developed an image processing algorithm that measures relative spatial thermal variation across the skin covering the liver. Thermal parameters including temperature variance, homogeneity levels and other textural features were fed as input to a t-SNE dimensionality reduction algorithm followed by k-means clustering. During weeks 3, 4, and 5 of the experiment, our algorithm demonstrated a 100% detection rate and classified all mice correctly according to their disease status. Direct thermal imaging of the liver confirmed the presence of changes in surface thermography in diseased livers. We conclude that non-invasive thermal imaging combined with advanced image processing and machine learning-based analysis successfully correlates surface thermography with liver steatosis and inflammation in mice. Future development of this screening tool may improve our ability to study, diagnose and treat liver disease.
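
The clustering stage of such a pipeline can be sketched as follows; the texture features and the assumption that diseased skin shows higher thermal variance are illustrative simplifications, and the t-SNE step is omitted for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic thermal patches: "diseased" skin is modeled with higher spatial
# temperature variance than "healthy" skin (an assumption for illustration).
healthy = rng.normal(33.0, 0.2, size=(20, 16, 16))
disease = rng.normal(33.0, 1.0, size=(20, 16, 16))
patches = np.concatenate([healthy, disease])

# Texture features per patch: temperature variance and a crude homogeneity
# proxy (mean absolute difference between horizontal neighbours).
var = patches.var(axis=(1, 2))
rough = np.abs(np.diff(patches, axis=2)).mean(axis=(1, 2))
feats = np.column_stack([var, rough])

# k-means with k=2 stands in for the paper's t-SNE + k-means pipeline.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(labels)  # the two clusters split along the healthy/diseased boundary
```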

Brzezinski Rafael Y, Levin-Kotler Lapaz, Rabin Neta, Ovadia-Blechman Zehava, Zimmer Yair, Sternfeld Adi, Finchelman Joanna Molad, Unis Razan, Lewis Nir, Tepper-Shaihov Olga, Naftali-Shani Nili, Balint-Lahat Nora, Safran Michal, Ben-Ari Ziv, Grossman Ehud, Leor Jonathan, Hoffer Oshrit

2020-Sep-23

General General

Machine learning-driven electronic identifications of single pathogenic bacteria.

In Scientific reports ; h5-index 158.0

A rapid method for screening pathogens can revolutionize health care by enabling infection control through medication before symptoms appear. Here we report label-free single-cell identification of clinically important pathogenic bacteria using a polymer-integrated low thickness-to-diameter aspect ratio pore and machine learning-driven resistive pulse analyses. The high spatiotemporal resolution of this electrical sensor enabled observation of the galvanotactic response intrinsic to the microbes during their translocation. We demonstrated discrimination of cellular motility via signal pattern classifications in a high-dimensional feature space. As detection-to-decision can be completed within milliseconds, the present technique may be used for real-time screening of pathogenic bacteria in environmental and medical applications.

Hattori Shota, Sekido Rintaro, Leong Iat Wai, Tsutsui Makusu, Arima Akihide, Tanaka Masayoshi, Yokota Kazumichi, Washio Takashi, Kawai Tomoji, Okochi Mina

2020-Sep-23

Public Health Public Health

The Effects of Obesity-Related Anthropometric Factors on Cardiovascular Risks of Homeless Adults in Taiwan.

In International journal of environmental research and public health ; h5-index 73.0

Homelessness is a pre-existing phenomenon in society and an important public health issue that national policy strives to solve. Cardiovascular disease (CVD) is an important health problem of the homeless. This cross-sectional study explored the effects of four obesity-related anthropometric factors-body mass index (BMI), waist circumference (WC), waist-to-hip ratio (WHR), and waist-to-height ratio (WHtR)-on cardiovascular disease risks (expressed by three CVD markers: hypertension, hyperglycemia, and hyperlipidemia) among homeless adults in Taipei and compared the relevant results with ordinary adults in Taiwan. The research team sampled homeless adults over the age of 20 in Taipei City in 2018 and collected 297 participants. Through anthropometric measurements, blood pressure measurements, and blood tests, we calculated the obesity-related indicators of the participants and found those at risks of cardiovascular disease. The results showed that the prevalence of hypertension, hyperglycemia, and hyperlipidemia in homeless adults was significantly higher than that of ordinary adults in Taiwan. Among the four obesity-related indicators, WHtR showed the strongest association with the prevalence of hypertension and hyperlipidemia, followed by WHR, both of which showed stronger association than traditional WC and BMI indicators. It can be inferred that abdominal obesity characterized by WHtR is a key risk factor for hypertension and hyperlipidemia in homeless adults in Taiwan. We hope that the results will provide medical clinical references and effectively warn of cardiovascular disease risks for the homeless in Taiwan.
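
For reference, the four obesity-related indicators compared in the study are computed as follows (the participant's measurements below are hypothetical):

```python
def anthropometrics(weight_kg, height_cm, waist_cm, hip_cm):
    """The four obesity indicators compared in the study."""
    h_m = height_cm / 100.0
    return {
        "BMI": weight_kg / h_m ** 2,    # body mass index, kg/m^2
        "WC": waist_cm,                 # waist circumference, cm
        "WHR": waist_cm / hip_cm,       # waist-to-hip ratio
        "WHtR": waist_cm / height_cm,   # waist-to-height ratio
    }

# Hypothetical participant; WHtR >= 0.5 is a commonly used
# abdominal-obesity cutoff.
m = anthropometrics(weight_kg=70.0, height_cm=170.0, waist_cm=90.0, hip_cm=100.0)
print(round(m["BMI"], 1), m["WHR"], round(m["WHtR"], 3))
```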

Chen Ching-Lin, Chen Mingchih, Liu Chih-Kuang

2020-Sep-18

BMI, WC, WHR, WHtR, cardiovascular risk, homeless adults

General General

Review: Application and Prospective Discussion of Machine Learning for the Management of Dairy Farms.

In Animals : an open access journal from MDPI

Dairy farmers use herd management systems, behavioral sensors, feeding lists, breeding schedules, and health records to document herd characteristics. Consequently, large amounts of dairy data are becoming available. However, a lack of data integration makes it difficult for farmers to analyze the data on their dairy farm, which indicates that these data are currently not being used to their full potential. Hence, multiple issues in dairy farming such as low longevity, poor performance, and health issues remain. We aimed to evaluate whether machine learning (ML) methods can solve some of these existing issues in dairy farming. This review summarizes peer-reviewed ML papers published in the dairy sector between 2015 and 2020. Ultimately, 97 papers from the subdomains of management, physiology, reproduction, behavior analysis, and feeding were considered in this review. The results confirm that ML algorithms have become common tools in most areas of dairy research, particularly to predict data. Despite the quantity of research available, most tested algorithms have not performed sufficiently for a reliable implementation in practice. This may be due to poor training data. The availability of data resources from multiple farms covering longer periods would be useful to improve prediction accuracies. In conclusion, ML is a promising tool in dairy research, which could be used to develop and improve decision support for farmers. As the cow is a multifactorial system, ML algorithms could analyze integrated data sources that describe and ultimately allow managing cows according to all relevant influencing factors. However, both the integration of multiple data sources and the obtainability of public data currently remain challenging.

Cockburn Marianne

2020-Sep-18

big data, cluster, data analysis, data integration, sensor, smart farming

Public Health Public Health

Environmental Health Surveillance System for a Population Using Advanced Exposure Assessment.

In Toxics

Human exposure to air pollution is a major public health concern. Environmental policymakers have been implementing various strategies to reduce exposure, including the 10th-day-no-driving system. To assess exposure of an entire population of a community in a highly polluted area, pollutant concentrations in microenvironments and population time-activity patterns are required. To date, population exposure to air pollutants has been assessed using air monitoring data from fixed atmospheric monitoring stations, atmospheric dispersion modeling, or spatial interpolation techniques for pollutant concentrations. This is coupled with census data, administrative registers, and data on the patterns of the time-based activities at the individual scale. Recent technologies such as sensors, the Internet of Things (IoT), communications technology, and artificial intelligence enable the accurate evaluation of air pollution exposure for a population in an environmental health context. In this study, the latest trends in published papers on the assessment of population exposure to air pollution were reviewed. Subsequently, this study proposes a methodology that will enable policymakers to develop an environmental health surveillance system that evaluates the distribution of air pollution exposure for a population within a target area and establish countermeasures based on advanced exposure assessment.
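
At its core, individual exposure assessment combines microenvironment concentrations with time-activity data as a time-weighted average; a minimal sketch with hypothetical PM2.5 values:

```python
def time_weighted_exposure(segments):
    """Personal exposure as a time-weighted average over microenvironments:
    E = sum(C_i * t_i) / sum(t_i)."""
    total_t = sum(t for _, t in segments)
    return sum(c * t for c, t in segments) / total_t

# Hypothetical day: (PM2.5 concentration in ug/m^3, hours spent)
day = [(15.0, 8),   # home
       (40.0, 2),   # commute
       (25.0, 8),   # workplace
       (15.0, 6)]   # home, evening and night
print(time_weighted_exposure(day))  # 490/24 ≈ 20.4 ug/m^3
```

Sensor and IoT data refine exactly the inputs of this formula: finer-grained concentrations C_i per microenvironment and more accurate time-activity durations t_i per person.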

Yang Wonho, Park Jinhyeon, Cho Mansu, Lee Cheolmin, Lee Jeongil, Lee Chaekwan

2020-Sep-18

air pollution, environmental health surveillance system, exposure assessment, population exposure

General General

A Statistical Analysis of Risk Factors and Biological Behavior in Canine Mammary Tumors: A Multicenter Study.

In Animals : an open access journal from MDPI

Canine mammary tumors (CMTs) represent a serious issue in worldwide veterinary practice and several risk factors are variably implicated in the biology of CMTs. The present study examines the relationship between risk factors and histological diagnosis of a large CMT dataset from three academic institutions by classical statistical analysis and supervised machine learning methods. Epidemiological, clinical, and histopathological data of 1866 CMTs were included. Dogs with malignant tumors were significantly older than dogs with benign tumors (9.6 versus 8.7 years, P < 0.001). Malignant tumors were significantly larger than benign counterparts (2.69 versus 1.7 cm, P < 0.001). Interestingly, 18% of malignant tumors were smaller than 1 cm in diameter, providing compelling evidence that the size of the tumor should be reconsidered during the assessment of the TNM-WHO clinical staging. The application of the logistic regression and the machine learning model identified the age and the tumor's size as the best predictors with an overall diagnostic accuracy of 0.63, suggesting that these risk factors are sufficient but not exhaustive indicators of the malignancy of CMTs. This multicenter study increases the general knowledge of the main epidemiological-clinical risk factors involved in the onset of CMTs and paves the way for further investigations of these factors in association with CMTs and for the application of machine learning technology.
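The reported classifier uses only two predictors, age and tumor size. A minimal sketch of such a model on synthetic data (all values and coefficients below are hypothetical, not the study's 1866 cases):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, hypothetical data: dog age (years) and tumor diameter (cm).
# These are NOT the study's records; effect sizes are illustrative only.
n = 500
age = rng.normal(9.0, 2.0, n)
size = rng.lognormal(0.5, 0.6, n)
# Assume malignancy risk rises with both age and size (illustrative).
logit = 0.4 * (age - 9.0) + 0.8 * (size - 2.0)
malignant = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, size])
clf = LogisticRegression().fit(X, malignant)
print(f"training accuracy: {clf.score(X, malignant):.2f}")
```

With only two moderately informative predictors, accuracy plateaus well below 1.0, which mirrors the paper's point that age and size are useful but not exhaustive indicators of malignancy.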

Burrai Giovanni P, Gabrieli Andrea, Moccia Valentina, Zappulli Valentina, Porcellato Ilaria, Brachelente Chiara, Pirino Salvatore, Polinas Marta, Antuofermo Elisabetta

2020-Sep-18

age, breed, dogs, machine learning, mammary tumor size, reproductive and hormonal status

General General

Data-Driven Molecular Dynamics: A Multifaceted Challenge.

In Pharmaceuticals (Basel, Switzerland)

The big data concept is currently revolutionizing several fields of science including drug discovery and development. While opening up new perspectives for better drug design and related strategies, big data analysis strongly challenges our current ability to manage and exploit an extraordinarily large and possibly diverse amount of information. The recent renewal of machine learning (ML)-based algorithms is key in providing the proper framework for addressing this issue. In this respect, the impact on the exploitation of molecular dynamics (MD) simulations, which have recently reached mainstream status in computational drug discovery, can be remarkable. Here, we review the recent progress in the use of ML methods coupled to biomolecular simulations with potentially relevant implications for drug design. Specifically, we show how different ML-based strategies can be applied to the outcome of MD simulations for gaining knowledge and enhancing sampling. Finally, we discuss how intrinsic limitations of MD in accurately modeling biomolecular systems can be alleviated by including information coming from experimental data.

Bernetti Mattia, Bertazzo Martina, Masetti Matteo

2020-Sep-18

Markov state models, collective variables, dimensionality reduction, experimental data, machine learning, maximum entropy principle, reaction coordinates

Internal Medicine Internal Medicine

Employing computational linguistics techniques to identify limited patient health literacy: Findings from the ECLIPPSE study.

In Health services research

OBJECTIVE : To develop novel, scalable, and valid literacy profiles for identifying patients with limited health literacy by harnessing natural language processing.

DATA SOURCE : With respect to the linguistic content, we analyzed 283 216 secure messages sent by 6941 diabetes patients to physicians within an integrated system's electronic portal. Sociodemographic, clinical, and utilization data were obtained via questionnaire and electronic health records.

STUDY DESIGN : Retrospective study used natural language processing and machine learning to generate five unique "Literacy Profiles" by employing various sets of linguistic indices: Flesch-Kincaid (LP_FK); basic indices of writing complexity, including lexical diversity (LP_LD) and writing quality (LP_WQ); and advanced indices related to syntactic complexity, lexical sophistication, and diversity, modeled from self-reported (LP_SR), and expert-rated (LP_Exp) health literacy. We first determined the performance of each literacy profile relative to self-reported and expert-rated health literacy to discriminate between high and low health literacy and then assessed Literacy Profiles' relationships with known correlates of health literacy, such as patient sociodemographics and a range of health-related outcomes, including ratings of physician communication, medication adherence, diabetes control, comorbidities, and utilization.
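The simplest of these profiles, LP_FK, is based on the Flesch-Kincaid grade-level formula, which depends only on sentence, word, and syllable counts. A self-contained sketch with a naive vowel-group syllable heuristic (the function names and the heuristic are illustrative, not the ECLIPPSE pipeline):

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count contiguous vowel groups (approximation)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("I take my pills each day. My sugar is high."), 2))  # → 0.52
```

Production readability tools use dictionary-based syllabification; the vowel-group count here over- or under-counts some words (e.g., "take"), which is one reason simple indices like LP_FK discriminated health literacy less well than the richer profiles.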

PRINCIPAL FINDINGS : LP_SR and LP_Exp performed best in discriminating between high and low self-reported (C-statistics: 0.86 and 0.58, respectively) and expert-rated health literacy (C-statistics: 0.71 and 0.87, respectively) and were significantly associated with educational attainment, race/ethnicity, Consumer Assessment of Provider and Systems (CAHPS) scores, adherence, glycemia, comorbidities, and emergency department visits.

CONCLUSIONS : Since health literacy is a potentially remediable explanatory factor in health care disparities, the development of automated health literacy indicators represents a significant accomplishment with broad clinical and population health applications. Health systems could apply literacy profiles to efficiently determine whether quality of care and outcomes vary by patient health literacy; identify at-risk populations for targeting tailored health communications and self-management support interventions; and inform clinicians to promote improvements in individual-level care.

Schillinger Dean, Balyan Renu, Crossley Scott A, McNamara Danielle S, Liu Jennifer Y, Karter Andrew J

2020-Sep-23

communication, diabetes, health literacy, machine learning, managed care, natural language processing, secure messaging

General General

Knockoff Boosted Tree for Model-Free Variable Selection.

In Bioinformatics (Oxford, England)

MOTIVATION : The recently proposed knockoff filter is a general framework for controlling the false discovery rate when performing variable selection. This powerful new approach generates a "knockoff" of each variable tested for exact false discovery rate control. Imitation variables that mimic the correlation structure found within the original variables serve as negative controls for statistical inference. Current applications of knockoff methods use linear regression models and conduct variable selection only for variables existing in model functions. Here, we extend the use of knockoffs for machine learning with boosted trees, which are successful and widely used in problems where no prior knowledge of model function is required. However, currently available importance scores in tree models are insufficient for variable selection with false discovery rate control.

RESULTS : We propose a novel strategy for conducting variable selection without prior model topology knowledge using the knockoff method with boosted tree models. We extend the current knockoff method to model-free variable selection through the use of tree-based models. Additionally, we propose and evaluate two new sampling methods for generating knockoffs, namely the sparse covariance and principal component knockoff methods. We test and compare these methods with the original knockoff method regarding their ability to control type I errors and power. In simulation tests, we compare the properties and performance of importance test statistics of tree models. The results include different combinations of knockoffs and importance test statistics. We consider scenarios that include main-effect, interaction, exponential, and second-order models while assuming the true model structures are unknown. We apply our algorithm for tumor purity estimation and tumor classification using Cancer Genome Atlas (TCGA) gene expression data. Our results show improved discrimination between difficult-to-discriminate cancer types.
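The core knockoff-plus-trees idea can be sketched as: fit a boosted tree on the original features augmented with their knockoffs, then score each variable by the gap between its importance and its knockoff's importance. The sketch below assumes independent features (so an i.i.d. within-column shuffle serves as a crude knockoff) and uses an ad hoc threshold rather than the calibrated FDR threshold of the knockoff filter or the KOBT package:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# Synthetic data: 10 independent features, only the first 3 affect y.
n, p = 400, 10
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] + 2 * X[:, 1] + 2 * X[:, 2] + rng.normal(scale=0.5, size=n)

# For independent features an i.i.d. copy of each column is a valid knockoff;
# real applications need model-X knockoffs preserving the joint distribution.
X_knock = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])

model = GradientBoostingRegressor(random_state=0).fit(np.hstack([X, X_knock]), y)
imp = model.feature_importances_
# Knockoff statistic: original importance minus its knockoff's importance.
W = imp[:p] - imp[p:]
selected = np.where(W > np.quantile(np.abs(W), 0.8))[0]  # crude cutoff, not FDR-calibrated
print(selected)
```

Null features and all knockoffs receive near-zero importance, so their W statistics hover around zero while the true signals stand out; the paper's contribution includes principled knockoff constructions (sparse covariance, principal component) and importance statistics that make this comparison valid for FDR control.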

AVAILABILITY AND IMPLEMENTATION : The proposed algorithm is included in the KOBT package, which is available at https://cran.r-project.org/web/packages/KOBT/index.html.

SUPPLEMENTARY INFORMATION : Supplementary data are available at Bioinformatics online.

Jiang Tao, Li Yuanyuan, Motsinger-Reif Alison A

2020-Sep-23

General General

Establishing the accuracy of density functional approaches for the description of noncovalent interactions in biomolecules.

In Physical chemistry chemical physics : PCCP

Biomolecules have complex structures, and noncovalent interactions are crucial to determine their conformations and functionalities. It is therefore critical to be able to describe them in an accurate but efficient manner in these systems. In this context density functional theory (DFT) could provide a powerful tool to simulate biological matter either directly for relatively simple systems or coupled with classical simulations like the QM/MM (quantum mechanics/molecular mechanics) approach. Additionally, DFT could play a fundamental role to fit the parameters of classical force fields or to train machine learning potentials to perform large scale molecular dynamics simulations of biological systems. Yet, local or semi-local approximations used in DFT cannot describe van der Waals (vdW) interactions, one of the essential noncovalent interactions in biomolecules, since they lack a proper description of long range correlation effects. However, many efficient and reasonably accurate methods are now available for the description of van der Waals interactions within DFT. In this work, we establish the accuracy of several state-of-the-art vdW-aware functionals by considering 275 biomolecules including interacting DNA and RNA bases, peptides and biological inhibitors and compare our energy results with highly accurate wavefunction-based calculations. Most methods considered here can achieve close to predictive accuracy. In particular, the non-local vdW-DF2 functional is revealed to be the best performer for biomolecules, while among the vdW-corrected DFT methods, uMBD is also recommended as a less accurate but faster alternative.

Kim Minho, Gould Tim, Rocca Dario, Lebègue Sébastien

2020-Sep-23

Radiology Radiology

A Quality Control System for Automated Prostate Segmentation on T2-Weighted MRI.

In Diagnostics (Basel, Switzerland)

Computer-aided detection and diagnosis (CAD) systems have the potential to improve robustness and efficiency compared to traditional radiological reading of magnetic resonance imaging (MRI). Fully automated segmentation of the prostate is a crucial step of CAD for prostate cancer, but visual inspection is still required to detect poorly segmented cases. The aim of this work was therefore to establish a fully automated quality control (QC) system for prostate segmentation based on T2-weighted MRI. Four different deep learning-based segmentation methods were used to segment the prostate for 585 patients. First order, shape and textural radiomics features were extracted from the segmented prostate masks. A reference quality score (QS) was calculated for each automated segmentation in comparison to a manual segmentation. A least absolute shrinkage and selection operator (LASSO) was trained and optimized on a randomly assigned training dataset (N = 1756, 439 cases from each segmentation method) to build a generalizable linear regression model based on the radiomics features that best estimated the reference QS. Subsequently, the model was used to estimate the QSs for an independent testing dataset (N = 584, 146 cases from each segmentation method). The mean ± standard deviation absolute error between the estimated and reference QSs was 5.47 ± 6.33 on a scale from 0 to 100. In addition, we found a strong correlation between the estimated and reference QSs (rho = 0.70). In conclusion, we developed an automated QC system that may be helpful for evaluating the quality of automated prostate segmentations.
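The QC model itself is a penalized linear regression from mask-derived features to a 0-100 quality score. A minimal sketch on synthetic stand-in features (not the study's radiomics data; the regularization strength and feature effects below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical stand-ins for first-order/shape/texture radiomics features
# extracted from automated segmentation masks; not the study's 585 patients.
n, p = 600, 20
X = rng.normal(size=(n, p))
# Assume the reference quality score (0-100) depends on a few features.
qs = np.clip(80 + 10 * X[:, 0] - 8 * X[:, 1] + rng.normal(scale=3, size=n), 0, 100)

X_tr, X_te, qs_tr, qs_te = train_test_split(X, qs, random_state=0)
model = Lasso(alpha=0.1).fit(X_tr, qs_tr)  # L1 penalty zeroes uninformative features
mae = np.abs(model.predict(X_te) - qs_te).mean()
print(round(mae, 2))
```

The L1 penalty is what makes LASSO attractive here: with many candidate radiomics features, it selects a sparse subset, which keeps the estimated quality score interpretable and generalizable across segmentation methods.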

Sunoqrot Mohammed R S, Selnæs Kirsten M, Sandsmark Elise, Nketiah Gabriel A, Zavala-Romero Olmo, Stoyanova Radka, Bathen Tone F, Elschot Mattijs

2020-Sep-18

MRI, computer-aided detection and diagnosis, deep learning, machine learning, prostate, quality control, radiomics, segmentation

General General

Two-Level LSTM for Sentiment Analysis With Lexicon Embedding and Polar Flipping.

In IEEE transactions on cybernetics

Sentiment analysis is a key component in various text mining applications. Numerous sentiment classification techniques, including conventional and deep-learning-based methods, have been proposed in the literature. In most existing methods, a high-quality training set is assumed to be given. Nevertheless, constructing a high-quality training set that consists of highly accurate labels is challenging in real applications. This difficulty stems from the fact that text samples usually contain complex sentiment representations, and their annotation is subjective. We address this challenge in this study by leveraging a new labeling strategy and utilizing a two-level long short-term memory network to construct a sentiment classifier. Lexical cues are useful for sentiment analysis, and they have been utilized in conventional studies. For example, polar and negation words play important roles in sentiment analysis. A new encoding strategy, that is, ρ-hot encoding, is proposed to alleviate the drawbacks of one-hot encoding and, thus, effectively incorporate useful lexical cues. Moreover, the sentimental polarity of a word may change in different sentences due to label noise or context. A flipping model is proposed to model the polar flipping of words in a sentence. We compile three Chinese datasets on the basis of our label strategy and proposed methodology. Experiments demonstrate that the proposed method outperforms state-of-the-art algorithms on both benchmark English data and our compiled Chinese data.
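The paper's exact ρ-hot scheme is not reproduced here; as a toy illustration of the general idea of softening a one-hot vector with lexicon information, one might keep weight ρ on the word itself and spread the remainder over lexicon-cue dimensions (the function and weighting rule below are entirely an assumption for illustration):

```python
import numpy as np

def rho_hot(index: int, vocab_size: int, cue_indices: set, rho: float = 0.8) -> np.ndarray:
    """Toy lexicon-aware encoding: the word keeps weight rho, and the
    remaining 1 - rho is spread over lexicon-cue dimensions (e.g., polar
    and negation words). Illustrative only; the paper's scheme may differ."""
    v = np.zeros(vocab_size)
    v[index] = rho
    others = [i for i in cue_indices if i != index]
    if others:
        v[others] = (1 - rho) / len(others)
    return v

vec = rho_hot(2, vocab_size=6, cue_indices={0, 4}, rho=0.8)
print(vec)  # weight 0.8 at index 2, 0.1 at each cue index
```

Compared with a strict one-hot vector, a softened encoding like this lets downstream layers see lexical cues (polar and negation words) directly, which is the drawback of one-hot encoding the abstract alludes to.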

Wu Ou, Yang Tao, Li Mengyang, Li Ming

2020-Sep-23

Pathology Pathology

Multiplex Cellular Communities in Multi-Gigapixel Colorectal Cancer Histology Images for Tissue Phenotyping.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

In computational pathology, automated tissue phenotyping in cancer histology images is a fundamental tool for profiling tumor microenvironments. Current tissue phenotyping methods use features derived from image patches which may not carry biological significance. In this work, we propose a novel multiplex cellular community-based algorithm for tissue phenotyping integrating cell-level features within a graph-based hierarchical framework. We demonstrate that such integration offers better performance compared to prior deep learning and texture-based methods as well as to cellular community based methods using uniplex networks. To this end, we construct cell-level graphs using texture, alpha diversity and multi-resolution deep features. Using these graphs, we compute cellular connectivity features which are then employed for the construction of a patch-level multiplex network. Over this network, we compute multiplex cellular communities using a novel objective function. The proposed objective function computes a low-dimensional subspace from each cellular network and subsequently seeks a common low-dimensional subspace using the Grassmann manifold. We evaluate our proposed algorithm on three publicly available datasets for tissue phenotyping, demonstrating a significant improvement over existing state-of-the-art methods.

Javed Sajid, Mahmood Arif, Werghi Naoufel, Benes Ksenija, Rajpoot Nasir

2020-Sep-23

General General