Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

Category articles

General

Signal identification system for developing rehabilitative device using deep learning algorithms.

In Artificial intelligence in medicine ; h5-index 34.0

The number of paralyzed patients is increasing day by day. Neurodegenerative conditions such as amyotrophic lateral sclerosis, brainstem lesion, stupor, and muscular dystrophy impair muscle movement, leaving affected persons unable to move independently. To overcome this problem they need assistive technology driven by biosignals. Electrooculogram (EOG) based Human Computer Interaction (HCI) is one technique used in recent days for this purpose. In this paper we examine the possibility of creating a nine-state HCI with our proposed method. Signals were captured through five electrodes placed on the subject's face around the eyes. These signals were amplified with an ADT26 bio amplifier, filtered with a notch filter, and processed with reference power and band power techniques to extract features for detecting eye movements; the features were mapped with a Time Delay Neural Network (TDNN) to classify the eye movements and generate control signals for external hardware devices. Our experimental study reports maximum average classification accuracies of 91.09% for the reference power feature and 91.55% for the band power feature. The results confirm that band power features with TDNN models outperform reference power features for all subjects, and we conclude that band power features with TDNN models are more suitable for classifying the eleven different eye movements of individual subjects. To validate this result we categorized the subjects by age and checked the accuracy of the system. Single trial analysis was conducted offline to identify the recognition accuracy of the proposed system. The results show that band power features with TDNN models exceed the reference power features with TDNN models used in this study, and we conclude that band power features with TDNN are more suitable for designing an EOG-based HCI in offline mode.
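
The abstract does not include feature-extraction code; as a rough, numpy-only illustration of a band power feature (a simple periodogram estimate over a fixed band, with a hypothetical 250 Hz sampling rate and illustrative band limits):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Estimate the power of `signal` in the [f_lo, f_hi] Hz band
    from a simple periodogram (squared rFFT magnitude)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum() * (fs / len(signal))  # sum x bin width

# Synthetic "EOG-like" trace: a slow eye-movement component plus noise.
rng = np.random.default_rng(0)
fs = 250                      # Hz, hypothetical sampling rate
t = np.arange(0, 2, 1 / fs)
eog = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(len(t))

low = band_power(eog, fs, 0.5, 10)   # band where EOG energy concentrates
high = band_power(eog, fs, 30, 60)   # mostly noise
```

In a pipeline of this kind such features would be computed per electrode and per window before being fed to the TDNN classifier.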

Tang Wenping, Wang Aiqun, Ramkumar S, Nair Radeep Krishna Radhakrishnan

2020-Jan

Amyotrophic lateral sclerosis, Electrooculography, Human computer interface, Spinal cord injury, Time delay neural network

General

Artificial intelligence and the future of psychiatry: Insights from a global physician survey.

In Artificial intelligence in medicine ; h5-index 34.0

BACKGROUND : Futurists have predicted that new autonomous technologies, embedded with artificial intelligence (AI) and machine learning (ML), will lead to substantial job losses in many sectors disrupting many aspects of healthcare. Mental health appears ripe for such disruption given the global illness burden, stigma, and shortage of care providers.

OBJECTIVE : To characterize the global psychiatrist community's opinion regarding the potential of future autonomous technology (referred to here as AI/ML) to replace key tasks carried out in mental health practice.

DESIGN : Cross sectional, random stratified sample of psychiatrists registered with Sermo, a global networking platform open to verified and licensed physicians.

MAIN OUTCOME MEASURES : We measured opinions about the likelihood that AI/ML tools would be able to fully replace - not just assist - the average psychiatrist in performing 10 key psychiatric tasks. Among those who considered replacement likely, we measured opinions about how many years from now such a capacity might emerge. We also measured psychiatrists' perceptions about whether the benefits of AI/ML would outweigh the risks.

RESULTS : Survey respondents were 791 psychiatrists from 22 countries representing North America, South America, Europe, and Asia-Pacific. Only 3.8 % of respondents felt it was likely that future technology would make their jobs obsolete, and only 17 % felt that future AI/ML was likely to replace a human clinician for providing empathetic care. Documenting and updating medical records (75 %) and synthesizing information (54 %) were the two tasks where a majority predicted that AI/ML could fully replace human psychiatrists. Female- and US-based doctors were more uncertain that the benefits of AI would outweigh the risks than male- and non-US doctors, respectively. Around one in two psychiatrists did, however, predict that their jobs would be substantially changed by AI/ML.

CONCLUSIONS : Our findings provide compelling insights into how physicians think about AI/ML which in turn may help us better integrate technology and reskill doctors to enhance mental health care.

Doraiswamy P Murali, Blease Charlotte, Bodner Kaylee

2020-Jan

Autonomous agents, Deep learning, Empathy, Mental health

General

Artificial plant optimization algorithm to detect heart rate & presence of heart disease using machine learning.

In Artificial intelligence in medicine ; h5-index 34.0

In today's world, cardiovascular diseases are prevalent and have become the leading cause of death; more than half of cardiovascular deaths are due to Coronary Heart Disease (CHD), which creates a demand for timely prediction so that people can take precautions or seek treatment before the disease becomes fatal. To serve this purpose, a Modified Artificial Plant Optimization (MAPO) algorithm is proposed that can be used as an optimal feature selector alongside other machine learning algorithms to predict heart rate from a fingertip video dataset, which in turn is used to predict the presence or absence of Coronary Heart Disease in an individual. Initially, the video dataset is pre-processed and noise is filtered; MAPO is then applied to predict the heart rate, achieving a Pearson correlation of 0.9541 and a Standard Error of Estimate of 2.418. The predicted heart rate is used as a feature in two other datasets, and MAPO is again applied to optimize the features of both. Different machine learning algorithms are then applied to the optimized datasets to predict the presence of heart disease. The results show that MAPO reduces the dimensionality to the most significant information, with comparable accuracies across different machine learning models and a maximum dimensionality reduction of 81.25%. MAPO has been compared with other optimizers and outperforms them in accuracy.
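
A minimal sketch of the two agreement metrics reported for the heart-rate step, Pearson correlation and the standard error of estimate (the RMSE form of the SEE is used here; conventions for its denominator vary, so treat this as one common definition rather than the paper's exact formula):

```python
import numpy as np

def pearson_and_see(y_true, y_pred):
    """Pearson correlation and standard error of estimate between
    reference and predicted heart rates (RMSE form of the SEE)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    r = np.corrcoef(y_true, y_pred)[0, 1]
    see = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return r, see
```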

Sharma Prerna, Choudhary Krishna, Gupta Kshitij, Chawla Rahul, Gupta Deepak, Sharma Arun

2020-Jan

Artificial neural network, Extreme gradient boosting, Machine learning, Modified artificial plant optimization algorithm, Savitzky-Golay filter

General

Prediction of fetal weight at varying gestational age in the absence of ultrasound examination using ensemble learning.

In Artificial intelligence in medicine ; h5-index 34.0

Obstetric ultrasound examination of physiological parameters has been mainly used to estimate the fetal weight during pregnancy and baby weight before labour to monitor fetal growth and reduce prenatal morbidity and mortality. However, ultrasound estimation of fetal weight is subject to population differences, strict operating requirements for sonographers, and poor access to ultrasound in low-resource areas. Inaccurate estimations may lead to negative perinatal outcomes. This study aims to predict fetal weight at varying gestational age in the absence of ultrasound examination within a certain accuracy. We consider that machine learning can provide an accurate estimation for obstetricians alongside traditional clinical practices, as well as an efficient and effective support tool for pregnant women for self-monitoring. We present a robust methodology using a data set comprising 4212 intrapartum recordings. The cubic spline function is used to fit the curves of several key characteristics that are extracted from ultrasound reports. A number of simple and powerful machine learning algorithms are trained, and their performance is evaluated with real test data. We also propose a novel evaluation performance index called the intersection-over-union (IoU) for our study. The results are encouraging using an ensemble model consisting of Random Forest, XGBoost, and LightGBM algorithms. The experimental results report the IoU between the range of fetal weight predicted by the ensemble model at a given gestational age and the corresponding ultrasound estimate. The machine learning based approach applied in our study is able to predict, with high accuracy, fetal weight at varying gestational age in the absence of ultrasound examination.
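
The exact IoU formulation is not given in the abstract; for one-dimensional ranges such as predicted weight intervals, intersection-over-union is conventionally computed as:

```python
def interval_iou(pred, ref):
    """Intersection-over-union of two 1-D ranges (lo, hi), e.g. a
    predicted vs a reference fetal-weight range in grams."""
    lo1, hi1 = pred
    lo2, hi2 = ref
    inter = max(0.0, min(hi1, hi2) - max(lo1, lo2))
    union = (hi1 - lo1) + (hi2 - lo2) - inter
    return inter / union if union > 0 else 0.0

# Two overlapping weight ranges (grams, hypothetical values):
print(interval_iou((3000, 3500), (3200, 3600)))  # 0.5
```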

Lu Yu, Fu Xianghua, Chen Fangxiong, Wong Kelvin K L

2020-Jan

Ensemble learning, Fetal weight estimation, Genetic algorithm, Intersection-over-union, Machine learning

General

Using multi-layer perceptron with Laplacian edge detector for bladder cancer diagnosis.

In Artificial intelligence in medicine ; h5-index 34.0

In this paper, a urinary bladder cancer diagnostic method based on a Multi-Layer Perceptron and a Laplacian edge detector is presented. The aim is to investigate whether a simpler method (the Multi-Layer Perceptron) can be implemented alongside commonly used methods, such as deep convolutional neural networks, for urinary bladder cancer detection. The dataset used for this research consisted of 1997 images of bladder cancer and 986 images of non-cancer tissue. The results show that a Multi-Layer Perceptron trained and tested on images pre-processed with the Laplacian edge detector achieves AUC values of up to 0.99. When different image sizes are compared, the best results are achieved with 50×50 and 100×100 pixel images.
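
The paper pre-processes images with a Laplacian edge detector before classification; a numpy sketch using the common 4-neighbour 3×3 Laplacian kernel (one of several possible kernels, chosen here as an assumption) to produce the edge map that would feed the MLP:

```python
import numpy as np

# Discrete 4-neighbour Laplacian kernel.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_edges(img):
    """Convolve a grayscale image with the Laplacian kernel
    (zero-padded), yielding an edge-response map."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * LAPLACIAN)
    return out

# A flat image has no interior edges; a step edge responds strongly.
flat = np.ones((50, 50))
step = np.zeros((50, 50)); step[:, 25:] = 1.0
```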

Lorencin Ivan, Anđelić Nikola, Španjol Josip, Car Zlatan

2020-Jan

Artificial intelligence, Image pre-processing, Laplacian edge detector, Multi-layer perceptron, Urinary bladder cancer

Ophthalmology

Implementation of artificial intelligence in medicine: Status analysis and development suggestions.

In Artificial intelligence in medicine ; h5-index 34.0

The general public's attitudes, demands, and expectations regarding medical AI could provide guidance for the future development of medical AI to satisfy the increasing needs of doctors and patients. The objective of this study is to investigate public perceptions, receptivity, and demands regarding the implementation of medical AI. An online questionnaire was designed to investigate the perceptions, receptivity, and demands of general public regarding medical AI between October 13 and October 30, 2018. The distributions of the current achievements, public perceptions, receptivity, and demands among individuals in different lines of work (i.e., healthcare vs non-healthcare) and different age groups were assessed by performing descriptive statistics. The factors associated with public receptivity of medical AI were assessed using a linear regression model. In total, 2,780 participants from 22 provinces were enrolled. Healthcare workers accounted for 54.3 % of all participants. There was no significant difference between the healthcare workers and non-healthcare workers in the high proportion (99 %) of participants expressing acceptance of AI (p = 0.8568), but remarkable distributional differences were observed in demands (p < 0.001 for both demands for AI assistance and the desire for AI improvements) and perceptions (p < 0.001 for safety, validity, trust, and expectations). High levels of receptivity (approximately 100 %), demands (approximately 80 %), and expectations (100 %) were expressed among different age groups. The receptivity of medical AI among the non-healthcare workers was associated with gender, educational qualifications, and demands and perceptions of AI. There was a very large gap between current availability of and public demands for intelligence services (p < 0.001). More than 90 % of healthcare workers expressed a willingness to devote time to learning about AI and participating in AI research. 
The public exhibits a high level of receptivity regarding the implementation of medical AI. To date, the achievements have been rewarding, and further advancements are required to satisfy public demands. There is a strong demand for intelligent assistance in many medical areas, including imaging and pathology departments, outpatient services, and surgery. Further contributions are needed to facilitate the integrated and beneficial implementation of medical AI.

Xiang Yifan, Zhao Lanqin, Liu Zhenzhen, Wu Xiaohang, Chen Jingjing, Long Erping, Lin Duoru, Zhu Yi, Chen Chuan, Lin Zhuoling, Lin Haotian

2020-Jan

Current implementation, Future development, Medical artificial intelligence, Public demand

Internal Medicine

Predicting the Associations between Meridians and Chinese Traditional Medicine Using a Cost-Sensitive Graph Convolutional Neural Network.

In International journal of environmental research and public health ; h5-index 73.0

Natural products are the most important and most commonly used materials in Traditional Chinese Medicine (TCM) for healthcare and disease prevention in East Asia. Although the Meridian system of TCM was established several thousand years ago, the rationale of Meridian classification based on ingredient compounds remains poorly understood. A core limitation of traditional machine learning approaches to chemical activity prediction is that they encode molecules into fixed-length vectors, ignoring the structural information of the chemical compound. Therefore, we apply a cost-sensitive graph convolutional neural network model to learn local and global topological features of chemical compounds and discover the associations between TCM and their Meridians. In our experiments, the approach achieves an area under the receiver operating characteristic curve (ROC-AUC) of 0.82, which is better than traditional machine learning algorithms and an 8%-13% improvement over state-of-the-art methods. These results demonstrate the ability of the deep learning approach to learn proper molecular descriptors for Meridian prediction and to provide novel insights into the complementary and alternative medicine of TCM.
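
The abstract does not specify how the cost-sensitivity is implemented; a common approach is class-weighted cross-entropy, sketched here with illustrative probabilities and weights (all values are assumptions, not the paper's):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Cost-sensitive cross-entropy: errors on a rare class are
    penalised more heavily via per-class weights."""
    probs = np.clip(probs, 1e-12, 1.0)
    w = class_weights[labels]
    return np.mean(-w * np.log(probs[np.arange(len(labels)), labels]))

# Illustrative values: two classes, class 1 assumed five times rarer.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
weights = np.array([1.0, 5.0])
```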

Yeh Hsiang-Yuan, Chao Chia-Ter, Lai Yi-Pei, Chen Huei-Wen

2020-Jan-23

Meridian classification, Traditional Chinese Medicine, graph convolutional neural network

Radiology

Fidelity imposed network edit (FINE) for solving ill-posed image reconstruction.

In NeuroImage ; h5-index 117.0

Deep learning (DL) is increasingly used to solve ill-posed inverse problems in medical imaging, such as reconstruction from noisy and/or incomplete data, as DL offers advantages over conventional methods that rely on explicit image features and hand-engineered priors. However, supervised DL-based methods may achieve poor performance when the test data deviate from the training data, for example, when they contain pathologies not encountered in training. Furthermore, DL-based image reconstructions do not always incorporate the underlying forward physical model, even though doing so may improve performance. Therefore, in this work we introduce a novel approach, called fidelity imposed network edit (FINE), which modifies the weights of a pre-trained reconstruction network for each case in the testing dataset. This is achieved by minimizing an unsupervised fidelity loss function that is based on the forward physical model. FINE is applied to two important inverse problems in neuroimaging: quantitative susceptibility mapping (QSM) and under-sampled image reconstruction in MRI. Our experiments demonstrate that FINE can improve reconstruction accuracy.
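
FINE updates the network weights per test case by minimizing a fidelity loss built on the forward model; as a simplified numpy illustration, the same loss can be minimized by gradient descent (here over the reconstruction itself rather than network weights, and with a toy linear forward model - both simplifications, not the paper's setup):

```python
import numpy as np

def fidelity_refine(x0, A, y, lr=0.1, steps=200):
    """Gradient descent on the unsupervised fidelity loss ||A x - y||^2.
    FINE applies this idea to the weights of a pre-trained network for
    each test case; for brevity the update here acts on the
    reconstruction x directly, with a toy linear forward model A."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - y)   # gradient of the fidelity loss
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((8, 4))        # toy forward model
x_true = rng.standard_normal(4)
y = A @ x_true                               # noiseless "measurements"
x0 = x_true + 0.5 * rng.standard_normal(4)   # imperfect initial reconstruction
x_ref = fidelity_refine(x0, A, y)
```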

Zhang Jinwei, Liu Zhe, Zhang Shun, Zhang Hang, Spincemaille Pascal, Nguyen Thanh D, Sabuncu Mert R, Wang Yi

2020-Jan-22

Data fidelity, Deep learning, Inverse problem, Quantitative susceptibility mapping, Under-sampled image reconstruction

General

Coronary Artery Disease Diagnosis; Ranking the Significant Features Using a Random Trees Model.

In International journal of environmental research and public health ; h5-index 73.0

Heart disease is one of the most common diseases in middle-aged citizens. Among the vast number of heart diseases, coronary artery disease (CAD) is considered a common cardiovascular disease with a high death rate. The most popular tool for diagnosing CAD is medical imaging, e.g., angiography. However, angiography is known for being costly and is also associated with a number of side effects. Hence, the purpose of this study is to increase the accuracy of coronary heart disease diagnosis by selecting significant predictive features in order of their ranking. We propose an integrated machine learning method using random trees (RTs), the C5.0 decision tree, the support vector machine (SVM), and the Chi-squared automatic interaction detection (CHAID) decision tree. The proposed method shows promising results, and the study confirms that the RTs model outperforms the other models.
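
The abstract does not describe how the random trees model ranks features; as a crude, self-contained stand-in for tree-based importance (not the paper's algorithm), one can score each feature by the Gini impurity reduction of its best single split:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def stump_importance(X, y):
    """Score each feature by the Gini impurity reduction of its best
    single split, then return feature indices, most informative first."""
    base = gini(y)
    scores = []
    for j in range(X.shape[1]):
        best = 0.0
        for t in np.unique(X[:, j])[:-1]:
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            w = len(left) / len(y)
            best = max(best, base - (w * gini(left) + (1 - w) * gini(right)))
        scores.append(best)
    return np.argsort(scores)[::-1]
```

Full tree ensembles aggregate many such impurity reductions over many splits, which is what makes their rankings more robust than a single stump.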

Joloudari Javad Hassannataj, Joloudari Edris Hassannataj, Saadatfar Hamid, GhasemiGol Mohammad, Razavi Seyyed Mohammad, Mosavi Amir, Nabipour Narjes, Shamshirband Shahaboddin, Nadai Laszlo

2020-Jan-23

big data, coronary artery disease, data science, ensemble model, health informatics, heart disease diagnosis, industry 4.0, machine learning, predictive model, random forest

Cardiology

Application of data mining in a cohort of Italian subjects undergoing myocardial perfusion imaging at an academic medical center.

In Computer methods and programs in biomedicine ; h5-index 0.0

INTRODUCTION : Coronary artery disease (CAD) is still one of the primary causes of death in the developed countries. Stress single-photon emission computed tomography is used to evaluate myocardial perfusion and ventricular function in patients with suspected or known CAD. This study sought to test data mining and machine learning tools and to compare some supervised learning algorithms in a large cohort of Italian subjects with suspected or known CAD who underwent stress myocardial perfusion imaging.

METHODS : The dataset consisted of 10,265 patients with suspected or known CAD. The analysis was conducted using Knime analytics platform in order to implement Random Forests, C4.5, Gradient boosted tree, Naïve Bayes, and K nearest neighbor (KNN) after a procedure of features filtering. K-fold cross-validation was employed.

RESULTS : Accuracy, error, precision, recall, and specificity were computed for the above-mentioned algorithms. Random Forests and gradient boosted trees obtained the highest accuracy (>95%), while for the other algorithms accuracy ranged between 83% and 88%. The highest values for sensitivity and specificity were obtained by C4.5 (99.3%) and by the gradient boosted tree (96.9%), respectively. Naïve Bayes had the lowest precision (70.9%) and specificity (72.0%), and KNN the lowest recall and sensitivity (79.2%).
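
All of the metrics compared across the five algorithms derive from confusion-matrix counts; a minimal helper (the counts below are illustrative only, since the per-algorithm confusion matrices are not given in the abstract):

```python
def clf_metrics(tp, fp, tn, fn):
    """Accuracy, error, precision, recall/sensitivity, and specificity
    from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "error": (fp + fn) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),        # identical to sensitivity
        "specificity": tn / (tn + fp),
    }

# Illustrative counts only:
m = clf_metrics(tp=80, fp=20, tn=850, fn=50)
```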

CONCLUSIONS : The high scores obtained by the algorithms suggest that health facilities should consider including advanced data analysis services to help clinicians in decision-making. Similar applications of this kind of study in other contexts could support this idea.

Ricciardi Carlo, Cantoni Valeria, Improta Giovanni, Iuppariello Luigi, Latessa Imma, Cesarelli Mario, Triassi Maria, Cuocolo Alberto

2020-Jan-16

Analytics platform, Cardiology, Data mining, Decision-making, Myocardial perfusion imaging

Public Health

Association of HLA-DRB1*09:01 with tIgE levels among African ancestry individuals with asthma.

In The Journal of allergy and clinical immunology ; h5-index 0.0

BACKGROUND : Asthma is a complex chronic inflammatory disease of the airways. Association studies between HLA and asthma were first reported in the 1970s, and yet the precise role of HLA alleles in asthma is not fully understood. Numerous genome-wide association studies have recently been conducted on asthma, but they were limited to simple genetic markers (SNPs) rather than complex HLA gene polymorphisms (alleles/haplotypes), and therefore did not capture the biological relevance of this complex locus for asthma pathogenesis.

OBJECTIVE : To run the first HLA-centric association study with asthma and specific asthma-related phenotypes in a large cohort of African ancestry individuals.

METHODS : We collected high-density genomics data for the CAAPA participants (Consortium on Asthma among African-ancestry Populations in the Americas, N=4,993). Using computer-intensive machine-learning attribute bagging methods to infer HLA alleles, and Easy-HLA to infer HLA 5-gene haplotypes, we conducted a high-throughput HLA-centric association study of asthma susceptibility and total serum IgE levels (tIgE) in subjects with and without asthma.

RESULTS : Among the 1,607 individuals with asthma, 972 had available tIgE levels, with a mean tIgE level of 198.7 IU/mL. We could not identify any association with asthma susceptibility. However, we showed that HLA-DRB1*09:01 was associated with increased tIgE levels (P = 8.5×10⁻⁴, weighted effect size 0.51 [0.15-0.87]).

CONCLUSIONS : We identified for the first time an HLA allele associated with tIgE levels in African ancestry individuals with asthma. Our report emphasizes that by leveraging powerful computational machine-learning methods, specific/extreme phenotypes, and population diversity, we can explore HLA gene polymorphisms in depth and reveal the full extent of complex disease associations.

Vince Nicolas, Limou Sophie, Daya Michelle, Morii Wataru, Rafaels Nicholas, Geffard Estelle, Douillard Venceslas, Walencik Alexandre, Boorgula Meher Preethi, Chavan Sameer, Vergara Candelaria, Ortega Victor E, Wilson James G, Lange Leslie A, Watson Harold, Nicolae Dan L, Meyers Deborah A, Hansel Nadia N, Ford Jean G, Faruque Mezbah U, Bleecker Eugene R, Campbell Monica, Beaty Terri H, Ruczinski Ingo, Mathias Rasika A, Taub Margaret A, Ober Carole, Noguchi Emiko, Barnes Kathleen C, Torgerson Dara, Gourraud Pierre-Antoine

2020-Jan-22

Admixture, Asthma, Atopy, CAAPA, HLA, Imputation, tIgE levels

Surgery

Effect of a deep-learning computer-aided detection system on adenoma detection during colonoscopy (CADe-DB trial): a double-blind randomised study.

In The lancet. Gastroenterology & hepatology ; h5-index 0.0

BACKGROUND : Colonoscopy with computer-aided detection (CADe) has been shown in non-blinded trials to improve detection of colon polyps and adenomas by providing visual alarms during the procedure. We aimed to assess the effectiveness of a CADe system that avoids potential operational bias.

METHODS : We did a double-blind randomised trial at the endoscopy centre in Caotang branch hospital of Sichuan Provincial People's Hospital in China. We enrolled consecutive patients (aged 18-75 years) presenting for diagnostic and screening colonoscopy. We excluded patients with a history of inflammatory bowel disease, colorectal cancer, or colorectal surgery or who had a contraindication for biopsy; we also excluded patients who had previously had an unsuccessful colonoscopy and who had a high suspicion for polyposis syndromes, inflammatory bowel disease, and colorectal cancer. We allocated patients (1:1) to colonoscopy with either the CADe system or a sham system. Randomisation was by computer-generated random number allocation. Patients and the endoscopist were unaware of the random assignment. To achieve masking, the output of the system was shown on a second monitor that was only visible to an observer who was responsible for reporting the alerts. The primary outcome was the adenoma detection rate (ADR), which is the proportion of individuals having a complete colonoscopy, from caecum to rectum, who had one or more adenomas detected. The primary analysis was per protocol. We also analysed characteristics of polyps and adenomas missed initially by endoscopists but detected by the CADe system. This trial is complete and is registered with http://www.chictr.org.cn, ChiCTR1800017675.

FINDINGS : Between Sept 3, 2018, and Jan 11, 2019, 1046 patients were enrolled to the study, of whom 36 were excluded before randomisation, 508 were allocated colonoscopy with polyp detection using the CADe system, and 502 were allocated colonoscopy with the sham system. After further excluding patients who met exclusion criteria, 484 patients in the CADe group and 478 in the sham group were included in analyses. The ADR was significantly greater in the CADe group than in the sham group, with 165 (34%) of 484 patients allocated to the CADe system having one or more adenomas detected versus 132 (28%) of 478 allocated to the sham system (odds ratio 1·36, 95% CI 1·03-1·79; p=0·030). No complications were reported among all colonoscopy procedures. Polyps initially missed by the endoscopist but identified by the CADe system were generally small in size, isochromatic, flat in shape, had an unclear boundary, were partly behind colon folds, and were on the edge of the visual field.
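
The reported odds ratio can be reproduced from the counts in the abstract with the standard Wald interval:

```python
import math

def odds_ratio_ci(a, n1, b, n2, z=1.96):
    """Odds ratio and Wald 95% CI for event counts a/n1 vs b/n2."""
    c, d = n1 - a, n2 - b
    or_ = (a / c) / (b / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# ADR counts from the abstract: 165/484 (CADe) vs 132/478 (sham).
or_, lo, hi = odds_ratio_ci(165, 484, 132, 478)
# ≈ 1.36 (1.03-1.78); the paper reports 1.03-1.79, the small difference
# presumably down to rounding or the exact interval method used.
```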

INTERPRETATION : Polyps initially missed by the endoscopist had characteristics that are sometimes difficult for skilled endoscopists to recognise. Such polyps could be detected using a high-performance CADe system during colonoscopy. The effect of CADe during colonoscopy on the incidence of interval colorectal cancer should be investigated.

FUNDING : None.

Wang Pu, Liu Xiaogang, Berzin Tyler M, Glissen Brown Jeremy R, Liu Peixi, Zhou Chao, Lei Lei, Li Liangping, Guo Zhenzhen, Lei Shan, Xiong Fei, Wang Han, Song Yan, Pan Yan, Zhou Guanyu

2020-Jan-22

Oncology

Comparison of Statistical Machine Learning Models for Rectal Protocol Compliance in Prostate External Beam Radiation Therapy.

In Medical physics ; h5-index 59.0

PURPOSE : Limiting the dose to the rectum can be one of the most challenging aspects of creating a dosimetric external beam radiation therapy (EBRT) plan for prostate cancer treatment. Rectal sparing devices such as hydrogel spacers offer the prospect of increased space between the prostate and rectum, causing reduced rectal dose and potentially reduced injury. This study sought to help identify patients at higher risk of developing rectal injury based on estimated rectal dosimetry compliance prior to the EBRT simulation and planning procedure. Three statistical machine learning methods were compared for their ability to predict rectal dose outcomes with varied classification thresholds applied.

METHODS : Prostate cancer patients treated with conventionally fractionated EBRT to a reference dose of 74-78 Gy were invited to participate in the study. The dose volume histogram data from each dosimetric plan were used to quantify the planned rectal volume receiving 50%, 83%, 96%, and 102% of the reference dose. Patients were classified into two groups for each of these dose levels: either meeting tolerance by having a rectal volume less than a clinically acceptable threshold for the dose level (Y) or violating the tolerance by having a rectal volume greater than the threshold for the dose level (N). Logistic regression, classification and regression tree, and random forest models were compared for their ability to discriminate between class outcomes. Performance metrics included area under the receiver operator characteristic curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value. Finally, three classification threshold levels were evaluated for their impact on model performance.

RESULTS : A total of 176 eligible participants were recruited. Variable importance differed between model methods. AUC performance varied greatly across the different rectal dose levels and between models. Logistic regression performed best at the 83% reference dose level with an AUC value of 0.844, while random forest demonstrated best discrimination at the 96% reference dose level with an AUC value of 0.733. In addition to the standard classification probability threshold of 50%, the clinically representative threshold of 10%, and the best threshold from each AUC plot was applied to compare metrics. This showed that using a 50% threshold and the best threshold from the AUC plots yields similar results. Conversely, applying the more conservative clinical threshold of 10% maximised the sensitivity at V83_RD and V96_RD for all model types. Based on the combination of the metrics, logistic regression would be the recommendation for rectal protocol compliance prediction at the 83% reference dose level, and random forest for the 96% reference dose level, particularly when using the clinical probability threshold of 10%.
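
The effect of moving the classification probability threshold from the standard 50% to the conservative clinical 10% can be illustrated directly: more cases are flagged as violations, trading specificity for sensitivity. A sketch with hypothetical predicted probabilities (not the study's data):

```python
import numpy as np

def sens_spec_at_threshold(probs, labels, threshold):
    """Sensitivity and specificity when a case is called a violation (1)
    whenever its predicted probability meets the threshold."""
    preds = probs >= threshold
    pos, neg = labels == 1, labels == 0
    return np.mean(preds[pos]), np.mean(~preds[neg])

# Hypothetical predicted probabilities of rectal-tolerance violation:
probs = np.array([0.05, 0.15, 0.35, 0.45, 0.60, 0.80])
labels = np.array([0, 0, 1, 0, 1, 1])

sens50, spec50 = sens_spec_at_threshold(probs, labels, 0.5)
sens10, spec10 = sens_spec_at_threshold(probs, labels, 0.1)
```

Lowering the threshold to 10% raises sensitivity (fewer missed violations) at the cost of specificity, matching the pattern the study reports at V83_RD and V96_RD.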

CONCLUSIONS : This study demonstrated the efficacy of statistical machine learning models on rectal protocol compliance prediction for prostate cancer EBRT dosimetric planning. Both logistic regression and random forest modelling approaches demonstrated good discriminative ability for predicting class outcomes in the upper dose levels. Application of a conservative clinical classification threshold maximised sensitivity and further confirmed the value of logistic regression and random forest models over classification and regression tree.

Jones Scott, Hargrave Catriona, Deegan Timothy, Holt Tanya, Mengersen Kerrie

2020-Jan-25

Prostate cancer, machine learning, radiation therapy, rectal dose

General

Deep Learning-Based Single-Cell Optical Image Studies.

In Cytometry. Part A : the journal of the International Society for Analytical Cytology ; h5-index 0.0

Optical imaging technology, which has the advantages of high sensitivity and cost-effectiveness, greatly promotes the progress of nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid development of high-throughput imaging flow cytometry, large volumes of cell optical image data are obtained that may require machine learning for analysis. In recent years, deep learning has been prevalent in the field of machine learning for large-scale image processing and analysis, bringing a new dawn for single-cell optical image studies amid an explosive growth in data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, reviewing popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning. Image preprocessing and deep learning model training methods are then summarized. Applications of deep learning techniques in single-cell optical image studies are reviewed, including image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and the design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry is also discussed. Finally, the perspectives of deep learning technology for single-cell optical image analysis are discussed. © 2020 International Society for Advancement of Cytometry.

Sun Jing, Tárnok Attila, Su Xuantao

2020-Jan-25

biomedical image analysis, single-cell analysis, image cytometry, optical microscopy, deep learning, convolutional neural network

Ophthalmology

Automated classification of normal and Stargardt disease optical coherence tomography images using deep learning.

In Acta ophthalmologica ; h5-index 41.0

PURPOSE : Recent advances in deep learning have seen an increase in its application to automated image analysis in ophthalmology for conditions with a high prevalence. We wanted to identify whether deep learning could be used for the automated classification of optical coherence tomography (OCT) images from patients with Stargardt disease (STGD) using a smaller dataset than traditionally used.

METHODS : Sixty participants with STGD and 33 participants with a normal retinal OCT were selected, and a single OCT scan containing the centre of the fovea was selected as the input data. Two approaches were used: Model 1 - a pretrained convolutional neural network (CNN); Model 2 - a new CNN architecture. Both models were evaluated on their accuracy, sensitivity, specificity and Jaccard similarity score (JSS).

RESULTS : In total, 102 OCT scans from participants with a normal retinal OCT and 647 OCT scans from participants with STGD were selected. The highest results were achieved when both models were implemented as a binary classifier: Model 1 - accuracy 99.6%, sensitivity 99.8%, specificity 98.0% and JSS 0.990; Model 2 - accuracy 97.9%, sensitivity 97.9%, specificity 98.0% and JSS 0.976.

CONCLUSION : The deep learning classification models used in this study were able to achieve high accuracy despite using a smaller dataset than traditionally used and are effective in differentiating between normal OCT scans and those from patients with STGD. This preliminary study provides promising results for the application of deep learning to classify OCT images from patients with inherited retinal diseases.
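
The four metrics reported for both models can be computed directly from a binary confusion matrix. A minimal sketch (labels and predictions below are illustrative, with 1 = STGD and 0 = normal; this is not the study's data):

```python
def binary_metrics(y_true, y_pred):
    # Count the four confusion-matrix cells for a binary classifier.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # true-positive rate on STGD scans
    specificity = tn / (tn + fp)   # true-negative rate on normal scans
    jss = tp / (tp + fp + fn)      # Jaccard similarity for the positive class
    return accuracy, sensitivity, specificity, jss
```

Note that the Jaccard score ignores true negatives, which is why it can diverge from accuracy on imbalanced datasets like this one (647 STGD vs. 102 normal scans).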

Shah Mital, Roomans Ledo Ana, Rittscher Jens

2020-Jan-24

Stargardt disease, deep learning, image analysis, machine learning, optical coherence tomography, retinal degeneration

Surgery Surgery

A Comparative Classification Analysis of Abdominal Aortic Aneurysms by Machine Learning Algorithms.

In Annals of biomedical engineering ; h5-index 52.0

The objective of this work was to perform image-based classification of abdominal aortic aneurysms (AAA) based on their demographic, geometric, and biomechanical attributes. We retrospectively reviewed existing demographics and abdominal computed tomography angiography images of 100 asymptomatic and 50 symptomatic AAA patients who received an elective or emergent repair, respectively, within 1-6 months of their last follow up. An in-house script developed within the MATLAB computational platform was used to segment the clinical images, calculate 53 descriptors of AAA geometry, and generate volume meshes suitable for finite element analysis (FEA). Using a third party FEA solver, four biomechanical markers were calculated from the wall stress distributions. Eight machine learning algorithms (MLA) were used to develop classification models based on the discriminatory potential of the demographic, geometric, and biomechanical variables. The overall classification performance of the algorithms was assessed by the accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and precision of their predictions. The generalized additive model (GAM) was found to have the highest accuracy (87%), AUC (89%), and sensitivity (78%), and the third highest specificity (92%), in classifying the individual AAA as either asymptomatic or symptomatic. The k-nearest neighbor classifier yielded the highest specificity (96%). GAM used seven markers (six geometric and one biomechanical) to develop the classifier. The maximum transverse dimension, the average wall thickness at the maximum diameter, and the spatially averaged wall stress were found to be the most influential markers in the classification analysis. A second classification analysis revealed that using maximum diameter alone results in a lower accuracy (79%) than using GAM with seven geometric and biomechanical markers. 
We infer from these results that biomechanical and geometric measures by themselves are not sufficient to discriminate adequately between population samples of asymptomatic and symptomatic AAA, whereas MLA offer a statistical approach to stratification of rupture risk by combining demographic, geometric, and biomechanical attributes of patient-specific AAA.
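
The AUC reported above has a simple rank interpretation: the probability that a randomly chosen symptomatic case receives a higher classifier score than a randomly chosen asymptomatic one. A minimal pure-Python sketch (the scores below are toy values, not the study's data):

```python
def auc(scores_pos, scores_neg):
    # Probability that a random positive outranks a random negative;
    # ties count as half a win. Equivalent to the area under the ROC curve.
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n·m) form is fine for illustration; rank-based formulas are used in practice for large samples.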

Rengarajan Balaji, Wu Wei, Wiedner Crystal, Ko Daijin, Muluk Satish C, Eskandari Mark K, Menon Prahlad G, Finol Ender A

2020-Jan-24

Abdominal aortic aneurysm, Generalized additive model, Image segmentation, Machine learning, Rupture risk evaluation

Internal Medicine Internal Medicine

Application of artificial intelligence using convolutional neural networks in determining the invasion depth of esophageal squamous cell carcinoma.

In Esophagus : official journal of the Japan Esophageal Society ; h5-index 0.0

OBJECTIVES : In Japan, endoscopic resection (ER) is often used to treat esophageal squamous cell carcinoma (ESCC) when invasion depths are diagnosed as EP-SM1, whereas ESCC cases deeper than SM2 are treated by surgical operation or chemoradiotherapy. Therefore, it is crucial to determine the invasion depth of ESCC via preoperative endoscopic examination. Recently, rapid progress in the utilization of artificial intelligence (AI) with deep learning in medical fields has been achieved. In this study, we demonstrate the diagnostic ability of AI to measure ESCC invasion depth.

METHODS : We retrospectively collected 1751 training images of ESCC at the Cancer Institute Hospital, Japan. We developed an AI-diagnostic system of convolutional neural networks using deep learning techniques with these images. Subsequently, 291 test images were prepared and reviewed by the AI-diagnostic system and 13 board-certified endoscopists to evaluate the diagnostic accuracy.

RESULTS : The AI-diagnostic system detected 95.5% (279/291) of the ESCC in test images in 10 s, analyzed the 279 images and correctly estimated the invasion depth of ESCC with a sensitivity of 84.1% and accuracy of 80.9% in 6 s. The accuracy score of this system exceeded those of 12 out of 13 board-certified endoscopists, and its area under the curve (AUC) was greater than the AUCs of all endoscopists.

CONCLUSIONS : The AI-diagnostic system demonstrated a higher diagnostic accuracy for ESCC invasion depth than those of endoscopists and, therefore, can be potentially used in ESCC diagnostics.

Tokai Yoshitaka, Yoshio Toshiyuki, Aoyama Kazuharu, Horie Yoshimasa, Yoshimizu Shoichi, Horiuchi Yusuke, Ishiyama Akiyoshi, Tsuchida Tomohiro, Hirasawa Toshiaki, Sakakibara Yuko, Yamada Takuya, Yamaguchi Shinjiro, Fujisaki Junko, Tada Tomohiro

2020-Jan-24

Artificial intelligence, Esophageal cancer, Squamous cell carcinoma

General General

Benchmarking Deep Learning Architectures for Predicting Readmission to the ICU and Describing Patients-at-Risk.

In Scientific reports ; h5-index 158.0

To compare different deep learning architectures for predicting the risk of readmission within 30 days of discharge from the intensive care unit (ICU). The interpretability of attention-based models is leveraged to describe patients-at-risk. Several deep learning architectures making use of attention mechanisms, recurrent layers, neural ordinary differential equations (ODEs), and medical concept embeddings with time-aware attention were trained using publicly available electronic medical record data (MIMIC-III) associated with 45,298 ICU stays for 33,150 patients. Bayesian inference was used to compute the posterior over weights of an attention-based model. Odds ratios associated with an increased risk of readmission were computed for static variables. Diagnoses, procedures, medications, and vital signs were ranked according to the associated risk of readmission. A recurrent neural network, with time dynamics of code embeddings computed by neural ODEs, achieved the highest average precision of 0.331 (AUROC: 0.739, F1-Score: 0.372). Predictive accuracy was comparable across neural network architectures. Groups of patients at risk included those suffering from infectious complications, with chronic or progressive conditions, and for whom standard medical care was not suitable. Attention-based networks may be preferable to recurrent networks if an interpretable model is required, at only marginal cost in predictive accuracy.
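
The interpretability claim above rests on attention weights, which are softmax-normalized relevance scores over a patient's medical events: each weight indicates how much one event contributed to the readmission prediction. A minimal sketch of that normalization (not the paper's architecture):

```python
import math

def attention_weights(scores):
    # Softmax turns per-event relevance scores into weights summing to 1.
    # Subtracting the max is the standard numerical-stability trick.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```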

Barbieri Sebastiano, Kemp James, Perez-Concha Oscar, Kotwal Sradha, Gallagher Martin, Ritchie Angus, Jorm Louisa

2020-Jan-24

General General

Mobile Real-Time Grasshopper Detection and Data Aggregation Framework.

In Scientific reports ; h5-index 158.0

Insects of the family Acrididae (order Orthoptera), including grasshoppers and locusts, devastate crops and ecosystems around the globe. The effective control of these insects requires large numbers of trained extension agents who try to spot concentrations of the insects on the ground so that they can be destroyed before they take flight. This is a challenging and difficult task. No automatic detection system is yet available to increase scouting productivity, data scale and fidelity. Here we demonstrate MAESTRO, a novel grasshopper detection framework that deploys deep learning within RGB images to detect insects. MAESTRO uses a state-of-the-art two-stage training deep learning approach. The framework can be deployed not only on desktop computers but also on edge devices without internet connection such as smartphones. MAESTRO can gather data using cloud storage for further research and in-depth analysis. In addition, we provide a challenging new open dataset (GHCID) of highly variable grasshopper populations imaged in Inner Mongolia. The detection performance of the stationary method and the mobile app are 78 and 49 percent, respectively; the stationary method requires around 1000 ms to analyze a single image, whereas the mobile app uses only around 400 ms per image. The algorithms are purely data-driven and can be used for other detection tasks in agriculture (e.g. plant disease detection) and beyond. This system can play a crucial role in the collection and analysis of data to enable more effective control of this critical global pest.

Chudzik Piotr, Mitchell Arthur, Alkaseem Mohammad, Wu Yingie, Fang Shibo, Hudaib Taghread, Pearson Simon, Al-Diri Bashir

2020-Jan-24

General General

A Machine Vision-Based Method for Monitoring Scene-Interactive Behaviors of Dairy Calf.

In Animals : an open access journal from MDPI ; h5-index 0.0

Demand for animal and dairy products is increasing gradually in emerging economies. However, it is critical and challenging to maintain the health and welfare of the growing population of dairy cattle, especially dairy calves (up to 20% mortality in China). Animal behaviors reflect considerable information and are used to estimate animal health and welfare. In recent years, machine vision-based methods have been applied to monitor animal behaviors worldwide. Collected image or video information containing animal behaviors can be analyzed programmatically to estimate animal welfare or health indicators. In this study, a new machine vision method (an integration of background subtraction and inter-frame difference) was developed for automatically recognizing dairy calf scene-interactive behaviors (e.g., entering or leaving the resting area, and stationary and turning behaviors in the inlet and outlet area of the resting area). Results show that the recognition success rates for the calf's scene-interactive behaviors of pen entering, pen leaving, staying (standing or lying still), and turning were 94.38%, 92.86%, 96.85%, and 93.51%, respectively. The recognition success rates for feeding and drinking were 79.69% and 81.73%, respectively. This newly developed method provides a basis for building evaluation tools to monitor calves' health and welfare on dairy farms.
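
The combination of background subtraction and inter-frame difference described above can be sketched as a per-pixel motion mask. A minimal NumPy version (the threshold value is illustrative, not the paper's setting):

```python
import numpy as np

def motion_mask(prev_frame, frame, background, thresh=25):
    # A pixel is flagged as motion when it differs from both the background
    # model (background subtraction) and the previous frame (inter-frame
    # difference); requiring both suppresses static noise and lighting drift.
    bg_diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    fr_diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    return bg_diff & fr_diff
```

In a real pipeline the background model would be updated over time (e.g. a running average) rather than held fixed.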

Guo Yangyang, He Dongjian, Chai Lilong

2020-Jan-22

animal behaviors, computer vision, dairy calf, scene-interaction

General General

Pressure injury image analysis with machine learning techniques: A systematic review on previous and possible future methods.

In Artificial intelligence in medicine ; h5-index 34.0

Pressure injuries represent a tremendous healthcare challenge in many nations. Elderly and disabled people are the most affected by this fast-growing condition. Hence, an accurate diagnosis of pressure injuries is paramount for efficient treatment. The characteristics of these wounds are crucial indicators of healing progress. While invasive methods of retrieving information are not only painful to patients but may also increase the risk of infection, non-invasive imaging techniques provide better monitoring of wound healing processes without causing any harm to patients. These systems should include an accurate segmentation of the wound, the classification of its tissue types, metrics including diameter, area and volume, as well as healing evaluation. Therefore, the aim of this survey is to provide the reader with an overview of imaging techniques for the analysis and monitoring of pressure injuries as an aid to their diagnosis, and evidence of the effectiveness of deep learning in addressing this problem, even outperforming previous methods. In this paper, 114 out of 199 papers retrieved from 8 databases have been analyzed, including contributions on chronic wounds and skin lesions.

Zahia Sofia, Garcia Zapirain Maria Begoña, Sevillano Xavier, González Alejandro, Kim Paul J, Elmaghraby Adel

2020-Jan

Deep learning, Machine learning algorithms, Pressure injury, Wound image analysis

General General

An enhanced deep learning approach for brain cancer MRI images classification using residual networks.

In Artificial intelligence in medicine ; h5-index 34.0

Cancer is the second leading cause of death after cardiovascular diseases. Of all types of cancer, brain cancer has the lowest survival rate. Brain tumors can be of different types depending on their shape, texture, and location. Proper diagnosis of the tumor type enables the doctor to make the correct treatment choice and helps save the patient's life. There is a high need in the artificial intelligence field for a Computer Assisted Diagnosis (CAD) system to assist doctors and radiologists with the diagnosis and classification of tumors. Over recent years, deep learning has shown promising performance in computer vision systems. In this paper, we propose an enhanced approach for classifying brain tumor types using residual networks. We evaluate the proposed model on a benchmark dataset containing 3064 MRI images of 3 brain tumor types (meningiomas, gliomas, and pituitary tumors). We achieved an accuracy of 99%, outperforming previous work on the same dataset.

Abdelaziz Ismael Sarah Ali, Mohammed Ammar, Hefny Hesham

2020-Jan

Artificial neural network, Cancer classification, Convolutional neural network, Deep residual network, Machine learning

General General

Predicting dementia with routine care EMR data.

In Artificial intelligence in medicine ; h5-index 34.0

Our aim is to develop a machine learning (ML) model that can predict dementia in a general patient population from multiple health care institutions one year and three years prior to the onset of the disease without any additional monitoring or screening. The purpose of the model is to automate the cost-effective, non-invasive, digital pre-screening of patients at risk for dementia. Towards this purpose, routine care data, which is widely available through Electronic Medical Record (EMR) systems is used as a data source. These data embody a rich knowledge and make related medical applications easy to deploy at scale in a cost-effective manner. Specifically, the model is trained by using structured and unstructured data from three EMR data sets: diagnosis, prescriptions, and medical notes. Each of these three data sets is used to construct an individual model along with a combined model which is derived by using all three data sets. Human-interpretable data processing and ML techniques are selected in order to facilitate adoption of the proposed model by health care providers from multiple institutions. The results show that the combined model is generalizable across multiple institutions and is able to predict dementia within one year of its onset with an accuracy of nearly 80% despite the fact that it was trained using routine care data. Moreover, the analysis of the models identified important predictors for dementia. Some of these predictors (e.g., age and hypertensive disorders) are already confirmed by the literature while others, especially the ones derived from the unstructured medical notes, require further clinical analysis.

Ben Miled Zina, Haas Kyle, Black Christopher M, Khandker Rezaul Karim, Chandrasekaran Vasu, Lipton Richard, Boustani Malaz A

2020-Jan

Dementia, EMR, Machine learning, Prediction, Random forest

oncology Oncology

Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors.

In Artificial intelligence in medicine ; h5-index 34.0

Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in diagnosis and grading of brain tumors. Although manual DCE biomarker extraction algorithms boost the diagnostic yield of DCE-MRI by providing quantitative information on tumor prognosis and prediction, they are time-consuming and prone to human errors. In this paper, we propose a fully-automated, end-to-end system for DCE-MRI analysis of brain tumors. Our deep learning-powered technique does not require any user interaction, it yields reproducible results, and it is rigorously validated against benchmark and clinical data. Also, we introduce a cubic model of the vascular input function used for pharmacokinetic modeling which significantly decreases the fitting error when compared with the state of the art, alongside a real-time algorithm for determination of the vascular input region. An extensive experimental study, backed up with statistical tests, showed that our system delivers state-of-the-art results while requiring less than 3 min to process an entire input DCE-MRI study using a single GPU.

Nalepa Jakub, Ribalta Lorenzo Pablo, Marcinkiewicz Michal, Bobek-Billewicz Barbara, Wawrzyniak Pawel, Walczak Maksym, Kawulok Michal, Dudzik Wojciech, Kotowski Krzysztof, Burda Izabela, Machura Bartosz, Mrukwa Grzegorz, Ulrych Pawel, Hayball Michael P

2020-Jan

Brain, DCE-MRI, Deep neural network, Perfusion, Pharmacokinetic model, Tumor segmentation

General General

SemBioNLQA: A semantic biomedical question answering system for retrieving exact and ideal answers to natural language questions.

In Artificial intelligence in medicine ; h5-index 34.0

BACKGROUND AND OBJECTIVE : Question answering (QA), the identification of short accurate answers to users' questions written in natural language, is a longstanding issue widely studied over the last decades in the open domain. However, it still remains a real challenge in the biomedical domain, as most existing systems support a limited number of question and answer types and still require further efforts to improve their precision on the supported questions. Here, we present a semantic biomedical QA system named SemBioNLQA, which has the ability to handle yes/no, factoid, list, and summary natural language questions.

METHODS : This paper describes the system architecture and an evaluation of the developed end-to-end biomedical QA system named SemBioNLQA, which consists of question classification, document retrieval, passage retrieval and answer extraction modules. It takes natural language questions as input, and outputs both short precise answers and summaries as results. The SemBioNLQA system, dealing with four types of questions, is based on (1) handcrafted lexico-syntactic patterns and a machine learning algorithm for question classification, (2) PubMed search engine and UMLS similarity for document retrieval, (3) the BM25 model, stemmed words and UMLS concepts for passage retrieval, and (4) UMLS metathesaurus, BioPortal synonyms, sentiment analysis and term frequency metric for answer extraction.

RESULTS AND CONCLUSION : Compared with current state-of-the-art biomedical QA systems, SemBioNLQA, a fully automated system, has the potential to deal with a wide range of question and answer types. SemBioNLQA quickly meets users' information needs by returning exact answers (e.g., "yes", "no", a biomedical entity name, etc.) and ideal answers (i.e., paragraph-sized summaries of relevant information) for yes/no, factoid and list questions, whereas it provides only ideal answers for summary questions. Moreover, experimental evaluations performed on biomedical questions and answers provided by the BioASQ challenge in 2015, 2016 and 2017 (as part of our participation) show that SemBioNLQA performs well compared with the most recent state-of-the-art systems and offers a practical and competitive alternative to help information seekers find exact and ideal answers to their biomedical questions. The SemBioNLQA source code is publicly available at https://github.com/sarrouti/sembionlqa.
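
Passage retrieval in SemBioNLQA is based on the BM25 model. A compact sketch of BM25 scoring over tokenized passages (the token lists are illustrative, and the parameter defaults k1=1.5, b=0.75 are common choices, not necessarily the system's):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    # BM25 relevance of one tokenized passage (doc_terms) to a query;
    # `corpus` is the list of all tokenized passages, used for document
    # frequency and average passage length.
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    n = len(corpus)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
        tf = doc_terms.count(term)
        score += idf * tf * (k1 + 1) / (
            tf + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score
```

k1 controls term-frequency saturation and b the strength of length normalization; passages are then ranked by score.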

Sarrouti Mourad, Ouatik El Alaoui Said

2020-Jan

BioASQ, Biomedical informatics, Biomedical question answering, Information retrieval, Machine learning, Natural language processing, Passage retrieval

General General

Design and development of human computer interface using electrooculogram with deep learning.

In Artificial intelligence in medicine ; h5-index 34.0

Assistive devices play a significant role in helping paralyzed people communicate with others, and Electrooculogram (EOG)-based Human Computer Interfaces (HCI) are a key modality that can outperform conventional methods in performance and accuracy. We analyzed EOG signals from twenty subjects to design a nine-state EOG-based HCI, using a five-electrode system to measure horizontal and vertical eye movements. Signals were preprocessed to remove artifacts, features were extracted using band power and the Hilbert-Huang Transform (HHT), and a Pattern Recognition Neural Network (PRNN) was trained to classify the tasks. Classification accuracies of 92.17% and 91.85% were obtained for band power and HHT features, respectively, with the PRNN architecture. Recognition accuracy was analyzed offline to assess the feasibility of the HCI design. Comparing the two feature extraction techniques with the PRNN, band power features showed better classification and single-trial recognition accuracy than HHT features. We also compared performance between male and female subjects and across age groups, and found that male subjects performed better than female subjects, and that subjects aged 26 to 32 showed the highest performance and recognition accuracy among the age groups used in this study.
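
Band power, the better-performing feature in both EOG studies above, can be estimated from the periodogram of a signal segment. A minimal sketch (the sampling rate and frequency band below are illustrative, not the studies' settings):

```python
import numpy as np

def band_power(signal, fs, f_low, f_high):
    # Estimate power within [f_low, f_high] Hz from the periodogram:
    # FFT the segment, square the magnitudes, and sum the bins in the band.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_low) & (freqs <= f_high)
    return psd[mask].sum()
```

In practice a windowed estimator such as Welch's method is usually preferred over the raw periodogram for noisy biosignals.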

Teng Geer, He Yue, Zhao Hengjun, Liu Dunhu, Xiao Jin, Ramkumar S

2020-Jan

Amyotrophic lateral sclerosis (ALS), Band Power (BP), Electrooculogram (EOG), Human Computer Interface (HCI), Pattern Recognition Neural Network (PRNN)

Surgery Surgery

State recognition of decompressive laminectomy with multiple information in robot-assisted surgery.

In Artificial intelligence in medicine ; h5-index 34.0

The decompressive laminectomy is a common operation for the treatment of lumbar spinal stenosis. Grinding and drilling tools are used for fenestration and internal fixation, respectively. State recognition is one of the main technologies in robot-assisted surgery, especially in tele-surgery, because surgeons have limited perception during remote-controlled robot-assisted surgery. The novelty of this paper is a state recognition system proposed for robot-assisted tele-surgery. By combining learning methods and traditional methods, the slave-end robot can reason about the current operation state like a surgeon and provide more information and decision suggestions to the master-end surgeon, helping surgeons work more safely in tele-surgery. For the fenestration, we propose an image-based state recognition method consisting of a U-Net-derived network, grayscale redistribution and a dynamic receptive field, which assists in controlling the grinding process to prevent the grinding bit from crossing the inner edge of the lamina and damaging the spinal nerves. For the internal fixation, we propose an audio- and force-based state recognition method consisting of signal feature extraction methods, LSTM-based prediction and information fusion, which assists in monitoring the drilling process to prevent the drilling bit from crossing the outer edge of the vertebral pedicle and damaging the spinal nerves. Several experiments are conducted to show the reliability of the proposed system in robot-assisted surgery.

Sun Yu, Wang Li, Jiang Zhongliang, Li Bing, Hu Ying, Tian Wei

2020-Jan

Information fusion, Medical robot, Semantic segmentation, State recognition, Tele-surgery

General General

Clinical Decision Support Systems for Triage in the Emergency Department using Intelligent Systems: a Review.

In Artificial intelligence in medicine ; h5-index 34.0

MOTIVATION : Emergency Departments' (ED) modern triage systems implemented worldwide are solely based upon medical knowledge and experience. This is a limitation of these systems, since there might be hidden patterns that can be explored in big volumes of clinical historical data. Intelligent techniques can be applied to these data to develop clinical decision support systems (CDSS) thereby providing the health professionals with objective criteria. Therefore, it is of foremost importance to identify what has been hampering the application of such systems for ED triage.

OBJECTIVES : The objective of this paper is to assess how intelligent CDSS for triage have been contributing to the improvement of quality of care in the ED as well as to identify the challenges they have been facing regarding implementation.

METHODS : We applied a standard scoping review method with the manual search of 6 digital libraries, namely: ScienceDirect, IEEE Xplore, Google Scholar, Springer, MedlinePlus and Web of Knowledge. Search queries were created and customized for each digital library in order to acquire the information. The core search consisted of searching in the papers' title, abstract and key words for the topics "triage", "emergency department"/"emergency room" and concepts within the field of intelligent systems.

RESULTS : From the review search, we found that logistic regression was the most frequently used technique for model design, and the area under the receiver operating characteristic curve (AUC) the most frequently used performance measure. Besides triage priority, the most frequently used variables for modelling were patients' age, gender, vital signs and chief complaints. The main contributions of the selected papers consisted in improving patient prioritization and predicting the need for critical care, hospital or Intensive Care Unit (ICU) admission, ED Length of Stay (LOS) and mortality from information available at triage.

CONCLUSIONS : In the papers where CDSS were validated in the ED, the authors found that there was an improvement in the health professionals' decision-making thereby leading to better clinical management and patients' outcomes. However, we found that more than half of the studies lacked this implementation phase. We concluded that for these studies, it is necessary to validate the CDSS and to define key performance measures in order to demonstrate the extent to which incorporation of CDSS at triage can actually improve care.

Fernandes Marta, Vieira Susana M, Leite Francisca, Palos Carlos, Finkelstein Stan, Sousa João M C

2020-Jan

CDSS, Critical care, EHR, Machine learning, Triage

Ophthalmology Ophthalmology

Ophthalmic diagnosis using deep learning with fundus images - A critical review.

In Artificial intelligence in medicine ; h5-index 34.0

An overview of the applications of deep learning for ophthalmic diagnosis using retinal fundus images is presented. We describe various retinal image datasets that can be used for deep learning purposes. Applications of deep learning for segmentation of optic disk, optic cup, blood vessels as well as detection of lesions are reviewed. Recent deep learning models for classification of diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy are also discussed. Important critical insights and future research directions are given.

Sengupta Sourya, Singh Amitojdeep, Leopold Henry A, Gulati Tanmay, Lakshminarayanan Vasudevan

2020-Jan

Classification, Deep learning, Fundus image datasets, Fundus photos, Image segmentation, Ophthalmology, Retina

Surgery Surgery

Skin cancer diagnosis based on optimized convolutional neural network.

In Artificial intelligence in medicine ; h5-index 34.0

Early detection of skin cancer is very important and can prevent some skin cancers, such as focal cell carcinoma and melanoma, although several factors adversely affect detection precision. Recently, the use of image processing and machine vision in medical applications has been increasing. In this paper, a new image-processing-based method is proposed for the early detection of skin cancer. The method utilizes an optimized convolutional neural network (CNN) for this purpose, with an improved whale optimization algorithm used to optimize the CNN. For evaluation, the proposed method is compared with several other methods on two different datasets. Simulation results show that the proposed method outperforms the compared methods.

Zhang Ni, Cai Yi-Xin, Wang Yong-Yong, Tian Yi-Tao, Wang Xiao-Li, Badami Benjamin

2020-Jan

Convolutional neural networks, Deep learning, Lévy flight, Skin cancer diagnosis, Whale optimization algorithm

General General

Structural instability and divergence from conserved residues underlie intracellular retention of mammalian odorant receptors.

In Proceedings of the National Academy of Sciences of the United States of America ; h5-index 0.0

Mammalian odorant receptors are a diverse and rapidly evolving set of G protein-coupled receptors expressed in olfactory cilia membranes. Most odorant receptors show little to no cell surface expression in nonolfactory cells due to endoplasmic reticulum retention, which has slowed down biochemical studies. Here we provide evidence that structural instability and divergence from conserved residues of individual odorant receptors underlie intracellular retention using a combination of large-scale screening of odorant receptors cell surface expression in heterologous cells, point mutations, structural modeling, and machine learning techniques. We demonstrate the importance of conserved residues by synthesizing consensus odorant receptors that show high levels of cell surface expression similar to conventional G protein-coupled receptors. Furthermore, we associate in silico structural instability with poor cell surface expression using molecular dynamics simulations. We propose an enhanced evolutionary capacitance of olfactory sensory neurons that enable the functional expression of odorant receptors with cryptic mutations.

Ikegami Kentaro, de March Claire A, Nagai Maira H, Ghosh Soumadwip, Do Matthew, Sharma Ruchira, Bruguera Elise S, Lu Yueyang Eric, Fukutani Yosuke, Vaidehi Nagarajan, Yohda Masafumi, Matsunami Hiroaki

2020-Jan-23

GPCR, olfaction, protein trafficking

Public Health

Who is at risk of 13-valent conjugated pneumococcal vaccine failure?

In Vaccine ; h5-index 70.0

BACKGROUND : Despite high vaccine coverage rates in children and efficacy of pneumococcal conjugate vaccines, invasive pneumococcal disease (IPD) episodes due to serotypes included in the vaccine following completion of the recommended course of immunisation (i.e. vaccine failure) have been reported.

METHODS : We used data gathered from a population-based enhanced passive surveillance for IPD in children under 18 years of age in Massachusetts and an ensemble model composed of three machine-learning algorithms to predict the probability of 13-valent pneumococcal conjugated vaccine (PCV13) failure and to evaluate potentially associated features including age, underlying comorbidity, clinical presentation, and vaccine schedule. Vaccine failure was defined as a diagnosis of IPD due to a vaccine serotype (VST) in a child who had received the age-appropriate doses recommended by the Advisory Committee on Immunization Practices.

RESULTS : During the 7-year study period, between April 01, 2010 and March 31, 2017, we identified 296 IPD cases. There were 107 (36%) IPD cases caused by VST, mostly serotype 19A (49, 17%), 7F (21, 7%), and 3 (18, 6%). Thirty-seven (34%) occurred in completely vaccinated children, representing 13% of all IPD cases. Vaccine failure was more likely among children older than 60 months (predicted probability 0.40, observed prevalence 0.37, model prediction accuracy 79%), children presenting with pneumonia (predicted probability 0.27, observed prevalence 0.31, model accuracy 77%), and children with underlying comorbidity (predicted probability 0.24, observed prevalence 0.23, model accuracy 96%). The vaccine failure probability for children >60 months of age with an underlying risk factor was 45% (observed prevalence 0.33, model accuracy 82%). The likelihood of vaccine failure was lowest among children who had completed 3 primary doses plus one booster dose of PCV13 (predicted probability 0.14, observed prevalence 0.14, model prediction accuracy 100%).

CONCLUSION : PCV13 vaccine failure is more frequent among older children with underlying comorbidity, and among those who present with pneumococcal pneumonia. Our study provides a preliminary framework to predict the patterns of vaccine failures and may contribute to decision-making processes to optimize PCV immunization schedules.

Yildirim Melike, Keskinocak Pinar, Pelton Stephen, Pickering Larry, Yildirim Inci

2020-Jan-20

13-valent conjugated pneumococcal vaccine, Children, Vaccine failure

General

Validation of deep-learning image reconstruction for coronary computed tomography angiography: Impact on noise, image quality and diagnostic accuracy.

In Journal of cardiovascular computed tomography ; h5-index 0.0

BACKGROUND : Advances in image reconstruction are necessary to decrease radiation exposure from coronary CT angiography (CCTA) further, but iterative reconstruction has been shown to degrade image quality at high levels. Deep-learning image reconstruction (DLIR) offers unique opportunities to overcome these limitations. The present study compared the impact of DLIR and adaptive statistical iterative reconstruction-Veo (ASiR-V) on quantitative and qualitative image parameters and the diagnostic accuracy of CCTA using invasive coronary angiography (ICA) as the standard of reference.

METHODS : This retrospective study includes 43 patients who underwent clinically indicated CCTA and ICA. Datasets were reconstructed with ASiR-V 70% (using standard [SD] and high-definition [HD] kernels) and with DLIR at different levels (i.e., medium [M] and high [H]). Image noise, image quality, and coronary luminal narrowing were evaluated by three blinded readers. Diagnostic accuracy was compared against ICA.

RESULTS : Noise did not significantly differ between ASiR-V SD and DLIR-M (37 vs. 37 HU, p = 1.000), but was significantly lower in DLIR-H (30 HU, p < 0.001) and higher in ASiR-V HD (53 HU, p < 0.001). Image quality was higher for DLIR-M and DLIR-H (3.4-3.8 and 4.2-4.6) compared to ASiR-V SD and HD (2.1-2.7 and 1.8-2.2; p < 0.001), with DLIR-H yielding the highest image quality. Consistently across readers, no significant differences in sensitivity (88% vs. 92%; p = 0.453), specificity (73% vs. 73%; p = 0.583) and diagnostic accuracy (80% vs. 82%; p = 0.366) were found between ASiR-V HD and DLIR-H.

CONCLUSION : DLIR significantly reduces noise in CCTA compared to ASiR-V, while yielding superior image quality at equal diagnostic accuracy.

Benz Dominik C, Benetos Georgios, Rampidis Georgios, von Felten Elia, Bakula Adam, Sustar Aleksandra, Kudura Ken, Messerli Michael, Fuchs Tobias A, Gebhard Catherine, Pazhenkottil Aju P, Kaufmann Philipp A, Buechel Ronny R

2020-Jan-13

ASiR-V, Adaptive statistical iterative reconstruction-veo, Coronary CT angiography, DLIR, Deep-learning image reconstruction, Diagnostic accuracy, Image quality

General

Early-life stressful events and suicide attempt in schizophrenia: Machine learning models.

In Schizophrenia research ; h5-index 61.0

Childhood abuse and neglect predict suicide attempt. Furthermore, other early-life stressful events may predict lifetime suicide attempt in psychiatric disorders. We assessed 189 patients with schizophrenia for suicide attempt and stressful life events. Early-life stressful events were used as predictors of lifetime suicide attempt in three machine learning models. In our sample, 38% of the patients had at least one lifetime suicide attempt. The machine learning models provided an overall significant prediction (accuracy range: 62-69%). Childhood sexual molestation and mental illness were important predictors of suicide attempt. Early-life stressful events should be included in models aiming to predict suicide attempt in schizophrenia.

Tasmim Samia, Dada Oluwagbenga, Wang Kevin Z, Bani-Fatemi Ali, Strauss John, Adanty Christopher, Graff Ariel, Gerretsen Philip, Zai Clement, Borlido Carol, De Luca Vincenzo

2020-Jan-20

Early-life adversities, Machine learning, Schizophrenia, Suicide, Traumatic events

General

Pregnancy outcomes and perinatal complications of Asian mothers with juvenile idiopathic arthritis - a case-control registry study.

In Pediatric rheumatology online journal ; h5-index 0.0

BACKGROUND : In order to provide juvenile idiopathic arthritis (JIA) patients with better pre-conceptional and prenatal counselling, we investigated the obstetrical and neonatal outcomes among women of Asian descent.

METHODS : Through the linkage of the Taiwan National Health Insurance database and the National Birth Registry, we established a population-based birth cohort in Taiwan between 2004 and 2014. In a case-control study design, first children born to mothers with JIA were identified and matched with 5 non-JIA controls by maternal age and birth year. Conditional logistic regression was used to calculate crude and adjusted odds ratios for maternal and neonatal outcomes.

RESULTS : Of the 2,100,143 newborns, 778 (0.037%) were born to mothers with JIA. Among them, 549 first-born children were included in this research. Our results suggest that babies born to mothers with JIA were more likely to have low birth weight, with an adjusted OR of 1.35 (95% CI: 1.02 to 1.79) compared to babies born to mothers without JIA. No differences were observed in other perinatal complications between women with and without JIA, including stillbirth, prematurity, or small for gestational age. The rates of adverse obstetrical outcomes such as caesarean delivery, preeclampsia, gestational diabetes, postpartum hemorrhage and mortality were also similar between the two groups.

CONCLUSIONS : Adverse obstetrical and neonatal outcomes were limited among Asian mothers with JIA. Intensive care may not be necessary for JIA mothers and their newborns.

Zhang-Jian Shang Jun, Yang Huang-Yu, Chiu Meng-Jun, Chou I-Jun, Kuo Chang-Fu, Huang Jing-Long, Yeh Kuo-Wei, Wu Chao-Yi

2020-Jan-23

Juvenile idiopathic arthritis, Outcomes research, Pregnancy

General

Identification of patients with atrial fibrillation, a big data exploratory analysis of the UK Biobank.

In Physiological measurement ; h5-index 36.0

Atrial fibrillation (AF) is the most common cardiac arrhythmia, with an estimated prevalence of around 1.6% in the adult population. The analysis of the electrocardiogram (ECG) data acquired in the UK Biobank represents an opportunity to screen for AF in a large sub-population in the UK. The main objective of this paper is to assess ten machine-learning methods for automated detection of subjects with AF in the UK Biobank dataset. Six classical machine-learning methods based on Support Vector Machines are proposed and compared with state-of-the-art techniques (including a deep-learning algorithm), and finally with a combination of classical machine-learning and deep-learning approaches. Evaluation is carried out on a subset of the UK Biobank dataset, manually annotated by human experts. The combined classical machine-learning and deep-learning method achieved an F1 score of 84.8% on the test subset, and a Cohen's kappa coefficient of 0.83, which is similar to the inter-observer agreement of two human experts. This level of performance indicates that automated detection of AF in patients whose data have been stored in a large database, such as the UK Biobank, is possible. Such automated identification of AF patients would enable further investigations aimed at identifying the different phenotypes associated with AF.

Oster Julien, Hopewell Jemma C, Ziberna Klemen, Wijesurendra Rohan, Camm Christian F, Casadei Barbara, Tarassenko Lionel

2020-Jan-24

Electrocardiogram, atrial fibrillation, big data, biobank, machine learning, signal processing
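The evaluation above reports an F1 score and a Cohen's kappa coefficient against expert annotations. Both follow directly from the binary confusion matrix; the snippet below shows the standard formulas (a generic metric sketch, not the authors' evaluation code).

```python
def binary_f1_and_kappa(y_true, y_pred):
    """F1 score and Cohen's kappa for binary labels (1 = AF, 0 = non-AF)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = tp + tn + fp + fn
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    p_obs = (tp + tn) / n                                   # observed agreement
    # Chance agreement from the marginal label frequencies of both raters.
    p_exp = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp) if p_exp != 1 else 1.0
    return f1, kappa
```

The same kappa formula applies when comparing two human annotators, which is what makes the classifier-vs-expert and expert-vs-expert agreements directly comparable.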

General

A novel transcranial ultrasound imaging method with diverging wave transmission and deep learning approach.

In Computer methods and programs in biomedicine ; h5-index 0.0

Real-time transcranial brain ultrasound imaging is extremely intriguing because of its numerous applications. However, the skull causes phase distortion and amplitude attenuation of ultrasound signals due to its density: the speed of sound is significantly different in bone tissue than in soft tissue. In this study, we propose an ultrafast transcranial ultrasound imaging technique with diverging wave (DW) transmission and a deep learning approach to achieve a large field of view with high resolution and real-time brain ultrasound imaging. DW transmission provides a frame rate of several kilohertz and a large field of view that is suitable for human brain imaging via a small acoustic window. However, it suffers from poor image quality because the diverging waves are all unfocused. Here, we adopted adaptive beamforming algorithms to improve both the image contrast and the lateral resolution. Both simulated and in situ experiments with a human skull resulted in significant image improvements. However, the skull still introduces a wavefront offset and distortion, which degrades the image quality even when adaptive beamforming methods are used. Thus, we also employed a U-Net neural network to detect the contour and position of the skull directly from the acquired RF signal matrix. This approach avoids the need for beamforming, image reconstruction, and image segmentation, making it more suitable for clinical use.

Du Bin, Wang Jinyan, Zheng Haoteng, Xiao Chenhui, Fang Siyuan, Lu Minhua, Mao Rui

2019-Dec-30

Adaptive beamforming, Coherence diverging wave compounding, Deep learning, Transcranial ultrasound imaging

General

Revisiting the value of polysomnographic data in insomnia: more than meets the eye.

In Sleep medicine ; h5-index 0.0

BACKGROUND : Polysomnography (PSG) is not recommended as a diagnostic tool in insomnia. However, this consensual approach might be tempered in the light of two ongoing transformations in sleep research: big data and artificial intelligence (AI).

METHOD : We analyzed the PSG of 347 patients with chronic insomnia, including 59 with Sleep State Misperception (SSM) and 288 without (INS). Eighty-nine good sleepers (GS) were used as controls. PSGs were compared regarding: (1) macroscopic indexes derived from the hypnogram, (2) mesoscopic indexes extracted from the electroencephalographic (EEG) spectrum, and (3) sleep microstructure (slow waves, spindles). We used supervised algorithms to differentiate patients from GS.

RESULTS : Macroscopic features illustrate the insomnia conundrum, with SSM patients displaying sleep metrics similar to those of GS, whereas INS patients show deteriorated sleep. However, both SSM and INS patients showed marked differences in EEG spectral components (meso) compared to GS, with reduced power in the delta band and increased power in the theta/alpha, sigma and beta bands. INS and SSM patients showed a decreased spectral slope in NREM. INS and SSM patients also differed from GS in sleep microstructure, with fewer and slower slow waves and more and faster sleep spindles. Importantly, SSM and INS patients were almost indistinguishable at the meso and micro levels. Accordingly, unsupervised classifiers can reliably categorize insomnia patients and GS (Cohen's κ = 0.87) but fail to tease apart SSM and INS patients when restricting classifiers to micro and meso features (κ = 0.004).

CONCLUSION : AI analyses of PSG recordings can help move insomnia diagnosis beyond subjective complaints and shed light on the physiological substrate of insomnia.

Andrillon Thomas, Solelhac Geoffroy, Bouchequet Paul, Romano Francesco, Le Brun Max-Pol, Brigham Marco, Chennaoui Mounir, Léger Damien

2019-Dec-13

Artificial intelligence, Insomnia, Machine learning, NREM sleep, Polysomnography, REM

Public Health

Predicting major neurologic improvement and long-term outcome after thrombolysis using artificial neural networks.

In Journal of the neurological sciences ; h5-index 0.0

OBJECTIVE : To develop artificial neural network (ANN)-based functional outcome prediction models for patients with acute ischemic stroke (AIS) receiving intravenous thrombolysis based on immediate pretreatment parameters.

METHODS : The derived cohort consisted of 196 patients with AIS treated with intravenous thrombolysis between 2009 and 2017 at Shuang Ho Hospital in Taiwan. We evaluated the predictive value of parameters associated with major neurologic improvement (MNI) at 24 h after thrombolysis as well as the 3-month outcome. ANN models were applied for outcome prediction. The generalizability of the model was assessed through 5-fold cross-validation. The performance of the models was assessed according to the accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).

RESULTS : The parameters associated with MNI were blood pressure (BP), heart rate, glucose level, consciousness level, National Institutes of Health Stroke Scale (NIHSS) score, and history of diabetes mellitus (DM). The parameters associated with the 3-month outcome were age, consciousness level, BP, glucose level, hemoglobin A1c, history of DM, stroke subtype, and NIHSS score. After adequate training, ANN Model 1 to predict MNI achieved an AUC of 0.944. Accuracy, sensitivity, and specificity were 94.6%, 89.8%, and 95.9%, respectively. ANN Model 2 to predict the 3-month outcome achieved an AUC of 0.933, with accuracy, sensitivity, and specificity of 88.8%, 94.7%, and 86.5%, respectively.

CONCLUSIONS : The ANN-based models achieved reliable performance to predict MNI and 3-month outcomes after thrombolysis for AIS. The models proposed have clinical value to assist in decision-making, especially when invasive adjuvant strategies are considered.

Chung Chen-Chih, Hong Chien-Tai, Huang Yao-Hsien, Su Emily Chia-Yu, Chan Lung, Hu Chaur-Jong, Chiu Hung-Wen

2020-Jan-03

Artificial intelligence, Artificial neural network, Outcome, Prediction, Stroke, Thrombolysis

Cardiology

Dynamic coronary roadmapping via catheter tip tracking in X-ray fluoroscopy with deep learning based Bayesian filtering.

In Medical image analysis ; h5-index 0.0

Percutaneous coronary intervention (PCI) is typically performed with image guidance using X-ray angiograms in which coronary arteries are opacified with X-ray-opaque contrast agents. Interventional cardiologists typically navigate instruments using non-contrast-enhanced fluoroscopic images, since higher use of contrast agents increases the risk of kidney failure. When using fluoroscopic images, the interventional cardiologist needs to rely on a mental anatomical reconstruction. This paper reports on the development of a novel dynamic coronary roadmapping approach for improving visual feedback and reducing contrast use during PCI. The approach compensates for cardiac- and respiratory-induced vessel motion by ECG alignment and catheter tip tracking in X-ray fluoroscopy, respectively. In particular, for accurate and robust tracking of the catheter tip, we propose a new deep learning based Bayesian filtering method that integrates the detection outcome of a convolutional neural network and the motion estimation between frames in a particle filtering framework. The proposed roadmapping and tracking approaches were validated on clinical X-ray images, achieving accurate performance in both catheter tip tracking and dynamic coronary roadmapping experiments. In addition, our approach runs in real time on a computer with a single GPU and has the potential to be integrated into the clinical workflow of PCI procedures, providing cardiologists with visual guidance during interventions without the need for extra contrast agent.

Ma Hua, Smal Ihor, Daemen Joost, Walsum Theo van

2020-Jan-11

Bayesian filtering, Catheter tip tracking, Deep learning, Dynamic coronary roadmapping, Particle filter, X-ray fluoroscopy
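The tracker described above integrates a CNN's detection output with inter-frame motion estimation inside a particle filter. The abstract gives no equations, so the sketch below shows only the generic predict-weight-resample loop on synthetic 2-D tip positions, with a Gaussian likelihood around a noisy detection standing in for the CNN detection map; all noise levels and particle counts are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_track(detections, n_particles=1000, motion_std=0.3, meas_std=0.2):
    """Track a 2-D point from noisy per-frame detections (predict/weight/resample)."""
    particles = rng.normal(detections[0], meas_std, size=(n_particles, 2))
    estimates = []
    for z in detections:
        # Predict: random-walk motion model (stand-in for inter-frame motion estimation).
        particles += rng.normal(0.0, motion_std, size=particles.shape)
        # Weight: Gaussian likelihood around the detection (stand-in for the CNN output).
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / meas_std ** 2)
        w /= w.sum()
        # Estimate as the weighted mean, then resample to avoid weight degeneracy.
        estimates.append(w @ particles)
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# Synthetic catheter-tip trajectory with noisy per-frame detections.
true_path = np.stack([np.linspace(0, 5, 30), np.linspace(0, 3, 30)], axis=1)
detections = true_path + rng.normal(0.0, 0.2, size=true_path.shape)
track = particle_filter_track(detections)
```

In the paper's setting the likelihood would come from the CNN's detection map for each fluoroscopy frame and the prediction step from estimated frame-to-frame motion, but the filtering loop itself has this shape.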

General

Elucidating the Effect of Static Electric Field on Amyloid Beta 1-42 Supramolecular Assembly.

In Journal of molecular graphics & modelling ; h5-index 0.0

Amyloid-β (Aβ) aggregation is recognized to be a key toxic factor in the pathogenesis of Alzheimer disease, the most common progressive neurodegenerative disorder. In vitro experiments have elucidated that Aβ aggregation depends on several factors, such as pH, temperature and peptide concentration. Despite the research effort in this field, the fundamental mechanism responsible for the disease progression is still unclear. Recent research has proposed the application of electric fields as a non-invasive therapeutic option leading to the disruption of amyloid fibrils. In this regard, a molecular-level understanding of the interactions governing the destabilization mechanism represents an important research advancement. Understanding electric field effects on proteins provides a more in-depth comprehension of the relationship between protein conformation and electrostatic dipole moment. The present study focuses on investigating the effect of a static Electric Field (EF) on the conformational dynamics of Aβ fibrils by all-atom Molecular Dynamics (MD) simulations. The outcome of this work provides novel insight into this research field, demonstrating how the Aβ assembly may be destabilized by the applied EF.

Muscat S, Stojceski F, Danani A

2020-Jan-12

Alzheimer disease, Amyloid beta, Computational modelling, Dipole moment, Fibril detachment, Molecular dynamics, Static electric field, Supramolecular assembly

General

Efficient identification of novel anti-glioma lead compounds by machine learning models.

In European journal of medicinal chemistry ; h5-index 72.0

Glioblastoma multiforme (GBM) is the most devastating and widespread primary central nervous system tumor. Pharmacological treatment of this malignancy is limited by the selective permeability of the blood-brain barrier (BBB) and relies on a single drug, temozolomide (TMZ), thus making the discovery of new compounds challenging and urgent. Therefore, aiming to discover new anti-glioma drugs, we developed robust machine learning models for predicting the anti-glioma activity and BBB penetration ability of new compounds. Using these models, we prioritized 41 compounds from our in-house compound library for further in vitro testing against three glioma cell lines and astrocytes. Subsequently, the most potent and selective compounds were resynthesized and tested in vivo using an orthotopic glioma model. This approach revealed two lead candidates, 4m and 4n, which efficiently decreased malignant glioma development in mice, probably by inhibiting thioredoxin reductase activity, as shown by our enzymological assays. Moreover, these two compounds did not promote body weight reduction, death of animals, or altered hematological and toxicological markers, making them good candidates for lead optimization as anti-glioma drug candidates.

Neves Bruno Junior, Agnes Jonathan Paulo, Gomes Marcelo do Nascimento, Henriques Donza Marcio Roberto, Gonçalves Rosângela Mayer, Delgobo Marina, Ribeiro de Souza Neto Lauro, Senger Mario Roberto, Silva-Junior Floriano Paes, Ferreira Sabrina Baptista, Zanotto-Filho Alfeu, Andrade Carolina Horta

2019-Dec-19

Cancer, Glioblastoma, Machine learning, Orthotopic glioma model, Predictive modeling, Thioredoxin reductase

Internal Medicine

Treatment Stratification of Patients with Metastatic Castration-Resistant Prostate Cancer by Machine Learning.

In iScience ; h5-index 0.0

Prostate cancer is the most common cancer in men in the Western world. One-third of the patients with prostate cancer will develop resistance to hormonal therapy and progress into metastatic castration-resistant prostate cancer (mCRPC). Currently, docetaxel is a preferred treatment for mCRPC. However, about 20% of the patients will undergo early therapeutic failure owing to adverse events induced by docetaxel-based chemotherapy. There is an emergent need for a computational model that can accurately stratify patients into docetaxel-tolerable and docetaxel-intolerable groups. Here we present the best-performing algorithm in the Prostate Cancer DREAM Challenge for predicting adverse events caused by docetaxel treatment. We integrated the survival status and severity of adverse events into our model, which is an innovative way to complement and stratify the treatment discontinuation information. Critical stratification biomarkers were further identified in determining the treatment discontinuation. Our model has the potential to improve future personalized treatment in mCRPC.

Deng Kaiwen, Li Hongyang, Guan Yuanfang

2019-Dec-26

Algorithms, Artificial Intelligence, Bioinformatics, Cancer

General

Atomic resolution convergent beam electron diffraction analysis using convolutional neural networks.

In Ultramicroscopy ; h5-index 0.0

Two types of convolutional neural network (CNN) models, a discrete classification network and a continuous regression network, were trained to determine local sample thickness from convergent beam electron diffraction (CBED) patterns of SrTiO3 collected in a scanning transmission electron microscope (STEM) at atomic column resolution. Acquisition of atomic resolution CBED patterns for this purpose requires careful balancing of CBED feature size in pixels, acquisition speed, and detector dynamic range. The training datasets were derived from multislice simulations, which must be convolved with incoherent source broadening. Sample thicknesses were also determined using quantitative high-angle annular dark-field (HAADF) STEM images acquired simultaneously. The regression CNN performed well on samples thinner than 35 nm, with 70% of the CNN results within 1 nm of the HAADF thickness, and 1.0 nm overall root mean square error between the two measurements. The classification CNN was trained for thicknesses up to 100 nm and yielded 66% of CNN results within one classification increment (2 nm) of the HAADF thickness. Our approach depends on methods from computer vision including transfer learning and image augmentation.

Zhang Chenyu, Feng Jie, DaCosta Luis Rangel, Voyles Paul M

2019-Dec-23

Convergent beam electron diffraction, Convolutional neural network, Deep learning, Machine learning, Scanning transmission electron microscopy
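The agreement figures quoted above (share of CNN predictions within 1 nm of the HAADF thickness, and the overall root mean square error) are simple to reproduce for any pair of thickness estimates; the sketch below computes both on hypothetical values invented for illustration.

```python
import numpy as np

def thickness_agreement(cnn_nm, haadf_nm, tol_nm=1.0):
    """Fraction of predictions within `tol_nm` of the HAADF thickness, plus RMSE (nm)."""
    cnn = np.asarray(cnn_nm, dtype=float)
    haadf = np.asarray(haadf_nm, dtype=float)
    diff = cnn - haadf
    within = float(np.mean(np.abs(diff) <= tol_nm))      # e.g. the paper's 70% figure
    rmse = float(np.sqrt(np.mean(diff ** 2)))            # e.g. the paper's 1.0 nm figure
    return within, rmse

# Hypothetical per-position thickness estimates (nm), for illustration only.
within, rmse = thickness_agreement([10.0, 11.0, 12.0], [10.0, 12.5, 12.0])
```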

Public Health

Artificial intelligence approaches to predicting and detecting cognitive decline in older adults: A conceptual review.

In Psychiatry research ; h5-index 64.0

Preserving cognition and mental capacity is critical to aging with autonomy. Early detection of pathological cognitive decline facilitates the greatest impact of restorative or preventative treatments. Artificial Intelligence (AI) in healthcare is the use of computational algorithms that mimic human cognitive functions to analyze complex medical data. AI technologies like machine learning (ML) support the integration of biological, psychological, and social factors when approaching diagnosis, prognosis, and treatment of disease. This paper serves to acquaint clinicians and other stakeholders with the use, benefits, and limitations of AI for predicting, diagnosing, and classifying mild and major neurocognitive impairments, by providing a conceptual overview of this topic with emphasis on the features explored and AI techniques employed. We present studies that fell into six categories of features used for these purposes: (1) sociodemographics; (2) clinical and psychometric assessments; (3) neuroimaging and neurophysiology; (4) electronic health records and claims; (5) novel assessments (e.g., sensors for digital data); and (6) genomics/other omics. For each category we provide examples of AI approaches, including supervised and unsupervised ML, deep learning, and natural language processing. AI technology, still nascent in healthcare, has great potential to transform the way we diagnose and treat patients with neurocognitive disorders.

Graham Sarah A, Lee Ellen E, Jeste Dilip V, Van Patten Ryan, Twamley Elizabeth W, Nebeker Camille, Yamada Yasunori, Kim Ho-Cheol, Depp Colin A

2019-Dec-09

Dementia, Machine learning, Mild cognitive impairment, Natural language processing, Sensors

General

A Computer Vision System Based on Majority-Voting Ensemble Neural Network for the Automatic Classification of Three Chickpea Varieties.

In Foods (Basel, Switzerland) ; h5-index 0.0

Since different varieties of crops have specific applications, it is important to properly identify each cultivar in order to avoid fake varieties being sold as genuine, i.e., fraud. Although properly trained human experts can accurately identify and classify crop varieties, computer vision systems are needed, since conditions such as fatigue, reproducibility, and so on can influence the expert's judgment and assessment. Chickpea (Cicer arietinum L.) is an important legume worldwide and has several varieties. Three chickpea varieties with a rather similar visual appearance were studied here: Adel, Arman, and Azad chickpeas. The purpose of this paper is to present a computer vision system for the automatic classification of those chickpea varieties. First, segmentation was performed using a Hue-Saturation-Intensity (HSI) color space threshold. Next, color and textural (from the gray level co-occurrence matrix, GLCM) properties (features) were extracted from the chickpea sample images. Then, using the hybrid artificial neural network-cultural algorithm (ANN-CA), the sub-optimal combination of the five most effective properties (mean of the RGB color space components, mean of the HSI color space components, entropy of the GLCM matrix at 90°, standard deviation of the GLCM matrix at 0°, and mean third component in the YCbCr color space) was selected as the discriminant feature set. Finally, an ANN-PSO/ACO/HS majority voting (MV) ensemble methodology merging three different classifier outputs, namely the hybrid artificial neural network-particle swarm optimization (ANN-PSO), hybrid artificial neural network-ant colony optimization (ANN-ACO), and hybrid artificial neural network-harmony search (ANN-HS), was used. Results showed that the ensemble ANN-PSO/ACO/HS-MV classifier approach reached an average classification accuracy of 99.10 ± 0.75% over the test set, after averaging 1000 random iterations.

Pourdarbani Razieh, Sabzi Sajad, Kalantari Davood, Hernández-Hernández José Luis, Arribas Juan Ignacio

2020-Jan-21

Cicer arietinum L., chickpea, classification, computer vision, feature selection, hybrid ANN, image processing, legume, machine learning, majority voting, segmentation
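The final stage above fuses three classifier outputs (ANN-PSO, ANN-ACO, ANN-HS) by majority vote. The voting step itself is classifier-agnostic; a minimal sketch follows, with ties resolved in favor of the first listed classifier, which is a choice of this sketch rather than something stated in the abstract.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by per-sample majority vote.

    `predictions` is a list of equal-length label sequences, one per classifier
    (here standing in for the ANN-PSO, ANN-ACO, and ANN-HS outputs). Ties fall
    back to the first classifier's label.
    """
    fused = []
    for labels in zip(*predictions):
        top = Counter(labels).most_common()
        if len(top) > 1 and top[0][1] == top[1][1]:
            fused.append(labels[0])          # tie -> first classifier wins
        else:
            fused.append(top[0][0])
    return fused

# Illustrative per-sample variety labels from three hypothetical classifiers.
ann_pso = ["Adel", "Arman", "Azad"]
ann_aco = ["Adel", "Azad", "Azad"]
ann_hs = ["Arman", "Azad", "Adel"]
fused = majority_vote([ann_pso, ann_aco, ann_hs])
```

With an odd number of binary classifiers ties cannot occur, but with three classifiers over three chickpea classes a three-way split is possible, hence the explicit tie-break.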

General

Odor-induced emotion recognition based on average frequency band division of EEG signals.

In Journal of neuroscience methods ; h5-index 0.0

BACKGROUND : Emotion recognition plays a key role in multimedia. To enhance the sensation of reality, smell has been incorporated into multimedia systems because it can directly stimulate memories and trigger strong emotions.

NEW METHOD : For the recognition of olfactory-induced emotions, this study explored a combination method using a support vector machine (SVM) with an average frequency band division (AFBD) method, where the AFBD method was proposed to extract power-spectral-density (PSD) features from electroencephalogram (EEG) signals induced by smelling different odors. In the AFBD method, each PSD feature is calculated over equal frequency bandwidths rather than the traditional EEG-rhythm-based bands. Thirteen odors were used to induce olfactory EEGs and their corresponding emotions. These emotions were then divided either into two types, pleasure and disgust, or into five types: very unpleasant, slightly unpleasant, neutral, slightly pleasant, and very pleasant.

RESULTS : Comparison between the proposed SVM-plus-AFBD method and other methods found average accuracies of 98.9% and 88.5% for two- and five-emotion recognition, respectively. These values were considerably higher than those of other combination methods, such as the combinations of AFBD or EEG-rhythm-based features with naive Bayesian, k-nearest neighbor, voting extreme learning machine, and backpropagation neural network classifiers.

CONCLUSIONS : The SVM plus AFBD method represents a useful contribution to olfactory-induced emotion recognition. Classification of the five-emotion categories was generally inferior to the classification of the two-emotion categories, suggesting that the recognition performance decreased as the number of emotions in the category increased.

Hou Hui-Rang, Zhang Xiao-Nei, Meng Qing-Hao

2020-Jan-21

Emotion recognition, average frequency band division, machine learning, olfactory EEG
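The AFBD idea, averaging spectral power over equal-width frequency bands instead of the classical delta/theta/alpha/beta rhythms, can be sketched with a plain FFT periodogram. The sampling rate, 5 Hz bandwidth, and 40 Hz cutoff below are illustrative choices of this sketch, not the paper's settings.

```python
import numpy as np

def afbd_features(signal, fs, band_width=5.0, f_max=40.0):
    """Mean power spectral density in equal-width bands of `band_width` Hz up to `f_max`."""
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size   # simple periodogram
    n_bands = int(f_max / band_width)
    feats = np.empty(n_bands)
    for b in range(n_bands):
        mask = (freqs >= b * band_width) & (freqs < (b + 1) * band_width)
        feats[b] = psd[mask].mean()                        # average power per band
    return feats

# A 12 Hz sinusoid should dominate the 10-15 Hz band (band index 2).
fs, n = 128, 512
t = np.arange(n) / fs
feats = afbd_features(np.sin(2 * np.pi * 12.0 * t), fs)
```

The resulting per-band feature vector (one per EEG channel) would then be fed to the SVM or another classifier.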

Dermatology

Ciliation index is a useful diagnostic tool in challenging spitzoid melanocytic neoplasms.

In The Journal of investigative dermatology ; h5-index 0.0

The loss of primary cilia on melanocytes is a useful biomarker for the distinction of melanoma from conventional melanocytic nevi. It is unknown whether ciliation status is beneficial for diagnosing spitzoid tumors, a subclass of melanocytic neoplasms that present inherently ambiguous histology and are challenging to classify. We evaluated the ciliation index (CI) in 68 cases of spitzoid tumors ranging from Spitz nevi (SN) and atypical Spitz tumors (AST) to spitzoid melanoma (SM). We found a significant decrease in CI within the SM group when compared to either the SN or AST groups. We additionally used a machine-learning-based algorithm to determine the value of CI when considered in combination with other histopathologic and molecular features commonly used for diagnosis. We found that a low CI was consistently ranked as a top predictive feature in the diagnosis of malignancy. Predictive models trained on only the top four predictive features (CI, asymmetry, hyperchromatism and cytological atypia) outperformed standard histological assessment in an independent validation cohort of 56 additional cases. The results provide an alternative approach to evaluating diagnostically challenging melanocytic lesions, and further support the use of CI as an ancillary diagnostic test.

Lang Ursula E, Torres Rodrigo, Cheung Christine, Vladar Eszter K, McCalmont Timothy H, Kim Jinah, Judson-Torres Robert L

2020-Jan-21

Atypical Spitz, Spitz nevi, melanoma, primary cilia, spitzoid

Public Health Public Health

An automated alarm system for food safety by using electronic invoices.

In PloS one ; h5-index 176.0

BACKGROUND : Invoices have been used for food product traceability; however, no previous work has addressed an automated alarm system for food safety that exploits electronic invoice big data. In this paper, we present an alarm system for edible oil manufacturing that can prevent a food safety crisis rather than merely trace problematic sources after a crisis occurs.

MATERIALS AND METHODS : Using nearly 100 million labeled e-invoices from 2013‒2014 for 595 edible oil manufacturers, provided by the Ministry of Finance, we applied text-mining, statistical, and machine learning techniques to "train" the system for two functions: (1) to sieve the edible oil-related e-invoices of manufacturers who may also produce other merchandise and (2) to identify suspicious edible oil manufacturers based on irrational transactions in the sieved e-invoices.

RESULTS : The system was able to (1) accurately sieve the correct invoices, with sensitivity >95% and specificity >98%, via text classification and (2) identify problematic manufacturers with 100% accuracy via the random forest machine learning method, as well as with sensitivity >70% and specificity >99% via a simple decision-tree method.
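The sensitivity and specificity figures reported above derive directly from confusion-matrix counts; a minimal sketch with hypothetical counts (not the study's actual numbers):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = recall on positives (flagged manufacturers);
    specificity = recall on negatives (legitimate manufacturers)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration: 97 of 100 problematic
# manufacturers flagged, 495 of 500 legitimate ones passed.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=495, fp=5)
print(sens, spec)  # 0.97 0.99
```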

CONCLUSION : E-invoices have a bright future in food safety applications. They can be used not only for product traceability but also for the prevention of adverse events by flagging suspicious manufacturers. Compulsory use of e-invoices in food production could increase the accuracy of this alarm system.

Chang Wan-Tzu, Yeh Yen-Po, Wu Hong-Yi, Lin Yu-Fen, Dinh Thai Son, Lian Ie-Bin

2020

Radiology Radiology

Improved multi-parametric prediction of tissue outcome in acute ischemic stroke patients using spatial features.

In PloS one ; h5-index 176.0

INTRODUCTION : In recent years, numerous methods have been proposed to predict tissue outcome in acute stroke patients using machine learning methods incorporating multiparametric imaging data. Most methods include diffusion and perfusion parameters as image-based parameters but do not include any spatial information although these parameters are spatially dependent, e.g. different perfusion properties in white and gray brain matter. This study aims to investigate if including spatial features improves the accuracy of multi-parametric tissue outcome prediction.

MATERIALS AND METHODS : Acute and follow-up multi-center MRI datasets of 99 patients were available for this study. Logistic regression, random forest, and XGBoost machine learning models were trained and tested using acute MR diffusion and perfusion features and known follow-up lesions. Different combinations of atlas coordinates and lesion probability maps were included as spatial information. The stroke lesion predictions were compared to the true tissue outcomes using the area under the receiver operating characteristic curve (ROC AUC) and the Dice metric.
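The Dice metric used above to compare predicted and true lesions is a set-overlap score; a minimal sketch, assuming binary lesion masks represented as sets of voxel indices:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks given as sets of
    voxel indices: 2*|A ∩ B| / (|A| + |B|); 1.0 = perfect overlap."""
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy 2D example: predicted stroke lesion vs follow-up lesion.
predicted = {(1, 2), (1, 3), (2, 2), (2, 3)}
follow_up = {(1, 3), (2, 2), (2, 3), (3, 3)}
print(dice(predicted, follow_up))  # 0.75
```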

RESULTS : The statistical analysis revealed that including spatial features significantly improves tissue outcome prediction. Overall, the XGBoost and random forest models performed best in every setting and achieved state-of-the-art results on both metrics, with similar improvements obtained whether Montreal Neurological Institute (MNI) reference space coordinates or voxel-wise lesion probabilities were included.

CONCLUSION : Spatial features should be integrated to improve lesion outcome prediction using machine learning models.

Grosser Malte, Gellißen Susanne, Borchert Patrick, Sedlacik Jan, Nawabi Jawed, Fiehler Jens, Forkert Nils Daniel

2020

Cardiology Cardiology

Machine learning detection of Atrial Fibrillation using wearable technology.

In PloS one ; h5-index 176.0

BACKGROUND : Atrial fibrillation (AF) is the most common arrhythmia worldwide, with a global age-adjusted prevalence of 0.5% in 2010. Anticoagulation treatment using warfarin or direct oral anticoagulants is effective in reducing the risk of AF-related stroke by approximately two-thirds and can provide a 10% reduction in overall mortality. There has been increased interest in detecting AF due to its increased incidence and the possibility of preventing AF-related strokes. Inexpensive consumer devices which measure the ECG may have the potential to accurately detect AF but do not generally incorporate diagnostic algorithms. Machine learning algorithms have the potential to improve patient outcomes, particularly where diagnoses are made from large volumes or complex patterns of data, such as in AF.

METHODS : We designed a novel AF detection algorithm using a de-correlated Lorenz plot of 60 consecutive RR intervals. In order to reduce the volume of data, the resulting images were compressed using a wavelet transformation (JPEG2000 algorithm) and the compressed images were used as input data to a Support Vector Machine (SVM) classifier. We used the Massachusetts Institute of Technology (MIT)-Beth Israel Hospital (BIH) Atrial Fibrillation database and the MIT-BIH Arrhythmia database as training data and verified the algorithm performance using RR intervals collected with an inexpensive consumer heart rate monitor (Polar-H7) in a case-control study.
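A Lorenz (Poincaré) plot pairs each RR interval with its successor; one plausible "de-correlated" variant plots successive differences instead, so that a steady rhythm collapses to a tight central cluster while AF scatters widely. A stdlib sketch of such a binned image (the bin count, span, and exact transform here are illustrative assumptions, not the paper's specification):

```python
def lorenz_image(rr, bins=8, span=400):
    """Bin successive RR-interval differences (ms) into a bins x bins
    2D histogram centred on zero (de-correlated Lorenz plot:
    x = RR[i] - RR[i-1], y = RR[i+1] - RR[i])."""
    img = [[0] * bins for _ in range(bins)]
    half = span / 2
    scale = bins / span
    for i in range(1, len(rr) - 1):
        x = rr[i] - rr[i - 1]
        y = rr[i + 1] - rr[i]
        col = min(bins - 1, max(0, int((x + half) * scale)))
        row = min(bins - 1, max(0, int((y + half) * scale)))
        img[row][col] += 1
    return img

# Regular rhythm: small alternating differences land in central bins.
regular = [800, 805, 800, 805, 800, 805, 800]
img = lorenz_image(regular)
print(sum(sum(row) for row in img))  # 5 points from 7 intervals
```

In the study, such images are wavelet-compressed and fed to an SVM; here the histogram alone illustrates why rhythm irregularity becomes a visual (and hence learnable) pattern.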

RESULTS : The SVM algorithm yielded excellent discrimination in the training data with a sensitivity of 99.2% and a specificity of 99.5% for AF. In the validation data, the SVM algorithm correctly identified AF in 79/79 cases; sensitivity 100% (95% CI 95.4%-100%) and non-AF in 328/336 cases; specificity 97.6% (95% CI 95.4%-99.0%).

CONCLUSIONS : An inexpensive wearable heart rate monitor and machine learning algorithm can be used to detect AF with very high accuracy and has the capability to transmit ECG data which could be used to confirm AF. It could potentially be used for intermittent screening or continuously for prolonged periods to detect paroxysmal AF. Further work could lead to cost-effective and accurate estimation of AF burden and improved risk stratification in AF.

Lown Mark, Brown Michael, Brown Chloë, Yue Arthur M, Shah Benoy N, Corbett Simon J, Lewith George, Stuart Beth, Moore Michael, Little Paul

2020

Radiology Radiology

Improvement of classification performance of Parkinson's disease using shape features for machine learning on dopamine transporter single photon emission computed tomography.

In PloS one ; h5-index 176.0

OBJECTIVE : To assess the classification performance between Parkinson's disease (PD) and normal control (NC) when semi-quantitative indicators and shape features obtained on dopamine transporter (DAT) single photon emission computed tomography (SPECT) are combined as a feature of machine learning (ML).

METHODS : A total of 100 cases of both PD and normal control (NC) from the Parkinson's Progression Markers Initiative database were evaluated. A summed image was generated and regions of interests were set to the left and right striata. Area, equivalent diameter, major axis length, minor axis length, perimeter and circularity were calculated as shape features. Striatum binding ratios (SBRputamen and SBRcaudate) were used as comparison features. The classification performance of the PD and NC groups according to receiver operating characteristic analysis of the shape features was compared in terms of SBRs. Furthermore, we compared the classification performance of ML when shape features or SBRs were used alone and in combination.
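Circularity, the strongest shape feature in this study, is commonly defined as 4*pi*Area / Perimeter^2; a minimal sketch under that standard definition:

```python
import math

def circularity(area, perimeter):
    """Shape circularity, 4*pi*A / P**2: 1.0 for a perfect circle,
    smaller for elongated or irregular regions."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius r is maximally circular...
r = 10.0
print(round(circularity(math.pi * r**2, 2 * math.pi * r), 3))  # 1.0
# ...while a square of side s scores pi/4.
s = 10.0
print(round(circularity(s * s, 4 * s), 3))  # 0.785
```

A degraded, less compact striatal uptake region on DAT-SPECT would thus yield a lower circularity, consistent with the PD-vs-NC separation reported below.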

RESULTS : The shape features (except minor axis length) and SBRs showed significant differences between the NC and PD groups (p < 0.05). The top five areas under the curve (AUC) were as follows: circularity (0.972), SBRputamen (0.972), major axis length (0.945), SBRcaudate (0.928) and perimeter (0.896). When classification was performed using ML, the AUCs were as follows: circularity and SBRs (0.995), circularity alone (0.990), and SBRs alone (0.973). The classification performance was significantly better when SBRs and circularity were combined than with SBRs alone (p = 0.018).

CONCLUSION : We found that the circularity obtained from DAT-SPECT images could help in distinguishing NC and PD. Furthermore, the classification performance of ML was significantly improved by using circularity together with SBRs.

Shiiba Takuro, Arimura Yuki, Nagano Miku, Takahashi Tenma, Takaki Akihiro

2020

General General

Performance of Qure.ai automatic classifiers against a large annotated database of patients with diverse forms of tuberculosis.

In PloS one ; h5-index 176.0

Availability of trained radiologists for fast processing of CXRs in regions burdened with tuberculosis has always been a challenge, affecting both timely diagnosis and patient monitoring. The paucity of annotated images of the lungs of TB patients hampers attempts to apply data-oriented algorithms in research and clinical practice. The TB Portals Program database (TBPP, https://TBPortals.niaid.nih.gov) is a global collaboration curating a large collection of the most dangerous, hard-to-cure drug-resistant tuberculosis (DR-TB) patient cases. TBPP, with 1,179 (83%) DR-TB patient cases, is a unique collection that is well positioned as a testing ground for deep learning classifiers. As of January 2019, the TBPP database contains 1,538 CXRs, of which 346 (22.5%) are annotated by a radiologist and 104 (6.7%) by a pulmonologist, leaving 1,088 (70.7%) CXRs without annotations. The Qure.ai qXR artificial intelligence automated CXR interpretation tool was blind-tested on the 346 radiologist-annotated CXRs from the TBPP database. Qure.ai qXR predictions for cavity, nodule, pleural effusion, and hilar lymphadenopathy successfully matched the human expert annotations. In addition, we tested the 12 Qure.ai classifiers to determine whether they correlate with treatment success (information provided by treating physicians). Ten descriptors were found to be significant: abnormal CXR (p = 0.0005), pleural effusion (p = 0.048), nodule (p = 0.0004), hilar lymphadenopathy (p = 0.0038), cavity (p = 0.0002), opacity (p = 0.0006), atelectasis (p = 0.0074), consolidation (p = 0.0004), indicator of TB disease (p < 0.0001), and fibrosis (p < 0.0001). We conclude that applying the fully automated Qure.ai CXR analysis tool is useful for fast, accurate, uniform, large-scale CXR annotation assistance, as it performed well even for DR-TB cases that were not used for initial training.
Testing artificial intelligence algorithms (encapsulating both machine learning and deep learning classifiers) on diverse data collections, such as TBPP, is critically important toward progressing to clinically adopted automatic assistants for medical data analysis.

Engle Eric, Gabrielian Andrei, Long Alyssa, Hurt Darrell E, Rosenthal Alex

2020

Ophthalmology Ophthalmology

Retinal Vascular Signs and Cerebrovascular Diseases.

In Journal of neuro-ophthalmology : the official journal of the North American Neuro-Ophthalmology Society ; h5-index 0.0

BACKGROUND : Cerebrovascular disease (CeVD), including stroke, is a leading cause of death globally. The retina is an extension of the cerebrum, sharing embryological and vascular pathways. The association between different retinal signs and CeVD has been extensively evaluated. In this review, we summarize recent studies which have examined this association.

EVIDENCE ACQUISITION : We searched 6 databases through July 2019 for studies evaluating the link between retinal vascular signs and diseases with CeVD. CeVD was classified into 2 groups: clinical CeVD (including clinical stroke, silent cerebral infarction, cerebral hemorrhage, and stroke mortality), and sub-clinical CeVD (including MRI-defined lacunar infarct and white matter lesions [WMLs]). Retinal vascular signs were classified into 3 groups: classic hypertensive retinopathy (including retinal microaneurysms, retinal microhemorrhage, focal/generalized arteriolar narrowing, cotton-wool spots, and arteriovenous nicking), clinical retinal diseases (including diabetic retinopathy [DR], age-related macular degeneration [AMD], retinal vein occlusion, retinal artery occlusion [RAO], and retinal emboli), and retinal vascular imaging measures (including retinal vessel diameter and geometry). We also examined emerging retinal vascular imaging measures and the use of artificial intelligence (AI) deep learning (DL) techniques.

RESULTS : Hypertensive retinopathy signs were consistently associated with clinical CeVD and subclinical CeVD subtypes including subclinical cerebral large artery infarction, lacunar infarction, and WMLs. Some clinical retinal diseases such as DR, retinal arterial and venous occlusion, and transient monocular vision loss are consistently associated with clinical CeVD. There is an increased risk of recurrent stroke immediately after RAO. Less consistent associations are seen with AMD. Retinal vascular imaging using computer-assisted, semi-automated software to measure retinal vascular caliber and other parameters (tortuosity, fractal dimension, and branching angle) has shown strong associations with clinical and subclinical CeVD. Other new retinal vascular imaging techniques (dynamic retinal vessel analysis, adaptive optics, and optical coherence tomography angiography) are emerging technologies in this field. Application of AI-DL is expected to detect subclinical retinal changes and discrete retinal features that predict systemic conditions, including CeVD.

CONCLUSIONS : There is extensive and increasing evidence that a range of retinal vascular signs and disease are closely linked to CeVD, including subclinical and clinical CeVD. New technology including AI-DL will allow further translation to clinical utilization.

Rim Tyler Hyungtaek, Teo Alvin Wei Jun, Yang Henrik Hee Seung, Cheung Carol Y, Wong Tien Yin

2020-Jan-17

Radiology Radiology

Deep Learning Approach for Generating MRA Images From 3D Quantitative Synthetic MRI Without Additional Scans.

In Investigative radiology ; h5-index 46.0

OBJECTIVES : Quantitative synthetic magnetic resonance imaging (MRI) enables synthesis of various contrast-weighted images as well as simultaneous quantification of T1 and T2 relaxation times and proton density. However, to date, it has been challenging to generate magnetic resonance angiography (MRA) images with synthetic MRI. The purpose of this study was to develop a deep learning algorithm to generate MRA images based on 3D synthetic MRI raw data.

MATERIALS AND METHODS : Eleven healthy volunteers and 4 patients with intracranial aneurysms were included in this study. All participants underwent a time-of-flight (TOF) MRA sequence and a 3D-QALAS synthetic MRI sequence. The 3D-QALAS sequence acquires 5 raw images, which were used as the input for a deep learning network. The input was converted to its corresponding MRA images by a combination of a single-convolution and a U-net model with a 5-fold cross-validation, which were then compared with a simple linear combination model. Image quality was evaluated by calculating the peak signal-to-noise ratio (PSNR), structural similarity index measurements (SSIMs), and high frequency error norm (HFEN). These calculations were performed for deep learning MRA (DL-MRA) and linear combination MRA (linear-MRA), relative to TOF-MRA, and compared with each other using a nonparametric Wilcoxon signed-rank test. Overall image quality and branch visualization, each scored on a 5-point Likert scale, were blindly and independently rated by 2 board-certified radiologists.
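PSNR, the first of the image-quality metrics above, is 10*log10(peak^2 / MSE); a minimal sketch over flat pixel lists (the toy values are illustrative, not study data):

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images,
    given as flat lists of pixel intensities."""
    n = len(reference)
    mse = sum((a - b) ** 2 for a, b in zip(reference, test)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)

ref = [100, 120, 140, 160]
noisy = [101, 119, 142, 158]  # small perturbations
print(round(psnr(ref, noisy), 2))  # 44.15
```

Higher is better, which is why the reported DL-MRA PSNR of 35.3 exceeding linear-MRA's 34.0 indicates closer agreement with TOF-MRA.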

RESULTS : Deep learning MRA was successfully obtained in all subjects. The mean PSNR, SSIM, and HFEN of the DL-MRA were significantly higher, higher, and lower, respectively, than those of the linear-MRA (PSNR, 35.3 ± 0.5 vs 34.0 ± 0.5, P < 0.001; SSIM, 0.93 ± 0.02 vs 0.82 ± 0.02, P < 0.001; HFEN, 0.61 ± 0.08 vs 0.86 ± 0.05, P < 0.001). The overall image quality of the DL-MRA was comparable to that of TOF-MRA (4.2 ± 0.7 vs 4.4 ± 0.7, P = 0.99), and both types of images were superior to that of linear-MRA (1.5 ± 0.6, for both P < 0.001). No significant differences were identified between DL-MRA and TOF-MRA in the branch visibility of intracranial arteries, except for ophthalmic artery (1.2 ± 0.5 vs 2.3 ± 1.2, P < 0.001).

CONCLUSIONS : Magnetic resonance angiography generated by deep learning from 3D synthetic MRI data visualized major intracranial arteries as effectively as TOF-MRA, with inherently aligned quantitative maps and multiple contrast-weighted images. Our proposed algorithm may be useful as a screening tool for intracranial aneurysms without requiring additional scanning time.

Fujita Shohei, Hagiwara Akifumi, Otsuka Yujiro, Hori Masaaki, Takei Naoyuki, Hwang Ken-Pin, Irie Ryusuke, Andica Christina, Kamagata Koji, Akashi Toshiaki, Kunishima Kumamaru Kanako, Suzuki Michimasa, Wada Akihiko, Abe Osamu, Aoki Shigeki

2020-Jan-20

General General

Deep-Learning Generated Synthetic Double Inversion Recovery Images Improve Multiple Sclerosis Lesion Detection.

In Investigative radiology ; h5-index 46.0

OBJECTIVES : The aim of the study was to implement a deep-learning tool to produce synthetic double inversion recovery (synthDIR) images and compare their diagnostic performance to conventional sequences in patients with multiple sclerosis (MS).

MATERIALS AND METHODS : For this retrospective analysis, 100 MS patients (65 female, 37 [22-68] years) were randomly selected from a prospective observational cohort between 2014 and 2016. In a subset of 50 patients, an artificial neural network (DiamondGAN) was trained to generate a synthetic DIR (synthDIR) from standard acquisitions (T1, T2, and fluid-attenuated inversion recovery [FLAIR]). With the resulting network, synthDIR was generated for the remaining 50 subjects. These images as well as conventionally acquired DIR (trueDIR) and FLAIR images were assessed for MS lesions by 2 independent readers, blinded to the source of the DIR image. Lesion counts in the different modalities were compared using a Wilcoxon signed-rank test, and interrater analysis was performed. Contrast-to-noise ratios were compared for objective image quality.

RESULTS : Utilization of synthDIR allowed detection of significantly more lesions compared with the use of FLAIR images (31.4 ± 20.7 vs 22.8 ± 12.7, P < 0.001). This improvement was mainly attributable to an improved depiction of juxtacortical lesions (12.3 ± 10.8 vs 7.2 ± 5.6, P < 0.001). Interrater reliability was excellent in FLAIR 0.92 (95% confidence interval [CI], 0.85-0.95), synthDIR 0.93 (95% CI, 0.87-0.96), and trueDIR 0.95 (95% CI, 0.85-0.98). Contrast-to-noise ratio in synthDIR exceeded that of FLAIR (22.0 ± 6.4 vs 16.7 ± 3.6, P = 0.009); no significant difference was seen in comparison to trueDIR (22.0 ± 6.4 vs 22.4 ± 7.9, P = 0.87).

CONCLUSIONS : Computationally generated DIR images improve lesion depiction compared with the use of standard modalities. This method demonstrates how artificial intelligence can help improve imaging in specific pathologies.

Finck Tom, Li Hongwei, Grundl Lioba, Eichinger Paul, Bussas Matthias, Mühlau Mark, Menze Bjoern, Wiestler Benedikt

2020-Jan-21

General General

Air quality prediction at new stations using spatially transferred bi-directional long short-term memory network.

In The Science of the total environment ; h5-index 0.0

In recent decades, air pollution has been a critical environmental issue, especially in developing countries such as China. Governments and scholars have devoted considerable effort to controlling air pollution and mitigating its impacts on human society. Accurate prediction of air quality can provide essential decision-making support, and scholars have therefore proposed various models and methods for air quality forecasting, including statistical methods, machine learning methods, and deep learning methods. Deep learning-based networks, such as RNN and LSTM, have been reported to achieve good performance in recent studies. However, the excellent performance of these methods requires sufficient data to train the model. For stations that lack data, such as newly built monitoring stations, the performance of those methods is constrained. Therefore, a methodology that can address the data shortage problem at new stations should be explored. This study proposes a transfer learning-based stacked bidirectional long short-term memory (TLS-BLSTM) network to predict air quality for new stations that lack data. The proposed method integrates advanced deep learning techniques and transfer learning strategies to transfer the knowledge learned from existing air quality stations to new stations to boost forecasting. A case study in Anhui, China, was conducted to evaluate the effectiveness of TLS-BLSTM. The results show that the proposed method achieves 35.21% lower RMSE on average for the three pollutants examined at new stations.
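The "35.21% lower RMSE" above is a relative reduction in root-mean-square error; a minimal sketch with hypothetical forecast values (not the study's data):

```python
import math

def rmse(pred, truth):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

def relative_improvement(baseline_rmse, transfer_rmse):
    """Fractional RMSE reduction of the transfer model vs the baseline."""
    return (baseline_rmse - transfer_rmse) / baseline_rmse

# Hypothetical PM2.5 forecasts (ug/m3) at a data-poor new station.
truth    = [35.0, 40.0, 38.0, 50.0]
baseline = [45.0, 30.0, 48.0, 40.0]  # model trained on scarce local data
transfer = [38.0, 37.0, 41.0, 47.0]  # model with transferred knowledge
print(relative_improvement(rmse(baseline, truth),
                           rmse(transfer, truth)))  # 0.7
```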

Ma Jun, Li Zheng, Cheng Jack C P, Ding Yuexiong, Lin Changqing, Xu Zherui

2020-Feb-25

Air quality prediction, Bi-directional long short-term memory, Deep learning, New stations, Spatial transfer learning

Surgery Surgery

Automated Skeletal Classification with Lateral Cephalometry Based on Artificial Intelligence.

In Journal of dental research ; h5-index 65.0

Lateral cephalometry has been widely used for skeletal classification in orthodontic diagnosis and treatment planning. However, this conventional system, which requires manual tracing of individual landmarks, is prone to inter- and intraobserver variability and is highly time-consuming. This study aims to provide an accurate and robust skeletal diagnostic system by incorporating a convolutional neural network (CNN) into a 1-step, end-to-end diagnostic system with lateral cephalograms. A multimodal CNN model was constructed on the basis of 5,890 lateral cephalograms and demographic data as an input. The model was optimized with transfer learning and data augmentation techniques. Diagnostic performance was evaluated with statistical analysis. The proposed system exhibited >90% sensitivity, specificity, and accuracy for vertical and sagittal skeletal diagnosis. Clinical performance of the vertical classification showed the highest accuracy at 96.40% (95% CI, 93.06 to 98.39; model III). The receiver operating characteristic curve and the area under the curve both demonstrated the excellent performance of the system, with a mean area under the curve >95%. Heat maps of the cephalograms are also provided for a deeper understanding of the quality of the learned model, visually representing the region of the cephalogram that is most informative in distinguishing skeletal classes. In addition, we demonstrate the broad applicability of this system through subtasks. The proposed CNN-incorporated system showed potential for skeletal orthodontic diagnosis without the need for intermediary steps requiring complicated diagnostic procedures.

Yu H J, Cho S R, Kim M J, Kim W H, Kim J W, Choi J

2020-Jan-24

deep learning, diagnosis, diagnostic imaging, neural networks, orthodontics, orthognathic surgery

Oncology Oncology

Artificial Intelligence Tool for Optimizing Eligibility Screening for Clinical Trials in a Large Community Cancer Center.

In JCO clinical cancer informatics ; h5-index 0.0

PURPOSE : Less than 5% of patients with cancer enroll in clinical trials, and 1 in 5 trials are stopped for poor accrual. We evaluated an automated clinical trial matching system that uses natural language processing to extract patient and trial characteristics from unstructured sources and machine learning to match patients to clinical trials.

PATIENTS AND METHODS : Medical records from 997 patients with breast cancer were assessed for trial eligibility at Highlands Oncology Group between May and August 2016. System and manual attribute extraction and eligibility determinations were compared using the percentage of agreement for 239 patients and 4 trials. Sensitivity and specificity of system-generated eligibility determinations were measured, and the times required for manual review and for system-assisted eligibility determinations were compared.

RESULTS : Agreement between system and manual attribute extraction ranged from 64.3% to 94.0%. Agreement between system and manual eligibility determinations was 81%-96%. System eligibility determinations demonstrated specificities between 76% and 99%, with sensitivities between 91% and 95% for 3 trials and 46.7% for the 4th. Manual eligibility screening of 90 patients for 3 trials took 110 minutes; system-assisted eligibility determinations of the same patients for the same trials required 24 minutes.

CONCLUSION : In this study, the clinical trial matching system displayed a promising performance in screening patients with breast cancer for trial eligibility. System-assisted trial eligibility determinations were substantially faster than manual review, and the system reliably excluded ineligible patients for all trials and identified eligible patients for most trials.

Beck J Thaddeus, Rammage Melissa, Jackson Gretchen P, Preininger Anita M, Dankwa-Mullan Irene, Roebuck M Christopher, Torres Adam, Holtzen Helen, Coverdill Sadie E, Williamson M Paul, Chau Quincy, Rhee Kyu, Vinegra Michael

2020-Jan

General General

Boosting tree-assisted multitask deep learning for small scientific datasets.

In Journal of chemical information and modeling ; h5-index 0.0

Machine learning approaches have had tremendous success in various disciplines. However, such success depends heavily on the size and quality of datasets. Scientific datasets are often small and difficult to collect. Currently, how to improve machine learning performance for small scientific datasets remains a major challenge in many academic fields, such as bioinformatics or medical science. Gradient boosting decision tree (GBDT) methods are typically optimal for small datasets, while deep learning often performs better for large datasets. This work reports a boosting tree-assisted multitask deep learning (BTAMDL) architecture that integrates GBDT and multitask deep learning (MDL) to achieve near-optimal predictions for small datasets when there exists a large dataset that is well correlated with the small datasets. Two BTAMDL models are constructed: one utilizes purely MDL output as the GBDT input, while the other admits additional features in the GBDT input. The proposed BTAMDL models are validated on four categories of datasets, covering toxicity, partition coefficient, solubility, and solvation. It is found that the proposed BTAMDL models outperform the current state-of-the-art methods in various applications involving small datasets.

Jiang Jian, Wang Rui, Wang Menglun, Gao Kaifu, Nguyen Duc, Wei Guowei

2020-Jan-24

General General

Numerosity discrimination in deep neural networks: Initial competence, developmental refinement and experience statistics.

In Developmental science ; h5-index 50.0

Both humans and non-human animals exhibit sensitivity to the approximate number of items in a visual array, as indexed by their performance in numerosity discrimination tasks, and even neonates can detect changes in numerosity. These findings are often interpreted as evidence for an innate "number sense". However, recent simulation work has challenged this view by showing that human-like sensitivity to numerosity can emerge in deep neural networks that build an internal model of the sensory data. This emergentist perspective posits a central role for experience in shaping our number sense and might explain why numerical acuity progressively increases over the course of development. Here we substantiate this hypothesis by introducing a progressive unsupervised deep learning algorithm, which allows us to model the development of numerical acuity through experience. We also investigate how the statistical distribution of numerical and non-numerical features in natural environments affects the emergence of numerosity representations in the computational model. Our simulations show that deep networks can exhibit numerosity sensitivity prior to any training, as well as a progressive developmental refinement that is modulated by the statistical structure of the learning environment. To validate our simulations, we offer a refinement to the quantitative characterization of the developmental patterns observed in human children. Overall, our findings suggest that it may not be necessary to assume that animals are endowed with a dedicated system for processing numerosity, since domain-general learning mechanisms can capture key characteristics others have attributed to an evolutionarily specialized number system.

Testolin Alberto, Zou Youzhi, McClelland James L

2020-Jan-24

General General

Comparison of Mortality and Major Cardiovascular Events Among Adults With Type 2 Diabetes Using Human vs Analogue Insulins.

In JAMA network open ; h5-index 0.0

Importance : The comparative cardiovascular safety of analogue and human insulins in adults with type 2 diabetes who initiate insulin therapy in usual care settings has not been carefully evaluated using machine learning and other rigorous analytic methods.

Objective : To examine the association of analogue vs human insulin use with mortality and major cardiovascular events.

Design, Setting, and Participants : This retrospective cohort study included 127 600 adults aged 21 to 89 years with type 2 diabetes at 4 health care delivery systems who initiated insulin therapy from January 1, 2000, through December 31, 2013. Machine learning and rigorous inference methods with time-varying exposures were used to evaluate associations of continuous exposure to analogue vs human insulins with mortality and major cardiovascular events. Data were analyzed from September 1, 2017, through June 30, 2018.

Exposures : On the index date (first insulin dispensing), participants were classified as using analogue insulin with or without human insulin or human insulin only.

Main Outcomes and Measures : Overall mortality, mortality due to cardiovascular disease (CVD), myocardial infarction (MI), stroke or cerebrovascular accident (CVA), and hospitalization for congestive heart failure (CHF) were evaluated. Marginal structural modeling (MSM) with inverse probability weighting was used to compare event-free survival in separate per-protocol analyses. Adjusted and unadjusted hazard ratios and cumulative risk differences were based on logistic MSM parameterizations for counterfactual hazards. Propensity scores were estimated using a data-adaptive approach (machine learning) based on 3 nested covariate adjustment sets. Sensitivity analyses were conducted to address potential residual confounding from unmeasured differences in risk factors across delivery systems.

Results : The 127 600 participants (mean [SD] age, 59.4 [12.6] years; 68 588 men [53.8%]; mean [SD] body mass index, 32.3 [7.1]) had a median follow-up of 4 quarters (interquartile range, 3-9 quarters) and experienced 5464 deaths overall (4.3%), 1729 MIs (1.4%), 1301 CVAs (1.0%), and 3082 CHF hospitalizations (2.4%). There were no differences in adjusted hazard ratios for continuous analogue vs human insulin exposure during 10 quarters for overall mortality (1.15; 95% CI, 0.97-1.34), CVD mortality (1.26; 95% CI, 0.86-1.66), MI (1.11; 95% CI, 0.77-1.45), CVA (1.30; 95% CI, 0.81-1.78), or CHF hospitalization (0.93; 95% CI, 0.75-1.11).

Conclusions and Relevance : Insulin-naive adults with type 2 diabetes who initiate and continue treatment with human vs analogue insulins had similar observed rates of major cardiovascular events, CVD mortality, and overall mortality.

Neugebauer Romain, Schroeder Emily B, Reynolds Kristi, Schmittdiel Julie A, Loes Linda, Dyer Wendy, Desai Jay R, Vazquez-Benitez Gabriela, Ho P Michael, Anderson Jeff P, Pimentel Noel, O’Connor Patrick J

2020-Jan-03

General General

Depression screening using mobile phone usage metadata: a machine learning approach.

In Journal of the American Medical Informatics Association : JAMIA ; h5-index 0.0

OBJECTIVE : Depression is currently the second most significant contributor to non-fatal disease burdens globally. While it is treatable, depression remains undiagnosed in many cases. As mobile phones have now become an integral part of daily life, this study examines the possibility of screening for depressive symptoms continuously based on patients' mobile usage patterns.

MATERIALS AND METHODS : 412 research participants reported a range of their mobile usage statistics. Beck Depression Inventory-2nd ed (BDI-II) was used to measure the severity of depression among participants. A wide array of machine learning classification algorithms was trained to detect participants with depression symptoms (ie, BDI-II score ≥ 14). The relative importance of individual variables was additionally quantified.

RESULTS : Participants with depression were found to have fewer saved contacts on their devices, spend more time on their mobile devices to make and receive fewer and shorter calls, and send more text messages than participants without depression. The best model was a random forest classifier, which had an out-of-sample balanced accuracy of 0.768. The balanced accuracy increased to 0.811 when participants' age and gender were included.
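
As a rough illustration of the classification setup described above, the following sketch trains a random forest and reports out-of-sample balanced accuracy on synthetic data (the four usage features and the label rule are hypothetical stand-ins, not the study's BDI-II data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# hypothetical aggregated usage features:
# [saved contacts, daily screen time, call duration, texts sent]
X = rng.normal(size=(n, 4))
# synthetic label loosely mirroring the reported pattern:
# fewer contacts, more screen time, more texts -> depression symptoms
y = (X[:, 1] - X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
importances = clf.feature_importances_  # relative importance of each variable
```

Balanced accuracy averages sensitivity and specificity, which matters here because depressed and non-depressed participants are unlikely to be equally represented.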

DISCUSSION/CONCLUSION : The significant predictive power of mobile usage attributes implies that, by collecting mobile usage statistics, mental health mobile applications can continuously screen for depressive symptoms, whether for initial diagnosis or for monitoring the progress of ongoing treatments. Moreover, the input variables used in this study were aggregated mobile usage metadata attributes, which have low privacy sensitivity, making it more likely that patients will grant the required application permissions.

Razavi Rouzbeh, Gharipour Amin, Gharipour Mojgan

2020-Jan-24

depression, machine learning, mobile health, mobile usage

General General

Portable Detection of Apnea and Hypopnea Events using Bio-Impedance of the Chest and Deep Learning.

In IEEE journal of biomedical and health informatics ; h5-index 0.0

Sleep apnea is one of the most common sleep-related breathing disorders. It is diagnosed through an overnight sleep study in a specialized sleep clinic. This setup is expensive and the number of beds and staff are limited, leading to a long waiting time. To enable more patients to be tested, and repeated monitoring for diagnosed patients, portable sleep monitoring devices are being developed. These devices automatically detect sleep apnea events in one or more respiration-related signals. There are multiple methods to measure respiration, with varying levels of signal quality and comfort for the patient. In this study, the potential of using the bio-impedance (bioZ) of the chest as a respiratory surrogate is analyzed. A novel portable device is presented, combined with a two-phase Long Short-Term Memory (LSTM) deep learning algorithm for automated event detection. The setup is benchmarked using simultaneous recordings of the device and the traditional polysomnography in 25 patients. The results demonstrate that using only the bioZ, an area under the precision-recall curve of 46.9% can be achieved, which is on par with automatic scoring using a polysomnography respiration channel. The sensitivity, specificity and accuracy are 58.4%, 76.2% and 72.8% respectively. This confirms the potential of using the bioZ device and deep learning algorithm for automatically detecting sleep respiration events during the night, in a portable and comfortable setup.
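
The evaluation metrics reported above (area under the precision-recall curve, sensitivity, specificity) can be computed with scikit-learn; the per-epoch detector scores and event labels below are synthetic placeholders, not bio-impedance or LSTM outputs:

```python
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix

rng = np.random.default_rng(3)
# synthetic per-epoch labels: roughly 20% apnea/hypopnea events
y = (rng.random(1000) < 0.2).astype(int)
# synthetic detector scores: events score higher on average
scores = y + rng.normal(0.0, 0.8, 1000)

ap = average_precision_score(y, scores)  # area under the precision-recall curve
tn, fp, fn, tp = confusion_matrix(y, scores > 0.5).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

The precision-recall curve is the more informative summary when events are rare, since precision degrades visibly with class imbalance while ROC curves do not.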

Van Steenkiste Tom, Groenendaal Willemijn, Dreesen Pauline, Lee Seulki, Klerkx Susie, De Francisco Ruben, Deschrijver Dirk, Dhaene Tom

2020-Jan-20

General General

A Deep Neural Network Application for Improved Prediction of HbA1c in Type 1 Diabetes.

In IEEE journal of biomedical and health informatics ; h5-index 0.0

HbA1c is a primary marker of long-term average blood glucose, which is an essential measure of successful control in type 1 diabetes. Previous studies have shown that HbA1c estimates can be obtained from 5-12 weeks of daily blood glucose measurements. However, these methods suffer from accuracy limitations when applied to incomplete data with missing periods of measurements. The aim of this work is to overcome these limitations, improving the accuracy and robustness of HbA1c prediction from time series of blood glucose. A novel data-driven HbA1c prediction model based on deep learning and convolutional neural networks is presented. The model focuses on the extraction of behavioral patterns from sequences of self-monitored blood glucose readings on various temporal scales. Assuming that subjects who share behavioral patterns also have similar capabilities for diabetes control and resulting HbA1c, it becomes possible to infer the HbA1c of subjects with incomplete data from multiple observations of similar behaviors. Trained and validated on a dataset containing 1543 real-world observation epochs from 759 subjects, the model achieved a mean absolute error of 4.80±0.62 mmol/mol, a median absolute error of 3.81±0.58 mmol/mol, and [Formula: see text] of 0.71±0.09 on average during 10-fold cross-validation. Automatic behavioral characterization via extraction of sequential features by the proposed convolutional neural network structure has significantly improved the accuracy of HbA1c prediction compared to the existing methods.

Zaitcev Aleksandr, Eissa Mohammad R, Hui Zheng, Good Tim, Elliott Jackie, Benaissa Mohammed

2020-Jan-17

General General

DACH: Domain Adaptation Without Domain Information.

In IEEE transactions on neural networks and learning systems ; h5-index 0.0

Domain adaptation is becoming increasingly important for learning systems in recent years, especially with the growing diversification of data domains in real-world applications, such as genetic data from various sequencing platforms and video feeds from multiple surveillance cameras. Traditional domain adaptation approaches aim to design transformations for each individual domain so that the transformed data from different domains follow an almost identical distribution. In many applications, however, the data from diversified domains are simply dumped into an archive without clear domain labels. In this article, we discuss the possibility of learning domain adaptations even when the data do not contain domain labels. Our solution is based on our new model, named domain adaptation using cross-domain homomorphism (DACH for short), which identifies intrinsic homomorphisms hidden in mixed data from all domains. DACH is generally compatible with existing deep learning frameworks, enabling the generation of nonlinear features from the original data domains. Our theoretical analysis not only shows the universality of the homomorphism, but also proves the convergence of DACH when significant homomorphism structures over the data domains are preserved. Empirical studies on real-world data sets validate the effectiveness of DACH in merging multiple data domains for joint machine learning tasks and the scalability of our algorithm to domain dimensionality.

Cai Ruichu, Li Jiahao, Zhang Zhenjie, Yang Xiaoyan, Hao Zhifeng

2020-Jan-20

General General

A Multimodal Saliency Model for Videos with High Audio-Visual Correspondence.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society ; h5-index 0.0

Audio information has been overlooked by most current visual attention prediction studies. However, sound can influence visual attention, and this influence has been widely investigated and proven by many psychological studies. In this paper, we propose a novel multi-modal saliency (MMS) model for videos containing scenes with high audio-visual correspondence. In such scenes, humans tend to be attracted by the sound sources, and it is also possible to localize the sound sources via cross-modal analysis. Specifically, we first detect the spatial and temporal saliency maps from the visual modality by using a novel free energy principle. Then we propose to detect the audio saliency map from both audio and visual modalities by localizing the moving-sounding objects using cross-modal kernel canonical correlation analysis, which is the first of its kind in the literature. Finally, we propose a new two-stage adaptive audiovisual saliency fusion method to integrate the spatial, temporal and audio saliency maps into our audio-visual saliency map. The proposed MMS model captures the influence of audio, which is not considered in the latest deep learning based saliency models. To take advantage of both deep saliency modeling and audio-visual saliency modeling, we propose to combine deep saliency models and the MMS model via late fusion, and we find that an average performance gain of 5% is obtained. Experimental results on audio-visual attention databases show that the introduced models incorporating audio cues have significant superiority over state-of-the-art image and video saliency models which utilize a single visual modality.

Min Xiongkuo, Zhai Guangtao, Zhou Jiantao, Zhang Xiao-Ping, Yang Xiaokang, Guan Xinping

2020-Jan-17

General General

Predicting fish kills and toxic blooms in an intensive mariculture site in the Philippines using a machine learning model.

In The Science of the total environment ; h5-index 0.0

Harmful algal blooms (HABs) that produce toxins and those that lead to fish kills are global problems that appear to be increasing in frequency and expanding in area. One way to help mitigate their impacts on people's health and livelihoods is to develop early-warning systems. Models to predict and manage HABs typically make use of complex multi-model structures incorporating satellite imagery and frequent monitoring data with different levels of detail into hydrodynamic models. These relatively more sophisticated methods are not necessarily applicable in countries like the Philippines. Empirical statistical models can be simpler alternatives that have also been successful for HAB forecasting of toxic blooms. Here, we present the use of the random forest, a machine learning algorithm, to develop an early-warning system for the prediction of two different types of HABs: fish kill and toxic bloom occurrences in Bolinao-Anda, Philippines, using data that can be obtained from in situ sensors. This site features intensive and extensive mariculture activities, as well as a long history of HABs. Data on temperature, salinity, dissolved oxygen, pH and chlorophyll from 2015 to 2017 were analyzed together with shellfish ban and fish kill occurrences. The random forest algorithm performed well: the fish kill and toxic bloom models were 96.1% and 97.8% accurate in predicting fish kill and shellfish ban occurrences, respectively. For both models, the most important predictive variable was a decrease in dissolved oxygen. Fish kills were more likely at higher salinity and temperature levels, whereas toxic blooms occurred more often under relatively lower salinity and higher chlorophyll conditions. This demonstrates a step towards integrating information from real-time sensors into an early-warning system for two different types of HABs. Further testing of these models across time and in different areas is recommended.
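
A minimal sketch of such a random-forest early-warning classifier, using the five sensor variables named above on synthetic data (the distributions and the low-oxygen label rule are invented for illustration; on this toy label, dissolved oxygen should dominate the feature importances, echoing the study's finding):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 800
# hypothetical in situ sensor readings
temp = rng.normal(29, 1.5, n)      # temperature, deg C
sal = rng.normal(32, 2.0, n)       # salinity, psu
do = rng.normal(6, 1.5, n)         # dissolved oxygen, mg/L
ph = rng.normal(8.1, 0.2, n)
chl = rng.lognormal(1.0, 0.5, n)   # chlorophyll, ug/L
X = np.column_stack([temp, sal, do, ph, chl])
# synthetic fish-kill label driven by low dissolved oxygen
y = (do < 4.5).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
# on this toy data, column 2 (dissolved oxygen) should rank highest
top = int(np.argmax(clf.feature_importances_))
```

In a deployed system, the trained model would score each new sensor reading and trigger an alert when the predicted fish-kill probability crosses a chosen threshold.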

Yñiguez Aletta T, Ottong Zheina J

2020-Mar-10

Alexandrium, Fish kill, Harmful algal blooms, Random forest algorithm, Shellfish, Toxic blooms

General General

Unsupervised Deep Image Fusion with Structure Tensor Representations.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society ; h5-index 0.0

Convolutional neural networks (CNNs) have facilitated substantial progress on various problems in computer vision and image processing. However, applying them to image fusion has remained challenging due to the lack of labelled data for supervised learning. This paper introduces a deep image fusion network (DIF-Net), an unsupervised deep learning framework for image fusion. The DIF-Net parameterizes the entire process of image fusion, comprising feature extraction, feature fusion, and image reconstruction, using a CNN. The purpose of DIF-Net is to generate an output image which has an identical contrast to the high-dimensional input images. To realize this, we propose an unsupervised loss function using the structure tensor representation of the multi-channel image contrasts. Different from traditional fusion methods that involve time-consuming optimization or iterative procedures to obtain the results, our loss function is minimized by a stochastic deep learning solver with large-scale examples. Consequently, the proposed method can produce fused images that preserve source image details through a single forward network trained without reference ground-truth labels. The proposed method has broad applicability to various image fusion problems, including multi-spectral, multi-focus, and multi-exposure image fusion. Quantitative and qualitative evaluations show that the proposed technique outperforms existing state-of-the-art approaches for various applications.
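
The structure-tensor idea behind the loss can be sketched in NumPy: the fused image's 2x2 gradient structure tensor is compared against the joint tensor of the stacked source channels. This is a simplified stand-in for the paper's loss, not the DIF-Net implementation:

```python
import numpy as np

def structure_tensor(img):
    """2x2 structure tensor per pixel, summed over channels: J = sum_c grad(I_c) grad(I_c)^T."""
    H, W, C = img.shape
    J = np.zeros((H, W, 2, 2))
    for c in range(C):
        gy, gx = np.gradient(img[:, :, c])  # per-axis gradients (rows, cols)
        J[..., 0, 0] += gx * gx
        J[..., 0, 1] += gx * gy
        J[..., 1, 0] += gx * gy
        J[..., 1, 1] += gy * gy
    return J

def contrast_loss(fused, sources):
    """Mean squared gap between the fused tensor and the joint tensor of all sources."""
    J_src = structure_tensor(np.concatenate(sources, axis=2))
    J_fused = structure_tensor(fused)
    return float(np.mean((J_fused - J_src) ** 2))
```

Because the structure tensor aggregates contrast over all channels, a fused image that reproduces it preserves the combined edge and detail content of every source without needing ground-truth fused labels.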

Jung Hyungjoo, Kim Youngjung, Jang Hyunsung, Ha Namkoo, Sohn Kwanghoon

2020-Jan-17

Internal Medicine Internal Medicine

Colored Video Analysis in Wireless Capsule Endoscopy: A Survey of State-of-the-Art.

In Current medical imaging reviews ; h5-index 0.0

Wireless Capsule Endoscopy (WCE) is a highly promising technology for gastrointestinal (GI) tract abnormality diagnosis. However, low image resolution and low frame rates are challenging issues in WCE. In addition, the relevant frames containing the features of interest for accurate diagnosis only constitute 1% of the complete video information. For these reasons, analyzing WCE videos is still a time-consuming and laborious examination for gastroenterologists, which reduces WCE system usability. This leads to an urgent need to speed up and automate WCE video processing for GI tract examinations. Consequently, the present work introduced the concept of WCE technology, including the structure of WCE systems, with a focus on the medical endoscopy video capturing process using image sensors. It also discussed the significant characteristics of the different GI tract regions for effective feature extraction. Furthermore, video approaches for bleeding and lesion detection in WCE video were reported, along with computer-aided diagnosis systems in different applications to support the gastroenterologist in WCE video analysis. Image enhancement and WCE video review time reduction are also discussed, together with the challenges and future perspectives, including the new trend of employing deep learning models for feature learning, polyp recognition, and classification, as a new opportunity for researchers to develop future WCE video analysis techniques.

Ashour Amira S, Dey Nilanjan, Mohamed Waleed S, Tromp Jolanda G, Sherratt R Simon, Shi Fuqian, Moraru Luminița

2020-Jan-24

Bleeding detection, Computer- aided diagnosis, Endoscopy capsule, Reviewing time reduction, Video analysis, Wireless video gastrointestinal (GI) endoscopy capsule

General General

Using Behavioral Analytics to Predict Customer Invoice Payment.

In Big data ; h5-index 0.0

Experiences from various industries show that companies may have problems collecting customer invoice payments. Studies report that almost half of the small- and medium-sized enterprise and business-to-business invoices in the United States and United Kingdom are paid late. In this study, our aim is to understand customer behavior regarding invoice payments, and propose an analytical approach to learning and predicting payment behavior. Our logic can then be embedded into a decision support system where decision makers can make predictions regarding future payments, and take actions as necessary toward the collection of potentially unpaid debt, or adjust their financial plans based on the expected invoice-to-cash amount. In our analysis, we utilize a large data set with more than 1.6 million customers and their invoice and payment history, as well as various actions (e.g., e-mail, short message service, phone call) performed by the invoice-issuing company toward customers to encourage payment. We use supervised and unsupervised learning techniques to help predict whether a customer will pay the invoice or outstanding balance by the next due date based on the actions generated by the company and the customer's response. We propose a novel behavioral scoring model used as an input variable to our predictive models. Among the three machine learning approaches tested, we report the results of logistic regression that provides up to 97% accuracy with or without preclustering of customers. Such a model has a high potential to help decision makers in generating actions that contribute to the financial stability of the company in terms of cash flow management and avoiding unnecessary corporate lines of credit.
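
A minimal sketch of the predictive step, a logistic regression over invoice and behavioral features on synthetic data (the feature names, the behavioral score, and the label rule are hypothetical illustrations, not the study's data set of 1.6 million customers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
# hypothetical features per invoice
score = rng.normal(0, 1, n)            # behavioral score from past payment responses
days_late = rng.exponential(10, n)     # historical average days past due
reminders = rng.poisson(2, n)          # reminder actions sent (e-mail, SMS, call)
amount = rng.lognormal(6, 1, n)        # invoice amount
X = np.column_stack([score, days_late, reminders, np.log(amount)])
# synthetic "paid by next due date" label tied mainly to the behavioral score
y = (score - 0.05 * days_late + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
```

A decision support system would act on `model.predict_proba`, flagging invoices with a low payment probability for earlier collection actions or for adjusted cash-flow forecasts.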

Bahrami Mohsen, Bozkaya Burcin, Balcisoy Selim

2020-Jan-23

behavioral analytics, invoice collection, invoice to cash, logistic regression, machine learning, predictive analytics

Radiology Radiology

Role of MRI in Staging of Penile Cancer.

In Journal of magnetic resonance imaging : JMRI ; h5-index 0.0

Penile cancer is one of the male-specific cancers. Accurate pretreatment staging is crucial due to a plethora of treatment options currently available. The 8th edition American Joint Committee on Cancer-Tumor Node and Metastasis (AJCC-TNM) revised the staging for penile cancers, with invasion of corpora cavernosa upstaged from T2 to T3 and invasion of urethra downstaged from T3 to being not separately relevant. With this revision, MRI is more relevant in local staging because MRI is accurate in identifying invasion of corpora cavernosa, while the accuracy is lower for detection of urethral involvement. The recent European Urology Association (EAU) guidelines recommend MRI to exclude invasion of the corpora cavernosa, especially if penis preservation is planned. Identification of satellite lesions and measurement of residual-penile-length help in surgical planning. When nonsurgical treatment modalities of the primary tumor are being considered, accurate local staging helps in decision-making regarding upfront inguinal lymph node dissection as against surveillance. MRI helps in detection and extent of inguinal and pelvic lymphadenopathy and is superior to clinical palpation, which continues to be the current approach recommended by National Comprehensive Cancer Network (NCCN) treatment guidelines. MRI helps the detection of "bulky" lymph nodes that warrant neoadjuvant chemotherapy and potentially identify extranodal extension. However, tumor involvement in small lymph nodes and differentiation of reactive vs. malignant lymphadenopathy in large lymph nodes continue to be challenging and the utilization of alternative contrast agents (superparamagnetic iron oxide), positron emission tomography (PET)-MRI along with texture analysis is promising. In locally recurrent tumors, MRI is invaluable in identification of deep invasion, which forms the basis of treatment. 
Multiparametric MRI, especially diffusion-weighted-imaging, may allow for quantitative noninvasive assessment of tumor grade and histologic subtyping to avoid biopsy undersampling. Further research is required for incorporation of MRI with deep learning and artificial intelligence algorithms for effective staging in penile cancer. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 3.

Krishna Satheesh, Shanbhogue Krishna, Schieda Nicola, Morbeck Fernando, Hadas Benhabib, Kulkarni Girish, McInnes Matthew D, Baroni Ronaldo Hueb

2020-Jan-24

carcinoma, corpora cavernosa, penile cancer, penis, staging

General General

Captioning Ultrasound Images Automatically.

In Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention ; h5-index 0.0

We describe an automatic natural language processing (NLP)-based image captioning method to describe fetal ultrasound video content by modelling the vocabulary commonly used by sonographers and sonologists. The generated captions are similar to the words spoken by a sonographer when describing the scan experience in terms of visual content and performed scanning actions. Using full-length second-trimester fetal ultrasound videos and text derived from accompanying expert voice-over audio recordings, we train deep learning models consisting of convolutional neural networks and recurrent neural networks in merged configurations to generate captions for ultrasound video frames. We evaluate different model architectures using established general metrics (BLEU, ROUGE-L) and application-specific metrics. Results show that the proposed models can learn joint representations of image and text to generate relevant and descriptive captions for anatomies, such as the spine, the abdomen, the heart, and the head, in clinical fetal ultrasound scans.

Alsharid Mohammad, Sharma Harshita, Drukker Lior, Chatelain Pierre, Papageorghiou Aris T, Noble J Alison

2019-Oct

Deep Learning, Fetal Ultrasound, Image Captioning, Image Description, Natural Language Processing, Recurrent Neural Networks

General General

Decoding crystallography from high-resolution electron imaging and diffraction datasets with deep learning.

In Science advances ; h5-index 0.0

While machine learning has been making enormous strides in many technical areas, it is still massively underused in transmission electron microscopy. To address this, a convolutional neural network model was developed for reliable classification of crystal structures from small numbers of electron images and diffraction patterns with no preferred orientation. Diffraction data containing 571,340 individual crystals divided among seven families, 32 genera, and 230 space groups were used to train the network. Despite the highly imbalanced dataset, the network narrows down the space groups to the top two with over 70% confidence in the worst case and up to 95% in the common cases. As examples, we benchmarked materials ranging from alloys to two-dimensional materials to cross-validate our deep-learning model against high-resolution transmission electron images and diffraction patterns. We present this result both as a research tool and as a deep-learning application for diffraction analysis.

Aguiar J A, Gong M L, Unocic R R, Tasdizen T, Miller B D

2019-Oct

General General

A systematic review of the application of machine learning in the detection and classification of transposable elements.

In PeerJ ; h5-index 0.0

Background : Transposable elements (TEs) constitute the most common repeated sequences in eukaryotic genomes. Recent studies demonstrated their deep impact on species diversity, adaptation to the environment and diseases. Although there are many conventional bioinformatics algorithms for detecting and classifying TEs, none have achieved reliable results on different types of TEs. Machine learning (ML) techniques can automatically extract hidden patterns and novel information from labeled or non-labeled data and have been applied to solving several scientific problems.

Methodology : We followed the Systematic Literature Review (SLR) process, applying its six review-protocol stages and adding a preliminary stage that aims to establish the need for a review. Search equations were then formulated and executed in several literature databases. Relevant publications were scanned and used to extract evidence to answer the research questions.

Results : Several ML approaches have already been tested on other bioinformatics problems with promising results, yet there are few algorithms and architectures available in literature focused specifically on TEs, despite representing the majority of the nuclear DNA of many organisms. Only 35 articles were found and categorized as relevant in TE or related fields.

Conclusions : ML is a powerful tool that can be used to address many problems. Although ML techniques have been used widely in other biological tasks, their utilization in TE analyses is still limited. Following the SLR, it was possible to notice that the use of ML for TE analyses (detection and classification) is an open problem, and this new field of research is growing in interest.

Orozco-Arias Simon, Isaza Gustavo, Guyot Romain, Tabares-Soto Reinel

2019

Bioinformatics, Classification, Deep learning, Detection, Machine learning, Retrotransposons, Transposable elements

General General

Evaluating probabilistic programming and fast variational Bayesian inference in phylogenetics.

In PeerJ ; h5-index 0.0

Recent advances in statistical machine learning techniques have led to the creation of probabilistic programming frameworks. These frameworks enable probabilistic models to be rapidly prototyped and fit to data using scalable approximation methods such as variational inference. In this work, we explore the use of the Stan language for probabilistic programming in application to phylogenetic models. We show that many commonly used phylogenetic models including the general time reversible substitution model, rate heterogeneity among sites, and a range of coalescent models can be implemented using a probabilistic programming language. The posterior probability distributions obtained via the black box variational inference engine in Stan were compared to those obtained with reference implementations of Markov chain Monte Carlo (MCMC) for phylogenetic inference. We find that black box variational inference in Stan is less accurate than MCMC methods for phylogenetic models, but requires far less compute time. Finally, we evaluate a custom implementation of mean-field variational inference on the Jukes-Cantor substitution model and show that a specialized implementation of variational inference can be two orders of magnitude faster and more accurate than a general purpose probabilistic implementation.
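
While a full variational-inference implementation is beyond a short sketch, the Jukes-Cantor substitution model mentioned above has a closed-form transition probability that is easy to verify numerically (this is the standard JC69 formula, not code from the paper):

```python
import math

def jc69_transition(d, same):
    """JC69 probability of observing the same/a specific different nucleotide
    after a branch of length d (expected substitutions per site)."""
    e = math.exp(-4.0 * d / 3.0)
    return 0.25 + 0.75 * e if same else 0.25 - 0.25 * e

# the probabilities over the 4 possible target nucleotides must sum to 1
d = 0.1
total = jc69_transition(d, True) + 3 * jc69_transition(d, False)
```

These transition probabilities are the building blocks of the phylogenetic likelihood that both MCMC and variational inference evaluate; as d grows, all four targets approach probability 1/4.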

Fourment Mathieu, Darling Aaron E

2019

Bayesian inference, Phylogenetics, Stan, Variational Bayes, molecular clock

General General

A Hybrid Approach for Sub-Acute Ischemic Stroke Lesion Segmentation Using Random Decision Forest and Gravitational Search Algorithm.

In Current medical imaging reviews ; h5-index 0.0

BACKGROUND : Sub-acute ischemic stroke is among the leading causes of death worldwide. We evaluate the impact of the segmentation technique when analyzing brain function.

OBJECTIVE : The main objective of this paper is to segment the ischemic stroke lesions in Magnetic Resonance (MR) images in the presence of other pathologies like neurological disorder, encephalopathy, brain damage, Multiple sclerosis (MS).

METHODS : In this paper, we use a hybrid approach to segment ischemic stroke lesions from other pathologies in magnetic resonance (MR) images, combining the Random Decision Forest (RDF) with the Gravitational Search Algorithm (GSA). RDF is an effective machine learning method.

RESULTS : The RDF strategy combines two parameters, the number of trees in the forest and the number of leaves per tree, and runs quickly and efficiently when dealing with large data. The GSA algorithm is used to optimize the RDF by choosing the best number of trees and number of leaves per tree in the forest.

CONCLUSION : This paper provides a new hybrid GSA-RDF classifier technique to segment the ischemic stroke lesions in MR images. The experimental results demonstrate that the proposed technique achieves Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Mean Bias Error (MBE) values of 16.5485%, 7.2654%, and 2.4585%, respectively. The proposed RDF-GSA algorithm has better precision and performance compared with the existing ischemic stroke segmentation methods.
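
The three error metrics reported above have standard definitions, sketched here in NumPy for concreteness (the sample values are illustrative only):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    return float(np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0)

def mbe(y_true, y_pred):
    """Mean Bias Error: positive values indicate systematic over-prediction."""
    return float(np.mean(y_pred - y_true))

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 33.0])
```

RMSE penalizes large errors quadratically, MAPE normalizes each error by the true value, and MBE reveals systematic bias that the other two (being magnitude-based) hide.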

Melingi Sunil Babu, Vijayalakshmi V

2019

MR images, Sub acute ischemic stroke, bagger algorithm, cerebrum, hybrid GSA -RDF algorithm, stroke segmentation

General General

Prediction of transient tumor enlargement using MRI tumor texture after radiosurgery on vestibular schwannoma.

In Medical physics ; h5-index 59.0

PURPOSE : Vestibular schwannomas (VSs) are uncommon benign brain tumors, generally treated using Gamma Knife radiosurgery (GKRS). However, due to the possible adverse effect of transient tumor enlargement (TTE), large VS tumors are often surgically removed instead of treated radiosurgically. Since microsurgery is highly invasive and results in a significant increased risk of complications, GKRS is generally preferred. Therefore, prediction of TTE for large VS tumors can improve overall VS treatment and enable physicians to select the most optimal treatment strategy on an individual basis. Currently, there are no clinical factors known to be predictive for TTE. In this research, we aim at predicting TTE following GKRS using texture features extracted from MRI scans.

METHODS : We analyzed clinical data of patients with VSs treated at our Gamma Knife center. The data was collected prospectively and included patient- and treatment-related characteristics and MRI scans obtained at day of treatment and at follow-up visits, 6, 12, 24 and 36 months after treatment. The correlations of the patient- and treatment-related characteristics to TTE were investigated using statistical tests. From the treatment scans, we extracted the following MRI image features: first-order statistics, Minkowski functionals, and three-dimensional gray-level co-occurrence matrices (GLCMs). These features were applied in a machine learning environment for classification of TTE, using support vector machines.

RESULTS : In a clinical data set containing 61 patients presenting obvious non-TTE and 38 patients presenting obvious TTE, we determined that patient- and treatment-related characteristics do not show any correlation with TTE. Furthermore, first-order statistical MRI features and Minkowski functionals did not show significant prognostic value using support vector machine classification. However, utilizing a set of 4 GLCM features, we achieved a sensitivity of 0.82 and a specificity of 0.69, showing their prognostic value for TTE. Moreover, these results improved for larger tumor volumes, obtaining a sensitivity of 0.77 and a specificity of 0.89 for tumors larger than 6 cm3.
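
The GLCM texture features used above can be sketched directly in NumPy for a single pixel offset (a simplified 2D illustration; the study uses three-dimensional GLCMs over MRI volumes):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    M = np.zeros((levels, levels))
    H, W = img.shape
    for y in range(H - dy):
        for x in range(W - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            M[i, j] += 1
            M[j, i] += 1  # symmetric counts
    return M / M.sum()

def glcm_features(P):
    """Classic Haralick-style texture features from a normalized GLCM."""
    idx = np.arange(P.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = float(np.sum(P * (i - j) ** 2))
    homogeneity = float(np.sum(P / (1.0 + (i - j) ** 2)))
    energy = float(np.sum(P ** 2))
    return contrast, homogeneity, energy
```

A uniform region gives zero contrast and maximal homogeneity and energy, while heterogeneous tumor texture concentrates mass off the GLCM diagonal; such features then feed a classifier such as an SVM.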

CONCLUSIONS : The results found in this research clearly show that MRI tumor texture provides information that can be employed for predicting TTE. This can form a basis for individual VS treatment selection, further improving overall treatment results. Particularly in patients with large VSs, where the phenomenon of TTE is most relevant and our predictive model performs best, these findings can be implemented in a clinical workflow such that for each patient, the most optimal treatment strategy can be determined.

Langenhuizen Patrick P J H, Sebregts Sander H P, Zinger Svetlana, Leenstra Sieger, Verheul Jeroen B, de With Peter H N

2020-Jan-23

Gamma Knife radiosurgery, MRI tumor texture, pseudoprogression, transient tumor enlargement, vestibular schwannomas

General General

Machine learning-based detection of soil salinity in an arid desert region, Northwest China: A comparison between Landsat-8 OLI and Sentinel-2 MSI.

In The Science of the total environment ; h5-index 0.0

Accurate assessment of soil salinization is considered one of the most important steps in combating global climate change, especially in arid and semi-arid regions. Multi-spectral remote sensing (RS) data, including the Landsat series, provide the potential for frequent surveys of soil salinization at various scales and resolutions. Additionally, the recently launched Sentinel-2 satellite constellation has a temporal revisiting frequency of 5 days, which has been proven to be well suited to assessing soil salinity. Yet, detailed comparisons of soil salinity tracking between Landsat-8 OLI and Sentinel-2 MSI remain limited. For this purpose, we collected a total of 64 topsoil samples in an arid desert region, the Ebinur Lake Wetland National Nature Reserve (ELWNNR), to compare the monitoring accuracy of Landsat-8 OLI and Sentinel-2 MSI. In this study, the Cubist model was trained using RS-derived covariates (spectral bands, Tasseled Cap transformation-derived wetness (TCW), and satellite salinity indices) and laboratory-measured electrical conductivity of 1:5 soil:water extract (EC). The results showed that the measured soil salinity had a significant correlation with surface soil moisture (Pearson's r = 0.75). The introduction of TCW generated satisfactory estimation performance. Compared with the OLI dataset, the combination of the MSI dataset and the Cubist model yielded overall better model performance and accuracy measures (R2 = 0.912, RMSE = 6.462 dS m-1, NRMSE = 9.226%, RPD = 3.400 and RPIQ = 6.824, respectively). The differences between Landsat-8 OLI and Sentinel-2 MSI were distinguishable. In conclusion, the MSI image with finer spatial resolution performed better than OLI. Combining RS data sets and their derived TCW within a Cubist framework yielded an accurate regional salinity map.
The increased temporal revisiting frequency and spectral resolution of MSI data are expected to be positive enhancements to the acquisition of high-quality soil salinity information of desert soils.
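The accuracy measures quoted above (R2, RMSE, NRMSE, RPD, RPIQ) can be reproduced from predicted and measured EC values. A minimal numpy sketch, assuming the conventional definitions of these statistics (the abstract does not spell out its formulas):

```python
import numpy as np

def salinity_metrics(y_true, y_pred):
    """Common chemometric accuracy measures. Assumed definitions:
    NRMSE = 100 * RMSE / mean(y), RPD = SD(y) / RMSE, RPIQ = IQR(y) / RMSE."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    q1, q3 = np.percentile(y_true, [25, 75])
    return {"R2": r2,
            "RMSE": rmse,
            "NRMSE%": 100 * rmse / y_true.mean(),
            "RPD": y_true.std(ddof=1) / rmse,
            "RPIQ": (q3 - q1) / rmse}
```

Higher RPD and RPIQ mean the model error is small relative to the spread of the measured salinity, which is why they complement RMSE in this literature.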

Wang Jingzhe, Ding Jianli, Yu Danlin, Teng Dexiong, He Bin, Chen Xiangyue, Ge Xiangyu, Zhang Zipeng, Wang Yi, Yang Xiaodong, Shi Tiezhu, Su Fenzhen

2020-Mar-10

Cubist, Landsat-8 OLI, Remote sensing, Sentinel-2 MSI, Soil salinization, Surface soil moisture

Radiology Radiology

Feasibility of a sub-3-minute imaging strategy for ungated quiescent interval slice-selective MRA of the extracranial carotid arteries using radial k-space sampling and deep learning-based image processing.

In Magnetic resonance in medicine ; h5-index 66.0

PURPOSE : To develop and test the feasibility of a sub-3-minute imaging strategy for non-contrast evaluation of the extracranial carotid arteries using ungated quiescent interval slice-selective (QISS) MRA, combining single-shot radial sampling with deep neural network-based image processing to optimize image quality.

METHODS : The extracranial carotid arteries of 12 human subjects were imaged at 3 T using ungated QISS MRA. In 7 healthy volunteers, the effects of radial and Cartesian k-space sampling, single-shot and multishot image acquisition (1.1-3.3 seconds/slice, 141-423 seconds/volume), and deep learning-based image processing were evaluated using segmental image quality scoring, arterial temporal SNR, arterial-to-background contrast, apparent contrast-to-noise ratio, and the structural similarity index. Deep learning-based image processing was compared with block-matching and 3D filtering denoising.

RESULTS : Compared with Cartesian sampling, radial k-space sampling increased arterial temporal SNR by 107% (P < .001) and improved image quality during 1-shot imaging (P < .05). The carotid arteries were depicted with similar image quality on the rapid 1-shot and much lengthier 3-shot radial QISS protocols (P = not significant), which was corroborated in patient studies. Deep learning-based image processing outperformed block-matching and 3D filtering denoising in terms of structural similarity index (P < .001). Compared with the original QISS source images, deep learning-based image processing provided 24% and 195% increases in arterial-to-background contrast (P < .001) and apparent contrast-to-noise ratio (P < .001), respectively, and provided source images that were preferred by radiologists (P < .001).

CONCLUSION : Rapid, sub-3-minute evaluation of the extracranial carotid arteries is feasible with ungated single-shot radial QISS, and benefits from the use of deep learning-based image processing to enhance source image quality.
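Arterial temporal SNR, one of the quantitative measures reported above, is conventionally the per-voxel temporal mean divided by the temporal standard deviation, averaged over an arterial region of interest. A hedged numpy sketch under that assumed definition (the study's exact computation may differ):

```python
import numpy as np

def temporal_snr(series, roi_mask):
    """Temporal SNR over an ROI: per-voxel mean/std across the time (shot)
    dimension, averaged within the mask. `series` has shape (T, H, W)."""
    series = np.asarray(series, float)
    mean_t = series.mean(axis=0)
    std_t = series.std(axis=0, ddof=1)
    tsnr = np.where(std_t > 0, mean_t / std_t, 0.0)  # guard static voxels
    return float(tsnr[roi_mask].mean())
```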

Koktzoglou Ioannis, Huang Rong, Ong Archie L, Aouad Pascale J, Aherne Emily A, Edelman Robert R

2020-Jan-23

MRA, QISS, carotid, deep learning, radial

General General

Detecting Positive Selection in Populations Using Genetic Data.

In Methods in molecular biology (Clifton, N.J.) ; h5-index 0.0

High-throughput genomic sequencing makes it possible to disentangle the evolutionary forces acting in populations. Among these forces, positive selection has received a lot of attention because it is related to the adaptation of populations to their environments, both biotic and abiotic. Positive selection, also known as Darwinian selection, occurs when an allele is favored by natural selection. The frequency of the favored allele increases in the population and, due to genetic hitchhiking, neighboring linked variation diminishes, creating so-called selective sweeps. Such a process leaves traces in genomes that can be detected at a later time point. Detecting traces of positive selection in genomes is achieved by searching for the signatures introduced by selective sweeps, such as regions of reduced variation, a specific shift of the site frequency spectrum, and particular linkage disequilibrium (LD) patterns in the region. A variety of approaches can be used to detect selective sweeps, ranging from simple implementations that compute summary statistics to more advanced statistical approaches, e.g., Bayesian methods, maximum-likelihood-based methods, and machine learning methods. In this chapter, we discuss selective sweep detection methodologies on the basis of their capacity to analyze whole genomes or just subgenomic regions, and of the specific polymorphism patterns they exploit as selective sweep signatures. We also summarize the results of comparisons among five open-source software releases (SweeD, SweepFinder, SweepFinder2, OmegaPlus, and RAiSD) regarding sensitivity, specificity, and execution times. Furthermore, we test and discuss machine learning methods and present a thorough performance analysis. Under equilibrium neutral models or mild bottlenecks, most methods are able to detect selective sweeps accurately. Methods and tools that rely on linkage disequilibrium (LD) rather than single SNPs exhibit higher true positive rates than site frequency spectrum (SFS)-based methods under the model of a single sweep or recurrent hitchhiking. However, their false positive rate is elevated when a misspecified demographic model is used to build the distribution of the statistic under the null hypothesis. Both LD- and SFS-based approaches suffer from decreased accuracy in localizing the true target of selection in bottleneck scenarios. Furthermore, we present an extensive analysis of the effects of gene flow on selective sweep detection, a problem that has been understudied in the selective sweep literature.
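One of the sweep signatures named above, the shift of the site frequency spectrum, is computed from allele counts at segregating sites. A minimal sketch (the 0/1 genotype matrix is illustrative, not taken from the chapter):

```python
import numpy as np

def site_frequency_spectrum(genotypes, folded=False):
    """Derived-allele frequency spectrum from a 0/1 genotype matrix of
    shape (n_chromosomes, n_sites); monomorphic sites are dropped."""
    g = np.asarray(genotypes)
    n = g.shape[0]
    counts = g.sum(axis=0)
    seg = counts[(counts > 0) & (counts < n)]   # segregating sites only
    if folded:
        seg = np.minimum(seg, n - seg)          # minor-allele counts
        return np.bincount(seg, minlength=n // 2 + 1)[1:]
    return np.bincount(seg, minlength=n)[1:]    # frequency classes 1 .. n-1
```

A hard sweep skews this spectrum toward an excess of rare and high-frequency derived variants, which is the distortion that SFS-based statistics such as SweepFinder's composite likelihood ratio exploit.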

Koropoulis Angelos, Alachiotis Nikolaos, Pavlidis Pavlos

2020

Machine learning, Positive selection, Selective sweep, Software tools, Summary statistics

Public Health Public Health

[Digital epidemiology].

In Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz ; h5-index 0.0

Digital epidemiology is a new and rapidly growing field. The technological revolution we have been witnessing during the last decade, the global rise of the Internet, the emergence of social media and social networks that connect individuals worldwide for information exchange and social interaction, and the almost complete social penetration of mobile devices such as smartphones provide access to data on individual behavior with unprecedented resolution and precision. In digital epidemiology, this type of high-resolution behavioral data is analyzed to advance our understanding of, for example, infectious disease dynamics, and to improve our ability to forecast epidemic outbreaks and related phenomena. This article provides an overview of the topic and discusses its different aspects. Based on examples, I explain how epidemiological data is integrated on new comprehensive and interactive websites, how the analysis of interactions and activities on social media platforms can yield answers to epidemiological questions, and how individual-based data collected by smartphones or wearable sensors in natural experiments can be used to reconstruct contact and physical proximity networks, knowledge of which substantially improves the predictive power of computational models for transmissible infectious diseases. The challenges posed in terms of privacy protection and data security are discussed, along with concepts and solutions that may help to improve public health by leveraging the new data while at the same time protecting individuals' data sovereignty and personal dignity.

Brockmann Dirk

2020-Jan-23

Artificial intelligence, Big data, Complex networks, Computational epidemiology, Machine learning

Public Health Public Health

Real-time Malaria Parasite Screening in Thick Blood Smears for Low-Resource Setting.

In Journal of digital imaging ; h5-index 0.0

Malaria is a serious public health problem in many parts of the world. Early diagnosis and prompt, effective treatment are required to avoid anemia, organ failure, and malaria-associated deaths. Microscopic analysis of blood samples is the preferred method of diagnosis. However, manual microscopic examination is very laborious and requires skilled health personnel, of whom there is a critical shortage in the developing world, such as sub-Saharan Africa. Critical shortages of trained health personnel and the inability to cope with the workload of examining malaria slides are among the main limitations of malaria microscopy, especially in low-resource, high-disease-burden areas. We present a low-cost alternative and complementary solution for rapid malaria screening in low-resource settings that can potentially reduce the dependence on manual microscopic examination. We develop an image processing pipeline using a modified YOLOv3 detection algorithm that runs in real time on low-cost devices. We test the performance of our solution on two datasets: our model achieved 99.07% accuracy on the dataset collected using a microscope camera and 97.46% accuracy on the dataset collected using a mobile phone camera. While the mean average precision of our model is on par with human experts at the object level, we are several orders of magnitude faster than human experts, as we can detect parasites in images as well as videos in real time.

Chibuta Samson, Acar Aybar C

2020-Jan-23

Deep learning, Low-cost, Malaria parasites, Microscopy, Object detection

Radiology Radiology

Dynamic changes of views on the brain changes of Cushing's syndrome using different computer-assisted tools.

In Reviews in endocrine & metabolic disorders ; h5-index 0.0

Cushing's syndrome (CS) provides a unique model for assessing the neurotoxic effect of chronic hypercortisolism on the human brain. With the ongoing development of different computer-assisted tools, four research stages have emerged, each with its own pearls and pitfalls. This review summarizes current knowledge and describes the dynamic changes of views on the brain changes of CS, especially in the current era of rapid development of artificial intelligence and big data. The adverse effects of glucocorticoids (GC) on the brain have been shown to occur at the structural, functional, and cellular levels simultaneously.

Gao Lu, Liu Lu, Shi Lin, Luo Yishan, Wang Zihao, Guo Xiaopeng, Xing Bing

2020-Jan-23

Artificial intelligence, Brain imaging, Cushing’s disease, Cushing’s syndrome, Transsphenoidal surgery

General General

Uncovering tissue-specific binding features from differential deep learning.

In Nucleic acids research ; h5-index 217.0

Transcription factors (TFs) can bind DNA in a cooperative manner, enabling a mutual increase in occupancy. Through this type of interaction, alternative binding sites can be preferentially bound in different tissues to regulate tissue-specific expression programmes. Recently, deep learning models have become state-of-the-art in various pattern analysis tasks, including applications in the field of genomics. We therefore investigate the application of convolutional neural network (CNN) models to the discovery of sequence features determining cooperative and differential TF binding across tissues. We analyse ChIP-seq data for MEIS TFs, which are broadly expressed across the mouse branchial arches, and for HOXA2, which is expressed in the second and more posterior branchial arches. By developing models predictive of MEIS differential binding in all three tissues, we are able to accurately predict HOXA2 co-binding sites. We evaluate transfer-like and multitask approaches to regularizing the high-dimensional classification task with a larger regression dataset, allowing for the creation of deeper and more accurate models. We test the performance of perturbation and gradient-based attribution methods in identifying the HOXA2 sites from differential MEIS data. Our results show that deep regularized models significantly outperform shallow CNNs as well as k-mer methods in the discovery of tissue-specific sites bound in vivo.

Phuycharoen Mike, Zarrineh Peyman, Bridoux Laure, Amin Shilu, Losa Marta, Chen Ke, Bobola Nicoletta, Rattray Magnus

2020-Jan-24

General General

Likelihood contrasts: a machine learning algorithm for binary classification of longitudinal data.

In Scientific reports ; h5-index 158.0

Machine learning methods have gained increased popularity in biomedical research in recent years. However, very few of them support the analysis of longitudinal data, where several samples are collected from an individual over time. Additionally, most of the available longitudinal machine learning methods assume that the measurements are aligned in time, which is often not the case in real data. Here, we introduce a robust longitudinal machine learning method, named likelihood contrasts (LC), which supports study designs with unaligned time points. Our LC method is a binary classifier that uses linear mixed models for modelling and log-likelihoods for decision making. To demonstrate the benefits of our approach, we compared it with existing methods on four simulated and three real datasets. On each simulated dataset, LC was the most accurate method, while the real datasets further supported the robust performance of the method. LC is also computationally efficient and easy to use.
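The decision rule described above (per-class longitudinal models compared by log-likelihood) can be illustrated with a simplified stand-in: fit one model per class and label a new subject by whichever model gives its trajectory the higher log-likelihood. This sketch replaces the paper's linear mixed model with a fixed-effects linear trend plus Gaussian noise, so it is an approximation of the LC idea, not the published implementation; note that, as in LC, time points need not be aligned across subjects:

```python
import numpy as np

class LikelihoodContrast:
    """Sketch of the likelihood-contrast idea with a per-class linear
    trend: fit() pools each class's (time, value) pairs; predict() returns
    the class whose fitted model best explains a new trajectory."""

    def fit(self, times, values, labels):
        self.models = {}
        for c in set(labels):
            t = np.concatenate([times[i] for i, l in enumerate(labels) if l == c])
            y = np.concatenate([values[i] for i, l in enumerate(labels) if l == c])
            X = np.column_stack([np.ones_like(t), t])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sigma = max(np.sqrt(np.mean((y - X @ beta) ** 2)), 1e-8)
            self.models[c] = (beta, sigma)
        return self

    def predict(self, t, y):
        t, y = np.asarray(t, float), np.asarray(y, float)

        def loglik(c):  # Gaussian log-likelihood, constants dropped
            beta, sigma = self.models[c]
            mu = beta[0] + beta[1] * t
            return -0.5 * np.sum(((y - mu) / sigma) ** 2) - y.size * np.log(sigma)

        return max(self.models, key=loglik)
```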

Klén Riku, Karhunen Markku, Elo Laura L

2020-Jan-23

Ophthalmology Ophthalmology

Semantic Segmentation of the Choroid in Swept Source Optical Coherence Tomography Images for Volumetrics.

In Scientific reports ; h5-index 158.0

The choroid is a complex vascular tissue that is covered by the retinal pigment epithelium. Ultrahigh-speed swept-source optical coherence tomography (SS-OCT) provides high-resolution cube-scan images of the choroid. Robust segmentation techniques are required to reconstruct choroidal volume from SS-OCT images. For automated segmentation, delineation of the choroidal-scleral (C-S) boundary is key to accurate segmentation. Low contrast of the boundary, scleral canals formed by vessels and nerves, and the posterior stromal layer may cause segmentation errors. Semantic segmentation is a deep learning technique that classifies the parts of an image according to the meaning of the subjects they depict. We applied semantic segmentation to choroidal segmentation and measured the volume of the choroid. The measurement results were validated through comparison with those of other segmentation methods. As a result, semantic segmentation was able to segment the C-S boundary and choroidal volume adequately.
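Once the C-S boundary has been segmented in each B-scan, the choroidal volume follows directly from the labeled voxel count. A trivial sketch (the voxel dimensions below are illustrative, not the study's scan parameters):

```python
import numpy as np

def volume_from_segmentation(mask, voxel_size_mm):
    """Volume of a binary 3D segmentation of an OCT cube.
    `mask` is a boolean (slices, rows, cols) array;
    `voxel_size_mm` gives the voxel edge lengths (dz, dy, dx) in mm."""
    voxels = int(np.count_nonzero(mask))
    return voxels * float(np.prod(voxel_size_mm))  # mm^3
```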

Tsuji Shingo, Sekiryu Tetsuju, Sugano Yukinori, Ojima Akira, Kasai Akihito, Okamoto Masahiro, Eifuku Satoshi

2020-Jan-23

General General

Classification and Morphological Analysis of Vector Mosquitoes using Deep Convolutional Neural Networks.

In Scientific reports ; h5-index 158.0

Image-based automatic classification of vector mosquitoes has been investigated for decades for practical applications such as early detection of potential mosquito-borne diseases. However, the classification accuracy of previous approaches has never come close to that of human experts, and good classification performance often requires images of mosquitoes in certain postures with specific body parts visible, such as flatbed wings. Deep convolutional neural networks (DCNNs) are the state-of-the-art approach to extracting visual features and classifying objects, and hence there is great interest in applying DCNNs to the classification of vector mosquitoes from easy-to-acquire images. In this study, we investigated the capability of state-of-the-art deep learning models to classify mosquito species with high inter-species similarity and intra-species variation. Since no off-the-shelf dataset was available that captured the variability of typical field-captured mosquitoes, we constructed a dataset of about 3,600 images of 8 mosquito species in various postures and deformation conditions. To further address data scarcity, we investigated the feasibility of transferring general features learned from a generic dataset to the mosquito classification task. Our results demonstrate that more than 97% classification accuracy can be achieved by fine-tuning general features when proper data augmentation techniques are applied. Further, we analyzed how this high classification accuracy is achieved by visualizing the discriminative regions used by the deep learning models. Our results show that deep learning models exploit morphological features similar to those used by human experts.

Park Junyoung, Kim Dong In, Choi Byoungjo, Kang Woochul, Kwon Hyung Wook

2020-Jan-23

General General

Genome-scale transcriptional dynamics and environmental biosensing.

In Proceedings of the National Academy of Sciences of the United States of America ; h5-index 0.0

Genome-scale technologies have enabled mapping of the complex molecular networks that govern cellular behavior. An emerging theme in the analyses of these networks is that cells use many layers of regulatory feedback to constantly assess and precisely react to their environment. The importance of complex feedback in controlling the real-time response to external stimuli has led to a need for the next generation of cell-based technologies that enable both the collection and analysis of high-throughput temporal data. Toward this end, we have developed a microfluidic platform capable of monitoring temporal gene expression from over 2,000 promoters. By coupling the "Dynomics" platform with deep neural network (DNN) and associated explainable artificial intelligence (XAI) algorithms, we show how machine learning can be harnessed to assess patterns in transcriptional data on a genome scale and identify which genes contribute to these patterns. Furthermore, we demonstrate the utility of the Dynomics platform as a field-deployable real-time biosensor through prediction of the presence of heavy metals in urban water and mine spill samples, based on the dynamic transcription profiles of 1,807 unique Escherichia coli promoters.

Graham Garrett, Csicsery Nicholas, Stasiowski Elizabeth, Thouvenin Gregoire, Mather William H, Ferry Michael, Cookson Scott, Hasty Jeff

2020-Jan-23

E. coli transcriptomics, biosensor, dynamics, explainable AI, high-throughput microfluidics

Oncology Oncology

Image-Based Network Analysis of DNp73 Expression by Immunohistochemistry in Rectal Cancer Patients.

In Frontiers in physiology ; h5-index 0.0

Background: Rectal cancer is a disease characterized by tumor heterogeneity. The combination of surgery, radiotherapy, and chemotherapy can reduce the risk of local recurrence. However, there is a significant difference in the response to radiotherapy among rectal cancer patients, even those with the same tumor stage. Despite rapid advances in knowledge of the cellular functions affecting radiosensitivity, there is still a lack of predictive factors for local recurrence and normal tissue damage. The tumor protein DNp73 is thought to be a biomarker in colorectal cancer, but its clinical significance has not been sufficiently investigated, mainly due to the limitations of human-based pathology analysis. In this study, we investigated the predictive value of DNp73 in patients with rectal adenocarcinoma using image-based network analysis. Methods: The fuzzy weighted recurrence network of time series was extended to handle multi-channel image data and applied to the analysis of immunohistochemistry images of DNp73 expression obtained from a cohort of 25 rectal cancer patients who underwent radiotherapy before surgery. Two mathematical weighted network properties, the clustering coefficient and the characteristic path length, were computed for the image-based networks of the primary tumor (obtained after the operation) and the biopsy (obtained before the operation) of each cancer patient. Results: The ratios of the two weighted recurrence network properties of the primary tumors to those of the biopsies reveal a correlation between DNp73 expression and long survival time, and identify non-effective radiotherapy in a subset of rectal cancer patients who had short survival times. Conclusion: Our work contributes to the elucidation of the predictive value of DNp73 expression in rectal cancer patients given preoperative radiotherapy. Mathematical properties of fuzzy weighted recurrence networks of immunohistochemistry images not only demonstrate the predictive value of DNp73 expression in these patients, but also identify the non-effective application of radiotherapy in those who had poor overall survival outcomes.
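The two network properties named in the Methods, clustering coefficient and characteristic path length, generalize to weighted graphs. A sketch using common weighted definitions (Onnela-style clustering and shortest paths over 1/weight edge lengths); the paper's fuzzy recurrence network construction itself is not reproduced here, and these particular weighted formulas are an assumption:

```python
import numpy as np

def weighted_network_properties(W):
    """Weighted clustering coefficient (Onnela et al.) and characteristic
    path length for a symmetric weighted adjacency matrix W (zero diagonal)."""
    W = np.asarray(W, float)
    n = W.shape[0]
    # Clustering: geometric-mean triangle intensity per node.
    cbrt = np.cbrt(W / W.max())
    tri = np.diag(cbrt @ cbrt @ cbrt)          # sum over triangles (x2)
    k = (W > 0).sum(axis=1)                    # node degree
    denom = k * (k - 1)
    C = np.where(denom > 0, tri / np.where(denom > 0, denom, 1), 0.0).mean()
    # Path length: Floyd-Warshall on 1/weight distances.
    D = np.where(W > 0, 1.0 / np.where(W > 0, W, 1), np.inf)
    np.fill_diagonal(D, 0.0)
    for m in range(n):
        D = np.minimum(D, D[:, m:m + 1] + D[m:m + 1, :])
    L = D[np.triu_indices(n, 1)].mean()        # mean over node pairs
    return C, L
```

The study's statistic is then the ratio of each property computed on the primary-tumor image network to that of the biopsy network for the same patient.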

Pham Tuan D, Fan Chuanwen, Pfeifer Daniella, Zhang Hong, Sun Xiao-Feng

2019

DNp73, fuzzy weighted recurrence networks, immunohistochemistry, multi-channel images, network properties, predictive biomarker, rectal cancer, survival outcome

General General

A Multi-Omics Interpretable Machine Learning Model Reveals Modes of Action of Small Molecules.

In Scientific reports ; h5-index 158.0

High-throughput screening and gene signature analyses frequently identify lead therapeutic compounds with unknown modes of action (MoAs), and the resulting uncertainties can lead to the failure of clinical trials. We developed an approach for uncovering MoAs through an interpretable machine learning model of transcriptomics, epigenomics, metabolomics, and proteomics. Examining compounds with beneficial effects in models of Huntington's Disease, we found common MoAs for compounds with unrelated structures, connectivity scores, and binding targets. The approach also predicted highly divergent MoAs for two FDA-approved antihistamines. We experimentally validated these effects, demonstrating that one antihistamine activates autophagy, while the other targets bioenergetics. The use of multiple omics was essential, as some MoAs were virtually undetectable in specific assays. Our approach does not require reference compounds or large databases of experimental data in related systems and thus can be applied to the study of agents with uncharacterized MoAs and to rare or understudied diseases.

Patel-Murray Natasha L, Adam Miriam, Huynh Nhan, Wassie Brook T, Milani Pamela, Fraenkel Ernest

2020-Jan-22

Radiology Radiology

A Novel Deep Learning Approach with a 3D Convolutional Ladder Network for Differential Diagnosis of Idiopathic Normal Pressure Hydrocephalus and Alzheimer's Disease.

In Magnetic resonance in medical sciences : MRMS : an official journal of Japan Society of Magnetic Resonance in Medicine ; h5-index 0.0

PURPOSE : Idiopathic normal pressure hydrocephalus (iNPH) and Alzheimer's disease (AD) are geriatric diseases and common causes of dementia. Recently, many studies have applied deep learning to MRI segmentation, disease detection, and classification. The aim of this study was to differentiate iNPH and AD using a residual extraction approach within a deep learning method.

METHODS : Twenty-three patients with iNPH, 23 patients with AD and 23 healthy controls were included in this study. All patients and volunteers underwent brain MRI with a 3T unit, and we used only whole-brain three-dimensional (3D) T1-weighted images. We designed a fully automated, end-to-end 3D deep learning classifier to differentiate iNPH, AD and control. We evaluated the performance of our model using a leave-one-out cross-validation test. We also evaluated the validity of the result by visualizing important areas in the process of differentiating AD and iNPH on the original input image using the Gradient-weighted Class Activation Mapping (Grad-CAM) technique.

RESULTS : Twenty-one out of 23 iNPH cases, 19 out of 23 AD cases and 22 out of 23 controls were correctly diagnosed. The accuracy was 0.90. In the Grad-CAM heat map, brain parenchyma surrounding the lateral ventricle was highlighted in about half of the iNPH cases that were successfully diagnosed. The medial temporal lobe or inferior horn of the lateral ventricle was highlighted in many successfully diagnosed cases of AD. About half of the successful cases showed nonspecific heat maps.

CONCLUSIONS : A residual extraction approach in a deep learning method achieved high accuracy for the differential diagnosis of iNPH, AD, and healthy controls despite being trained with a small number of cases.
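The leave-one-out protocol in the Methods can be sketched generically: hold out each case, train on the rest, and score the held-out prediction. The `nearest_centroid` rule below is a toy stand-in classifier for illustration, not the study's 3D convolutional ladder network:

```python
import numpy as np

def leave_one_out_accuracy(X, y, classify):
    """Leave-one-out cross-validation over len(y) cases.
    `classify(Xtr, ytr, x)` must return a predicted label for x."""
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i          # drop case i from training
        correct += int(classify(X[keep], y[keep], X[i]) == y[i])
    return correct / len(y)

def nearest_centroid(Xtr, ytr, x):
    """Toy stand-in classifier: label of the closest class centroid."""
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(cents - x, axis=1))]
```

The study's 0.90 accuracy is exactly this quantity with the CNN as `classify`: 62 of the 69 held-out cases (21 iNPH + 19 AD + 22 controls) classified correctly.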

Irie Ryusuke, Otsuka Yujiro, Hagiwara Akifumi, Kamagata Koji, Kamiya Kouhei, Suzuki Michimasa, Wada Akihiko, Maekawa Tomoko, Fujita Shohei, Kato Shimpei, Nakajima Madoka, Miyajima Masakazu, Motoi Yumiko, Abe Osamu, Aoki Shigeki

2020-Jan-22

Alzheimer’s disease, artificial intelligence, computer-aided diagnosis, deep learning, idiopathic normal pressure hydrocephalus

General General

Extracellular domains I-II of cell-surface glycoprotein CD44 mediate its trans-homophilic dimerization and tumor cluster aggregation.

In The Journal of biological chemistry ; h5-index 0.0

CD44 molecule (CD44) is a well-known surface glycoprotein on tumor-initiating cells or cancer stem cells. However, its utility as a therapeutic target for managing metastases remains to be evaluated. We previously demonstrated that CD44 mediates homophilic interactions for circulating tumor cell (CTC) cluster formation, which enhances cancer stemness and metastatic potential in association with an unfavorable prognosis. Furthermore, CD44 self-interactions activate the P21-activated kinase 2 (PAK2) signaling pathway. Here, we further examined the biochemical properties of CD44 in homotypic tumor cell aggregation. The standard CD44 form (CD44s) mainly assembled as intercellular homodimers (trans-dimers) in tumor clusters rather than the intracellular dimers (cis-dimers) present in single cells. Machine learning-based computational modeling combined with experimental mutagenesis tests revealed that the extracellular domains I and II of CD44 are essential for its trans-dimerization and predicted high-score residues required for dimerization. Substitutions of 10 residues in domain I (Ser-45, Glu-48, Phe-74, Cys-77, Arg-78, Tyr-79, Ile-88, Arg-90, Asn-94, and Cys-97) or 5 residues in domain II (Ile-106, Tyr-155, Val-156, Gln-157, and Lys-158) abolished CD44 dimerization and reduced tumor cell aggregation in vitro. Importantly, the substitutions in domain II dramatically inhibited lung colonization in mice. The CD44 dimer-disrupting substitutions decreased downstream PAK2 activation without affecting the interaction between CD44 and PAK2, suggesting that PAK2 activation in tumor cell clusters is CD44 trans-dimer-dependent. These results shed critical light on the biochemical mechanisms of CD44-mediated tumor cell cluster formation and may help inform the development of therapeutic strategies to prevent tumor cluster formation and block cluster-mediated metastases.

Kawaguchi Madoka, Dashzeveg Nurmaa, Cao Yue, Jia Yuzhi, Liu Xia, Shen Yang, Liu Huiping

2020-Jan-22

CD44, metastasis, mutagenesis, protein domain, protein-protein interaction

Public Health Public Health

Purpose in life is a robust protective factor of reported cognitive decline among late middle-aged adults: The Emory Healthy Aging Study.

In Journal of affective disorders ; h5-index 79.0

BACKGROUND : Cognitive abilities tend to decline in advanced age. A novel protective factor against cognitive decline in advanced age is purpose in life (PiL), a trait-like tendency to derive meaning and purpose from life. However, whether PiL protects against cognitive decline in late middle age is unclear. Hence, we examined the association between PiL and perceived cognitive decline, one of the earliest detectable cognitive symptoms before the onset of cognitive impairment. Furthermore, we used a machine learning approach to investigate whether PiL is a robust predictor of cognitive decline when considered alongside the known protective and risk factors for cognition.

METHODS : PiL was assessed with a 10-item questionnaire and perceived cognitive decline with the Cognitive Function Instrument among 5,441 Emory Healthy Aging Study participants, whose mean age was 63 years and of whom 51% were employed. The association between PiL and perceived cognitive decline was examined with linear regression adjusting for relevant confounding factors. Elastic Net regression was performed to identify the most robust predictors of cognitive decline.

RESULTS : Greater PiL was associated with less perceived cognitive decline after adjusting for the relevant factors. Furthermore, Elastic Net modeling suggested that PiL is a robust predictor of cognitive decline when considered simultaneously with known protective (education, exercise, enrichment activities) and risk factors for cognition (depression, anxiety, diagnosed medical, mental health problems, smoking, alcohol use, family history of dementia, and others).

LIMITATION : This is a cross-sectional study.

CONCLUSIONS : PiL is a robust protective factor against perceived cognitive decline observed as early as middle age. Thus, interventions to enhance PiL merit further investigation.
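Elastic Net, used above to rank predictors, combines an L1 penalty (which zeroes out uninformative variables) with an L2 penalty (which lets correlated predictors enter jointly). A minimal proximal-gradient sketch; the hyperparameters and parameterization below are illustrative, not those of the study:

```python
import numpy as np

def elastic_net(X, y, alpha=0.5, l1_ratio=0.9, lr=0.01, n_iter=5000):
    """Fit w minimizing (1/2n)||Xw - y||^2 + l1*||w||_1 + (l2/2)*||w||^2,
    with l1 = alpha*l1_ratio and l2 = alpha*(1 - l1_ratio), by proximal
    gradient descent (soft-thresholding handles the L1 term)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    l1, l2 = alpha * l1_ratio, alpha * (1 - l1_ratio)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + l2 * w                  # smooth part
        w = w - lr * grad                                      # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # prox (L1)
    return w
```

Predictors whose coefficients survive the shrinkage, as PiL did here, are the "robust" ones in the study's sense.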

Wingo Aliza P, Wingo Thomas S, Fan Wen, Bergquist Sharon, Alonso Alvaro, Marcus Michele, Levey Allan I, Lah James J

2019-Nov-30

General General

A curated benchmark of enhancer-gene interactions for evaluating enhancer-target gene prediction methods.

In Genome biology ; h5-index 114.0

BACKGROUND : Many genome-wide collections of candidate cis-regulatory elements (cCREs) have been defined using genomic and epigenomic data, but it remains a major challenge to connect these elements to their target genes.

RESULTS : To facilitate the development of computational methods for predicting target genes, we develop a Benchmark of candidate Enhancer-Gene Interactions (BENGI) by integrating the recently developed Registry of cCREs with experimentally derived genomic interactions. We use BENGI to test several published computational methods for linking enhancers with genes, including signal correlation and the TargetFinder and PEP supervised learning methods. We find that while TargetFinder is the best-performing method, it is only modestly better than a baseline distance method for most benchmark datasets when trained and tested with the same cell type and that TargetFinder often does not outperform the distance method when applied across cell types.

CONCLUSIONS : Our results suggest that current computational methods need to be improved and that BENGI presents a useful framework for method development and testing.
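The baseline distance method referred to above can be stated in a few lines: score each candidate gene by the proximity of its transcription start site to the enhancer. A sketch with hypothetical coordinates (the exact scoring transform used in BENGI is an assumption here):

```python
def distance_baseline(enhancer_mid, tss_by_gene):
    """Rank candidate target genes of an enhancer by genomic distance:
    score = inverse distance from the enhancer midpoint to each TSS.
    Returns the top-ranked gene and the full score dict."""
    scored = {gene: 1.0 / (1 + abs(enhancer_mid - tss))
              for gene, tss in tss_by_gene.items()}
    best = max(scored, key=scored.get)
    return best, scored
```

That such a simple rule is competitive with supervised methods like TargetFinder in cross-cell-type settings is the paper's central cautionary finding.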

Moore Jill E, Pratt Henry E, Purcaro Michael J, Weng Zhiping

2020-Jan-22

Benchmark, Enhancer, Genomic interactions, Machine learning, Target gene, Transcriptional regulation

General General

Correction to: Effective machine-learning assembly for next-generation amplicon sequencing with very low coverage.

In BMC bioinformatics ; h5-index 0.0

Following publication of the original article [1], the author reported that there are several errors in the original article.

Ranjard Louis, Wong Thomas K F, Rodrigo Allen G

2020-Jan-22

General General

Volleyball-Specific Skills and Cognitive Functions Can Discriminate Players of Different Competitive Levels.

In Journal of strength and conditioning research ; h5-index 0.0

Formenti, D, Trecroci, A, Duca, M, Vanoni, M, Ciovati, M, Rossi, A, and Alberti, G. Volleyball-specific skills and cognitive functions can discriminate players of different competitive levels. J Strength Cond Res XX(X): 000-000, 2020-The aim of this study was to investigate whether volleyball-specific skills, physical performance, and general cognitive functions differ between players of different competitive levels. Twenty-six female volleyball players competing at 2 different levels (n = 13, regional; n = 13, provincial) were tested on volleyball-specific skills (accuracy and technique of setting, passing, spiking, and serving), change-of-direction speed (COD) by the modified T-test, countermovement jump (CMJ), and general cognitive functions (executive control by the Flanker task and perceptual speed by a visual search task). Four machine learning models were tested to identify the one that best predicts players' level. Regional players presented higher passing, spiking, and serving accuracy (p < 0.05) and better setting, passing, spiking, and serving technique (p < 0.05) than provincial players. Regional players also performed better in the COD and CMJ tests than provincial players (p < 0.05). Regional players presented lower response times than provincial players in both the congruent and incongruent conditions of the Flanker task, and in both the 10-item and 15-item conditions of the visual search task (p < 0.05). The decision tree classifier was the machine learning model that best discriminated regional from provincial players (93% precision and 73% recall), based on passing technique, the congruent and incongruent conditions of the Flanker task, the 15-item and 10-item conditions of the visual search task, and spiking technique. These findings demonstrate the importance of assessing volleyball-specific skills and cognitive functions, as they play a role in discriminating between players of different competitive levels.
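The reported 93% precision and 73% recall follow from the usual confusion-matrix definitions, which can be computed in a few lines (the label strings below are illustrative):

```python
def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one class: precision = TP / (TP + FP),
    recall = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

High precision with lower recall, as reported here, means the tree rarely mislabels a provincial player as regional but misses some true regional players.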

Formenti Damiano, Trecroci Athos, Duca Marco, Vanoni Marta, Ciovati Miriam, Rossi Alessio, Alberti Giampietro

2020-Jan-16

Surgery Surgery

Automatic detection of perforators for microsurgical reconstruction.

In Breast (Edinburgh, Scotland) ; h5-index 0.0

The deep inferior epigastric perforator (DIEP) is the most commonly used free flap in mastectomy reconstruction. Preoperative imaging techniques are routinely used to detect the location, diameter, and course of perforators, with direct intervention from the imaging team, who subsequently draw a chart that helps surgeons choose the best vascular support for the reconstruction. In this work, the feasibility of using computer software to support the preoperative planning of 40 patients proposed for breast reconstruction with a DIEP flap is evaluated for the first time. Blood vessel centreline extraction and local characterization algorithms are applied to identify perforators and compared with the manual mapping, aiming to reduce the time spent by the imaging team, as well as the subjectivity inherent to the task. Compared with the measures taken during surgery, the software calibre estimates were worse for vessels smaller than 1.5 mm (P = 6e-4) but better for the remaining ones (P = 2e-3). Regarding vessel location, the vertical component of the software output was significantly different from the manual measure (P = 0.02); nonetheless, this was irrelevant during surgery, as errors in the order of 2-3 mm have no impact on the dissection step. Our trials support that a reduction of the time spent is achievable using the automatic tool (about 2 h/case). The introduction of artificial intelligence in clinical practice intends to simplify the work of health professionals and to provide better outcomes to patients. This pilot study paves the way for a success story.

Mavioso Carlos, Araújo Ricardo J, Oliveira Hélder P, Anacleto João C, Vasconcelos Maria Antónia, Pinto David, Gouveia Pedro F, Alves Celeste, Cardoso Fátima, Cardoso Jaime S, Cardoso Maria João

2020-Jan-12

Automatic detection, Computer vision, DIEP, Flap, Image analysis, Microsurgery, Perforators, Pre-operative mapping

General General

Sleep heart rate variability assists the automatic prediction of long-term cardiovascular outcomes.

In Sleep medicine ; h5-index 0.0

OBJECTIVE : We aimed to investigate the association between sleep HRV and long-term cardiovascular disease (CVD) outcomes, and further explore whether HRV features can assist the automatic CVD prediction.

METHODS : We retrospectively analyzed polysomnography (PSG) data obtained from 2111 participants in the Sleep Heart Health Study, who were followed up for a median of 11.8 years after PSG acquisition. During follow-up, 1252 participants suffered CVD events (CVD group) and 859 participants remained CVD-free (non-CVD group). HRV measures, derived from time-domain and frequency-domain, were calculated. Regression models were created to determine the independent predictor for long-term CVD outcomes, and to explore the association between HRV and CVD latency. Furthermore, based on HRV and other clinical features, a model was trained to automatically predict CVD outcomes using the eXtreme Gradient Boosting algorithm.
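The abstract does not list the exact time-domain HRV measures used; two standard ones, SDNN and RMSSD, can be computed from RR intervals as in this sketch (RR values invented):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR (NN) intervals, a time-domain HRV measure."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 831, 798, 820]  # RR intervals in ms (made-up values)
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```

Frequency-domain measures such as the HF component highlighted in the results would additionally require spectral estimation of the RR series.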

RESULTS : Compared with the non-CVD group, decreased HRV during sleep was found in the CVD group. HRV, particularly its high-frequency (HF) component, was demonstrated to be an independent predictor of CVD outcomes. Moreover, normalized HF was positively correlated with CVD latency. The proposed prediction model achieved a total accuracy of 75.3%, in which sleep HRV features served as a supplement to well-recognized CVD risk factors such as aging, adiposity, and sleep disorders.

CONCLUSIONS : An association between sleep HRV and long-term CVD outcomes was demonstrated here, suggesting that altered HRV during sleep might occur many years prior to the onset of CVD. Machine learning models combining sleep HRV and other clinical characteristics are promising for the early prediction of CVD outcomes.

Zhang Lulu, Wu Huili, Zhang Xiangyu, Wei Xinfa, Hou Fengzhen, Ma Yan

2019-Dec-16

Cardiovascular diseases, Heart rate variability, Machine learning, Sleep

General General

Can machine learning account for human visual object shape similarity judgments?

In Vision research ; h5-index 38.0

We describe and analyze the performance of metric learning systems, including deep neural networks (DNNs), on a new dataset of human visual object shape similarity judgments of naturalistic, part-based objects known as "Fribbles". In contrast to previous studies which asked participants to judge similarity when objects or scenes were rendered from a single viewpoint, we rendered Fribbles from multiple viewpoints and asked participants to judge shape similarity in a viewpoint-invariant manner. Metrics trained using pixel-based or DNN-based representations fail to explain our experimental data, but a metric trained with a viewpoint-invariant, part-based representation produces a good fit. We also find that although neural networks can learn to extract the part-based representation (and therefore should be capable of learning to model our data), networks trained with a "triplet loss" function based on similarity judgments do not perform well. We analyze this failure, providing a mathematical description of the relationship between the metric learning objective function and the triplet loss function. The poor performance of neural networks appears to be due to the nonconvexity of the optimization problem in network weight space. We conclude that viewpoint insensitivity is a critical aspect of human visual shape perception, and that neural network and other machine learning methods will need to learn viewpoint-insensitive representations in order to account for people's visual object shape similarity judgments.
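The "triplet loss" referred to above is conventionally the hinge form max(0, d(a, p) - d(a, n) + margin), pulling an anchor toward a similar ("positive") item and away from a dissimilar ("negative") one. A minimal pure-Python sketch (margin value hypothetical):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-form triplet loss: zero once the positive is closer to the
    anchor than the negative by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

a, p, n = (0.0, 0.0), (0.1, 0.0), (1.0, 0.0)
print(triplet_loss(a, p, n))  # positive is much closer than negative, so loss is 0.0
```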

German Joseph Scott, Jacobs Robert A

2020-Jan-20

General General

Improving cardiac MRI convolutional neural network segmentation on small training datasets and dataset shift: A continuous kernel cut approach.

In Medical image analysis ; h5-index 0.0

Cardiac magnetic resonance imaging (MRI) provides a wealth of imaging biomarkers for cardiovascular disease care and segmentation of cardiac structures is required as a first step in enumerating these biomarkers. Deep convolutional neural networks (CNNs) have demonstrated remarkable success in image segmentation but typically require large training datasets and provide suboptimal results that require further improvements. Here, we developed a way to enhance cardiac MRI multi-class segmentation by combining the strengths of CNN and interpretable machine learning algorithms. We developed a continuous kernel cut segmentation algorithm by integrating normalized cuts and continuous regularization in a unified framework. The high-order formulation was solved through upper bound relaxation and a continuous max-flow algorithm in an iterative manner using CNN predictions as inputs. We applied our approach to two representative cardiac MRI datasets across a wide range of cardiovascular pathologies. We comprehensively evaluated the performance of our approach for two CNNs trained with various small numbers of training cases, tested on the same and different datasets. Experimental results showed that our approach improved baseline CNN segmentation by a large margin, reduced CNN segmentation variability substantially, and achieved excellent segmentation accuracy with minimal extra computational cost. These results suggest that our approach provides a way to enhance the applicability of CNN by enabling the use of smaller training datasets and improving the segmentation accuracy and reproducibility for cardiac MRI segmentation in research and clinical patient care.

Guo Fumin, Ng Matthew, Goubran Maged, Petersen Steffen E, Piechnik Stefan K, Neubauer Stefan, Wright Graham

2020-Jan-11

Cardiac MRI segmentation, Continuous max-flow, Convex optimization, Normalized cuts

General General

Estimation of high frequency nutrient concentrations from water quality surrogates using machine learning methods.

In Water research ; h5-index 0.0

Continuous high-frequency water quality monitoring is becoming a critical task to support water management. Despite the advancements in sensor technologies, certain variables cannot be easily and/or economically monitored in situ and in real time. In these cases, surrogate measures can be used to make estimations by means of data-driven models. In this work, variables that are commonly measured in situ are used as surrogates to estimate the concentrations of nutrients in a rural catchment and in an urban one, making use of machine learning models, specifically Random Forests. The results are compared with those of linear modelling using the same number of surrogates, obtaining a reduction in the Root Mean Squared Error (RMSE) of up to 60.1%. The benefit of including up to seven surrogate sensors was computed, concluding that adding more than 4 and 5 sensors in the respective catchments was not worthwhile in terms of error improvement.
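The RMSE comparison between the Random Forest and linear estimates reduces to a simple computation. A sketch with invented observation/estimate values (not the study's data):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error between observations and estimates."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def pct_reduction(baseline, improved):
    """Relative RMSE reduction of the improved model versus the baseline, in %."""
    return 100.0 * (baseline - improved) / baseline

obs        = [1.0, 2.0, 3.0, 4.0]   # hypothetical nutrient concentrations
linear_est = [1.5, 2.5, 2.5, 3.5]   # hypothetical linear-model estimates
rf_est     = [1.1, 2.1, 2.9, 3.9]   # hypothetical Random Forest estimates
print(round(pct_reduction(rmse(obs, linear_est), rmse(obs, rf_est)), 1))  # → 80.0
```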

Castrillo María, García Álvaro López

2020-Jan-11

Machine learning, Random forests, Soft-sensors, Surrogate parameters, Water monitoring, Water quality

Dermatology Dermatology

Melanoma recognition by a deep learning convolutional neural network-Performance in different melanoma subtypes and localisations.

In European journal of cancer (Oxford, England : 1990) ; h5-index 0.0

BACKGROUND : Deep learning convolutional neural networks (CNNs) show great potential for melanoma diagnosis. Melanoma thickness at diagnosis depends, among other factors, on melanoma localisation and subtype (e.g. advanced thickness in acrolentiginous or nodular melanomas). The question of whether CNNs may counterbalance physicians' diagnostic difficulties in these melanomas has not been addressed. We aimed to investigate the diagnostic performance of a CNN with approval for the European market across different melanoma localisations and subtypes.

METHODS : The current market version of a CNN (Moleanalyzer-Pro®, FotoFinder Systems GmbH, Bad Birnbach, Germany) was used for classifications (malignant/benign) in six dermoscopic image sets. Each set included 30 melanomas and 100 benign lesions of related localisations and morphology (set-SSM: superficial spreading melanomas and macular nevi; set-LMM: lentigo maligna melanomas and facial solar lentigines/seborrhoeic keratoses/nevi; set-NM: nodular melanomas and papillomatous/dermal/blue nevi; set-Mucosa: mucosal melanomas and mucosal melanoses/macules/nevi; set-AMskin: acrolentiginous melanomas and acral (congenital) nevi; set-AMnail: subungual melanomas and subungual (congenital) nevi/lentigines/ethnical type pigmentations).

RESULTS : The CNN showed a high-level performance in set-SSM, set-NM and set-LMM (sensitivities >93.3%, specificities >65%, receiver operating characteristics-area under the curve [ROC-AUC] >0.926). In set-AMskin, the sensitivity was lower (83.3%) at a high specificity (91.0%) and ROC-AUC (0.928). A limited performance was found in set-Mucosa (sensitivity 93.3%, specificity 38.0%, ROC-AUC 0.754) and set-AMnail (sensitivity 53.3%, specificity 68.0%, ROC-AUC 0.621).

CONCLUSIONS : The CNN may help to partly counterbalance reduced human accuracies. However, physicians need to be aware of the CNN's limited diagnostic performance in mucosal and subungual lesions. Improvements may be expected from additional training images of mucosal and subungual sites.

Winkler Julia K, Sies Katharina, Fink Christine, Toberer Ferdinand, Enk Alexander, Deinlein Teresa, Hofmann-Wellenhof Rainer, Thomas Luc, Lallas Aimilios, Blum Andreas, Stolz Wilhelm, Abassi Mohamed S, Fuchs Tobias, Rosenberger Albert, Haenssle Holger A

2020-Jan-20

Convolutional neural network, Deep learning, Dermoscopy, Melanoma, Nevi

General General

MMPdb and MitoPredictor: tools for facilitating comparative analysis of animal mitochondrial proteomes.

In Mitochondrion ; h5-index 0.0

Comparative analysis of animal mitochondrial proteomes faces two challenges: the scattering of data on experimentally-characterized animal mitochondrial proteomes across several databases, and the lack of data on mitochondrial proteomes from the majority of metazoan lineages. In this study, we developed two resources to address these challenges: (1) the Metazoan Mitochondrial Proteome Database (MMPdb), which consolidates data on experimentally-characterized mitochondrial proteomes of vertebrate and invertebrate model organisms, and (2) MitoPredictor, a novel machine-learning tool for prediction of mitochondrial proteins in animals. MMPdb allows comparative analysis of animal mitochondrial proteomes by integrating results from orthology analysis, prediction of mitochondrial targeting signals, protein domain analysis, and Gene Ontology analysis. Additionally, for mammalian mitochondrial proteins, MMPdb includes experimental evidence of localization from MitoMiner and the Human Protein Atlas. MMPdb is publicly available at https://mmpdb.eeob.iastate.edu/. MitoPredictor is a Random Forest classifier which uses orthology, mitochondrial targeting signal prediction and protein domain content to predict mitochondrial proteins in animals.

Muthye Viraj, Kandoi Gaurav, Lavrov Dennis

2020-Jan-20

Database, Machine learning, Mitochondria, Proteome, Random Forest

Public Health Public Health

A reliable time-series method for predicting arthritic disease outcomes: New step from regression toward a nonlinear artificial intelligence method.

In Computer methods and programs in biomedicine ; h5-index 0.0

BACKGROUND AND OBJECTIVE : The interrupted time-series (ITS) concept is performed using linear regression to evaluate the impact of policy changes in public health at a specific time. The objectives of this study were to verify, with an artificial intelligence-based nonlinear approach, whether the estimation of ITS data could be facilitated, and to provide a computationally explicit equation.

METHODS : The dataset was from a study by Hawley et al. (2018), in which they evaluated the impact of UK National Institute for Health and Care Excellence (NICE) approval of tumor necrosis factor inhibitor therapies on the incidence of total hip (THR) and knee (TKR) replacement in rheumatoid arthritis patients. We used the newly developed Generalized Structure Group Method of Data Handling (GS-GMDH) model, a nonlinear method, for the prediction of THR and TKR incidence in the abovementioned population.

RESULTS : In contrast to linear regression, the GS-GMDH yields prediction values for both THR and TKR that closely fit the measured ones. These models demonstrated a low mean absolute relative error (0.10 and 0.09, respectively) and high correlation coefficient values (0.98 and 0.78). The GS-GMDH model for THR demonstrated 6.4/1000 person years (PYs) at the mid-point of the linear regression line post-NICE, whereas at the same point the linear regression value is 4.12/1000 PYs, a difference of around 35%. Similarly for the TKR, the linear regression fit to the post-NICE datasets was 9.05/1000 PYs, about 27% lower than the GS-GMDH value of 12.47/1000 PYs. Importantly, with the GS-GMDH models, there is no need to identify the change point and intervention lag time, as they simulate ITS continually throughout modelling.
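The relative differences quoted in the results can be checked with a one-line computation:

```python
def rel_diff(a, b):
    """Relative difference of b versus a, as a percentage of a."""
    return 100.0 * (a - b) / a

# THR: GS-GMDH 6.4/1000 PYs vs linear regression 4.12/1000 PYs
print(round(rel_diff(6.4, 4.12)))    # ≈ 36%, i.e. "around 35%"
# TKR: linear regression 9.05/1000 PYs vs GS-GMDH 12.47/1000 PYs
print(round(rel_diff(12.47, 9.05)))  # ≈ 27%
```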

CONCLUSIONS : The results demonstrate that in the medical field, when estimating the impact of a new drug using ITS, a nonlinear GS-GMDH method can be used as a better alternative to regression-based data processing methods. In addition to yielding more accurate predictions and requiring less time-consuming experimental measurements, this nonlinear method addresses, for the first time, one of the most challenging tasks in ITS modelling, i.e. avoiding the need to identify the change point and intervention lag time.

Bonakdari Hossein, Pelletier Jean-Pierre, Martel-Pelletier Johanne

2020-Jan-09

Artificial intelligence, GS-GMDH, Hip replacement, Interrupted time-series, Knee replacement, Rheumatoid arthritis

General General

The association between hospital ownership and postoperative complications: Does it matter who owns the hospital?

In Health informatics journal ; h5-index 25.0

Postoperative complications place a major burden on healthcare systems. The type of hospital ownership could be one factor associated with this adverse outcome. Using CMS's publicly available "Complications and Deaths-Hospitals" and "Hospital General Information" datasets, we analyzed the association between four postoperative complications (venous thromboembolism, joint replacement complications, wound dehiscence, postoperative sepsis) and hospital ownership. These data were collected by Medicare between April 2013 and March 2016. We found a significant association (p = 0.029) between ownership types and the postoperative complication score. A 6-percent drop in the share of not-for-profit ownership, accompanied by a 3-percent increase in each of government and for-profit ownership, was associated with a 20-percent drop in postoperative complication scores (from 5.75 to 4.6). There is an association between hospital ownership type and postoperative complications. Creating this awareness in leadership should prompt the redesign of hospital operations and workflows to make them more compatible with safe and effective care delivery.

Atala Robby, Kroth Philip J

2020-Jan-23

for-profit, government hospitals, not-for-profit, postoperative complications, unsupervised machine learning

Dermatology Dermatology

Characterizing the Role of Dermatologists in Developing AI for Assessment of Skin Cancer: A Systematic Review.

In Journal of the American Academy of Dermatology ; h5-index 79.0

BACKGROUND : The use of artificial intelligence (AI) for skin cancer assessment has been an emerging topic in dermatology. Leadership of dermatologists is necessary in defining how these technologies fit into clinical practice.

OBJECTIVE : To characterize the evolution of AI in skin cancer assessment and characterize the involvement of dermatologists in developing these technologies.

METHODS : An electronic literature search was performed using PubMed searching machine learning or artificial intelligence combined with skin cancer or melanoma. Articles were included if they used AI for screening and diagnosis of skin cancer using datasets consisting of dermatoscopic images or photographs of gross lesions.

RESULTS : Fifty-one articles were included, of which 41% had dermatologists as authors. Manuscripts that included dermatologists as authors described algorithms built using more images (mean 12,111 vs 660). In terms of underlying technology, AI used for skin cancer assessment has followed trends in the field of image recognition.

LIMITATIONS : This review focused on models described in the medical literature and did not account for those described elsewhere.

CONCLUSIONS : Greater involvement of dermatologists is needed in thinking through issues in data collection, dataset biases, and applications of technology. Dermatologists can provide access to large, diverse datasets that are increasingly important for building these models.

Zakhem George A, Fakhoury Joseph W, Motosko Catherine C, Ho Roger S

2020-Jan-20

Pathology Pathology

Rapid, label-free optical spectroscopy platform for diagnosis of heparin-induced thrombocytopenia.

In Angewandte Chemie (International ed. in English) ; h5-index 0.0

In this study, we propose the use of surface-enhanced Raman spectroscopy (SERS) to determine spectral markers that can aid in the recognition of heparin-induced thrombocytopenia (HIT), a difficult-to-diagnose immune-related complication that often leads to limb ischemia and thromboembolism. The ability to produce distinct molecular signatures without requiring the addition of exogenous labels enables unbiased inquiry and makes SERS an attractive complementary diagnostic tool for various complex pathologies. Specifically, we have designed a new plasmonic capillary flow platform that offers ultrasensitive, label-free measurement capability as well as efficient handling of blood serum samples. The optimized capillary channel shows excellent reproducibility and long-term stability and, crucially, provides an alternative diagnostic rubric for the determination of HIT by leveraging machine learning-based classification of the spectroscopic data. With further refinement, we envision that a portable Raman instrument could be combined with the capillary-based SERS analytical tool for rapid, non-destructive determination of HIT in the clinical laboratory, without perturbing the existing diagnostic workflow.

Huang Zufang, Siddhanta Soumik, Zheng Gang, Kickler Thomas, Barman Ishan

2020-Jan-23

Blood, Heparin-induced Thrombocytopenia, Nanotechnology, Raman spectroscopy, chemometrics

General General

Empirical Investigation of Factors Influencing Consumer Intention to Use an Artificial Intelligence-Powered Mobile Application for Weight Loss and Health Management.

In Telemedicine journal and e-health : the official journal of the American Telemedicine Association ; h5-index 0.0

Background: Research into interventions based on mobile health (m-Health) applications (apps) has attracted considerable attention among researchers; however, most previous studies have focused on research-led apps and their effectiveness when applied to overweight/obese adults. There remains a paucity of research on the attitudes of typical consumers toward the adoption of m-Health apps for weight management. This study adopted the tenets of the extended unified theory of acceptance and use of technology 2 (UTAUT2) as the theoretical foundation in developing a model that integrates personal innovativeness (PI) and network externality (NE) in seeking to identify the factors with the most pronounced effect on one's intention to use an artificial intelligence-powered weight loss and health management app. Materials and Methods: An online survey was conducted for Taiwanese participants aged ≥21 years from May 23 to June 30, 2018. Hypotheses were tested using structural equation modeling. Results: In the analysis of 458 responses, the proposed research model explained 75.5% of variance in behavioral intention (BI). Habit was the independent variable with the strongest performance in predicting user intention, followed by PI, NE, and performance expectancy (PE). Social influence weakly affects user intention through PE. In multi-group analysis, education was shown to exert a moderating influence on some of the relationships hypothesized in the model. Conclusions: The empirically validated model in this study provides insights into the primary determinants of user intention toward the adoption of m-Health app for weight loss and health management. The theoretical and practical implications are relevant to researchers seeking to extend the applicability of the UTAUT2 model to health apps as well as practitioners seeking to promote the adoption of m-Health apps. In the future, researchers could extend the model to assess the effects of BI on actual use behavior.

Huang Chin-Yuan, Yang Ming-Chin

2020-Jan-22

UTAUT2, artificial intelligence, mobile health, network externality, personal innovativeness, telemedicine

Oncology Oncology

Generalizable sgRNA design for improved CRISPR/Cas9 editing efficiency.

In Bioinformatics (Oxford, England) ; h5-index 0.0

MOTIVATION : The development of CRISPR/Cas9 technology has provided a simple yet powerful system for targeted genome editing. In recent years, this system has been widely used for various gene editing applications. The CRISPR editing efficacy is mainly dependent on the sgRNA, which guides Cas9 for genome cleavage. While there have been multiple attempts at improving sgRNA design, there is a pressing need for greater sgRNA potency and generalizability across various experimental conditions.

RESULTS : We employed a unique plasmid library expressed in human cells to quantify the potency of thousands of CRISPR/Cas9 sgRNAs. Differential sequence and structural features among the most and least potent sgRNAs were then used to train a machine learning algorithm for assay design. Comparative analysis indicates that our new algorithm outperforms existing CRISPR/Cas9 sgRNA design tools.

AVAILABILITY : The new sgRNA design tool is freely accessible as a web application via http://crispr.wustl.edu.

SUPPLEMENTARY INFORMATION : Supplementary data are available at Bioinformatics online.

Hiranniramol Kasidet, Chen Yuhao, Liu Weijun, Wang Xiaowei

2020-Jan-23

General General

Comparative Analysis of Classification Methods with PCA and LDA for Diabetes.

In Current diabetes reviews ; h5-index 0.0

BACKGROUND : Modern society is extremely prone to many life-threatening diseases, which can be controlled as well as cured if diagnosed at an early stage. The development and implementation of disease diagnostic systems have gained huge popularity over the years. In the current scenario, certain factors such as environment, sedentary lifestyle, and genetics (heredity) are the major drivers of life-threatening diseases such as diabetes. Moreover, diabetes has achieved the status of the modern man's leading chronic disease. So one of the prime needs of this generation is to develop a state-of-the-art expert system which can predict diabetes at a very early stage with minimal complexity and in an expedited manner. The primary objective of this research work is to develop an indigenous and efficient diagnostic technique for the detection of diabetes.

METHOD & DISCUSSION : The proposed methodology comprises two phases. In the first phase, the Pima Indian Diabetes Dataset (PIDD) was collected from the UCI machine learning repository databases and the Localized Diabetes Dataset (LDD) was gathered from Bombay Medical Hall, Upper Bazar, Ranchi, Jharkhand, India. In the second phase, the datasets were processed through two different approaches. The first approach entails classification through Adaboost, Classification Via Regression (CVR), Radial Basis Function Network (RBFN), and K-Nearest Neighbor (KNN) on the Pima Indian Diabetes Dataset and the Localized Diabetes Dataset. In the second approach, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were applied as feature reduction methods, followed by the same set of classification methods used in the first approach. Among all of the implemented classification methods, PCA_CVR achieved the highest performance for both of the above-mentioned datasets.
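Of the classifiers in the first approach, K-Nearest Neighbor is the simplest to illustrate. A minimal pure-Python KNN sketch on toy two-feature records (feature values invented for illustration, not the actual PIDD/LDD data):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy two-feature records (e.g. glucose, BMI); labels 1 = diabetic, 0 = not
train = [((148, 33.6), 1), ((85, 26.6), 0), ((183, 23.3), 1),
         ((89, 28.1), 0), ((137, 43.1), 1), ((116, 25.6), 0)]
print(knn_predict(train, (150, 34.0)))  # nearest neighbours are diabetic → 1
```

In the paper's second approach, the same classifier would be run on PCA- or LDA-reduced feature vectors instead of the raw ones.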

CONCLUSION : In this research article, a comparative performance analysis of the same set of classification methods with and without PCA and LDA has been carried out. It is concluded that both PCA and LDA are useful for removing insignificant features, decreasing expense and computation time while improving ROC and accuracy. The methodology used may similarly be applied to other medical diseases.

Choubey Dilip Kumar, Kumar Manish, Shukla Vaibhav, Tripathi Sudhakar, Dhandhania Vinay Kumar

2020-Jan-23

Adaboost, CVR, Classification, Feature Reduction, KNN, LDA, Localized Diabetes Dataset, PCA, Pima Indian Diabetes Dataset, RBF N

General General

Neural hierarchical models of ecological populations.

In Ecology letters ; h5-index 0.0

Neural networks are increasingly being used in science to infer hidden dynamics of natural systems from noisy observations, a task typically handled by hierarchical models in ecology. This article describes a class of hierarchical models parameterised by neural networks - neural hierarchical models. The derivation of such models analogises the relationship between regression and neural networks. A case study is developed for a neural dynamic occupancy model of North American bird populations, trained on millions of detection/non-detection time series for hundreds of species, providing insights into colonisation and extinction at a continental scale. Flexible models are increasingly needed that scale to large data and represent ecological processes. Neural hierarchical models satisfy this need, providing a bridge between deep learning and ecological modelling that combines the function representation power of neural networks with the inferential capacity of hierarchical models.

Joseph Maxwell B

2020-Jan-23

Deep learning, hierarchical model, neural network, occupancy

Dermatology Dermatology

Optical coherence tomography image de-noising using a generative adversarial network with speckle modulation.

In Journal of biophotonics ; h5-index 0.0

Optical coherence tomography (OCT) is widely used for biomedical imaging and clinical diagnosis. However, speckle noise is a key factor affecting OCT image quality. Here, we developed a custom generative adversarial network (GAN) to de-noise OCT images. A speckle-modulating OCT (SM-OCT) was built to generate low-speckle images to be used as the ground truth. 210,000 SM-OCT images were used for training and validating the neural network model, which we call SM-GAN. The performance of the SM-GAN method was further demonstrated using online benchmark retinal images, 3D OCT images acquired from human fingers, and OCT videos of a beating fruit fly heart. The de-noising performance of the SM-GAN model was compared with traditional OCT de-noising methods and other state-of-the-art deep learning-based de-noising networks. We conclude that the SM-GAN model presented here can effectively reduce speckle noise in OCT images and videos while maintaining spatial and temporal resolutions.

Dong Zhao, Liu Guoyan, Ni Guangming, Jerwick Jason, Duan Lian, Zhou Chao

2020-Jan-22

De-noise, Deep learning, Generative Adversarial Network, Optical Coherence Tomography

Pathology Pathology

Antioxidant and Anti-inflammatory Diagnostic Biomarkers in Multiple Sclerosis: A Machine Learning Study.

In Molecular neurobiology ; h5-index 0.0

An imbalance of inflammatory/anti-inflammatory and oxidant/antioxidant molecules has been implicated in the demyelination and axonal damage in multiple sclerosis (MS). The current study aimed to evaluate the plasma levels of tumor necrosis factor (TNF)-α, soluble TNF receptor (sTNFR)1, sTNFR2, adiponectin, hydroperoxides, advanced oxidation protein products (AOPP), nitric oxide metabolites, total plasma antioxidant capacity using the total radical-trapping antioxidant parameter (TRAP), sulfhydryl (SH) groups, as well as serum levels of zinc in 174 MS patients and 182 controls. The results show that MS is characterized by lowered levels of zinc, adiponectin, TRAP, and SH groups and increased levels of AOPP. MS was best predicted by a combination of lowered levels of zinc, adiponectin, TRAP, and SH groups yielding an area under the receiver operating characteristic (AUC/ROC) curve of 0.986 (±0.005). The combination of these four antioxidants with sTNFR2 showed an AUC/ROC of 0.997 and TRAP, adiponectin, and zinc are the most important biomarkers for MS diagnosis followed at a distance by sTNFR2. Support vector machine with tenfold validation performed on the four antioxidants showed a training accuracy of 92.9% and a validation accuracy of 90.6%. The results indicate that lowered levels of those four antioxidants are associated with MS and that these antioxidants are more important biomarkers of MS than TNF-α signaling and nitro-oxidative biomarkers. Adiponectin, TRAP, SH groups, zinc, and sTNFR2 play a role in the pathophysiology of MS, and a combination of these biomarkers is useful for predicting MS with high sensitivity, specificity, and accuracy. Drugs that increase the antioxidant capacity may offer novel therapeutic opportunities for MS.

Mezzaroba Leda, Simão Andrea Name Colado, Oliveira Sayonara Rangel, Flauzino Tamires, Alfieri Daniela Frizon, de Carvalho Jennings Pereira Wildea Lice, Kallaur Ana Paula, Lozovoy Marcell Alysson Batisti, Kaimen-Maciel Damacio Ramón, Maes Michael, Reiche Edna Maria Vissoci

2020-Jan-22

Adiponectin, Machine learning study, Multiple sclerosis, Oxidative stress, TNF receptors, Zinc

General General

Quantifying the Collision Dose in Rugby League: A Systematic Review, Meta-analysis, and Critical Analysis.

In Sports medicine - open ; h5-index 0.0

BACKGROUND : Collisions (i.e. tackles, ball carries, and other impact events) in rugby league have the potential to increase injury risk, delay recovery, and influence individual and team performance. Understanding the collision demands of rugby league may enable practitioners to optimise player health, recovery, and performance.

OBJECTIVE : The aim of this review was to (1) characterise the dose of collisions experienced within senior male rugby league match-play and training, (2) systematically and critically evaluate the methods used to describe the relative and absolute frequency and intensity of collisions, and (3) provide recommendations on collision monitoring.

METHODS : A systematic search of electronic databases (PubMed, SPORTDiscus, Scopus, and Web of Science) using keywords was undertaken. A meta-analysis provided a pooled mean of collision frequency or intensity metrics on comparable data sets from at least two studies.

RESULTS : Forty-three articles addressing the absolute (n) or relative collision frequency (n·min⁻¹) or intensity of senior male rugby league collisions were included. Meta-analysis of video-based studies identified that forwards completed approximately twice the number of tackles per game as backs (n = 24.6 vs. 12.8), whilst ball-carry frequency remained similar between backs and forwards (n = 11.4 vs. 11.2). Variable findings were observed at the subgroup level, with a limited number of studies suggesting that wide-running forwards, outside backs, and hit-up forwards complete similar numbers of ball carries whilst tackling frequency differs. For microtechnology, at the team level, players complete an average of 32.7 collisions per match. Limited data suggested that hit-up and wide-running forwards complete the most collisions per match compared with adjustables and outside backs. Relative to playing time, forwards (0.44 n·min⁻¹) complete a far greater frequency of collisions than backs (0.16 n·min⁻¹), with data suggesting hit-up forwards undertake more than adjustables and outside backs. Studies investigating g-force intensity zones utilised five unique intensity schemes, with zones ranging from 2-3 g to 13-16 g. Given the disparity between device setups and zone-classification systems across studies, further analyses were inappropriate. It is recommended that practitioners independently validate microtechnology against video to establish criterion validity.

CONCLUSIONS : Video- and microtechnology-based methods have been utilised to quantify collisions in rugby league, with differential collision profiles observed between forward and back positional groups and their distinct subgroups. The ball carry demands of forwards and backs were similar, whilst tackle demands were greater for forwards than backs. Microtechnology has been used inconsistently to quantify collision frequency and intensity. Despite widespread popularity, a number of the microtechnology devices have yet to be appropriately validated. Limitations exist in using microtechnology to quantify collision intensity, including the lack of consistency and limited validation. Future directions include application of machine learning approaches to differentiate types of collisions in microtechnology datasets.
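The relative-frequency metric used throughout this review is simple arithmetic: collisions per minute equals the absolute collision count divided by minutes played. A minimal illustration with hypothetical match values:

```python
# Relative collision frequency (n per minute) from absolute counts.
def collisions_per_minute(n_collisions, minutes_played):
    return n_collisions / minutes_played

# Hypothetical illustration (not the review's raw data):
forward = collisions_per_minute(35, 80)  # 35 collisions in 80 min -> ~0.44
back = collisions_per_minute(13, 83)     # 13 collisions in 83 min -> ~0.16
```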

Naughton Mitchell, Jones Ben, Hendricks Sharief, King Doug, Murphy Aron, Cummins Cloe

2020-Jan-22

Global Positioning system, Microtechnology, Rugby, Tackle

Radiology Radiology

Ovarian torsion: developing a machine-learned algorithm for diagnosis.

In Pediatric radiology ; h5-index 0.0

BACKGROUND : Ovarian torsion is a common concern in girls presenting to emergency care with pelvic or abdominal pain. The diagnosis is challenging to make accurately and quickly, relying on a combination of physical exam, history and radiologic evaluation. Failure to establish the diagnosis in a timely fashion can result in irreversible ovarian ischemia with implications for future fertility. Ultrasound is the mainstay of evaluation for ovarian torsion in the pediatric population. However, even with a high index of suspicion, imaging features are not pathognomonic.

OBJECTIVE : We sought to develop an algorithm to aid radiologists in diagnosing ovarian torsion using machine learning from sonographic features and to evaluate the frequency of each sonographic element.

MATERIALS AND METHODS : All pediatric patients treated for ovarian torsion at a quaternary pediatric hospital over an 11-year period were identified by both an internal radiology database and hospital-based International Statistical Classification of Diseases and Related Health Problems (ICD) code review. Inclusion criteria were surgical confirmation of ovarian torsion and available imaging. Patients were excluded if the diagnosis could not be confirmed, no imaging was available for review, the ovary was not identified by imaging, or torsion involved other adnexal structures but spared the ovary. Data collection included: patient age; laterality of torsion; bilateral ovarian volumes; torsed ovarian position, i.e. whether medialized with respect to the mid-uterine line; presence or absence of Doppler signal within the torsed ovary; visualization of peripheral follicles; and presence of a mass or cyst, and free peritoneal fluid. Subsequently, we evaluated a non-torsed control cohort from April 2015 to May 2016. This cohort consisted of sequential girls and young adults presenting to the emergency department with abdominopelvic symptoms concerning for ovarian torsion but who were ultimately diagnosed otherwise. These features were then fed into supervised machine learning systems to identify and develop viable decision algorithms. We divided data into training and validation sets and assessed algorithm performance using sub-sets of the validation set.

RESULTS : We identified 119 torsion-confirmed cases and 331 torsion-absent cases. Of the torsion-confirmed cases, significant imaging differences were evident for girls younger than 1 year; these girls were then excluded from analysis, and 99 pediatric patients older than 1 year were included in our study. Among these 99, all variables demonstrated statistically significant differences between the torsion-confirmed and torsion-absent groups with P-values <0.005. Using any single variable to identify torsion provided only modest detection performance, with areas under the curve (AUC) for medialization, peripheral follicles, and absence of Doppler flow of 0.76±0.16, 0.66±0.14 and 0.82±0.14, respectively. The best decision tree using a combination of variables yielded an AUC of 0.96±0.07 and required knowledge of the presence of intra-ovarian flow, peripheral follicles, the volume of both ovaries, and the presence of cysts or masses.

CONCLUSION : Based on the largest series of pediatric ovarian torsion in the literature to date, we quantified sonographic features and used machine learning to create an algorithm to identify the presence of ovarian torsion - an algorithm that performs better than simple approaches relying on single features. Although complex combinations using multiple-interaction models provide slightly better performance, a clinically pragmatic decision tree can be employed to detect torsion, providing sensitivity levels of 95±14% and specificity of 92±2%.
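The decision-tree approach described above can be sketched as follows. This is a hedged illustration on synthetic data, not the study's algorithm: the feature set mirrors the variables named in the abstract (Doppler flow, peripheral follicles, ovarian volume, cyst/mass), but the label rule, thresholds, and values are invented.

```python
# Sketch: a shallow decision tree on synthetic stand-ins for the
# sonographic features used to detect ovarian torsion.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.integers(0, 2, n),        # intra-ovarian Doppler flow present (0/1)
    rng.integers(0, 2, n),        # peripheral follicles visualized (0/1)
    rng.uniform(1.0, 20.0, n),    # torsed/contralateral ovarian volume ratio
    rng.integers(0, 2, n),        # mass or cyst present (0/1)
])
# Toy label rule (invented): torsion when flow is absent and the ovary
# is markedly enlarged relative to the contralateral side.
y = ((X[:, 0] == 0) & (X[:, 2] > 5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
acc = tree.score(X, y)  # training accuracy on the synthetic rule
```

Because the toy rule is a conjunction of two axis-aligned thresholds, a depth-3 tree recovers it exactly; the study's real tree combined flow, follicles, bilateral volumes, and cysts/masses to reach its reported AUC.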

Otjen Jeffrey P, Stanescu A Luana, Alessio Adam M, Parisi Marguerite T

2020-Jan-22

Algorithm, Children, Machine learning, Medialization, Ovary, Torsion, Ultrasound

General General

Speech enhancement method using deep learning approach for hearing-impaired listeners.

In Health informatics journal ; h5-index 25.0

A deep learning-based speech enhancement method is proposed to aid hearing-impaired listeners by improving speech intelligibility. The algorithm decomposes the noisy speech signal into frames, which serve as features. A deep convolutional neural network is then fed with these frames to produce a frequency-channel estimate that carries higher signal-to-noise-ratio information. Using this estimate, speech-dominated cochlear implant channels are selected to produce electrical stimulation, in the same manner as conventional n-of-m cochlear implant coding strategies. The speech-in-noise performance of 12 cochlear implant users was determined with fan and music sounds as background noises, and the performance of the proposed algorithm was evaluated under these conditions. Its low processing delay and reliable architecture make the deep learning-based speech enhancement algorithm suitable for all applications of hearing devices. Experimental results demonstrate that the deep convolutional neural network approach is more promising than conventional approaches.
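The n-of-m channel selection the abstract refers to keeps only the n channels with the highest estimated SNR for stimulation. A minimal sketch with invented SNR values (the real system derives the estimates from the CNN):

```python
# Sketch: n-of-m channel selection by estimated SNR.
import numpy as np

def select_n_of_m(snr_estimates, n):
    """Return a boolean mask marking the n highest-SNR channels."""
    snr = np.asarray(snr_estimates)
    mask = np.zeros(snr.shape, dtype=bool)
    mask[np.argsort(snr)[-n:]] = True
    return mask

snr = [3.1, -0.5, 7.2, 1.0, 5.5, -2.0, 4.4, 0.3]  # m = 8 channels (invented)
mask = select_n_of_m(snr, n=4)
selected = [i for i, keep in enumerate(mask) if keep]  # -> [0, 2, 4, 6]
```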

Khaleelur Rahiman P F, Jayanthi V S, Jayanthi A N

2020-Jan-23

cochlear implant, convolutional neural networks, impaired listener, speech intelligibility

Oncology Oncology

Clinical implications of intratumor heterogeneity: challenges and opportunities.

In Journal of molecular medicine (Berlin, Germany) ; h5-index 0.0

In this review, we highlight the role of intratumoral heterogeneity, focusing on the clinical and biological ramifications this phenomenon poses. Intratumoral heterogeneity arises through complex genetic, epigenetic, and protein modifications that drive phenotypic selection in response to environmental pressures. Functionally, heterogeneity provides tumors with significant adaptability. This ranges from mutually beneficial cooperation between cells, which nurture features such as growth and metastasis, to the narrow escape and survival of clonal cell populations that have adapted to thrive under specific conditions such as hypoxia or chemotherapy. These dynamic intercellular interplays are guided by a Darwinian selection landscape between clonal tumor cell populations and the tumor microenvironment. Understanding the involved drivers and functional consequences of such tumor heterogeneity is challenging but also promises to provide novel insight needed to confront the problem of therapeutic resistance in tumors.

Ramón Y Cajal Santiago, Sesé Marta, Capdevila Claudia, Aasen Trond, De Mattos-Arruda Leticia, Diaz-Cano Salvador J, Hernández-Losa Javier, Castellví Josep

2020-Jan-22

Antitumor therapeutics, Artificial intelligence, Intratumor heterogeneity, Liquid biopsy

General General

Development of digital biomarkers for resting tremor and bradykinesia using a wrist-worn wearable device.

In NPJ digital medicine ; h5-index 0.0

Objective assessment of Parkinson's disease symptoms during daily life can help improve disease management and accelerate the development of new therapies. However, many current approaches require the use of multiple devices, or performance of prescribed motor activities, which makes them ill-suited for free-living conditions. Furthermore, there is a lack of open methods that have demonstrated both criterion and discriminative validity for continuous objective assessment of motor symptoms in this population. Hence, there is a need for systems that can reduce patient burden by using a minimal sensor setup while continuously capturing clinically meaningful measures of motor symptom severity under free-living conditions. We propose a method that sequentially processes epochs of raw sensor data from a single wrist-worn accelerometer by using heuristic and machine learning models in a hierarchical framework to provide continuous monitoring of tremor and bradykinesia. Results show that sensor derived continuous measures of resting tremor and bradykinesia achieve good to strong agreement with clinical assessment of symptom severity and are able to discriminate between treatment-related changes in motor states.

Mahadevan Nikhil, Demanuele Charmaine, Zhang Hao, Volfson Dmitri, Ho Bryan, Erb Michael Kelley, Patel Shyamal

2020

Biomarkers, Biomedical engineering

Oncology Oncology

Imputation of Gene Expression Data in Blood Cancer and Its Significance in Inferring Biological Pathways.

In Frontiers in oncology ; h5-index 0.0

Purpose: Gene expression data generated from microarray technology are often analyzed for disease diagnostics and treatment. However, these data suffer from missing values that may lead to inaccurate findings. Since data capture is expensive, time consuming, and requires collection from subjects, it is worthwhile to recover missing values instead of re-collecting the data. In this paper, a novel but simple method, namely DSNN (Doubly Sparse DCT domain with Nuclear Norm minimization), is proposed for imputing missing values in microarray data. Extensive experiments, including pathway enrichment, were carried out on four blood cancer datasets to validate the method as well as to establish the significance of imputation. Methods: The proposed DSNN method was validated on four datasets, CLL, AML, MM (Spanish data), and MM (Indian data), all downloaded from the GEO repository. Missing values were introduced into the original data from 10% to 90% in steps of 10%, because method validation requires ground truth. Quantitative results on the normalized mean square error (NMSE) between the ground truth and the imputed data were computed. To further validate and establish the significance of the proposed imputation method, two experiments were carried out on the data imputed with the proposed method, data imputed with state-of-the-art methods, and data with missing values. In the first experiment, classification of normal vs. cancer subjects was carried out. In the second experiment, the biological significance of imputation was ascertained by identifying top candidate tumor drivers using the existing state-of-the-art SPARROW algorithm, followed by gene-list enrichment analysis on the top candidate drivers. Results: Quantitative NMSE results of the DSNN method were compared with three state-of-the-art imputation methods. The DSNN method was observed to perform better than these methods at both high and low proportions of observed data. Experiment 1 demonstrated superior classification results with imputation compared to classification performed on the missing-data matrix, as well as compared to classification on data imputed with existing methods. In experiment 2, cancer-affected pathways were discovered with higher significance in the data imputed with the proposed method than in the missing-data matrix. Conclusion: The missing-value problem in microarray data is serious and can adversely influence downstream analysis. A novel method, namely DSNN, is proposed for missing-value imputation. The method is validated quantitatively on the application of classification and biologically by performing pathway enrichment analysis.
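The NMSE used to score imputation against ground truth can be computed as below. One common definition is assumed here (squared error over the missing entries, normalized by the energy of the corresponding ground-truth entries); the paper may use a variant, and the matrices are invented.

```python
# Sketch: normalized mean square error (NMSE) over imputed entries.
import numpy as np

def nmse(ground_truth, imputed, missing_mask):
    """NMSE restricted to the entries flagged as missing."""
    gt = ground_truth[missing_mask]
    err = gt - imputed[missing_mask]
    return np.sum(err ** 2) / np.sum(gt ** 2)

truth = np.array([[1.0, 2.0], [3.0, 4.0]])
filled = np.array([[1.0, 2.2], [2.7, 4.0]])
missing = np.array([[False, True], [True, False]])
score = nmse(truth, filled, missing)  # (0.04 + 0.09) / (4 + 9) = 0.01
```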

Farswan Akanksha, Gupta Anubha, Gupta Ritu, Kaur Gurvinder

2019

AML, CLL, MM, blood cancer, compressive sensing, gene enrichment analysis, machine learning, matrix imputation

Oncology Oncology

DNA methylation markers in the diagnosis and prognosis of common leukemias.

In Signal transduction and targeted therapy ; h5-index 0.0

The ability to identify a specific type of leukemia using minimally invasive biopsies holds great promise to improve the diagnosis, treatment selection, and prognosis prediction of patients. Using genome-wide methylation profiling and machine learning methods, we investigated the utility of CpG methylation status to differentiate blood from patients with acute lymphocytic leukemia (ALL) or acute myelogenous leukemia (AML) from normal blood. We established a CpG methylation panel that can distinguish ALL and AML blood from normal blood as well as ALL blood from AML blood with high sensitivity and specificity. We then developed a methylation-based survival classifier with 23 CpGs for ALL and 20 CpGs for AML that could successfully divide patients into high-risk and low-risk groups, with significant differences in clinical outcome in each leukemia type. Together, these findings demonstrate that methylation profiles can be highly sensitive and specific in the accurate diagnosis of ALL and AML, with implications for the prediction of prognosis and treatment selection.

Jiang Hua, Ou Zhiying, He Yingyi, Yu Meixing, Wu Shaoqing, Li Gen, Zhu Jie, Zhang Ru, Wang Jiayi, Zheng Lianghong, Zhang Xiaohong, Hao Wenge, He Liya, Gu Xiaoqiong, Quan Qingli, Zhang Edward, Luo Huiyan, Wei Wei, Li Zhihuan, Zang Guangxi, Zhang Charlotte, Poon Tina, Zhang Daniel, Ziyar Ian, Zhang Run-Ze, Li Oulan, Cheng Linhai, Shimizu Taylor, Cui Xinping, Zhu Jian-Kang, Sun Xin, Zhang Kang

2020

Haematological cancer, Prognostic markers

General General

Prediction of human-virus protein-protein interactions through a sequence embedding-based machine learning method.

In Computational and structural biotechnology journal ; h5-index 0.0

The identification of human-virus protein-protein interactions (PPIs) is an essential and challenging research topic, potentially providing a mechanistic understanding of viral infection. Given that the experimental determination of human-virus PPIs is time-consuming and labor-intensive, computational methods play an important role in providing testable hypotheses, complementing the determination of large-scale interactomes between species. In this work, we applied an unsupervised sequence embedding technique (doc2vec) to represent protein sequences as rich feature vectors of low dimensionality. By training a Random Forest (RF) classifier on a dataset covering known PPIs between humans and all viruses, we obtained excellent predictive accuracy, outperforming various combinations of machine learning algorithms and commonly used sequence encoding schemes. In rigorous comparison with three existing human-virus PPI prediction methods, our proposed computational framework provided very competitive and promising performance, suggesting that the doc2vec encoding scheme effectively captures context information of protein sequences pertaining to the corresponding protein-protein interactions. Our approach is freely accessible through our web server as part of our host-pathogen PPI prediction platform (http://zzdlab.com/InterSPPI/). Taken together, we hope the current work not only contributes a useful predictor to accelerate the exploration of human-virus PPIs, but also provides some meaningful insights into human-virus relationships.
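The shape of this pipeline (per-protein embedding vectors concatenated for each human/virus pair, then a Random Forest classifier) can be sketched as below. The embeddings here are random stand-ins; the paper derives them with doc2vec, and the interaction labels are a toy rule, not real PPI data.

```python
# Sketch: embedding-pair features feeding a Random Forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
dim = 32                  # embedding dimensionality (illustrative)
n_pairs = 400
human_vecs = rng.normal(size=(n_pairs, dim))   # stand-in for doc2vec output
virus_vecs = rng.normal(size=(n_pairs, dim))
X = np.hstack([human_vecs, virus_vecs])        # one feature vector per pair
y = (X[:, 0] + X[:, dim] > 0).astype(int)      # toy interaction label

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
train_acc = rf.score(X, y)
```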

Yang Xiaodi, Yang Shiping, Li Qinmengge, Wuchty Stefan, Zhang Ziding

2020

AC, Auto Covariance, ACC, Accuracy, AUC, area under the ROC curve, AUPRC, area under the PR curve, Adaboost, Adaptive Boosting, CT, Conjoint Triad, Doc2vec, Embedding, Human-virus interaction, LD, Local Descriptor, MCC, Matthews correlation coefficient, ML, machine learning, MLP, Multiple Layer Perceptron, MS, mass spectroscopy, Machine learning, PPIs, protein-protein interactions, PR, Precision-Recall, Prediction, Protein-protein interaction, RBF, radial basis function, RF, Random Forest, ROC, Receiver Operating Characteristic, SGD, stochastic gradient descent, SVM, Support Vector Machine, Y2H, yeast two-hybrid

General General

A Pretraining-Retraining Strategy of Deep Learning Improves Cell-Specific Enhancer Predictions.

In Frontiers in genetics ; h5-index 62.0

Deciphering the code of cis-regulatory elements (CREs) is one of the core issues of today's biology. Enhancers are distal CREs and play significant roles in gene transcriptional regulation. Although identification of enhancer locations across the whole genome [discriminative enhancer prediction (DEP)] is necessary, it is more important to predict in which specific cell or tissue types they will be activated and functional [tissue-specific enhancer prediction (TSEP)]. Although existing deep learning models have achieved great success in DEP, they cannot be directly employed in TSEP because a specific cell or tissue type has only a limited number of available enhancer samples for training. Here, we first adopted a reported deep learning architecture and then developed a novel training strategy named the "pretraining-retraining strategy" (PRS) for TSEP by decomposing the whole training process into two successive stages: a pretraining stage trains on the whole enhancer dataset for DEP, and a retraining stage then trains on tissue-specific enhancer samples, starting from the trained pretraining model, to make TSEPs. As a result, PRS is found to be valid for DEP with an AUC of 0.922 and a GM (geometric mean) of 0.696 when tested on a larger-scale FANTOM5 enhancer dataset via five-fold cross-validation. Interestingly, starting from the trained pretraining model, only twenty additional epochs are needed to complete the retraining process when testing on 23 specific tissues or cell lines. For TSEP tasks, PRS achieved a mean GM of 0.806, which is significantly higher than the 0.528 of gkm-SVM, an existing mainstream method for CRE predictions. Notably, PRS is further proven superior to two other state-of-the-art methods: DEEP and BiRen. In summary, PRS employs useful ideas from the domain of transfer learning and is a reliable method for TSEP.
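The two-stage pretraining-retraining idea can be sketched with a small network: fit on pooled "generic" data first, then continue training the same weights for a few extra epochs on the scarce tissue-specific samples. A scikit-learn MLP stands in for the paper's deep architecture here, and all data and label rules are synthetic.

```python
# Sketch: pretraining on pooled data, then retraining (fine-tuning)
# the same model on a small tissue-specific set.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X_pool = rng.normal(size=(1000, 20))             # pooled "enhancer" features
y_pool = (X_pool[:, 0] > 0).astype(int)          # toy generic label rule
X_tissue = rng.normal(size=(60, 20))             # scarce tissue-specific set
y_tissue = (X_tissue[:, 0] + 0.3 * X_tissue[:, 1] > 0).astype(int)  # shifted rule

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=200, random_state=0)
net.fit(X_pool, y_pool)              # pretraining stage (DEP)
for _ in range(20):                  # ~20 extra epochs, as in the paper
    net.partial_fit(X_tissue, y_tissue)  # retraining stage (TSEP)
tissue_acc = net.score(X_tissue, y_tissue)
```

Because the tissue-specific rule is close to the pooled one, the pretrained weights already transfer well, and a handful of fine-tuning passes suffices; that is the intuition behind needing only twenty retraining epochs.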

Niu Xiaohui, Yang Kun, Zhang Ge, Yang Zhiquan, Hu Xuehai

2019

deep learning, prediction, pretraining, retraining, tissue-specific enhancers

General General

A Novel Hybrid CNN-SVR for CRISPR/Cas9 Guide RNA Activity Prediction.

In Frontiers in genetics ; h5-index 62.0

Accurate prediction of guide RNA (gRNA) on-target efficacy is critical for effective application of the CRISPR/Cas9 system. Although some machine learning-based and convolutional neural network (CNN)-based methods have been proposed, prediction accuracy remains to be improved. Here, we first improved the architectures of current CNNs for predicting gRNA on-target efficacy. Secondly, we proposed a novel hybrid system which combines our improved CNN with support vector regression (SVR). This CNN-SVR system is composed of two major components: a merged CNN as the front-end for extracting gRNA features and an SVR as the back-end for regression and predicting gRNA cleavage efficiency. We demonstrate that CNN-SVR can effectively exploit feature interactions in the feed-forward direction to learn deeper features of gRNAs and their corresponding epigenetic features. Experiments on commonly used datasets show that our CNN-SVR system outperforms available state-of-the-art methods in terms of prediction accuracy, generalization, and robustness. Source codes are available at https://github.com/Peppags/CNN-SVR.
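The front-end/back-end split of such a hybrid can be sketched as below. A fixed random projection stands in for the merged CNN feature extractor (an assumption for illustration only), and the SVR back-end regresses a toy efficiency target; none of this is the authors' code or data.

```python
# Sketch: feature-extractor front-end feeding an SVR back-end.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n, seq_feats, deep_feats = 200, 80, 10
X_raw = rng.normal(size=(n, seq_feats))        # encoded gRNA sequences (toy)
W = rng.normal(size=(seq_feats, deep_feats))   # stand-in "CNN" front-end
features = np.tanh(X_raw @ W)                  # extracted deep features
efficiency = features[:, 0] * 0.8 + 0.1        # toy on-target efficiency

svr = SVR(kernel="rbf").fit(features, efficiency)  # back-end regressor
r2 = svr.score(features, efficiency)
```

In the real system the front-end is trained jointly on sequence and epigenetic inputs, and only then are its learned features handed to the SVR.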

Zhang Guishan, Dai Zhiming, Dai Xianhua

2019

CRISPR/Cas9, convolutional neural network, guide RNA, on-target, support vector regression

General General

A Hybrid Approach for Modeling Type 2 Diabetes Mellitus Progression.

In Frontiers in genetics ; h5-index 62.0

Type 2 Diabetes Mellitus (T2DM) is a chronic, progressive metabolic disorder characterized by hyperglycemia resulting from abnormalities in insulin secretion, insulin action, or both. It is associated with an increased risk of developing both micro- and macrovascular complications. Because of its inconspicuous and heterogeneous character, the management of T2DM is very complex. Modeling physiological processes over time to capture the patient's evolving health condition is imperative for comprehending the patient's current health status, projecting its likely dynamics, and assessing the requisite care and treatment measures in the future. The Hidden Markov Model (HMM) is an effective approach for such prognostic modeling. However, the nature of the clinical setting, together with the format of Electronic Medical Record (EMR) data, in particular the sparse and irregularly sampled clinical data that is well understood to present significant challenges, has confounded standard HMMs. In the present study, we proposed an approximation technique based on Newton's Divided Difference Method (NDDM) as a component with an HMM to determine the risk of developing diabetes in an individual over different time horizons using irregularly and sparsely sampled EMR data. The proposed method is capable of exploiting available sequences of clinical measurements obtained from a longitudinal sample of patients for effective imputation and improved prediction performance. Furthermore, the results demonstrated that the discrimination capability of our proposed method in prognosticating diabetes risk is superior to that of the standard HMM.
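Newton's divided difference interpolation, the approximation scheme named above, fills in a value at an unobserved time from irregularly spaced observations. A minimal self-contained sketch (the visit times and glucose values are hypothetical):

```python
# Sketch: Newton's Divided Difference interpolation for a missing
# clinical measurement at an unobserved visit time.
def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., computed in place."""
    coefs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coefs[i] = (coefs[i] - coefs[i - 1]) / (xs[i] - xs[i - j])
    return coefs

def newton_eval(xs, coefs, x):
    """Evaluate the Newton-form polynomial at x via Horner's scheme."""
    result = coefs[-1]
    for i in range(len(coefs) - 2, -1, -1):
        result = result * (x - xs[i]) + coefs[i]
    return result

# Glucose readings (mmol/L) at irregular visit times in months (hypothetical):
times = [0.0, 2.0, 5.0, 6.0]
glucose = [5.1, 5.6, 7.0, 7.3]
coefs = divided_differences(times, glucose)
estimate = newton_eval(times, coefs, 3.0)  # impute a missing visit at t = 3
```

The interpolant passes exactly through the observed points, which is why it can stand in for missing samples before the HMM is applied.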

Perveen Sajida, Shahbaz Muhammad, Ansari Muhammad Sajjad, Keshavjee Karim, Guergachi Aziz

2019

hidden Markov model, machine learning, prognostic modelling, risk prediction, risk scoring, type 2 diabetes mellitus

General General

Predicting Brain Age of Healthy Adults Based on Structural MRI Parcellation Using Convolutional Neural Networks.

In Frontiers in neurology ; h5-index 0.0

Structural magnetic resonance imaging (MRI) studies have demonstrated that the brain undergoes age-related neuroanatomical changes not only regionally but also at the network level during the normal development and aging process. In recent years, many studies have focused on estimating age using structural MRI measurements. However, the age prediction effects on different structural networks remain unclear. In this study, we established age prediction models based on common structural networks using convolutional neural networks (CNN) with data from 1,454 healthy subjects aged 18-90 years. First, based on the reference map of CorticalParcellation_Yeo2011, we obtained structural network images for each subject, including images of the following: the frontoparietal network (FPN), the dorsal attention network (DAN), the default mode network (DMN), the somatomotor network (SMN), the ventral attention network (VAN), the visual network (VN), and the limbic network (LN). Then, we built a 3D CNN model for each structural network using a large training dataset (n = 1,303) and predicted the ages of the subjects in the test dataset (n = 151). Finally, we estimated the age prediction performance of the CNN compared with Gaussian process regression (GPR) and relevance vector regression (RVR). The CNN results showed that the FPN, DAN, and DMN exhibited the optimal age prediction accuracies with mean absolute errors (MAEs) of 5.55 years, 5.77 years, and 6.07 years, respectively, whereas the other four networks, i.e., the SMN, VAN, VN, and LN, tended to have larger MAEs of more than 8 years. With respect to GPR and RVR, the top three prediction accuracies were still from the FPN, DAN, and DMN; moreover, the CNN made more precise predictions than GPR and RVR for these three networks. Our findings suggest that the CNN has the optimal age prediction performance, and our age prediction model can potentially be used for brain disorder diagnosis according to age prediction differences.
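The MAE figures that rank these networks are computed as the average absolute difference between predicted and chronological age. A minimal illustration with made-up ages:

```python
# Sketch: mean absolute error (MAE) between true and predicted ages.
def mean_absolute_error(true_ages, predicted_ages):
    return sum(abs(t - p) for t, p in zip(true_ages, predicted_ages)) / len(true_ages)

true_ages = [25, 40, 63, 71]      # hypothetical chronological ages
predicted = [28, 35, 60, 78]      # hypothetical model outputs
mae = mean_absolute_error(true_ages, predicted)  # -> 4.5 years
```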

Jiang Huiting, Lu Na, Chen Kewei, Yao Li, Li Ke, Zhang Jiacai, Guo Xiaojuan

2019

age prediction, convolutional neural networks, healthy subjects, machine learning, magnetic resonance imaging, structural network

General General

Current Challenges in Translational and Clinical fMRI and Future Directions.

In Frontiers in psychiatry ; h5-index 0.0

Translational neuroscience is an important field that brings together clinical praxis with neuroscience methods. In this review article, the focus is on functional neuroimaging (fMRI) and its applicability in clinical fMRI studies. In light of the "replication crisis," three aspects are critically discussed: first, the fMRI signal itself; second, current fMRI praxis; and, third, the next generation of analysis strategies. Current approaches such as resting-state fMRI, meta-analyses, and machine learning are discussed along with their advantages, potential pitfalls, and disadvantages. One major concern is that the fMRI signal shows substantial within- and between-subject variability, which affects the reliability of task-related and, in particular, resting-state fMRI studies. Furthermore, the lack of standardized acquisition and analysis methods hinders the further development of clinically relevant approaches. However, meta-analyses and machine-learning approaches may help to overcome current shortcomings in the methods by identifying new and as-yet hidden relationships, and may help to build new models of disorder mechanisms. Furthermore, better control of parameters that may influence the fMRI signal and that can easily be controlled for, such as blood pressure, heart rate, diet, and time of day, might improve reliability substantially.

Specht Karsten

2019

BOLD (blood oxygenation level dependent) signal, clinical fMRI, fMRI—functional magnetic resonance imaging, psychiatry, reliability

General General

Prediction of Acquired Antimicrobial Resistance for Multiple Bacterial Species Using Neural Networks.

In mSystems ; h5-index 0.0

Machine learning has proven to be a powerful method to predict antimicrobial resistance (AMR) without using prior knowledge for selected bacterial species-antimicrobial combinations. To date, only species-specific machine learning models have been developed, and to the best of our knowledge, the inclusion of information from multiple species has not been attempted. The aim of this study was to determine the feasibility of including information from multiple bacterial species to predict AMR for an individual species, since this may make it easier to train and update resistance predictions for multiple species and may lead to improved predictions. Whole-genome sequence data and susceptibility profiles from 3,528 Mycobacterium tuberculosis, 1,694 Escherichia coli, 658 Salmonella enterica, and 1,236 Staphylococcus aureus isolates were included. We developed machine learning models trained on features detected by the PointFinder and ResFinder programs to predict binary (susceptible/resistant) AMR profiles. We tested four feature representation methods to determine the most efficient way of introducing features into the models. When training the model only on the Mycobacterium tuberculosis isolates, high prediction performance was obtained for the six AMR profiles included. By adding information on ciprofloxacin from the additional 3,588 isolates, there was no reduction in performance for the other antimicrobials but an increased performance for ciprofloxacin AMR profile prediction for Mycobacterium tuberculosis and Escherichia coli. In conclusion, the species-independent models can predict multi-AMR profiles for multiple species without losing any robustness. IMPORTANCE: Machine learning is a proven method to predict AMR; however, the performance of any machine learning model depends on the quality of the input data. Therefore, we evaluated different methods of representing information about mutations as well as mobilizable genes, so that the information can serve as input for a robust model. We combined data from multiple bacterial species in order to develop species-independent machine learning models that can predict resistance profiles for multiple antimicrobials and species with high performance.

Aytan-Aktug D, Clausen P T L C, Bortolaia V, Aarestrup F M, Lund O

2020-Jan-21

AMR, antimicrobial resistance, machine learning, neural networks

Radiology Radiology

An fMRI-based neural marker for migraine without aura.

In Neurology ; h5-index 107.0

OBJECTIVE : To identify and validate an fMRI-based neural marker for migraine without aura (MwoA) and to examine its association with treatment response.

METHODS : We conducted cross-sectional studies with resting-state fMRI data from 230 participants and machine learning analyses. In studies 1 through 3, we identified, cross-validated, independently validated, and cross-sectionally validated an fMRI-based neural marker for MwoA. In study 4, we assessed the relationship between the neural marker and treatment responses in migraineurs who received a 4-week real or sham acupuncture treatment, or were waitlisted, in a registered clinical trial.

RESULTS : In study 1 (n = 116), we identified a neural marker with abnormal functional connectivity within the visual, default mode, sensorimotor, and frontal-parietal networks that could discriminate migraineurs from healthy controls (HCs) with 93% sensitivity and 89% specificity. In study 2 (n = 38), we investigated the generalizability of the marker by applying it to an independent cohort of migraineurs and HCs and achieved 84% sensitivity and specificity. In study 3 (n = 76), we verified the specificity of the marker with new datasets of migraineurs and patients with other chronic pain disorders (chronic low back pain and fibromyalgia) and demonstrated 78% sensitivity and 76% specificity for discriminating migraineurs from nonmigraineurs. In study 4 (n = 116), we found that the changes in the marker responses showed significant correlation with the changes in headache frequency in response to real acupuncture.

CONCLUSION : We identified an fMRI-based neural marker that captures distinct characteristics of MwoA and can link disease pattern changes to brain changes.

Tu Yiheng, Zeng Fang, Lan Lei, Li Zhengjie, Maleki Nasim, Liu Bo, Chen Jun, Wang Chenchen, Park Joel, Lang Courtney, Yujie Gao, Liu Mailan, Fu Zening, Zhang Zhiguo, Liang Fanrong, Kong Jian

2020-Jan-21
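The sensitivity and specificity figures reported across these studies come from comparing marker-based predictions with the true diagnoses; a minimal illustration of how the two metrics are computed (with invented labels, not the study data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels: 1 = patient (e.g. migraineur), 0 = healthy control."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Invented toy labels: 10 patients, 10 controls.
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 8 + [1, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # → 0.9 0.8
```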

General General

Three-dimensional multi-source localization of underwater objects using convolutional neural networks for artificial lateral lines.

In Journal of the Royal Society, Interface ; h5-index 0.0

This research focuses on the signal processing required for a sensory system that can simultaneously localize multiple moving underwater objects in a three-dimensional (3D) volume by simulating the hydrodynamic flow caused by these objects. We propose a method for localization in a simulated setting based on an established hydrodynamic theory founded in fish lateral line organ research. Fish neurally concatenate the information of multiple sensors to localize sources. Similarly, we use the sampled fluid velocity via two parallel lateral lines to perform source localization in three dimensions in two steps. Using a convolutional neural network, we first estimate a two-dimensional image of the probability of a present source. Then we determine the position of each source, via an automated iterative 3D-aware algorithm. We study various neural network architectural designs and different ways of presenting the input to the neural network; multi-level amplified inputs and merged convolutional streams are shown to improve the imaging performance. Results show that the combined system can exhibit adequate 3D localization of multiple sources.

Wolf Ben J, van de Wolfshaar Jos, van Netten Sietse M

2020-Jan

convolutional neural network, hydrodynamic imaging, inverse problem, lateral line, sensor array, source localization
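The paper's pipeline first produces a 2D probability image of source presence and then extracts individual source positions with an iterative algorithm. As a greatly simplified sketch of the extraction step only (assuming a probability map is already available; all positions and values are invented), one can greedily peak-pick with local suppression:

```python
import numpy as np

def pick_sources(prob_map, n_sources, suppress_radius=2):
    """Greedy peak picking: repeatedly take the argmax of the probability
    map and zero out a neighbourhood so nearby cells are not re-detected.
    A crude stand-in for the paper's iterative 3D-aware extraction step."""
    pm = prob_map.copy()
    found = []
    for _ in range(n_sources):
        i, j = np.unravel_index(np.argmax(pm), pm.shape)
        found.append((int(i), int(j)))
        i0, i1 = max(0, i - suppress_radius), i + suppress_radius + 1
        j0, j1 = max(0, j - suppress_radius), j + suppress_radius + 1
        pm[i0:i1, j0:j1] = 0.0  # suppress the detected peak's neighbourhood
    return found

# Synthetic probability image with two Gaussian blobs (invented positions).
yy, xx = np.mgrid[0:20, 0:20]
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 4.0)
prob = blob(5, 5) + 0.8 * blob(14, 12)

print(pick_sources(prob, 2))  # → [(5, 5), (14, 12)]
```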

Public Health Public Health

Investigating the use of data-driven artificial intelligence in computerised decision support systems for health and social care: A systematic review.

In Health informatics journal ; h5-index 25.0

There is growing interest in the potential of artificial intelligence to support decision-making in health and social care settings. There is, however, currently limited evidence of the effectiveness of these systems. The aim of this study was to investigate the effectiveness of artificial intelligence-based computerised decision support systems in health and social care settings. We conducted a systematic literature review to identify relevant randomised controlled trials conducted between 2013 and 2018. We searched the following databases: MEDLINE, EMBASE, CINAHL, PsycINFO, Web of Science, Cochrane Library, ASSIA, Emerald, Health Business Fulltext Elite, ProQuest Public Health, Social Care Online, and grey literature sources. Search terms were conceptualised into three groups: artificial intelligence-related terms, computerised decision support-related terms, and terms relating to health and social care. Terms within groups were combined using the Boolean operator OR, and groups were combined using the Boolean operator AND. Two reviewers independently screened studies against the eligibility criteria and two independent reviewers extracted data on eligible studies onto a customised sheet. We assessed the quality of studies through the Critical Appraisal Skills Programme checklist for randomised controlled trials. We then conducted a narrative synthesis. We identified 68 hits, of which five studies satisfied the inclusion criteria. These studies varied substantially in relation to quality, settings, outcomes, and technologies. None of the studies was conducted in social care settings, and three randomised controlled trials showed no difference in patient outcomes. Of these, one investigated the use of Bayesian triage algorithms on forced expiratory volume in 1 second (FEV1) and health-related quality of life in lung transplant patients. Another investigated the effect of image pattern recognition on neonatal development outcomes in pregnant women, and another investigated the effect of the Kalman filter technique for warfarin dosing suggestions on time in therapeutic range. The remaining two randomised controlled trials, investigating computer vision and neural networks on medication adherence and the impact of learning algorithms on assessment time of patients with gestational diabetes, showed statistically significant and clinically important differences from the control groups receiving standard care. However, these studies tended to be of low quality, lacking detailed descriptions of methods, and only one study used a double-blind design. Although the evidence of the effectiveness of data-driven artificial intelligence to support decision-making in health and social care settings is limited, this work provides important insights on how a meaningful evidence base in this emerging field needs to be developed going forward. It is unlikely that any single overall message surrounding effectiveness will emerge; rather, effectiveness of interventions is likely to be context-specific, and this calls for inclusion of a range of study designs to investigate mechanisms of action.

Cresswell Kathrin, Callaghan Margaret, Khan Sheraz, Sheikh Zakariya, Mozaffar Hajar, Sheikh Aziz

2020-Jan-22

artificial intelligence, decision support systems, narrative synthesis, randomised controlled trial, systematic review

General General

Machine Learning in Thermodynamics: Prediction of Activity Coefficients by Matrix Completion.

In The journal of physical chemistry letters ; h5-index 129.0

Activity coefficients, which are a measure of the non-ideality of liquid mixtures, are a key property in chemical engineering with relevance to modeling chemical and phase equilibria as well as transport processes. Although experimental data on thousands of binary mixtures are available, prediction methods are needed to calculate activity coefficients of many relevant mixtures that have not been explored to date. In this report, we propose a probabilistic matrix factorization model for predicting the activity coefficients of arbitrary binary mixtures. Although no physical descriptors for the considered components were used, our method outperforms the state-of-the-art method that has been refined over three decades while requiring much less training effort. This opens perspectives to novel methods for predicting physico-chemical properties of binary mixtures with the potential to revolutionize modeling and simulation in chemical engineering.

Jirasek Fabian, Alves Rodrigo A S, Damay Julie, Vandermeulen Robert A, Bamler Robert, Bortz Michael, Mandt Stephan, Kloft Marius, Hasse Hans

2020-Jan-21
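The paper uses probabilistic matrix factorization; as a simpler, non-probabilistic sketch of the matrix-completion idea, the example below factorizes a small mixture matrix with missing entries by gradient descent and imputes an unmeasured pair. All values are invented, not real activity coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activity coefficient" matrix: rows = solutes, cols = solvents.
# np.nan marks unmeasured pairs; the numbers are invented.
M = np.array([
    [0.2,    1.1,    np.nan],
    [0.3,    np.nan, 2.0],
    [np.nan, 1.3,    2.2],
    [0.25,   1.2,    2.1],
])
mask = ~np.isnan(M)

k = 2                                   # latent dimension
U = 0.1 * rng.standard_normal((M.shape[0], k))
V = 0.1 * rng.standard_normal((M.shape[1], k))

lr, reg = 0.05, 1e-3
for _ in range(5000):
    # Residuals on observed entries only; missing entries contribute nothing.
    R = np.where(mask, U @ V.T - M, 0.0)
    U -= lr * (R @ V + reg * U)
    V -= lr * (R.T @ U + reg * V)

pred = U @ V.T
print(round(float(pred[0, 2]), 2))      # imputed value for the unmeasured pair
```

The imputed entry is recovered from the latent row/column factors learned on the observed entries alone, which is exactly what lets matrix completion predict unexplored solute-solvent pairs.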

General General

Smart Tactile Sensing Systems Based on Embedded CNN Implementations.

In Micromachines ; h5-index 0.0

Embedding machine learning methods into the data decoding units may enable the extraction of complex information, making tactile sensing systems intelligent. This paper presents and compares implementations of a convolutional neural network model for tactile data decoding on various hardware platforms. Experimental results show a classification accuracy of 90.88% for Model 3, outperforming similar state-of-the-art solutions in terms of inference time. The proposed implementation achieves an inference time of 1.2 ms while consuming around 900 μJ. Such an embedded implementation of intelligent tactile data decoding algorithms enables tactile sensing systems in different application domains such as robotics and prosthetic devices.

Alameh Mohamad, Abbass Yahya, Ibrahim Ali, Valle Maurizio

2020-Jan-18

convolutional neural network, embedding intelligence, tactile sensing systems

General General

Steady-State Levels of Cytokinins and Their Derivatives May Serve as a Unique Classifier of Arabidopsis Ecotypes.

In Plants (Basel, Switzerland) ; h5-index 0.0

We determined steady-state (basal) endogenous levels of three plant hormones (abscisic acid, cytokinins and indole-3-acetic acid) in a collection of thirty different ecotypes of Arabidopsis that represent a broad genetic variability within this species. Hormone contents were analysed separately in plant shoots and roots after 21 days of cultivation on agar plates in a climate-controlled chamber. Using advanced statistical and machine learning methods, we tested if basal hormonal levels can be considered a unique ecotype-specific classifier. We also explored possible relationships between hormone levels and the prevalent environmental conditions in the site of origin for each ecotype. We found significant variations in basal hormonal levels and their ratios in both root and shoot among the ecotypes. We showed the prominent position of cytokinins (CK) among the other hormones. We found the content of CK and CK metabolites to be a reliable ecotype-specific identifier. Correlation with the mean temperature at the site of origin and the large variation in basal hormonal levels suggest that the high variability may potentially be in response to environmental factors. This study provides a starting point for ecotype-specific genetic maps of the CK metabolic and signalling network to explore its contribution to the adaptation of plants to local environmental conditions.

Samsonová Zuzana, Kiran Nagavalli S, Novák Ondřej, Spyroglou Ioannis, Skalák Jan, Hejátko Jan, Gloser Vít

2020-Jan-17

abscisic acid, cytokinin glucosides, cytokinin metabolism, cytokinins, indole-3-acetic acid, single nucleotide polymorphism

General General

A reference library for assigning protein subcellular localizations by image-based machine learning.

In The Journal of cell biology ; h5-index 0.0

Confocal micrographs of EGFP fusion proteins localized at key cell organelles in murine and human cells were acquired for use as subcellular localization landmarks. For each of the respective 789,011 and 523,319 optically validated cell images, morphology and statistical features were measured. Machine learning algorithms using these features permit automated assignment of the localization of other proteins and dyes in both cell types with very high accuracy. Automated assignment of subcellular localizations for model tail-anchored proteins with randomly mutated C-terminal targeting sequences allowed the discovery of motifs responsible for targeting to mitochondria, endoplasmic reticulum, and the late secretory pathway. Analysis of directed mutants enabled refinement of these motifs and characterization of protein distributions within cellular subcompartments.

Schormann Wiebke, Hariharan Santosh, Andrews David W

2020-Mar-02

Ophthalmology Ophthalmology

Mutational landscape screening of methylene tetrahydrofolate reductase to predict homocystinuria associated variants: An integrative computational approach.

In Mutation research ; h5-index 0.0

Methylene tetrahydrofolate reductase (MTHFR) is a flavoprotein, involved in the one-carbon pathway, and is responsible for folate and homocysteine metabolism. Regulation of MTHFR is pivotal for maintaining the cellular concentrations of methionine and SAM (S-adenosyl methionine), which are essential for the synthesis of nucleotides and amino acids, respectively. Therefore, mutations in MTHFR lead to its dysfunction, resulting in conditions like homocystinuria, cardiovascular diseases, and neural tube defects in infants. Among these conditions, homocystinuria has been highly explored, as it manifests ocular disorders, cognitive disorders and skeletal abnormalities. Hence, in this study, we intend to explore the mutational landscape of human MTHFR isoform-1 (h.MTHFR-1) to decipher the most pathogenic variants pertaining to homocystinuria. Thus, a multilevel stringent prioritization of non-synonymous mutations in h.MTHFR-1 by integrative machine learning approaches was implemented to delineate highly deleterious variants based on pathogenicity, impact on structural stability and functionality. Subsequently, extended molecular dynamics simulations and molecular docking studies were also integrated in order to prioritize the mutations that perturb the structural stability and functionality of h.MTHFR-1. In addition, displacement of the loop (Arg157-Tyr174) and helix α9 (His263-Ser272) involved in the open/closed conformation of the substrate binding domain was also probed to confirm the functional loss. On juxtaposed analysis, it was inferred that among the 126 missense mutations screened, in addition to the known pathogenic mutations (H127T, A222V, T227M, F257V and G387D), the W500C, P254S and D585N variants could potentially be driving homocystinuria. These findings open the prospect of including these mutations in diagnostic panels, subject to further experimental validation.

Nagarajan Hemavathy, Narayanaswamy Saratha, Vetrivel Umashankar

2020-Jan-16

Homocystinuria, MTHFR, Molecular docking, Molecular dynamics simulation, Molecular modelling, Mutation, SNPs

Ophthalmology Ophthalmology

Diabetic retinopathy, classified using the Lesion-aware Deep Learning System, predicts diabetic end-stage renal disease in Chinese patients.

In Endocrine practice : official journal of the American College of Endocrinology and the American Association of Clinical Endocrinologists ; h5-index 0.0

Aims: To characterize the relationship between diabetic retinopathy (DR) and diabetic nephropathy (DN) in Chinese patients and to determine whether the severity of DR predicts end-stage renal disease (ESRD). Methods: Bilateral fundus photographs of 91 Chinese type 2 diabetic patients with biopsy-confirmed DN, not in ESRD stage, were obtained at the time of renal biopsy in this longitudinal study. The baseline severity of DR was determined using the Lesion-aware Deep Learning System (RetinalNET) in an open framework for deep learning and was graded using the Early Treatment Diabetic Retinopathy Study Severity Scale. Cox proportional hazard models were used to estimate the hazard ratio (HR) for the effect of the severity of diabetic retinopathy on ESRD. Results: During a median follow-up of 15 months, 25 patients progressed to ESRD. The severity of retinopathy at the time of biopsy was a prognostic factor for progression to ESRD (HR 2.18, 95% confidence interval (CI) 1.05-4.53, P = 0.04). At baseline, more severe retinopathy was associated with poor renal function and more severe glomerular lesions. However, 30% of patients with mild retinopathy and severe glomerular lesions had higher low-density lipoprotein-cholesterol and more severe proteinuria than those with mild glomerular lesions. Additionally, 3% of patients with severe retinopathy and mild glomerular changes were more likely to have had diabetes for a longer time than those with severe glomerular lesions. Conclusions: Although the severity of DR predicted diabetic ESRD in patients with T2DM and DN, the severities of DR and DN were not always consistent, especially in patients with mild retinopathy or microalbuminuria.

Zhao Lijun, Ren Honghong, Zhang Junlin, Cao Yana, Wang Yiting, Meng Dan, Wu Yucheng, Zhang Rui, Zou Yutong, Xu Huan, Li Lin, Zhang Jie, Cooper Mark E, Tong Nanwei, Liu Fang

2020-Jan-22

artificial intelligence, diabetes mellitus, diabetic nephropathy, diabetic retinopathy, renal biopsy

General General

Can Artificial Intelligence Improve the Management of Pneumonia.

In Journal of clinical medicine ; h5-index 0.0

The use of artificial intelligence (AI) to support clinical medical decisions is a rather promising concept. There are two important factors that have driven these advances: the availability of data from electronic health records (EHR) and progress made in computational performance. These two concepts are interrelated with respect to complex mathematical functions such as machine learning (ML) or neural networks (NN). Indeed, some published articles have already demonstrated the potential of these approaches in medicine. When considering the diagnosis and management of pneumonia, the use of AI with chest X-ray (CXR) images has primarily been associated with earlier diagnosis, prompt antimicrobial therapy, and ultimately, better prognosis. Coupled with this is growing research on empirical therapy and mortality prediction. Maximizing the power of NN, the majority of studies have reported high accuracy rates in their predictions. As AI can handle large amounts of data and execute mathematical functions such as machine learning and neural networks, it can be revolutionary in supporting clinical decision-making processes. In this review, we describe and discuss the most relevant studies of AI in pneumonia.

Chumbita Mariana, Cillóniz Catia, Puerta-Alcalde Pedro, Moreno-García Estela, Sanjuan Gemma, Garcia-Pouton Nicole, Soriano Alex, Torres Antoni, Garcia-Vidal Carolina

2020-Jan-17

artificial intelligence, pneumonia

Dermatology Dermatology

Artificial Neural Networks allow Response Prediction in Squamous Cell Carcinoma of the Scalp Treated with Radiotherapy.

In Journal of the European Academy of Dermatology and Venereology : JEADV ; h5-index 0.0

BACKGROUND : Epithelial neoplasms of the scalp account for approximately 2% of all skin cancers and for about 10-20% of the tumors affecting the head and neck area. Radiotherapy is suggested for localized cutaneous squamous cell carcinomas (cSCC) without lymph node involvement, multiple or extensive lesions, for patients refusing surgery, for patients with a poor general medical status, as adjuvant for incompletely excised lesions and/or as a palliative treatment. To date, prognostic risk factors in scalp cSCC patients are poorly characterized.

OBJECTIVE : To identify patterns of patients with a higher risk of post-radiotherapy recurrence.

METHODS : A retrospective observational study was performed on scalp cSCC patients with a histological diagnosis who underwent conventional radiotherapy (50-120 kV) (between 1996 and 2008; follow-up from 1 to 140 months, median 14 months). Of the 79 enrolled patients, 22 (27.8%) had previously undergone surgery. Two months after radiotherapy, 66 (83.5%) patients achieved a complete remission, 6 (7.6%) a partial remission, whereas 2 (2.5%) proved non-responsive to the treatment and 5 cases were lost to follow-up. Demographical and clinical data were preliminarily analyzed with classical descriptive statistics and with principal component analysis. All data were then re-evaluated with a machine learning-based approach using a 4th generation artificial neural networks (ANNs)-based algorithm.

RESULTS : ANNs analysis revealed four scalp cSCC profiles among radiotherapy responsive patients, not previously described: namely, 1) stage T2 cSCC type, aged 70-80 years; 2) frontal cSCC type, aged <70 years; 3) non-recurrent nodular or nodulo-ulcerated, stage T3 cSCC type, of the vertex and treated with >60 Grays (Gy); and 4) flat, occipital, stage T1 cSCC type, treated with 50-59 Gy. The model uncovering these four predictive profiles displayed 85.7% sensitivity, 97.6% specificity, and 91.7% overall accuracy.

CONCLUSIONS : Patient profiling/phenotyping with machine learning may be a new, helpful method to stratify patients with scalp cSCCs who may benefit from radiotherapy treatment.

Damiani G, Grossi E, Berti E, Conic R Rz, Radhakrishna U, Pacifico A, Bragazzi N L, Piccinno R, Linder D

2020-Jan-22

Squamous cell carcinoma, artificial neural networks, machine learning, precision medicine, radiotherapy, scalp

General General

Assessment of Mandibular Movement Monitoring With Machine Learning Analysis for the Diagnosis of Obstructive Sleep Apnea.

In JAMA network open ; h5-index 0.0

Importance : Given the high prevalence of obstructive sleep apnea (OSA), there is a need for simpler and automated diagnostic approaches.

Objective : To evaluate whether mandibular movement (MM) monitoring during sleep coupled with an automated analysis by machine learning is appropriate for OSA diagnosis.

Design, Setting, and Participants : Diagnostic study of adults undergoing overnight in-laboratory polysomnography (PSG) as the reference method compared with simultaneous MM monitoring at a sleep clinic in an academic institution (Sleep Laboratory, Centre Hospitalier Universitaire Université Catholique de Louvain Namur Site Sainte-Elisabeth, Namur, Belgium). Patients with suspected OSA were enrolled from July 5, 2017, to October 31, 2018.

Main Outcomes and Measures : Obstructive sleep apnea diagnosis required either evoking signs or symptoms or related medical or psychiatric comorbidities coupled with a PSG-derived respiratory disturbance index (PSG-RDI) of at least 5 events/h. A PSG-RDI of at least 15 events/h satisfied the diagnosis criteria even in the absence of associated symptoms or comorbidities. Patients who did not meet these criteria were classified as not having OSA. Agreement analysis and diagnostic performance were assessed by Bland-Altman plot comparing PSG-RDI and the Sunrise system RDI (Sr-RDI) with diagnosis threshold optimization via receiver operating characteristic curves, allowing for evaluation of the device sensitivity and specificity in detecting OSA at 5 events/h and 15 events/h.

Results : Among 376 consecutive adults with suspected OSA, the mean (SD) age was 49.7 (13.2) years, the mean (SD) body mass index was 31.0 (7.1), and 207 (55.1%) were men. Reliable agreement was found between PSG-RDI and Sr-RDI in patients without OSA (n = 46; mean difference, 1.31; 95% CI, -1.05 to 3.66 events/h) and in patients with OSA with a PSG-RDI of at least 5 events/h with symptoms (n = 107; mean difference, -0.69; 95% CI, -3.77 to 2.38 events/h). An Sr-RDI underestimation of -11.74 (95% CI, -20.83 to -2.67) events/h in patients with OSA with a PSG-RDI of at least 15 events/h was detected and corrected by optimization of the Sunrise system diagnostic threshold. The Sr-RDI showed diagnostic capability, with areas under the receiver operating characteristic curve of 0.95 (95% CI, 0.92-0.96) and 0.93 (95% CI, 0.90-0.93) for corresponding PSG-RDIs of 5 events/h and 15 events/h, respectively. At the 2 optimal cutoffs of 7.63 events/h and 12.65 events/h, Sr-RDI had accuracy of 0.92 (95% CI, 0.90-0.94) and 0.88 (95% CI, 0.86-0.90) as well as posttest probabilities of 0.99 (95% CI, 0.99-0.99) and 0.89 (95% CI, 0.88-0.91) at PSG-RDIs of at least 5 events/h and at least 15 events/h, respectively, corresponding to positive likelihood ratios of 14.86 (95% CI, 9.86-30.12) and 5.63 (95% CI, 4.92-7.27), respectively.

Conclusions and Relevance : Automatic analysis of MM patterns provided reliable performance in RDI calculation. The use of this index in OSA diagnosis appears to be promising.

Pépin Jean-Louis, Letesson Clément, Le-Dong Nhat Nam, Dedave Antoine, Denison Stéphane, Cuthbert Valérie, Martinot Jean-Benoît, Gozal David

2020-Jan-03
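Bland-Altman agreement, as used above to compare PSG-RDI with Sr-RDI, reduces to a bias (the mean paired difference) and 95% limits of agreement. A minimal sketch with invented paired RDI values, not the study data:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement: bias = mean paired difference,
    95% limits of agreement = bias ± 1.96 * SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired RDI values (events/h): reference PSG vs. candidate device.
psg = [4.0, 8.0, 15.0, 22.0, 30.0, 41.0]
dev = [5.0, 7.5, 16.0, 20.5, 31.0, 40.0]
bias, (lo, hi) = bland_altman(psg, dev)
print(round(bias, 2), round(lo, 2), round(hi, 2))  # → 0.0 -2.23 2.23
```

A narrow interval around a bias near zero (as in the study's non-OSA and symptomatic subgroups) indicates that the two measurements can be used interchangeably over that range.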

Surgery Surgery

Prediction of Pulmonary to Systemic Flow Ratio in Patients With Congenital Heart Disease Using Deep Learning-Based Analysis of Chest Radiographs.

In JAMA cardiology ; h5-index 0.0

Importance : Chest radiography is a useful noninvasive modality to evaluate pulmonary blood flow status in patients with congenital heart disease. However, the predictive value of chest radiography is limited by the subjective and qualitative nature of the interpretation. Recently, deep learning has been used to analyze various images, but it has not been applied to analyzing chest radiographs in such patients.

Objective : To develop and validate a quantitative method to predict the pulmonary to systemic flow ratio from chest radiographs using deep learning.

Design, Setting, and Participants : This retrospective observational study included 1031 cardiac catheterizations performed for 657 patients from January 1, 2005, to April 30, 2019, at a tertiary center. Catheterizations without the Fick-derived pulmonary to systemic flow ratio or chest radiography performed within 1 month before catheterization were excluded. Seventy-eight patients (100 catheterizations) were randomly assigned for evaluation. A deep learning model that predicts the pulmonary to systemic flow ratio from chest radiographs was developed using the method of transfer learning.

Main Outcomes and Measures : Whether the model can predict the pulmonary to systemic flow ratio from chest radiographs was evaluated using the intraclass correlation coefficient and Bland-Altman analysis. The diagnostic concordance rate was compared with 3 certified pediatric cardiologists. The diagnostic performance for a high pulmonary to systemic flow ratio of 2.0 or more was evaluated using cross tabulation and a receiver operating characteristic curve.

Results : The study included 1031 catheterizations in 657 patients (522 males [51%]; median age, 3.4 years [interquartile range, 1.2-8.6 years]), in whom the mean (SD) Fick-derived pulmonary to systemic flow ratio was 1.43 (0.95). Diagnosis included congenital heart disease in 1008 catheterizations (98%). The intraclass correlation coefficient for the Fick-derived and deep learning-derived pulmonary to systemic flow ratio was 0.68, the log-transformed bias was 0.02, and the log-transformed precision was 0.12. The diagnostic concordance rate of the deep learning model was significantly higher than that of the experts (correctly classified 64 of 100 vs 49 of 100 chest radiographs; P = .02 [McNemar test]). For detecting a high pulmonary to systemic flow ratio, the sensitivity of the deep learning model was 0.47, the specificity was 0.95, and the area under the receiver operating curve was 0.88.

Conclusions and Relevance : The present investigation demonstrated that deep learning-based analysis of chest radiographs predicted the pulmonary to systemic flow ratio in patients with congenital heart disease. These findings suggest that the deep learning-based approach may confer an objective and quantitative evaluation of chest radiographs in the congenital heart disease clinic.

Toba Shuhei, Mitani Yoshihide, Yodoya Noriko, Ohashi Hiroyuki, Sawada Hirofumi, Hayakawa Hidetoshi, Hirayama Masahiro, Futsuki Ayano, Yamamoto Naoki, Ito Hisato, Konuma Takeshi, Shimpo Hideto, Takao Motoshi

2020-Jan-22
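Agreement between the Fick-derived and model-derived flow ratios is summarized above by an intraclass correlation coefficient; the abstract does not state which ICC form was used, so the sketch below implements the simplest one-way random-effects variant, ICC(1,1), on invented paired measurements:

```python
import statistics

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1):
    (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
    between-subject and within-subject mean squares.
    ratings: list of subjects, each a list of k measurements."""
    n, k = len(ratings), len(ratings[0])
    grand = statistics.mean(x for row in ratings for x in row)
    row_means = [statistics.mean(row) for row in ratings]
    ssb = k * sum((m - grand) ** 2 for m in row_means)
    ssw = sum((x - m) ** 2 for row, m in zip(ratings, row_means) for x in row)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Invented paired measurements (e.g. two methods per subject).
perfect = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
noisy = [[1.0, 1.4], [2.0, 1.7], [3.0, 3.2], [4.0, 4.1]]
print(icc_1_1(perfect))           # → 1.0 (identical measurements)
print(round(icc_1_1(noisy), 2))   # below 1, reflecting disagreement
```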

General General

Tissue-guided LASSO for prediction of clinical drug response using preclinical samples.

In PLoS computational biology ; h5-index 0.0

Prediction of clinical drug response (CDR) of cancer patients, based on their clinical and molecular profiles obtained prior to administration of the drug, can play a significant role in individualized medicine. Machine learning models have the potential to address this issue but training them requires data from a large number of patients treated with each drug, limiting their feasibility. While large databases of drug response and molecular profiles of preclinical in-vitro cancer cell lines (CCLs) exist for many drugs, it is unclear whether preclinical samples can be used to predict CDR of real patients. We designed a systematic approach to evaluate how well different algorithms, trained on gene expression and drug response of CCLs, can predict CDR of patients. Using data from two large databases, we evaluated various linear and non-linear algorithms, some of which utilized information on gene interactions. Then, we developed a new algorithm called TG-LASSO that explicitly integrates information on samples' tissue of origin with gene expression profiles to improve prediction performance. Our results showed that regularized regression methods provide better prediction performance. However, including the network information or common methods of including information on the tissue of origin did not improve the results. On the other hand, TG-LASSO improved the predictions and distinguished resistant and sensitive patients for 7 out of 13 drugs. Additionally, TG-LASSO identified genes associated with the drug response, including known targets and pathways involved in the drugs' mechanism of action. Moreover, genes identified by TG-LASSO for multiple drugs in a tissue were associated with patient survival. In summary, our analysis suggests that preclinical samples can be used to predict CDR of patients and identify biomarkers of drug sensitivity and survival.

Huang Edward W, Bhope Ameya, Lim Jing, Sinha Saurabh, Emad Amin

2020-Jan
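TG-LASSO builds on LASSO-regularized regression; the sketch below shows plain LASSO via cyclic coordinate descent with soft-thresholding (not the authors' tissue-guided variant) on invented toy expression data, illustrating how irrelevant features are shrunk toward zero:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO via cyclic coordinate descent.
    Minimizes (1/2n)||y - Xw||^2 + lam * ||w||_1.
    Assumes columns of X are (approximately) unit-variance."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]      # partial residual excluding j
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return w

# Invented toy data: 80 samples, 5 "genes", only two of which matter.
rng = np.random.default_rng(1)
n, p = 80, 5
X = rng.standard_normal((n, p))
true_w = np.array([2.0, 0.0, -1.5, 0.0, 0.0])   # sparse ground truth
y = X @ true_w + 0.1 * rng.standard_normal(n)

w = lasso_cd(X, y, lam=0.1)
print(np.round(w, 2))  # informative features kept, irrelevant ones near zero
```

The selected nonzero coefficients are what makes LASSO-type models useful for biomarker identification, as in the genes TG-LASSO associates with drug response.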

Radiology Radiology

Image Quality and Lesion Detection on Deep Learning Reconstruction and Iterative Reconstruction of Submillisievert Chest and Abdominal CT.

In AJR. American journal of roentgenology ; h5-index 0.0

OBJECTIVE. The objective of this study was to compare image quality and clinically significant lesion detection on deep learning reconstruction (DLR) and iterative reconstruction (IR) images of submillisievert chest and abdominopelvic CT. MATERIALS AND METHODS. Our prospective multiinstitutional study included 59 adult patients (33 women, 26 men; mean age ± SD, 65 ± 12 years old; mean body mass index [weight in kilograms divided by the square of height in meters] = 27 ± 5) who underwent routine chest (n = 22; 16 women, six men) and abdominopelvic (n = 37; 17 women, 20 men) CT on a 640-MDCT scanner (Aquilion ONE, Canon Medical Systems). All patients gave written informed consent for the acquisition of low-dose (LD) CT (LDCT) after a clinically indicated standard-dose (SD) CT (SDCT). The SDCT series (120 kVp, 164-644 mA) were reconstructed with IR (adaptive iterative dose reduction [AIDR] 3D, Canon Medical Systems), and the LDCT (100 kVp, 120 kVp; 30-50 mA) were reconstructed with filtered back-projection (FBP), IR (AIDR 3D and forward-projected model-based iterative reconstruction solution [FIRST], Canon Medical Systems), and DLR (Advanced Intelligent Clear-IQ Engine [AiCE], Canon Medical Systems). Four subspecialty-trained radiologists first read all LD image sets and then compared them side-by-side with SD AIDR 3D images in an independent, randomized, and blinded fashion. Subspecialty radiologists assessed image quality of LDCT images on a 3-point scale (1 = unacceptable, 2 = suboptimal, 3 = optimal). Descriptive statistics were obtained, and the Wilcoxon signed rank test was performed. RESULTS. Mean volume CT dose index and dose-length product for LDCT (2.1 ± 0.8 mGy, 49 ± 13 mGy·cm) were lower than those for SDCT (13 ± 4.4 mGy, 567 ± 249 mGy·cm) (p < 0.0001). All 31 clinically significant abdominal lesions were seen on SD AIDR 3D and LD DLR images. Twenty-five, 18, and seven lesions were detected on LD AIDR 3D, LD FIRST, and LD FBP images, respectively.
All 39 pulmonary nodules detected on SD AIDR 3D images were also noted on LD DLR images. LD DLR images were deemed acceptable for interpretation in 97% (35/37) of abdominal and 95-100% (21-22/22) of chest LDCT studies (p = 0.2-0.99). The LD FIRST, LD AIDR 3D, and LD FBP images had inferior image quality compared with SD AIDR 3D images (p < 0.0001). CONCLUSION. At submillisievert chest and abdominopelvic CT doses, DLR enables image quality and lesion detection superior to commercial IR and FBP images.

Singh Ramandeep, Digumarthy Subba R, Muse Victorine V, Kambadakone Avinash R, Blake Michael A, Tabari Azadeh, Hoi Yiemeng, Akino Naruomi, Angel Erin, Madan Rachna, Kalra Mannudeep K

2020-Jan-22

abdomen CT, chest CT, deep learning, image reconstruction, radiation dose

General General

Analyze Informant-Based Questionnaire for The Early Diagnosis of Senile Dementia Using Deep Learning.

In IEEE journal of translational engineering in health and medicine ; h5-index 0.0

OBJECTIVE : This paper proposes a multiclass deep learning method for the classification of dementia using an informant-based questionnaire.

METHODS : A deep neural network classification model based on the Keras framework is proposed in this paper. To evaluate the advantages of our proposed method, we compared the performance of our model with industry-standard machine learning approaches. We enrolled 6,701 individuals, who were randomly divided into a training set (6,030 participants) and a test set (671 participants). We evaluated each diagnostic model on the test set using accuracy, precision, recall, and F1-score.
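The evaluation metrics named above (accuracy aside, these are computed per class) can be derived from the predicted and true labels. A minimal pure-Python sketch, assuming simple label lists; the class names in the usage example are illustrative, not the study's data:

```python
def per_class_f1(y_true, y_pred):
    """Compute precision, recall and F1-score for each class label.

    Returns a dict mapping label -> (precision, recall, f1).
    """
    labels = sorted(set(y_true) | set(y_pred))
    scores = {}
    for label in labels:
        # One-vs-rest counts for this class
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores[label] = (precision, recall, f1)
    return scores

# Hypothetical example: two classes, four cases
scores = per_class_f1(["normal", "normal", "MCI", "MCI"],
                      ["normal", "MCI", "MCI", "MCI"])
```

In practice a library routine (e.g. scikit-learn's `classification_report`) would be used, but the arithmetic is exactly this.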

RESULTS : Compared with seven conventional machine learning algorithms, the DNN showed higher stability and achieved the best accuracy (0.88). It also performed well in identifying normal cognition (F1-score = 0.88), mild cognitive impairment (MCI) (F1-score = 0.87), very mild dementia (VMD) (F1-score = 0.77), and severe dementia (F1-score = 0.94).

CONCLUSION : The deep neural network (DNN) classification model can effectively help doctors accurately screen patients who have normal cognitive function, mild cognitive impairment (MCI), very mild dementia (VMD), mild dementia (Mild), moderate dementia (Moderate), and severe dementia (Severe).

Zhu Fubao, Li Xiaonan, Mcgonigle Daniel, Tang Haipeng, He Zhuo, Zhang Chaoyang, Hung Guang-Uei, Chiu Pai-Yi, Zhou Weihua

2020

Dementia, deep neural network, information gain, machine learning

oncology Oncology

Application of deep learning to the classification of uterine cervical squamous epithelial lesion from colposcopy images combined with HPV types.

In Oncology letters ; h5-index 0.0

The aim of the present study was to explore the feasibility of using deep learning, a form of artificial intelligence (AI), to classify cervical squamous intraepithelial lesions (SILs) from colposcopy images combined with human papillomavirus (HPV) types. Among 330 patients who underwent colposcopy and biopsy performed by gynecological oncologists, a total of 253 patients with confirmed HPV typing tests were enrolled in the present study. Of these patients, 210 were diagnosed with high-grade SIL (HSIL) and 43 were diagnosed with low-grade SIL (LSIL). An original AI classifier consisting of a convolutional neural network concatenated with an HPV tensor was developed and trained. The accuracy of the AI classifier and the gynecological oncologists was 0.941 and 0.843, respectively. The AI classifier performed better than the oncologists, although not significantly so. The sensitivity, specificity, positive predictive value, negative predictive value, Youden's J index, and area under the receiver-operating characteristic curve ± standard error for AI colposcopy combined with HPV types and pathological results were 0.956 (43/45), 0.833 (5/6), 0.977 (43/44), 0.714 (5/7), 0.789, and 0.963±0.026, respectively. Although further study is required, the clinical use of AI for the classification of HSIL/LSIL by both colposcopy and HPV type may be feasible.
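The reported Youden's J index of 0.789 follows directly from the sensitivity and specificity counts given in the abstract; a minimal sketch:

```python
def youden_j(tp, fn, tn, fp):
    """Youden's J statistic: sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)  # true positives among all positives
    specificity = tn / (tn + fp)  # true negatives among all negatives
    return sensitivity + specificity - 1

# Counts reported in the abstract: sensitivity 43/45, specificity 5/6
j = youden_j(tp=43, fn=2, tn=5, fp=1)  # rounds to 0.789
```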

Miyagi Yasunari, Takehara Kazuhiro, Nagayasu Yoko, Miyake Takahito

2020-Feb

HPV, artificial intelligence, cervical intraepithelial neoplasia, colposcopy, deep learning

General General

Biosensors to monitor MS activity.

In Multiple sclerosis (Houndmills, Basingstoke, England) ; h5-index 0.0

Advances in wearable and wireless biosensing technology pave the way for a brave new world of novel multiple sclerosis (MS) outcome measures. Our current tools for examining patients date back to the 19th century and, while invaluable to the neurologist, invite accompaniment from these new technologies and artificial intelligence (AI) analytical methods. While the most common biosensor tool used in MS publications to date is the accelerometer, the landscape is changing quickly with multi-sensor applications, electrodermal sensors, and wireless radiofrequency waves. Some caution is warranted to ensure novel outcomes have clear clinical relevance and stand up to the rigors of reliability, reproducibility, and precision, but the ultimate implementation of biosensing in the MS clinical setting is inevitable.

Graves Jennifer S, Montalban Xavier

2020-Jan-22

Multiple sclerosis, biosensors, digital health, wearable devices

General General

The impact of chemoinformatics on drug discovery in the pharmaceutical industry.

In Expert opinion on drug discovery ; h5-index 34.0

Introduction: Even though there have been substantial advances in our understanding of biological systems, research in drug discovery is only just now beginning to utilize this type of information. The single-target paradigm, which exemplifies the reductionist approach, remains a mainstay of drug research today. A deeper view of the complexity involved in drug discovery is necessary to advance this field. Areas covered: This perspective provides a summary of research areas where chemoinformatics has played a key role in drug discovery, including the available resources, as well as a personal perspective on the challenges still faced in the field. Expert opinion: Although great strides have been made in the handling and analysis of biological and pharmacological data, more must be done to link the data to biological pathways. This is crucial if one is to understand how drugs modify disease phenotypes, although it will involve a shift from the single-drug/single-target paradigm that remains a mainstay of drug research. Moreover, such a shift would require an increased awareness of the role of physiology in the mechanism of drug action, which in turn will require training chemoinformaticians in new mathematical, computational, and biological methods.

Martinez-Mayorga Karina, Madariaga-Mazon Abraham, Medina-Franco José L, Maggiora Gerald

2020-Jan-22

Chemoinformatics, artificial intelligence, big data, molecular modeling, polypharmacology, polyspecificity

Radiology Radiology

Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century.

In The British journal of radiology ; h5-index 0.0

Advances in computing hardware and software platforms have led to the recent resurgence in artificial intelligence (AI), which touches almost every aspect of our daily lives through its capability for automating complex tasks and providing superior predictive analytics. AI applications currently span many diverse fields, from economics to entertainment to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we embark on a journey to reflect on the lessons learned from AI's chequered history. We further summarize the current status of AI in radiological sciences, highlighting, with examples, its impressive achievements and effect on re-shaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges that must be overcome for AI to achieve its promised hope of providing better precision healthcare for each patient while reducing the cost burden on families and society at large.

El Naqa Issam, Haider Masoom A, Giger Maryellen L, Ten Haken Randall K

2020-Feb-01

General General

Modelling Training Adaptation in Swimming Using Artificial Neural Network Geometric Optimisation.

In Sports (Basel, Switzerland) ; h5-index 0.0

This study aims to model training adaptation using Artificial Neural Network (ANN) geometric optimisation. Over 26 weeks, 38 swimmers recorded their training and recovery data on a web platform. Based on these data, ANN geometric optimisation was used to model and graphically separate adaptation from maladaptation (to training). Geometric Activity Performance Index (GAPI), defined as the ratio of the adaptation to the maladaptation area, was introduced. The techniques of jittering and ensemble modelling were used to reduce overfitting of the model. Correlation (Spearman rank) and independence (Blomqvist β) tests were run between GAPI and performance measures to check the relevance of the collected parameters. Thirteen out of 38 swimmers met the prerequisites for the analysis and were included in the modelling. The GAPI based on external load (distance) and internal load (session-Rating of Perceived Exertion) showed the strongest correlation with performance measures. ANN geometric optimisation seems to be a promising technique to model training adaptation and GAPI could be an interesting numerical surrogate to track during a season.
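The correlation check described above used a Spearman rank test between GAPI and performance measures. A minimal pure-Python sketch of Spearman's rho, assuming no tied values (the study presumably used a statistics package with full tie handling):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for two equal-length sequences.

    Assumes no tied values, so the classical closed form applies:
        rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    where d_i is the difference between the ranks of x_i and y_i.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Because it works on ranks, any monotone relationship (e.g. quadratic growth) yields rho = 1, which is why it suits training-load indices that need not scale linearly with performance.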

Carrard Justin, Kloucek Petr, Gojanovic Boris

2020-Jan-16

machine learning, online tool, training monitoring

General General

Migrating from partial least squares discriminant analysis to artificial neural networks: a comparison of functionally equivalent visualisation and feature contribution tools using jupyter notebooks.

In Metabolomics : Official journal of the Metabolomic Society ; h5-index 0.0

INTRODUCTION : Metabolomics data is commonly modelled multivariately using partial least squares discriminant analysis (PLS-DA). Its success is primarily due to ease of interpretation, through projection to latent structures, and transparent assessment of feature importance using regression coefficients and Variable Importance in Projection scores. In recent years several non-linear machine learning (ML) methods have grown in popularity but with limited uptake essentially due to convoluted optimisation and interpretation. Artificial neural networks (ANNs) are a non-linear projection-based ML method that share a structural equivalence with PLS, and as such should be amenable to equivalent optimisation and interpretation methods.

OBJECTIVES : We hypothesise that standardised optimisation, visualisation, evaluation and statistical inference techniques commonly used by metabolomics researchers for PLS-DA can be migrated to a non-linear, single hidden layer, ANN.

METHODS : We compared a standardised optimisation, visualisation, evaluation and statistical inference techniques workflow for PLS with the proposed ANN workflow. Both workflows were implemented in the Python programming language. All code and results have been made publicly available as Jupyter notebooks on GitHub.

RESULTS : The migration of the PLS workflow to a non-linear, single hidden layer, ANN was successful. There was a similarity in significant metabolites determined using PLS model coefficients and ANN Connection Weight Approach.
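The Connection Weight Approach mentioned above scores each input feature by summing, over the hidden units, the product of the input-to-hidden and hidden-to-output weights. A minimal sketch for a single-hidden-layer ANN with one output; the weight values in the test are illustrative, not from the study:

```python
def connection_weight_importance(w_input_hidden, w_hidden_output):
    """Connection Weight Approach for a single-hidden-layer ANN.

    w_input_hidden:  list of rows, one per input feature; row i holds the
                     weights from input i to each hidden unit.
    w_hidden_output: weights from each hidden unit to the single output.

    Importance of input i = sum over hidden units h of
                            w_input_hidden[i][h] * w_hidden_output[h].
    """
    return [
        sum(row[h] * w_hidden_output[h] for h in range(len(w_hidden_output)))
        for row in w_input_hidden
    ]
```

Signed importances are the point of this approach: unlike magnitude-only summaries, they indicate the direction of each feature's contribution, mirroring the role of regression coefficients in PLS.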

CONCLUSION : We have shown that it is possible to migrate the standardised PLS-DA workflow to simple non-linear ANNs. This result opens the door for more widespread use and to the investigation of transparent interpretation of more complex ANN architectures.

Mendez Kevin M, Broadhurst David I, Reinke Stacey N

2020-Jan-21

Artificial neural networks, Jupyter, Machine learning, Metabolomics, Partial least squares, Variable importance in projection

General General

Machine learning for the prediction of sepsis: a systematic review and meta-analysis of diagnostic test accuracy.

In Intensive care medicine ; h5-index 86.0

PURPOSE : Early clinical recognition of sepsis can be challenging. With the advancement of machine learning, promising real-time models to predict sepsis have emerged. We assessed their performance by carrying out a systematic review and meta-analysis.

METHODS : A systematic search was performed in PubMed, Embase.com and Scopus. Studies targeting sepsis, severe sepsis or septic shock in any hospital setting were eligible for inclusion. The index test was any supervised machine learning model for real-time prediction of these conditions. Quality of evidence was assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology, with a tailored Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) checklist to evaluate risk of bias. Models with a reported area under the curve of the receiver operating characteristic (AUROC) metric were meta-analyzed to identify strongest contributors to model performance.

RESULTS : After screening, a total of 28 papers were eligible for synthesis, from which 130 models were extracted. The majority of papers were developed in the intensive care unit (ICU; n = 15; 54%), followed by hospital wards (n = 7; 25%), the emergency department (ED; n = 4; 14%), and all of these settings (n = 2; 7%). For the prediction of sepsis, diagnostic test accuracy assessed by the AUROC ranged from 0.68 to 0.99 in the ICU, 0.96 to 0.98 on hospital wards, and 0.87 to 0.97 in the ED. Varying sepsis definitions limit pooling of performance across studies. Only three papers clinically implemented models, with mixed results. In the multivariate analysis, temperature, lab values, and model type contributed most to model performance.
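The AUROC metric pooled across these studies has a direct probabilistic reading: the chance that a randomly chosen septic patient is scored higher than a randomly chosen non-septic one. A minimal sketch computing it that way (scores in the test are illustrative):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney interpretation.

    scores: model risk scores; labels: 1 for positive (e.g. sepsis), 0 for
    negative. Counts the fraction of positive/negative pairs where the
    positive case scores higher; ties count as half.
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n²) pairwise form is fine for illustration; production code would sort once and use rank sums.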

CONCLUSION : This systematic review and meta-analysis show that on retrospective data, individual machine learning models can accurately predict sepsis onset ahead of time. Although they present alternatives to traditional scoring systems, between-study heterogeneity limits the assessment of pooled results. Systematic reporting and clinical implementation studies are needed to bridge the gap between bytes and bedside.

Fleuren Lucas M, Klausch Thomas L T, Zwager Charlotte L, Schoonmade Linda J, Guo Tingjie, Roggeveen Luca F, Swart Eleonora L, Girbes Armand R J, Thoral Patrick, Ercole Ari, Hoogendoorn Mark, Elbers Paul W G

2020-Jan-21

Machine learning, Meta-analysis, Prediction, Sepsis, Septic shock, Systematic review

Surgery Surgery

Transfer learning radiomics based on multimodal ultrasound imaging for staging liver fibrosis.

In European radiology ; h5-index 62.0

OBJECTIVES : To propose a transfer learning (TL) radiomics model that efficiently combines the information from gray scale and elastogram ultrasound images for accurate liver fibrosis grading.

METHODS : A total of 466 patients undergoing partial hepatectomy were enrolled, including 401 with chronic hepatitis B and 65 without fibrosis pathologically. All patients underwent elastography and received liver stiffness measurement (LSM) 2-3 days before surgery. We proposed a deep convolutional neural network with TL to analyze images of the gray scale modality (GM) and elastogram modality (EM). The TL process used an Inception-V3 network pretrained on ImageNet for liver fibrosis classification. The diagnostic performance of TL and non-TL was compared. The value of single modalities, including GM and EM alone, and multimodalities, including GM + LSM and GM + EM, was evaluated and compared with that of LSM and serological indexes. Receiver operating characteristic curve analysis was performed to calculate the optimal area under the curve (AUC) for classifying fibrosis of S4, ≥ S3, and ≥ S2.

RESULTS : TL in GM and EM demonstrated higher diagnostic accuracy than non-TL, with significantly higher AUCs (all p < .01). Single-modal GM and EM both performed better than LSM and serum indexes (all p < .001). Multimodal GM + EM was the most accurate prediction model (AUCs of 0.950, 0.932, and 0.930 for classifying S4, ≥ S3, and ≥ S2, respectively) compared with GM + LSM, GM and EM alone, LSM, and biomarkers (all p < .05).

CONCLUSIONS : Liver fibrosis can be staged by a transfer learning model based on the combination of gray scale and elastogram ultrasound images, with excellent performance.

KEY POINTS : • Transfer learning applies a deep learning algorithm pretrained on another relevant problem, which is expected to reduce the risk of overfitting due to insufficient medical images. • Liver fibrosis can be staged by transfer learning radiomics with excellent performance. • The most accurate prediction model, transfer learning with an Inception-V3 network, uses the combination of gray scale and elastogram ultrasound images.

Xue Li-Yun, Jiang Zhuo-Yun, Fu Tian-Tian, Wang Qing-Min, Zhu Yu-Li, Dai Meng, Wang Wen-Ping, Yu Jin-Hua, Ding Hong

2020-Jan-21

Deep learning, Elasticity imaging techniques, Hepatitis B, Liver cirrhosis

Radiology Radiology

Marginal radiomics features as imaging biomarkers for pathological invasion in lung adenocarcinoma.

In European radiology ; h5-index 62.0

OBJECTIVES : Lung adenocarcinomas which manifest as ground-glass nodules (GGNs) have different degrees of pathological invasion and differentiating among them is critical for treatment. Our goal was to evaluate the addition of marginal features to a baseline radiomics model on computed tomography (CT) images to predict the degree of pathologic invasiveness.

METHODS : We identified 236 patients from two cohorts (training, n = 189; validation, n = 47) who underwent surgery for GGNs. All GGNs were pathologically confirmed as adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IA). The regions of interest were semi-automatically annotated and 40 radiomics features were computed. We selected features using L1-norm regularization to build the baseline radiomics model. Additional marginal features were developed using the cumulative distribution function (CDF) of intratumoral intensities. An improved model was built combining the baseline model with CDF features. Three classifiers were tested for both models.
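The marginal features above are built from the cumulative distribution function (CDF) of intratumoral intensities. As a simplified illustration (not the authors' exact feature definition), an empirical CDF can be sampled at chosen intensity thresholds to yield a fixed-length feature vector:

```python
import bisect

def ecdf_features(intensities, thresholds):
    """Empirical CDF of voxel intensities sampled at given thresholds.

    For each threshold t, returns the fraction of voxels with
    intensity <= t. The resulting vector summarizes the intensity
    distribution and can be appended to other radiomics features.
    """
    vals = sorted(intensities)
    n = len(vals)
    return [bisect.bisect_right(vals, t) / n for t in thresholds]
```

A usage example: `ecdf_features(voxels, [-600, -400, -200, 0])` would give four features describing how much of a ground-glass nodule lies below each HU cutoff (the cutoffs here are hypothetical).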

RESULTS : The baseline radiomics model included five features and resulted in an average area under the curve (AUC) of 0.8419 (training) and 0.9142 (validation) for the three classifiers. The second model, with the additional marginal features, resulted in AUCs of 0.8560 (training) and 0.9581 (validation). All three classifiers performed better with the added features. The support vector machine showed the most performance improvement (AUC improvement = 0.0790) and the best performance was achieved by the logistic classifier (validation AUC = 0.9825).

CONCLUSION : Our novel marginal features, when combined with a baseline radiomics model, can help differentiate IA from AIS and MIA on preoperative CT scans.

KEY POINTS : • Our novel marginal features could improve the existing radiomics model to predict the degree of pathologic invasiveness in lung adenocarcinoma.

Cho Hwan-Ho, Lee Geewon, Lee Ho Yun, Park Hyunjin

2020-Jan-21

Classification, Lung adenocarcinoma, Machine learning, Quantitative evaluation, Tumor microenvironment

Dermatology Dermatology

[New optical examination procedures for the diagnosis of skin diseases].

In Der Hautarzt; Zeitschrift fur Dermatologie, Venerologie, und verwandte Gebiete ; h5-index 0.0

BACKGROUND : Since the establishment of dermoscopy as a routine examination procedure in dermatology, the spectrum of noninvasive, optical devices has further expanded. In difficult-to-diagnose clinical cases, these systems may support dermatologists to arrive at a correct diagnosis without the need for a surgical biopsy.

OBJECTIVE : To give an overview about technical background, indications and diagnostic performance regarding four new optical procedures: reflectance confocal microscopy, in vivo multiphoton tomography, dermatofluoroscopy, and systems based on image analysis by artificial intelligence (AI).

MATERIALS AND METHODS : This article is based on a selective review of the literature, as well as the authors' personal experience from clinical studies relevant for market approval of the devices.

RESULTS : In contrast to standard histopathological slides with vertical cross sections, reflectance confocal microscopy and in vivo multiphoton tomography allow for "optical biopsies" with horizontal cross sections. Dermatofluoroscopy and AI-based image analyzers provide a numerical score, which helps to correctly classify a skin lesion. The presented new optical procedures may be applied for the diagnosis of skin cancer as well as inflammatory skin diseases.

CONCLUSION : The presented optical procedures provide valuable additional information that supports dermatologists in making the correct diagnosis. However, a surgical biopsy followed by dermatohistopathological examination remains the diagnostic gold standard in dermatology.

Sies K, Winkler J K, Zieger M, Kaatz M, Haenssle H A

2020-Jan-21

Confocal laser scan microscopy, Dermatofluoroscopy, Multiphoton tomography, Noninvasive diagnostics, Screening

Surgery Surgery

[Artificial intelligence in general and visceral surgery].

In Der Chirurg; Zeitschrift fur alle Gebiete der operativen Medizen ; h5-index 0.0

Artificial intelligence procedures will find special fields of application in general and visceral surgery as well. These will not be limited to intraoperative surgical applications but will also extend to perioperative processes, education and training, and future scientific developments. Major impulses are to be expected in decision support systems, cognitive collaborative interventional environments, and evidence-based knowledge acquisition models; however, implementation into daily practice requires not only profound insights into informatics and computer science but also a comprehensive knowledge of the surgical domain. Accordingly, the future implementation of artificial intelligence in surgery requires a new culture of collaboration between surgeons and researchers/computer scientists.

Wilhelm D, Ostler D, Müller-Stich B, Lamadé W, Stier A, Feußner H

2020-Jan-21

Anamnesis, Decision support, Digitalization, Surgical data science, Surgineering

Surgery Surgery

[Digitalization and use of artificial intelligence in microvascular reconstructive facial surgery].

In Der Chirurg; Zeitschrift fur alle Gebiete der operativen Medizen ; h5-index 0.0

BACKGROUND : When using digitalization and artificial intelligence (AI), large amounts of data (big data) are produced, which can be processed by computers and used in the field of microvascular-reconstructive craniomaxillofacial surgery (CMFS).

OBJECTIVE : The aim of this article is to summarize current applications of digitalized medicine and AI in microvascular reconstructive CMFS.

MATERIAL AND METHODS : Review of frequent applications of digital medicine for microvascular CMFS reconstruction, focusing on digital planning, navigation, robotics and potential applications with AI.

RESULTS : The broadest utilization of medical digitalization is in the virtual planning of microvascular transplants, individualized implants and template-guided reconstruction. Navigation is commonly used for ablative tumor surgery but less frequently in reconstructions. Robotics are mainly employed in the transoral approach for tumor surgery of the hypopharynx, whereas the use of AI is still limited, although possible applications include automated virtual planning and monitoring systems.

CONCLUSION : The use of digitalized methods and AI are adjuncts to microvascular reconstruction. Automatization approaches and simplification of technologies will provide such applications to a broader clientele in the future; however, in CMFS, robotic-assisted resections and automated flap monitoring are not yet the standard of care.

Goetze E, Thiem D G E, Gielisch M, Al-Nawas B, Kämmerer P W

2020-Jan-21

Automatization, CAD/CAM planning, Individualized surgery, Microvascular reconstruction, Navigated surgery

General General

Solving the transcription start site identification problem with ADAPT-CAGE: a Machine Learning algorithm for the analysis of CAGE data.

In Scientific reports ; h5-index 158.0

Cap Analysis of Gene Expression (CAGE) has emerged as a powerful experimental technique for assisting in the identification of transcription start sites (TSSs). There is strong evidence that CAGE also identifies capping sites along various other locations of transcribed loci, such as splicing byproducts, alternative isoforms and capped molecules overlapping introns and exons. We present ADAPT-CAGE, a Machine Learning framework which is trained to distinguish between CAGE signal derived from TSSs and transcriptional noise. ADAPT-CAGE provides highly accurate experimentally derived TSSs on a genome-wide scale. It has been specifically designed for flexibility and ease of use by only requiring aligned CAGE data and the underlying genomic sequence. When compared to existing algorithms, ADAPT-CAGE exhibits improved performance on every benchmark that we designed based on both annotation- and experimentally-driven strategies. This performance boost brings ADAPT-CAGE into the spotlight as a computational framework that can assist in the refinement of gene regulatory networks, the incorporation of accurate information about gene expression regulators, and the study of alternative promoter usage in both physiological and pathological conditions.

Georgakilas Georgios K, Perdikopanis Nikos, Hatzigeorgiou Artemis

2020-Jan-21

General General

GOODD, a global dataset of more than 38,000 georeferenced dams.

In Scientific data ; h5-index 0.0

By presenting the most comprehensive GlObal geOreferenced Database of Dams (GOODD) to date, containing more than 38,000 dams as well as their associated catchments, we enable new and improved global analyses of the impact of dams on society and environment, and of the impact of environmental change (for example, land use and climate change) on the catchments of dams. This paper presents the development of the global database through systematic digitisation of satellite imagery globally by a small team and highlights the various approaches to bias estimation and to validation of the data. The following datasets are provided: (a) raw digitised coordinates for the location of dam walls (which may be useful, for example, in machine learning approaches to dam identification from imagery), and (b) a global vector file of the watershed for each dam.

Mulligan Mark, van Soesbergen Arnout, Sáenz Leonardo

2020-Jan-21

Public Health Public Health

Acceptability of artificial intelligence (AI)-enabled chatbots, video consultations and live webchats as online platforms for sexual health advice.

In BMJ sexual & reproductive health ; h5-index 0.0

OBJECTIVES : Sexual and reproductive health (SRH) services are undergoing a digital transformation. This study explored the acceptability of three digital services, (i) video consultations via Skype, (ii) live webchats with a health advisor and (iii) artificial intelligence (AI)-enabled chatbots, as potential platforms for SRH advice.

METHODS : A pencil-and-paper 33-item survey was distributed in three clinics in Hampshire, UK for patients attending SRH services. Logistic regressions were performed to identify the correlates of acceptability.

RESULTS : In total, 257 patients (57% women, 50% aged <25 years) completed the survey. As the first point of contact, 70% preferred face-to-face consultations, 17% telephone consultations, 10% webchats and 3% video consultations. Most would be willing to use video consultations (58%) and webchat facilities (73%) for ongoing care, but only 40% found AI chatbots acceptable. Younger age (<25 years) (OR 2.43, 95% CI 1.35 to 4.38), White ethnicity (OR 2.87, 95% CI 1.30 to 6.34), past sexually transmitted infection (STI) diagnosis (OR 2.05, 95% CI 1.07 to 3.95), self-reported STI symptoms (OR 0.58, 95% CI 0.34 to 0.97), smartphone ownership (OR 16.0, 95% CI 3.64 to 70.5) and a preference for an SRH smartphone application (OR 1.95, 95% CI 1.13 to 3.35) were associated with the acceptability of video consultations, webchats or chatbots.
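The odds ratios above come from logistic regressions. As a simpler, related illustration, an unadjusted odds ratio with a Woolf-method 95% confidence interval can be computed from a 2 × 2 table; the counts in the test are made up, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio with a Woolf 95% CI for a 2x2 table:

        a = exposed cases      b = exposed non-cases
        c = unexposed cases    d = unexposed non-cases

    Returns (OR, CI lower bound, CI upper bound). Requires all cells > 0.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under Woolf's method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi
```

Regression-based ORs additionally adjust for the other covariates, which is why the published intervals cannot be reproduced from marginal counts alone.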

CONCLUSIONS : Although video consultations and webchat services appear acceptable, there is currently little support for SRH chatbots. The findings demonstrate a preference for human interaction in SRH services. Policymakers and intervention developers need to ensure that digital transformation is not only cost-effective but also acceptable to users, easily accessible and equitable to all populations using SRH services.

Nadarzynski Tom, Bayley Jake, Llewellyn Carrie, Kidsley Sally, Graham Cynthia Ann

2020-Jan-21

AI, digital, eHealth, mHealth

oncology Oncology

Profiling Cell Type Abundance and Expression in Bulk Tissues with CIBERSORTx.

In Methods in molecular biology (Clifton, N.J.) ; h5-index 0.0

CIBERSORTx is a suite of machine learning tools for the assessment of cellular abundance and cell type-specific gene expression patterns from bulk tissue transcriptome profiles. With this framework, single-cell or bulk-sorted RNA sequencing data can be used to learn molecular signatures of distinct cell types from a small collection of biospecimens. These signatures can then be repeatedly applied to characterize cellular heterogeneity from bulk tissue transcriptomes without physical cell isolation. In this chapter, we provide a detailed primer on CIBERSORTx and demonstrate its capabilities for high-throughput profiling of cell types and cellular states in normal and neoplastic tissues.

Steen Chloé B, Liu Chih Long, Alizadeh Ash A, Newman Aaron M

2020

Cellular heterogeneity, Deconvolution, Digital cytometry, Gene expression, Tumor microenvironment, scRNA-seq

General General

Artificial Intelligence and Polyp Detection.

In Current treatment options in gastroenterology ; h5-index 0.0

PURPOSE OF REVIEW : This review highlights the history, recent advances, and ongoing challenges of artificial intelligence (AI) technology in colonic polyp detection.

RECENT FINDINGS : Hand-crafted AI algorithms have recently given way to convolutional neural networks with the ability to detect polyps in real-time. The first randomized controlled trial comparing an AI system to standard colonoscopy found a 9% increase in adenoma detection rate, but the improvement was restricted to polyps smaller than 10 mm and the results need validation. As this field rapidly evolves, important issues to consider include standardization of outcomes, dataset availability, real-world applications, and regulatory approval. AI has shown great potential for improving colonic polyp detection while requiring minimal training for endoscopists. The question of when AI will enter endoscopic practice depends on whether the technology can be integrated into existing hardware and an assessment of its added value for patient care.

Hoerter Nicholas, Gross Seth A, Liang Peter S

2020-Jan-21

Artificial intelligence, Colonic neoplasm, Computer-aided detection, Convolutional neural network, Machine learning

General General

Innovative Identification of Substance Use Predictors: Machine Learning in a National Sample of Mexican Children.

In Prevention science : the official journal of the Society for Prevention Research ; h5-index 0.0

Machine learning provides a method of identifying factors that discriminate between substance users and non-users, potentially improving our ability to match need with available prevention services in contexts with limited resources. Our aim was to utilize machine learning to identify high-impact factors that best discriminate between substance users and non-users among a national sample (N = 52,171) of Mexican children (5th and 6th grade; mean age = 10.40 years, SD = 0.82). Participants reported information on individual factors (e.g., gender, grade, religiosity, sensation seeking, self-esteem, perceived risk of substance use), socioecological factors (e.g., neighborhood quality, community type, peer influences, parenting), and lifetime substance use (i.e., alcohol, tobacco, marijuana, inhalants). Findings suggest that best friend and father illicit substance use (i.e., drugs other than tobacco or alcohol) and respondent sex (i.e., boys) were consistent and important discriminators between children who tried substances and those who did not. Friend cigarette use was a strong predictor of lifetime use of alcohol, tobacco, and marijuana. Friend alcohol use was specifically predictive of lifetime alcohol and tobacco use. Perceived danger of engaging in frequent alcohol and inhalant use predicted lifetime alcohol and inhalant use. Overall, findings suggest that best friend and father illicit substance use and respondent sex appear to be high-impact screening questions associated with substance initiation during childhood for Mexican youths. These data help practitioners narrow prevention efforts by helping identify youth at highest risk.

Vázquez Alejandro L, Domenech Rodríguez Melanie M, Barrett Tyson S, Schwartz Sarah, Amador Buenabad Nancy G, Bustos Gamiño Marycarmen N, Gutiérrez López María de Lourdes, Villatoro Velázquez Jorge A

2020-Jan-20

Children, Machine learning, Mexico, Prevention, Risk factors, Substance use

Surgery Surgery

Ultrasound needle segmentation and trajectory prediction using excitation network.

In International journal of computer assisted radiology and surgery ; h5-index 0.0

PURPOSE : Ultrasound (US)-guided percutaneous kidney biopsy is a challenge for interventionists as US artefacts prevent accurate viewing of the biopsy needle tip. Automatic needle tracking and trajectory prediction can increase operator confidence in performing biopsies, reduce procedure time, minimize the risk of inadvertent biopsy bleedings, and enable future image-guided robotic procedures.

METHODS : In this paper, we propose a tracking-by-segmentation model with spatial and channel "Squeeze and Excitation" (scSE) for US needle detection and trajectory prediction. We adopt a lightweight deep learning architecture (LinkNet) as our segmentation baseline network and integrate the scSE module to learn spatial information for better prediction. The proposed model is trained on US images from anonymized kidney biopsy clips of 8 patients. The needle contour is obtained using the border-following algorithm, and its area is calculated using Green's formula. Trajectory prediction is made by extrapolating from the smallest bounding box that can capture the contour.
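The contour-to-area step described above can be sketched in a few lines: for a closed polygonal contour, Green's theorem reduces to the shoelace formula. This is an illustrative sketch, not the authors' implementation.

```python
def contour_area(points):
    """Area enclosed by a closed polygonal contour via Green's theorem.

    For vertices (x_i, y_i) listed in order, Green's theorem gives
    A = 1/2 * |sum(x_i * y_{i+1} - x_{i+1} * y_i)| (the "shoelace" formula).
    """
    n = len(points)
    acc = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the contour
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# A 2x3 axis-aligned rectangle has area 6.
rect = [(0, 0), (2, 0), (2, 3), (0, 3)]
print(contour_area(rect))  # -> 6.0
```

Libraries such as OpenCV compute contour areas the same way internally, so this captures the idea without depending on the authors' pipeline.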

RESULTS : We train and test our model on a total of 996 images extracted from 102 short videos at a rate of 3 frames per second from each video. A set of 794 images is used for training and 202 images for testing. Our model achieved an IoU of 41.01%, a Dice accuracy of 56.65%, an F1-score of 36.61%, and a root-mean-square angle error of 13.3°. We are thus able to predict and extrapolate the trajectory of the biopsy needle with decent accuracy for interventionists to better perform biopsies.

CONCLUSION : Our novel model combining LinkNet and scSE shows a promising result for kidney biopsy application, which implies potential to other similar ultrasound-guided biopsies that require needle tracking and trajectory prediction.

Lee Jia Yi, Islam Mobarakol, Woh Jing Ru, Washeem T S Mohamed, Ngoh Lee Ying Clara, Wong Weng Kin, Ren Hongliang

2020-Jan-20

Concurrent Spatial and Channel “Squeeze and Excitation”, LinkNet, Minimally invasive surgery, Needle tracking, Ultrasound imaging

General General

Should we have a right to refuse diagnostics and treatment planning by artificial intelligence?

In Medicine, health care, and philosophy ; h5-index 0.0

Should we be allowed to refuse any involvement of artificial intelligence (AI) technology in diagnosis and treatment planning? This is the relevant question posed by Ploug and Holm in a recent article in Medicine, Health Care and Philosophy. In this article, I adhere to their conclusions, but not necessarily to the rationale that supports them. First, I argue that the idea that we should recognize this right on the basis of a rational interest defence is not plausible, unless we are willing to judge each patient's ideology or religion. Instead, I consider that the right must be recognized by virtue of values such as social pluralism or individual autonomy. Second, I point out that the scope of such a right should be limited at least under three circumstances: (1) if it is against a physician's obligation to not cause unnecessary harm to a patient or to not provide futile treatment, (2) in cases where the costs of implementing this right are too high, or (3) if recognizing the right would deprive other patients of their own rights to adequate health care.

de Miguel Beriain Iñigo

2020-Jan-20

Artificial intelligence, Health care, Patients autonomy, Right to refuse treatment

Ophthalmology Ophthalmology

The SUSTech-SYSU dataset for automatically segmenting and classifying corneal ulcers.

In Scientific data ; h5-index 0.0

Corneal ulcer is a common ophthalmic symptom. Segmentation algorithms are needed to identify and quantify corneal ulcers from ocular staining images. Development of such algorithms has been obstructed by a lack of high-quality datasets (ocular staining images with the corresponding gold-standard ulcer segmentation labels), especially for supervised-learning-based segmentation algorithms. In this context, we prepare a dataset containing 712 ocular staining images and the associated segmentation labels of flaky corneal ulcers. In addition to segmentation labels for flaky corneal ulcers, we also provide each image with three-fold class labels: firstly, a label for its general ulcer pattern; secondly, a label for its specific ulcer pattern; thirdly, a label indicating its ulcer severity degree. This dataset not only provides an excellent opportunity for investigating the accuracy and reliability of different segmentation and classification algorithms for corneal ulcers, but also advances the development of new supervised-learning-based algorithms, especially those in the deep learning framework.

Deng Lijie, Lyu Junyan, Huang Haixiang, Deng Yuqing, Yuan Jin, Tang Xiaoying

2020-Jan-20

General General

Fuzzy reinforcement learning based intelligent classifier for power transformer faults.

In ISA transactions ; h5-index 0.0

In this work, a fuzzy reinforcement learning (RL) based intelligent classifier for power transformer incipient faults is proposed. Fault classifiers proposed to date have low identification accuracy and do not identify all types of transformer faults. Herein, an attempt has been made to design an adaptive, intelligent transformer fault classifier that progressively learns to identify faults online with high accuracy for all fault types. In the proposed approach, dissolved gas analysis (DGA) data of oil samples collected from real power transformers (and from credible sources) serves as input to a fuzzy RL based classifier. Typically, classification accuracy is heavily dependent on the number of input variables chosen. This has been resolved by using the J48 algorithm to select the 8 most appropriate input variables from the 24 variables obtained using DGA. The proposed fuzzy RL approach achieves a fault identification accuracy of 99.7%, which is significantly higher than other contemporary soft-computing-based identifiers. Experimental results and comparison with other state-of-the-art approaches highlight the superiority and efficacy of the proposed fuzzy RL technique for transformer fault classification.
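The J48 (C4.5) selection step ranks candidate variables by information gain. A minimal pure-Python sketch of that criterion, using toy data rather than the authors' DGA dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """Information gain of one categorical feature, the splitting
    criterion behind C4.5/J48: class entropy minus the weighted
    entropy remaining after splitting on the feature."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [y for f, y in zip(feature_values, labels) if f == v]
        remainder += (len(subset) / n) * entropy(subset)
    return total - remainder

# Toy fault data: feature A separates the classes perfectly, feature B not at all.
labels = ["fault", "fault", "normal", "normal"]
feat_a = ["high", "high", "low", "low"]
feat_b = ["x", "y", "x", "y"]
print(info_gain(feat_a, labels))  # -> 1.0
print(info_gain(feat_b, labels))  # -> 0.0
```

Ranking all 24 DGA-derived variables by this score and keeping the top 8 would reproduce the selection idea described above (J48 additionally uses gain ratio and tree pruning).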

Malik Hasmat, Sharma Rajneesh, Mishra Sukumar

2020-Jan-11

Artificial intelligence, Decision tree, Dissolved gases analysis, Fault diagnosis, Fuzzy Q learning (FQL)

Dermatology Dermatology

Crowdsourcing in health and medical research: a systematic review.

In Infectious diseases of poverty ; h5-index 31.0

BACKGROUND : Crowdsourcing is used increasingly in health and medical research. Crowdsourcing is the process of aggregating crowd wisdom to solve a problem. The purpose of this systematic review is to summarize quantitative evidence on crowdsourcing to improve health.

METHODS : We followed Cochrane systematic review guidance and systematically searched seven databases up to September 4th 2019. Studies were included if they reported on crowdsourcing and related to health or medicine. Studies were excluded if recruitment was the only use of crowdsourcing. We determined the level of evidence associated with review findings using the GRADE approach.

RESULTS : We screened 3508 citations, accessed 362 articles, and included 188 studies. Ninety-six studies examined effectiveness, 127 examined feasibility, and 37 examined cost. The most common purposes were to evaluate surgical skills (17 studies), to create sexual health messages (seven studies), and to provide layperson cardio-pulmonary resuscitation (CPR) out-of-hospital (six studies). Seventeen observational studies used crowdsourcing to evaluate surgical skills, finding that crowdsourcing evaluation was as effective as expert evaluation (low quality). Four studies used a challenge contest to solicit human immunodeficiency virus (HIV) testing promotion materials and increase HIV testing rates (moderate quality), and two of the four studies found this approach saved money. Three studies suggested that an interactive technology system increased rates of layperson initiated CPR out-of-hospital (moderate quality). However, studies analyzing crowdsourcing to evaluate surgical skills and layperson-initiated CPR were only from high-income countries. Five studies examined crowdsourcing to inform artificial intelligence projects, most often related to annotation of medical data. Crowdsourcing was evaluated using different outcomes, limiting the extent to which studies could be pooled.

CONCLUSIONS : Crowdsourcing has been used to improve health in many settings. Although crowdsourcing is effective at improving behavioral outcomes, more research is needed to understand effects on clinical outcomes and costs. More research is needed on crowdsourcing as a tool to develop artificial intelligence systems in medicine.

TRIAL REGISTRATION : PROSPERO: CRD42017052835. December 27, 2016.

Wang Cheng, Han Larry, Stein Gabriella, Day Suzanne, Bien-Gund Cedric, Mathews Allison, Ong Jason J, Zhao Pei-Zhen, Wei Shu-Fang, Walker Jennifer, Chou Roger, Lee Amy, Chen Angela, Bayus Barry, Tucker Joseph D

2020-Jan-20

Challenge contest, Crowdsourcing, Health, Innovation, Medicine, Systematic review

General General

A prognostic analysis method for non-small cell lung cancer based on the computed tomography radiomics.

In Physics in medicine and biology ; h5-index 0.0

OBJECTIVE : In order to assist doctors in arranging the postoperative treatments and re-examinations for non-small cell lung cancer (NSCLC) patients, this study was initiated to explore a prognostic analysis method for non-small cell lung cancer based on computed tomography (CT) radiomics.

METHODS : The data of 173 NSCLC patients were collected retrospectively and the clinically meaningful 3-year survival was used as the predictive limit to predict the patient's prognosis survival time range. Firstly, lung tumors were segmented and the radiomics features were extracted. Secondly, the feature weighting algorithm was used to screen and optimize the extracted original feature data. Then, the selected feature data combining with the prognosis survival of patients were used to train machine learning classification models. Finally, a prognostic survival analysis model and radiomics prognostic factors were obtained to predict the prognosis survival time range of NSCLC patients.

RESULTS : The classification accuracy under cross-validation was up to 88.7% for the prognostic survival analysis model. When verified on an independent data set, the model also yielded a high prediction accuracy of up to 79.6%. Inverse difference moment, lobulation sign, and angular second moment were identified as radiomics-based NSCLC prognostic factors.
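Two of the prognostic factors named above, angular second moment (ASM) and inverse difference moment (IDM), are standard grey-level co-occurrence matrix (GLCM) texture statistics. A small illustrative sketch with a made-up co-occurrence matrix (not the study's data):

```python
def glcm_features(glcm):
    """ASM and IDM from a grey-level co-occurrence matrix.

    After normalizing counts to probabilities p(i, j):
      ASM = sum p(i, j)^2            (texture uniformity)
      IDM = sum p(i, j) / (1 + (i - j)^2)   (local homogeneity)
    """
    total = sum(sum(row) for row in glcm)
    asm = 0.0
    idm = 0.0
    for i, row in enumerate(glcm):
        for j, count in enumerate(row):
            p = count / total
            asm += p * p
            idm += p / (1.0 + (i - j) ** 2)
    return asm, idm

# A purely diagonal (homogeneous) co-occurrence matrix maximizes IDM.
asm, idm = glcm_features([[2, 0], [0, 2]])
print(asm, idm)  # -> 0.5 1.0
```

In a radiomics pipeline these values would be computed from GLCMs of the segmented tumor region and fed, with other features, into the classifier.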

CONCLUSIONS : This study showed that CT radiomics features can effectively assist doctors in making more accurate prognostic survival predictions for NSCLC patients, helping them optimize treatment and re-examination so as to extend patients' survival time.

Wang Xu, Duan Huihong, Li Xiaobing, Ye Xiaodan, Huang Guang, Nie Sheng-Dong

2020-Jan-21

CT radiomics features, Non-small cell lung cancer, Prognostic factors, Prognostic survival prediction model

Cardiology Cardiology

1D-CADCapsNet: One dimensional deep capsule networks for coronary artery disease detection using ECG signals.

In Physica medica : PM : an international journal devoted to the applications of physics to medicine and biology : official journal of the Italian Association of Biomedical Physics (AIFB) ; h5-index 0.0

PURPOSE : Cardiovascular disease (CVD) is a leading cause of death globally. Electrocardiogram (ECG), which records the electrical activity of the heart, has been used for the diagnosis of CVD. The automated and robust detection of CVD from ECG signals plays a significant role for early and accurate clinical diagnosis. The purpose of this study is to provide automated detection of coronary artery disease (CAD) from ECG signals using capsule networks (CapsNet).

METHODS : Deep learning-based approaches have become increasingly popular in computer-aided diagnosis systems. Capsule networks are one of the new promising approaches in the field of deep learning. In this study, we used a 1D version of CapsNet for the automated detection of coronary artery disease (CAD) on two-second (95,300) and five-second (38,120) ECG segments. These segments were obtained from 40 normal and 7 CAD subjects. In the experimental studies, the 5-fold cross-validation technique is employed to evaluate the performance of the model.
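The abstract does not detail the capsule layers, but capsule networks generally rely on the "squash" nonlinearity of Sabour et al. (2017), which keeps each capsule's output vector pointing in the same direction while mapping its length into [0, 1) so that length can encode detection probability. A minimal sketch of that function (illustrative, not the 1D-CADCapsNet code):

```python
import math

def squash(vec, eps=1e-9):
    """Capsule 'squash' nonlinearity:
    v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Direction is preserved; length is squashed into [0, 1)."""
    norm_sq = sum(x * x for x in vec)
    norm = math.sqrt(norm_sq) + eps  # eps guards against division by zero
    scale = norm_sq / (1.0 + norm_sq)
    return [scale * x / norm for x in vec]

v = squash([3.0, 4.0])           # input vector of length 5
print(round(math.hypot(*v), 4))  # squashed length 25/26 ~ 0.9615
```

In the full network this is applied to every capsule output during routing-by-agreement; the classifier's decision is read off the lengths of the final class capsules.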

RESULTS : The proposed model, named 1D-CADCapsNet, yielded promising 5-fold diagnosis accuracies of 99.44% and 98.62% for the two- and five-second ECG signal groups, respectively. Using 2 s ECG segments, we obtained higher performance than the state-of-the-art studies reported in the literature.

CONCLUSIONS : 1D-CADCapsNet model automatically learns the pertinent representations from raw ECG data without using any hand-crafted technique and can be used as a fast and accurate diagnostic tool to help cardiologists.

Butun Ertan, Yildirim Ozal, Talo Muhammed, Tan Ru-San, Rajendra Acharya U

2020-Jan-18

Capsule networks, Coronary artery disease, Deep learning, ECG signals

Public Health Public Health

Microbial indicators and molecular markers used to differentiate the source of faecal pollution in the Bogotá River (Colombia).

In International journal of hygiene and environmental health ; h5-index 50.0

Intestinal pathogenic microorganisms are introduced into water through faecal contamination, creating a threat to public health and to the environment. Detecting these contaminants directly has been difficult because such analyses are costly and time-intensive; as an alternative, microbiological indicators have been used for this purpose, although they cannot differentiate between human and animal sources of contamination because these indicators are part of the digestive tracts of both. To identify the sources of faecal pollution, the use of chemical, microbiological and molecular markers has been proposed. Currently available markers present some geographical specificity. The aim of this study was to select microbial and molecular markers that could differentiate the sources of faecal pollution in the Bogotá River and to use them as tools for evaluating and identifying the origin of discharges and for quality control of the water. In addition to existing microbial source markers, a phage host strain (PZ8) that differentiates porcine contamination was isolated from porcine intestinal content. The strain was identified biochemically and genotypically as Bacteroides. The use of this strain as a microbial source tracking indicator was evaluated in bovine and porcine slaughterhouse wastewaters, raw municipal wastewaters and the Bogotá River. The results indicate that the selected microbial and molecular markers enable determination of the source of faecal contamination in the Bogotá River by using different algorithms to develop prediction models.

Sánchez-Alfonso Andrea C, Venegas Camilo, Díez Hugo, Méndez Javier, Blanch Anicet R, Jofre Joan, Campos Claudia

2020-Jan-09

Machine learning, Microbial source tracking, Porcine-specific marker, River water

General General

Incorporating biological structure into machine learning models in biomedicine.

In Current opinion in biotechnology ; h5-index 0.0

In biomedical applications of machine learning, relevant information often has a rich structure that is not easily encoded as real-valued predictors. Examples of such data include DNA or RNA sequences, gene sets or pathways, gene interaction or coexpression networks, ontologies, and phylogenetic trees. We highlight recent examples of machine learning models that use structure to constrain model architecture or incorporate structured data into model training. For machine learning in biomedicine, where sample size is limited and model interpretability is crucial, incorporating prior knowledge in the form of structured data can be particularly useful. This area of research would benefit from performant open-source implementations and independent benchmarking efforts.

Crawford Jake, Greene Casey S

2020-Jan-18

Public Health Public Health

Understanding Opioid Use Disorder (OUD) using tree-based classifiers.

In Drug and alcohol dependence ; h5-index 64.0

BACKGROUND : Opioid Use Disorder (OUD), defined as a physical or psychological reliance on opioids, is a public health epidemic. Identifying adults likely to develop OUD can help public health officials in planning effective intervention strategies. The aim of this paper is to develop a machine learning approach to predict adults at risk for OUD and to identify interactions between various characteristics that increase this risk.

METHODS : In this approach, a data set was curated using the responses from the 2016 edition of the National Survey on Drug Use and Health (NSDUH). Using this data set, tree-based classifiers (decision tree and random forest) were trained, while employing downsampling to handle class imbalance. Predictions from the tree-based classifiers were also compared to the results from a logistic regression model. The results from the three classifiers were then interpreted synergistically to highlight individual characteristics and their interplay that pose a risk for OUD.
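The downsampling step used above to handle class imbalance can be sketched as follows; this is a generic illustration with synthetic labels, not the NSDUH pipeline:

```python
import random

def downsample(rows, labels, seed=0):
    """Randomly downsample the majority class to the minority-class size,
    a common way to rebalance classes before training tree-based models."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    majority, minority = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    kept = rng.sample(majority, len(minority)) + minority
    rng.shuffle(kept)
    return [rows[i] for i in kept], [labels[i] for i in kept]

# ~5% positive prevalence, loosely mirroring the ~1% OUD prevalence problem.
X = [[i] for i in range(100)]
y = [1] * 5 + [0] * 95
Xb, yb = downsample(X, y)
print(len(yb), sum(yb))  # -> 10 5
```

After balancing, each tree (or the logistic regression baseline) is trained on the reduced set; evaluation is still done on the original, imbalanced distribution.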

RESULTS : Random forest predicted adults at risk for OUD with remarkable accuracy, with the average area under the Receiver-Operating-Characteristic curve (AUC) over 0.89, even though the prevalence of OUD was only about 1%. It showed a slight improvement over logistic regression. Logistic regression identified statistically significant characteristics, while random forest ranked the predictors in order of their contribution to OUD prediction. Early initiation of marijuana (before 18 years of age) emerged as the dominant predictor. Decision trees revealed that early marijuana initiation especially increased the risk if individuals: (i) were between 18 and 34 years of age, or (ii) had incomes less than $49,000, or (iii) were of Hispanic and White heritage, or (iv) were on probation, or (v) lived in neighborhoods with easy access to drugs.

CONCLUSIONS : Machine learning can accurately predict adults at risk for OUD, and identify interactions among the factors that pronounce this risk. Curbing early initiation of marijuana may be an effective prevention strategy against opioid addiction, especially in high risk groups.

Wadekar Adway S

2020-Jan-15

Machine learning, Marijuana, Opioid Use Disorder, Random forest

General General

A Multiple Filter Based Neural Network Approach to the Extrapolation of Adsorption Energies on Metal Surfaces for Catalysis Applications.

In Journal of chemical theory and computation ; h5-index 0.0

Computational catalyst discovery involves the development of microkinetic reactor models based on parameters estimated from density functional theory (DFT). For complex surface chemistries, the number of reaction intermediates can be very large, and the cost of calculating the adsorption energies by DFT for all surface intermediates, even for one active-site model, can become prohibitive. In this paper, we identify appropriate descriptors and machine learning models that can be used to predict a significant part of these adsorption energies given data on the rest of them. Our investigation also covers the case in which the species data used to train the predictive model differ in size from the species the model tries to predict; this is an extrapolation in the data space that is typically difficult for regular machine learning models. Given the relative sizes of the available datasets, we extrapolate from the larger species to the smaller ones in the current work. We have developed a neural-network-based predictive model that combines an established additive atomic-contribution model with the concepts of a convolutional neural network and that, when extrapolating, achieves a statistically significant improvement over previous models.

Chowdhury Asif J, Yang Wenqiang, Abdelfatah Kareem E, Zare Mehdi, Heyden Andreas, Terejanu Gabriel A

2020-Jan-21

General General

And the nominees are: Using design-awards datasets to build computational aesthetic evaluation model.

In PloS one ; h5-index 176.0

Aesthetic perception is a human instinct that is responsive to multimedia stimuli. Giving computers the ability to assess the human sensory and perceptual experience of aesthetics is a well-recognized need for the intelligent design industry and multimedia intelligence research. In this work, we constructed a novel database for the aesthetic evaluation of design, using 2,918 images collected from the archives of two major design awards, and we also present a method of aesthetic evaluation that uses machine learning algorithms. Reviewers' ratings of the design works are set as the ground-truth annotations for the dataset. Furthermore, multiple image features are extracted and fused. The experimental results demonstrate the validity of the proposed approach. Primary screening using aesthetic computing can act as an intelligent assistant for various design evaluations and can reduce misjudgment in art and design review due to visual aesthetic fatigue after long periods of viewing. The study of computational aesthetic evaluation can have a positive effect on the efficiency of design review, and it is of great significance for the exploration of aesthetic recognition and the development of applications.

Xing Baixi, Zhang Kejun, Zhang Lekai, Wu Xinda, Si Huahao, Zhang Hui, Zhu Kaili, Sun Shouqian

2020

General General

Threshold Tunable Spike Rate Dependent Plasticity Originated from Interfacial Proton Gating for Pattern Learning and Memory.

In ACS applied materials & interfaces ; h5-index 147.0

Recently, neuromorphic devices have attracted increasing interest in the field of artificial intelligence (AI). Realizing fundamental synaptic plasticities in hardware devices would open new possibilities for neuromorphic devices. Spike-rate-dependent plasticity (SRDP) is one of the most important synaptic learning mechanisms in brain cognitive behaviors. Thus, it is interesting to mimic SRDP behaviors in solid-state neuromorphic devices. In the present work, nanogranular phosphorus silicate glass (PSG)-based proton-conductive electrolyte-gated oxide neuromorphic transistors are proposed. The oxide neuromorphic transistors show good transistor performance and frequency-dependent synaptic plasticity. Moreover, the neuromorphic transistor exhibits SRDP activities. Interestingly, by introducing priming synaptic stimuli, modulation of the threshold frequency distinguishing synaptic potentiation from synaptic depression is realized for the first time in an electrolyte-gated neuromorphic transistor. This mechanism can be well understood in terms of the interfacial proton-gating effects of the nanogranular PSG-based electrolyte. Furthermore, the effects of SRDP learning rules on pattern learning and memory behaviors have been conceptually demonstrated. The proposed neuromorphic transistors have potential applications in neuromorphic engineering.

Ren Zheng Yu, Zhu Li Qiang, Guo Yan Bo, Long Ting Yu, Yu Fei, Xiao Hui, Lu Hong-Liang

2020-Jan-21

General General

Bacterial Taxa and Functions Are Predictive of Sustained Remission Following Exclusive Enteral Nutrition in Pediatric Crohn's Disease.

In Inflammatory bowel diseases ; h5-index 62.0

BACKGROUND : The gut microbiome is extensively involved in induction of remission in pediatric Crohn's disease (CD) patients by exclusive enteral nutrition (EEN). In this follow-up study of pediatric CD patients undergoing treatment with EEN, we employ machine learning models trained on baseline gut microbiome data to distinguish patients who achieved and sustained remission (SR) from those who did not achieve remission nor relapse (non-SR) by 24 weeks.

METHODS : A total of 139 fecal samples were obtained from 22 patients (8-15 years of age) for up to 96 weeks. Gut microbiome taxonomy was assessed by 16S rRNA gene sequencing, and functional capacity was assessed by metagenomic sequencing. We used standard metrics of diversity and taxonomy to quantify differences between SR and non-SR patients and to associate gut microbial shifts with fecal calprotectin (FCP), and disease severity as defined by weighted Pediatric Crohn's Disease Activity Index. We used microbial data sets in addition to clinical metadata in random forests (RFs) models to classify treatment response and predict FCP levels.

RESULTS : Microbial diversity did not change after EEN, but species richness was lower in low-FCP samples (<250 µg/g). An RF model using microbial abundances, species richness, and Paris disease classification was the best at classifying treatment response (area under the curve [AUC] = 0.9). KEGG Pathways also significantly classified treatment response with the addition of the same clinical data (AUC = 0.8). Top features of the RF model are consistent with previously identified IBD taxa, such as Ruminococcaceae and Ruminococcus gnavus.
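The AUC values reported above can be computed directly from model scores as the Mann-Whitney probability that a randomly chosen positive sample outranks a randomly chosen negative one. An illustrative sketch with toy scores (not the study's data):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive is scored
    higher (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly ranked scores give AUC = 1.0; one mis-ranked pair lowers it.
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # -> 1.0
print(auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1]))  # -> 0.75
```

This O(P*N) pairwise form is fine for illustration; production libraries compute the same quantity from sorted ranks.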

CONCLUSIONS : Our machine learning approach is able to distinguish SR and non-SR samples using baseline microbiome and clinical data.

Jones Casey M A, Connors Jessica, Dunn Katherine A, Bielawski Joseph P, Comeau André M, Langille Morgan G I, Van Limbergen Johan

2020-Jan-21

exclusive enteral nutrition, gut microbiome, nutrition in pediatrics, pediatric Crohn’s disease

Surgery Surgery

[Role of artificial intelligence in the diagnosis and treatment of gastrointestinal diseases].

In Zhonghua wei chang wai ke za zhi = Chinese journal of gastrointestinal surgery ; h5-index 0.0

The rapid development of computer technologies has brought great changes to daily life and work. Artificial intelligence is a branch of computer science that aims to allow computers to perform activities normally confined to intelligent life. In the broad sense, artificial intelligence includes machine learning and robotics. This article mainly focuses on machine learning and related medical fields; deep learning is an artificial-neural-network approach within machine learning. A convolutional neural network (CNN) is a type of deep neural network that further imitates the structure of the brain's visual cortex and the principles of visual activity. The machine learning method currently used in medical big data analysis is mainly the CNN. In the next few years, the trend is for artificial intelligence to enter, as a conventional tool, the departments responsible for medical image interpretation. In addition, this article shares the progress of the integration of artificial intelligence and biomedicine through actual cases, and mainly introduces the current status of CNN application research in pathological, imaging and endoscopic diagnosis of gastrointestinal diseases.

Yu Y Y

2020-Jan-25

Artificial intelligence, Convolutional neural network, Deep learning, Medical images

General General

Autologous cell replacement: a noninvasive AI approach to clinical release testing.

In The Journal of clinical investigation ; h5-index 129.0

The advent of human induced pluripotent stem cells (iPSCs) provided a means for avoiding ethical concerns associated with the use of cells isolated from human embryos. The number of labs now using iPSCs to generate photoreceptor, retinal pigmented epithelial (RPE), and more recently choroidal endothelial cells has grown exponentially. However, for autologous cell replacement to be effective, manufacturing strategies will need to change. Many tasks carried out by hand will need simplifying and automating. In this issue of the JCI, Schaub and colleagues combined quantitative brightfield microscopy and artificial intelligence (deep neural networks and traditional machine learning) to noninvasively monitor iPSC-derived graft maturation, predict donor cell identity, and evaluate graft function prior to transplantation. This approach allowed the authors to preemptively identify and remove abnormal grafts. Notably, the method is (a) transferable, (b) cost- and time-effective, (c) high throughput, and (d) useful for primary product validation.

Tucker Budd A, Mullins Robert F, Stone Edwin M

2020-Jan-21

General General

Developing a Model to Predict Hospital Encounters for Asthma in Asthmatic Patients: Secondary Analysis.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : As a major chronic disease, asthma causes many emergency department (ED) visits and hospitalizations each year. Predictive modeling is a key technology to prospectively identify high-risk asthmatic patients and enroll them in care management for preventive care to reduce future hospital encounters, including inpatient stays and ED visits. However, existing models for predicting hospital encounters in asthmatic patients are inaccurate. Usually, they miss over half of the patients who will incur future hospital encounters and incorrectly classify many others who will not. This makes it difficult to match the limited resources of care management to the patients who will incur future hospital encounters, increasing health care costs and degrading patient outcomes.

OBJECTIVE : The goal of this study was to develop a more accurate model for predicting hospital encounters in asthmatic patients.

METHODS : Secondary analysis of 334,564 data instances from Intermountain Healthcare from 2005 to 2018 was conducted to build a machine learning classification model to predict the hospital encounters for asthma in the following year in asthmatic patients. The patient cohort included all asthmatic patients who resided in Utah or Idaho and visited Intermountain Healthcare facilities during 2005 to 2018. A total of 235 candidate features were considered for model building.

RESULTS : The model achieved an area under the receiver operating characteristic curve of 0.859 (95% CI 0.846-0.871). When the cutoff threshold for conducting binary classification was set at the top 10.00% (1926/19,256) of asthmatic patients with the highest predicted risk, the model reached an accuracy of 90.31% (17,391/19,256; 95% CI 89.86-90.70), a sensitivity of 53.7% (436/812; 95% CI 50.12-57.18), and a specificity of 91.93% (16,955/18,444; 95% CI 91.54-92.31). To steer future research on this topic, we pinpointed several potential improvements to our model.
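The cutoff procedure described above amounts to flagging the top fraction of patients by predicted risk as positive and then scoring sensitivity and specificity against true outcomes. A toy sketch with synthetic numbers (not the Intermountain data):

```python
def top_fraction_metrics(labels, risks, fraction=0.10):
    """Flag the top `fraction` of patients by predicted risk as positive,
    then compute sensitivity and specificity against true outcomes."""
    order = sorted(range(len(risks)), key=lambda i: risks[i], reverse=True)
    k = int(len(risks) * fraction)
    flagged = set(order[:k])
    tp = sum(1 for i in flagged if labels[i] == 1)
    fn = sum(1 for i in range(len(labels)) if labels[i] == 1 and i not in flagged)
    tn = sum(1 for i in range(len(labels)) if labels[i] == 0 and i not in flagged)
    fp = k - tp
    return tp / (tp + fn), tn / (tn + fp)

# 20 patients, 4 true positives; the top 10% (2 patients) catches 1 of them.
labels = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
risks  = [0.95, 0.2, 0.9, 0.1, 0.3, 0.05, 0.4, 0.1, 0.2, 0.1,
          0.15, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
sens, spec = top_fraction_metrics(labels, risks, 0.10)
print(sens, spec)  # -> 0.25 0.9375
```

This mirrors how a fixed care-management capacity (here, 10% of the cohort) translates a continuous risk score into the sensitivity/specificity trade-off reported in the study.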

CONCLUSIONS : Our model improves the state of the art for predicting hospital encounters for asthma in asthmatic patients. After further refinement, the model could be integrated into a decision support tool to guide asthma care management allocation.

INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) : RR2-10.2196/resprot.5039.

Luo Gang, He Shan, Stone Bryan L, Nkoy Flory L, Johnson Michael D

2020-Jan-21

General General

Teaching Hands-On Informatics Skills to Future Health Informaticians: A Competency Framework Proposal and Analysis of Health Care Informatics Curricula.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Existing health informatics curriculum requirements mostly use a competency-based approach rather than a skill-based one.

OBJECTIVE : The main objective of this study was to assess the current skills training requirements in graduate health informatics curricula to evaluate graduate students' confidence in specific health informatics skills.

METHODS : A quantitative cross-sectional observational study was developed to evaluate published health informatics curriculum requirements and to determine the comprehensive health informatics skill sets required in a research university in New York, United States. In addition, a questionnaire to assess students' confidence about specific health informatics skills was developed and sent to all enrolled and graduated Master of Science students in a health informatics program.

RESULTS : The evaluation was performed in a graduate health informatics program, and analysis of the students' self-assessment questionnaire showed that 79.4% (81/102) of participants were not confident (not at all confident or slightly confident) about developing an artificial intelligence app, 58.8% (60/102) were not confident about designing and developing databases, and 54.9% (56/102) were not confident about evaluating privacy and security infrastructure. Less than one-third of students (24/102, 23.5%) were confident (extremely or very confident) that they could evaluate the use of data capture technologies, and only 9.8% (10/102) that they could develop mobile health informatics apps.

CONCLUSIONS : Health informatics programs should consider specialized tracks that include specific skills to meet the complex health care delivery and market demand, and specific training components should be defined for different specialties. There is a need to determine new competencies and skill sets that promote inductive and deductive reasoning from diverse and various data platforms and to develop a comprehensive curriculum framework for health informatics skills training.

Sapci A Hasan, Sapci H Aylin

2020-Jan-21

hands-on health informatics training, health informatics curriculum, skill-based training

Public Health Public Health

Performance Evaluation of Ozone and Particulate Matter Sensors.

In Journal of the Air & Waste Management Association (1995) ; h5-index 0.0

As public awareness and concern about air quality grows, companies and researchers have begun to develop small, low-cost sensors to measure local air quality. These sensors have been used in citizen science projects, in distributed networks within cities, and in combination with public health studies on asthma and other air-quality associated diseases. However, sensor long-term performance under different environmental conditions and pollutant levels is not fully understood. In addition, further evaluation is needed for other long-term performance trends such as performance among sensors of the same model, comparison between sensors from different companies and comparison of sensor data to federal equivalent or reference method (FEM/FRM) measurements. A 10-month evaluation of two popular particulate matter (PM) sensors, Dylos DC1100 and AirBeam, and a popular ozone (O3) sensor, Aeroqual 500, was performed as part of this study. Data from these sensors were compared to each other and to FEM/FRM data and local meteorology. The study took place at the Houston Regional Monitoring (HRM) site 3, located between the Houston Ship Channel and Houston's urban center. PM sensor performance was found to vary in time, with multivariate analysis, binning of data by meteorological parameter, and machine learning techniques able to account for some but not all performance variations. PM type (i.e., size distribution, fiber-flake-spheroid shape and black-brown-white color) likely played a role in the changing sensor performance. Triplicate individual Aeroqual O3 sensors tracked reasonably well with the FEM data for most of the measurement period but had irregular periods of O3 measurement offset. While the FEM data indicated 4 days where ozone levels were above the NAAQS, the Aeroqual ozone sensors indicated a substantially higher number of days, ranging from 9 to 16 for the three sensors.

DeWitt H Langley, Crow Walter L, Flowers Bradley

2020-Jan-21

General General

Single-Step Preprocessing of Raman Spectra Using Convolutional Neural Networks.

In Applied spectroscopy ; h5-index 0.0

Preprocessing of Raman spectra is generally done in three separate steps: (1) cosmic ray removal, (2) signal smoothing, and (3) baseline subtraction. We show that a convolutional neural network (CNN) can be trained using simulated data to handle all steps in one operation. First, synthetic spectra are created by randomly adding peaks, baseline, mixing of peaks and baseline with background noise, and cosmic rays. Second, a CNN is trained on synthetic spectra and known peaks. The results from preprocessing were generally of higher quality than what was achieved using a reference based on standardized methods (second-difference, asymmetric least squares, cross-validation). From 10⁵ simulated observations, 91.4% of predictions had smaller absolute error (RMSE), 90.3% had improved quality (SSIM), and 94.5% had reduced signal-to-noise (SNR) power. The CNN preprocessing generated reliable results on measured Raman spectra from polyethylene, paraffin and ethanol with background contamination from polystyrene. The result shows a promising proof of concept for the automated preprocessing of Raman spectra.
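
The first step described above — generating synthetic spectra by randomly adding peaks, a baseline, noise, and cosmic rays — might look like the following minimal sketch (all peak shapes, parameter ranges, and amplitudes are illustrative assumptions, not the authors' simulation settings):

```python
import math
import random

def make_spectrum(n_points=512, n_peaks=3, seed=None):
    """Build one synthetic training pair: a contaminated spectrum as the
    network input and the clean peak signal as the target."""
    rng = random.Random(seed)
    x = [i / n_points for i in range(n_points)]
    # random Gaussian peaks (the signal the CNN should recover)
    peaks = [0.0] * n_points
    for _ in range(n_peaks):
        center = rng.uniform(0.1, 0.9)
        width = rng.uniform(0.005, 0.02)
        height = rng.uniform(0.5, 2.0)
        for i, xi in enumerate(x):
            peaks[i] += height * math.exp(-((xi - center) ** 2) / (2 * width ** 2))
    # slowly varying polynomial baseline plus Gaussian background noise
    a, b = rng.uniform(0.0, 1.0), rng.uniform(-0.5, 0.5)
    baseline = [a + b * xi + 0.3 * xi ** 2 for xi in x]
    noisy = [p + bl + rng.gauss(0, 0.05) for p, bl in zip(peaks, baseline)]
    # sparse single-channel cosmic-ray spikes
    for _ in range(rng.randint(0, 2)):
        noisy[rng.randrange(n_points)] += rng.uniform(5, 10)
    return noisy, peaks  # (network input, clean target)

noisy, target = make_spectrum(seed=42)
```

Training a CNN on many such pairs lets one network learn spike removal, smoothing, and baseline subtraction jointly, since the target contains none of the three contaminations.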

Wahl Joel, Sjödahl Mikael, Ramser Kerstin

2020-Jan-21

CNN, Raman spectroscopy, chemometrics, convolutional neural network, deep learning, preprocessing, simulated data

Ophthalmology Ophthalmology

Cost-effectiveness analysis of ocriplasmin versus watchful waiting for treatment of symptomatic vitreomacular adhesion in the US.

In Journal of comparative effectiveness research ; h5-index 0.0

Aim: Evaluate the cost-effectiveness of ocriplasmin in symptomatic vitreomacular adhesion (VMA) with or without full-thickness macular hole ≤400 μm versus standard of care. Methods: A state-transition model simulated a cohort through disease health states; assignment of utilities to health states reflected the distribution of visual acuity. Efficacy of ocriplasmin was derived from logistic regression models using Ocriplasmin for Treatment for Symptomatic Vitreomacular Adhesion Including Macular Hole trial data. Model inputs were extracted from Phase III trials and published literature. The analysis was conducted from a US Medicare perspective. Results: Lifetime incremental cost-effectiveness ratio was US$4887 per quality-adjusted life year gained in the total population, US$4255 and US$10,167 in VMA subgroups without and with full-thickness macular hole, respectively. Conclusion: Ocriplasmin was cost effective compared with standard of care in symptomatic VMA.

Khanani Arshad M, Dugel Pravin U, Haller Julia A, Wagner Alan L, Lescrauwaet Benedicte, Schmidt Ralph, Bennison Craig

2020-Jan-21

cost, ocriplasmin, symptomatic vitreomacular adhesion, vitreomacular traction

General General

[Chapter 6. Hybridisation of networks.]

In Journal international de bioethique et d'ethique des sciences ; h5-index 0.0

Prompted by the digital revolution, the hybridisation of terrestrial and satellite networks opens the door to a world of convergences dominated by the Internet of Things and the development of artificial intelligence.

Rapp Lucien

2019-09

Radiology Radiology

Future Directions in Coronary CT Angiography: CT-Fractional Flow Reserve, Plaque Vulnerability, and Quantitative Plaque Assessment.

In Korean circulation journal ; h5-index 0.0

Coronary computed tomography angiography (CCTA) is a well-validated and noninvasive imaging modality for the assessment of coronary artery disease (CAD) in patients with stable ischemic heart disease and acute coronary syndromes (ACSs). CCTA not only delineates the anatomy of the heart and coronary arteries in detail, but also allows for intra- and extraluminal imaging of coronary arteries. Emerging technologies have promoted new CCTA applications, resulting in a comprehensive assessment of coronary plaques and their clinical significance. The application of computational fluid dynamics to CCTA resulted in a robust tool for noninvasive assessment of coronary blood flow hemodynamics and determination of hemodynamically significant stenosis. Detailed evaluation of plaque morphology and identification of high-risk plaque features by CCTA have been confirmed as predictors of future outcomes, identifying patients at risk for ACSs. With quantitative coronary plaque assessment, the progression of the CAD or the response to therapy could be monitored by CCTA. The aim of this article is to review the future directions of emerging applications in CCTA, such as computed tomography (CT)-fractional flow reserve, imaging of vulnerable plaque features, and quantitative plaque imaging. We will also briefly discuss novel methods appearing in the coronary imaging scenario, such as machine learning, radiomics, and spectral CT.

Kay Fernando Uliana, Canan Arzu, Abbara Suhny

2019-Nov-05

Coronary computed tomography angiography, Coronary plaque, Fractional flow reserve, Plaque characterization, Plaque volume

Dermatology Dermatology

What is AI? Applications of artificial intelligence to dermatology.

In The British journal of dermatology ; h5-index 0.0

In the past, the skills required to make an accurate dermatological diagnosis have required exposure to thousands of patients over many years. However, in recent years, artificial intelligence (AI) has made enormous advances, particularly in the area of image classification. This has led computer scientists to apply these techniques to develop algorithms that are able to recognise skin lesions, particularly melanoma. Since 2017, there have been numerous studies assessing the accuracy of algorithms with some reporting that accuracy matches or surpasses that of a dermatologist. Whilst the principles underlying these methods are relatively straightforward, it can be challenging for the practising dermatologist to make sense of a plethora of unfamiliar terms in this domain. Here, we explain the concepts of artificial intelligence, machine learning, neural networks and deep learning, and explore the principles of how these tasks are accomplished. We critically evaluate the studies that assess the efficacy of these methods and discuss limitations and potential ethical issues. The burden of skin cancer is growing within the Western world, with major implications for both population skin health, and the provision of dermatology services. AI has the potential to assist in the diagnosis of skin lesions and may have particular value at the interface between primary and secondary care. The emerging technology represents an exciting opportunity for dermatologists, who are the individuals best informed to explore the utility of this powerful novel diagnostic tool, and facilitate its safe and ethical implementation within healthcare systems.

Du-Harpur X, Watt F M, Luscombe N M, Lynch M D

2020-Jan-20

General General

[The Development of Early Warning Systems for Home/Community Elderly Care].

In Hu li za zhi The journal of nursing ; h5-index 0.0

With Taiwan now an "aged society", home safety for older individuals has become a very important issue. The purpose of establishing early warning systems in homes and/or communities is to generate and disseminate meaningful warning information to medical institutions or rescue units in a timely manner so that they may take timely and appropriate action. The main purpose of this paper is to introduce the current application of information and communication technology (ICT, especially the Internet of Things and artificial intelligence) in early warning systems for home and community care. Two approaches to developing these systems are introduced: instant detection and prevention monitoring. Instant detection facilitates fall detection and personnel tracking, while prevention monitoring focuses on fall prevention and physiological status monitoring. The challenges faced in incorporating ICT into these early warning systems are discussed as well.

Pan Jiann-I

2020-Feb

Internet of Things (IoT), artificial intelligence (AI), early warning system, elderly safety, home/community care

Oncology Oncology

Identification of a Sixteen-gene Prognostic Biomarker for Lung Adenocarcinoma Using a Machine Learning Method.

In Journal of Cancer ; h5-index 0.0

Objectives: Lung adenocarcinoma (LUAD) accounts for a majority of cancer-related deaths worldwide annually. The identification of prognostic biomarkers and prediction of prognosis for LUAD patients is necessary. Materials and Methods: In this study, LUAD RNA-Seq data and clinical data from the Cancer Genome Atlas (TCGA) were divided into TCGA cohort I (n = 338) and cohort II (n = 168). Cohort I was used for model construction, and cohort II and data from the Gene Expression Omnibus (GSE72094 cohort, n = 393; GSE11969 cohort, n = 149) were used for validation. First, survival-related seed genes were selected from cohort I using a machine learning model (random survival forest, RSF); then, to improve prediction accuracy, a forward selection model was used to identify the prognosis-related key genes among the seed genes using the clinically integrated RNA-Seq data. Second, a survival risk score system was constructed using these key genes in cohort II, the GSE72094 cohort, and the GSE11969 cohort, and evaluation metrics such as HR, p value, and C-index were calculated to validate the proposed method. Third, the developed approach was compared with five previous prediction models. Finally, bioinformatics analyses (pathway, heatmap, protein-gene interaction network) were applied to the identified seed genes and key genes. Results and Conclusion: Based on the RSF model and clinically integrated RNA-Seq data, we identified sixteen key genes that formed the prognostic gene expression signature. These sixteen key genes achieved strong power for prognostic prediction of LUAD patients in cohort II (HR = 3.80, p = 1.63e-06, C-index = 0.656), and were further validated in the GSE72094 cohort (HR = 4.12, p = 1.34e-10, C-index = 0.672) and GSE11969 cohort (HR = 3.87, p = 6.81e-07, C-index = 0.670).
The experimental results of three independent validation cohorts showed that compared with the traditional Cox model and the use of standalone RNA-Seq data, the machine-learning-based method effectively improved the prediction accuracy of LUAD prognosis, and the derived model was also superior to the other five existing prediction models. KEGG pathway analysis found that eleven of the sixteen genes were associated with nicotine addiction. Thirteen of the sixteen genes were reported for the first time as LUAD prognosis-related key genes. In conclusion, we developed a sixteen-gene prognostic marker for LUAD, which may provide a powerful prognostic tool for precision oncology.
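
As a rough illustration of how a gene-signature risk score and the reported C-index work (the gene names, weights, and patient data below are invented; this is not the authors' sixteen-gene model):

```python
def risk_score(expr, weights):
    """Linear risk score: weighted sum of gene-expression values."""
    return sum(weights[g] * expr[g] for g in weights)

def c_index(scores, times, events):
    """Harrell's concordance index: fraction of comparable patient pairs
    whose risk ordering matches their survival ordering."""
    concordant = ties = comparable = 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if times[i] < times[j] and events[i]:  # pair is comparable
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

weights = {"GENE_A": 0.8, "GENE_B": -0.5}  # hypothetical 2-gene signature
patients = [
    {"GENE_A": 2.0, "GENE_B": 0.5},
    {"GENE_A": 0.5, "GENE_B": 1.5},
    {"GENE_A": 1.5, "GENE_B": 1.0},
]
scores = [risk_score(p, weights) for p in patients]
times, events = [12, 60, 30], [1, 0, 1]  # months; 1 = event observed
print(round(c_index(scores, times, events), 3))  # → 1.0 in this toy case
```

A C-index of 0.5 means the score ranks patients no better than chance, while 1.0 means every comparable pair is ordered correctly; the values near 0.66-0.67 reported above sit between these extremes.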

Ma Baoshan, Geng Yao, Meng Fanyu, Yan Ge, Song Fengju

2020

Forward selection model, Lung adenocarcinoma, Prognosis prediction, RNA-Seq data, Random survival forest

General General

Toward an Aggregate, Implicit, and Dynamic Model of Norm Formation: Capturing Large-Scale Media Representations of Dynamic Descriptive Norms Through Automated and Crowdsourced Content Analysis.

In The Journal of communication ; h5-index 0.0

Media content can shape people's descriptive norm perceptions by presenting either population-level prevalence information or descriptions of individuals' behaviors. Supervised machine learning and crowdsourcing can be combined to answer new, theoretical questions about the ways in which normative perceptions form and evolve through repeated, incidental exposure to normative mentions emanating from the media environment. Applying these methods, this study describes tobacco and e-cigarette norm prevalence and trends over 37 months through an examination of a census of 135,764 long-form media texts, 12,262 popular YouTube videos, and 75,322,911 tweets. Long-form texts mentioned tobacco population norms (4-5%) proportionately less often than e-cigarette population norms (20%). Individual use norms were common across sources, particularly YouTube (tobacco long-form: 34%; Twitter: 33%; YouTube: 88%; e-cigarette long form: 17%; Twitter: 16%; YouTube: 96%). The capacity to capture aggregated prevalence and temporal dynamics of normative media content permits asking population-level media effects questions that would otherwise be infeasible to address.

Liu Jiaying, Siegel Leeann, Gibson Laura A, Kim Yoonsang, Binns Steven, Emery Sherry, Hornik Robert C

2019-Dec

Content Analysis, Crowdsourced Coding, Descriptive Social Norms, E-cigarettes, Smoking, Supervised Machine Learning, Tobacco

Radiology Radiology

[Development of CT Pelvimetry Using Deep Learning Based Reconstruction].

In Nihon Hoshasen Gijutsu Gakkai zasshi ; h5-index 0.0

PURPOSE : X-ray pelvimetry is typically performed for the diagnosis of cephalopelvic disproportion (CPD). The purpose of this study was to assess the utility of a new computed tomography (CT) reconstruction method, "deep learning based reconstruction" (DLR), in ultra-low-dose CT pelvimetry.

METHOD : CT pelvimetry was performed on a 320-row CT scanner. All CT images were reconstructed with and without DLR and transferred to a workstation to generate Martius and Guthmann views. A radiologist and an obstetrician-gynecologist subjectively ranked the overall image quality of each CT image from best to worst. Exposure doses were compared: for CT pelvimetry, the displayed CT dose index (CTDIvol) was multiplied by a value measured with a thimble chamber and pelvic phantom; for X-ray pelvimetry, the Japan Diagnostic Reference Levels 2015 were used as a reference.

RESULT : 3D images obtained from CT pelvimetry with DLR showed accurate biparietal diameter and obstetric conjugate measurements as compared to those without DLR. The radiation dose of CT pelvimetry was 0.39 mGy and that of X-ray pelvimetry was 1.18 mGy.

CONCLUSION : Although imaging of high-contrast objects such as bone morphology generally permits dose reduction in CT examinations, DLR enables further dose reduction while maintaining image quality. 3D image processing of CT pelvimetry avoids the magnification (expansion rate) problem of X-ray pelvimetry and provides accurate measurements. Furthermore, CT pelvimetry allows a more comfortable position for pregnant women in labor.

Kitai Takaaki, Hyodo Yasuhiro, Morikawa Hayato

2020

Guthmann, Martius, cephalopelvic disproportion, computed tomography pelvimetry, deep learning based reconstruction

Cardiology Cardiology

The association between renin angiotensin aldosterone system blockers and future osteoporotic fractures in a hypertensive population - A population-based cohort study in Taiwan.

In International journal of cardiology ; h5-index 68.0

Some cohort studies showed the possibility of renin-angiotensin-aldosterone system (RAAS) blockade in preventing future osteoporotic fractures. The study aimed to evaluate the association between angiotensin converting enzyme inhibitors (ACEIs), angiotensin II receptor blockers (ARBs), and future osteoporotic fracture in a hypertensive population. We queried the Taiwan Longitudinal Health Insurance Database between 2001 and 2012. We used propensity score matching and the total cohort was made up of 57,470 participants (28,735 matched-pairs using or not using RAAS blockers). The mean follow-up period was 6 years. The number of incident fractures was 3757. Hazard ratios (HRs) [95% confidence interval (CI)] of ACEIs and ARBs use with incident fractures were calculated. The incidence of future osteoporotic fracture was significantly lower in the ACEI and ARB user groups but not in the group using an ACEI plus ARB concomitantly, when compared with RAAS blocker nonusers. Comparing ACEI users with RAAS blocker non-users and ARB users with RAAS blocker non-users, the HRs for composite fractures were 0.70 (0.62-0.79) and 0.58 (0.51-0.65), respectively. Sensitivity analysis confirmed a lower incidence of future osteoporotic fracture in patients taking an ACEI for >55 cumulative defined daily doses (cDDDs) and those who received an ARB for >90 cDDDs. These results suggested a lower incidence of future osteoporotic fracture in a hypertensive population who were using an ACEI or ARB compared with RAAS blocker nonusers but not in the group taking an ACEI and ARB concomitantly.

Kao Yung-Ta, Huang Chun-Yao, Fang Yu-Ann, Liu Ju-Chi

2020-Jan-07

Hypertension, Outcome, Renin angiotensin system

General General

Intelligent fault identification for industrial automation system via multi-scale convolutional generative adversarial network with partially labeled samples.

In ISA transactions ; h5-index 0.0

Rolling bearings are widely used parts in most industrial automation systems. As a result, intelligent fault identification of rolling bearings is important to ensure the stable operation of these systems. However, a major problem in intelligent fault identification is that a large number of labeled samples is needed to obtain a well-trained model. Aiming at this problem, the paper proposes a semi-supervised multi-scale convolutional generative adversarial network for bearing fault identification which uses partially labeled samples and sufficient unlabeled samples for training. The network adopts a one-dimensional multi-scale convolutional neural network as the discriminator and a multi-scale deconvolutional neural network as the generator, and the model is trained through an adversarial process. Because of the full use of unlabeled samples, the proposed semi-supervised model can detect faults in bearings with limited labeled samples. The proposed method was tested on three datasets, and the average classification accuracies reached 100%, 99.28%, and 96.58%, respectively. Results indicate that the proposed semi-supervised convolutional generative adversarial network achieves satisfactory performance in bearing fault identification when the labeled data are insufficient.
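
A common way such semi-supervised discriminators are trained — the discriminator outputs K fault-class logits plus one "fake" logit; labeled samples get class cross-entropy while unlabeled real samples are only pushed away from "fake" — can be illustrated as follows (a conceptual sketch under assumed losses, not the paper's implementation):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def labeled_loss(logits, label):
    """Cross-entropy over the K real fault classes for a labeled sample
    (the last logit is the extra 'fake' class)."""
    return -math.log(softmax(logits)[label])

def unlabeled_loss(logits):
    """Real-vs-fake loss for an unlabeled real sample: push probability
    mass away from the 'fake' class without needing a fault label."""
    return -math.log(1.0 - softmax(logits)[-1])

# A confident, correct prediction gives a small loss under either objective.
print(round(labeled_loss([5.0, 0.0, 0.0, 0.0], 0), 3))
print(round(unlabeled_loss([0.0, 0.0, 0.0, -5.0]), 3))
```

The unlabeled term is what lets the abundant unlabeled vibration samples shape the discriminator's features even though their fault classes are unknown.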

Pan Tongyang, Chen Jinglong, Xie Jinsong, Chang Yuanhong, Zhou Zitong

2020-Jan-14

Deep learning, Fault diagnosis, Intelligent fault identification, Rolling bearing

General General

Reconciling Dimensional and Categorical Models of Autism Heterogeneity: A Brain Connectomics and Behavioral Study.

In Biological psychiatry ; h5-index 105.0

BACKGROUND : Heterogeneity in autism spectrum disorder (ASD) has hindered the development of biomarkers, thus motivating subtyping efforts. Most subtyping studies divide individuals with ASD into nonoverlapping (categorical) subgroups. However, continuous interindividual variation in ASD suggests that there is a need for a dimensional approach.

METHODS : A Bayesian model was employed to decompose resting-state functional connectivity (RSFC) of individuals with ASD into multiple abnormal RSFC patterns, i.e., categorical subtypes, henceforth referred to as "factors." Importantly, the model allowed each individual to express one or more factors to varying degrees (dimensional subtyping). The model was applied to 306 individuals with ASD (5.2-57 years of age) from two multisite repositories. Post hoc analyses associated factors with symptoms and demographics.

RESULTS : Analyses yielded three factors with dissociable whole-brain hypo- and hyper-RSFC patterns. Most participants expressed multiple (categorical) factors, suggestive of a mosaic of subtypes within individuals. All factors shared abnormal RSFC involving the default mode network, but the directionality (hypo- or hyper-RSFC) differed across factors. Factor 1 was associated with core ASD symptoms. Factors 1 and 2 were associated with distinct comorbid symptoms. Older male participants preferentially expressed factor 3. Factors were robust across control analyses and were not associated with IQ or head motion.

CONCLUSIONS : There exist at least three ASD factors with dissociable whole-brain RSFC patterns, behaviors, and demographics. Heterogeneous default mode network hypo- and hyper-RSFC across the factors might explain previously reported inconsistencies. The factors differentiated between core ASD and comorbid symptoms-a less appreciated domain of heterogeneity in ASD. These factors are coexpressed in individuals with ASD with different degrees, thus reconciling categorical and dimensional perspectives of ASD heterogeneity.

Tang Siyi, Sun Nanbo, Floris Dorothea L, Zhang Xiuming, Di Martino Adriana, Yeo B T Thomas

2019-Nov-18

ASD heterogeneity, Bayesian modeling, Behavioral deficits, Default mode network, Phenotypes, Resting-state functional connectivity

General General

Rethinking arithmetic for deep neural networks.

In Philosophical transactions. Series A, Mathematical, physical, and engineering sciences ; h5-index 0.0

We consider efficiency in the implementation of deep neural networks. Hardware accelerators are gaining interest as machine learning becomes one of the drivers of high-performance computing. In these accelerators, the directed graph describing a neural network can be implemented as a directed graph describing a Boolean circuit. We make this observation precise, leading naturally to an understanding of practical neural networks as discrete functions, and show that the so-called binarized neural networks are functionally complete. In general, our results suggest that it is valuable to consider Boolean circuits as neural networks, leading to the question of which circuit topologies are promising. We argue that continuity is central to generalization in learning, explore the interaction between data coding, network topology, and node functionality for continuity and pose some open questions for future research. As a first step to bridging the gap between continuous and Boolean views of neural network accelerators, we present some recent results from our work on LUTNet, a novel Field-Programmable Gate Array inference approach. Finally, we conclude with additional possible fruitful avenues for research bridging the continuous and discrete views of neural networks. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
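
The Boolean-circuit view of binarized networks mentioned above can be made concrete with a toy neuron: with weights and activations in {-1, +1}, each product term is an XNOR of two sign bits and the accumulation is a popcount, so the neuron is a small Boolean circuit (the weights and threshold below are arbitrary illustrations):

```python
def binarized_neuron(inputs, weights, threshold=0):
    """inputs and weights are in {-1, +1}; output is the sign of the dot
    product. Each product term equals an XNOR of the two sign bits, and
    the sum is a (rescaled) popcount."""
    acc = sum(i * w for i, w in zip(inputs, weights))
    return 1 if acc >= threshold else -1

# Encoded in {-1, +1}, this unit computes a 3-input majority vote.
print(binarized_neuron([1, 1, -1], [1, 1, 1]))   # → 1
print(binarized_neuron([-1, -1, 1], [1, 1, 1]))  # → -1
```

Since majority, like any threshold function, is a Boolean function, networks of such units map directly onto the circuit fabrics (e.g., FPGA LUTs) that the article discusses.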

Constantinides G A

2020-Mar-06

accelerator, computing, field-programmable gate array, neural network

General General

Optimal memory-aware backpropagation of deep join networks.

In Philosophical transactions. Series A, Mathematical, physical, and engineering sciences ; h5-index 0.0

Deep learning training memory needs can prevent the user from considering large models and large batch sizes. In this work, we propose to use techniques from memory-aware scheduling and automatic differentiation (AD) to execute a backpropagation graph with a bounded memory requirement at the cost of extra recomputations. The case of a single homogeneous chain, i.e. the case of a network whose stages are all identical and form a chain, is well understood and optimal solutions have been proposed in the AD literature. The networks encountered in practice in the context of deep learning are much more diverse, both in terms of shape and heterogeneity. In this work, we define the class of backpropagation graphs, and extend those on which one can compute in polynomial time a solution that minimizes the total number of recomputations. In particular, we consider join graphs which correspond to models such as siamese or cross-modal networks. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
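
The basic memory/recomputation trade-off the paper optimizes can be sketched for the simple homogeneous-chain case (a simplified counting model, not the authors' algorithm; it ignores the up-to-k activations held while a segment is being recomputed):

```python
def checkpoint_cost(n, k):
    """For a length-n chain with a checkpoint every k stages, return
    (checkpoints_stored, activations_recomputed_during_backward)."""
    stored = (n + k - 1) // k
    recomputed = n - stored  # each non-checkpointed stage is redone once
    return stored, recomputed

print(checkpoint_cost(100, 1))   # (100, 0): store everything, no recompute
print(checkpoint_cost(100, 10))  # (10, 90): far less storage, 90 extra passes
```

Balancing the two terms yields the classic O(√n) checkpointing schedule for homogeneous chains; the paper's contribution is extending optimal schedules of this kind to the heterogeneous join graphs arising in siamese and cross-modal networks.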

Beaumont Olivier, Herrmann Julien, Pallez Aupy Guillaume, Shilova Alena

2020-Mar-06

backpropagation, memory, pebble game

General General

Engineering Stability, Viscosity, and Immunogenicity of Antibodies by Computational Design.

In Journal of pharmaceutical sciences ; h5-index 0.0

In recent years, computational methods have garnered much attention in protein engineering. A large number of computational methods have been developed to analyze the sequences and structures of proteins and have been used to predict the various properties. Antibodies are one of the emergent protein therapeutics, and thus methods to control their physicochemical properties are highly desirable. However, despite the tremendous efforts of past decades, computational methods to predict the physicochemical properties of antibodies are still in their infancy. Experimental validations are certainly required for real-world applications, and the results should be interpreted with caution. Among the various properties of antibodies, we focus in this review on stability, viscosity, and immunogenicity, and we present the current status of computational methods to engineer such properties.

Kuroda Daisuke, Tsumoto Kouhei

2020-Jan-17

Antibody engineering, Colloidal stability, Computer-aided design, Conformational stability, Immunogenicity, Machine learning, Molecular simulations, Viscosity

Radiology Radiology

Use of artificial intelligence in imaging in rheumatology - current status and future perspectives.

In RMD open ; h5-index 32.0

After decades of basic research with many setbacks, artificial intelligence (AI) has recently obtained significant breakthroughs, enabling computer programs to outperform human interpretation of medical images in very specific areas. After this shock wave that probably exceeds the impact of the first AI victory of defeating the world chess champion in 1997, some reflection may be appropriate on the consequences for clinical imaging in rheumatology. In this narrative review, a short explanation is given about the various AI techniques, including 'deep learning', and how these have been applied to rheumatological imaging, focussing on rheumatoid arthritis and systemic sclerosis as examples. By discussing the principle limitations of AI and deep learning, this review aims to give insight into possible future perspectives of AI applications in rheumatology.

Stoel Berend

2020-Jan

magnetic resonance imaging, outcomes research, rheumatoid arthritis, systemic sclerosis

Public Health Public Health

Towards Responsible Implementation of Monitoring Technologies in Institutional Care.

In The Gerontologist ; h5-index 0.0

Increasing awareness of errors and harms in institutional care settings, combined with rapid advancements in artificial intelligence, have resulted in a widespread push for implementing monitoring technologies in institutional settings. There has been limited critical reflection in gerontology regarding the ethical, social, and policy implications of using these technologies. We critically review current scholarship regarding use of monitoring technology in institutional care, and identify key gaps in knowledge and important avenues for future research and development.

Grigorovich Alisa, Kontos Pia

2020-Jan-20

Artificial intelligence, Ethics, Health, Health equity, Public policy

Public Health Public Health

Accuracy and Effects of Clinical Decision Support Systems Integrated With BMJ Best Practice-Aided Diagnosis: Interrupted Time Series Study.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Clinical decision support systems (CDSS) are an integral component of health information technologies and can assist disease interpretation, diagnosis, treatment, and prognosis. However, the utility of CDSS in the clinic remains controversial.

OBJECTIVE : The aim is to assess the effects of CDSS integrated with British Medical Journal (BMJ) Best Practice-aided diagnosis in real-world research.

METHODS : This was a retrospective, longitudinal observational study using routinely collected clinical diagnosis data from electronic medical records. A total of 34,113 hospitalized patient records were successively selected from December 2016 to February 2019 in six clinical departments. The diagnostic accuracy of the CDSS was verified before its implementation. A self-controlled comparison was then applied to detect the effects of CDSS implementation. Multivariable logistic regression and single-group interrupted time series analysis were used to explore the effects of CDSS. The sensitivity analysis was conducted using the subgroup data from January 2018 to February 2019.

RESULTS : The total accuracy rates of the recommended diagnoses from the CDSS were 75.46% for the first-rank diagnosis, 83.94% for the top-2 diagnoses, and 87.53% for the top-3 diagnoses in the data before CDSS implementation. After CDSS implementation, higher consistency between admission and discharge diagnoses, shorter times to confirmed diagnosis, and shorter hospitalization stays were observed (all P<.001). Multivariable logistic regression analysis showed that the consistency rates after CDSS implementation (OR 1.078, 95% CI 1.015-1.144) and the proportion of hospitalizations of 7 days or less (OR 1.688, 95% CI 1.592-1.789) both increased. The interrupted time series analysis showed that the consistency rates significantly increased by 6.722% (95% CI 2.433%-11.012%, P=.002) after CDSS implementation. The proportion of hospitalizations of 7 days or less significantly increased by 7.837% (95% CI 1.798%-13.876%, P=.01). Similar results were obtained in the subgroup analysis.

CONCLUSIONS : The CDSS integrated with BMJ Best Practice improved the accuracy of clinicians' diagnoses. Shorter confirmed diagnosis times and hospitalization days were also found to be associated with CDSS implementation in retrospective real-world studies. These findings highlight the utility of artificial intelligence-based CDSS to improve diagnosis efficiency, but these results require confirmation in future randomized controlled trials.

Tao Liyuan, Zhang Chen, Zeng Lin, Zhu Shengrong, Li Nan, Li Wei, Zhang Hua, Zhao Yiming, Zhan Siyan, Ji Hong

2020-Jan-20

BMJ Best Practice, accuracy and effect, aided diagnosis, artificial intelligence, clinical decision support systems
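
The single-group interrupted time series analysis reported above is typically implemented as a segmented regression with a level-change and a slope-change term at the interruption. A minimal sketch with simulated monthly consistency rates (the numbers are illustrative, not the study's data):

```python
import numpy as np

def segmented_regression(y, t0):
    """Fit y = b0 + b1*t + b2*post + b3*(t - t0)*post by least squares.

    b2 estimates the immediate level change at the interruption t0,
    b3 the change in slope afterwards.
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # b0, b1, b2, b3

# Simulated consistency rates (%): gentle pre-trend, then a +7-point jump.
rng = np.random.default_rng(0)
months = np.arange(27)
y = 80 + 0.1 * months + 7.0 * (months >= 14) + rng.normal(0, 0.5, 27)
b0, b1, b2, b3 = segmented_regression(y, t0=14)
print(f"estimated level change at implementation: {b2:.2f} points")
```

In the study itself the model would also carry covariates and inferential statistics; this sketch only shows how the level-change coefficient is read off.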

Public Health Public Health

Approaches for missing covariate data in logistic regression with MNAR sensitivity analyses.

In Biometrical journal. Biometrische Zeitschrift ; h5-index 0.0

Data with missing covariate values but fully observed binary outcomes are an important subset of the missing data challenge. Common approaches are complete case analysis (CCA) and multiple imputation (MI). While CCA relies on missing completely at random (MCAR), MI usually relies on a missing at random (MAR) assumption to produce unbiased results. For MI involving logistic regression models, it is also important to consider several missing not at random (MNAR) conditions under which CCA is asymptotically unbiased and, as we show, MI is also valid in some cases. We use a data application and simulation study to compare the performance of several machine learning and parametric MI methods under a fully conditional specification framework (MI-FCS). Our simulation includes five scenarios involving MCAR, MAR, and MNAR under predictable and nonpredictable conditions, where "predictable" indicates missingness is not associated with the outcome. We build on previous results in the literature to show MI and CCA can both produce unbiased results under more conditions than some analysts may realize. When both approaches were valid, we found that MI-FCS was at least as good as CCA in terms of estimated bias and coverage, and was superior when missingness involved a categorical covariate. We also demonstrate how MNAR sensitivity analysis can build confidence that unbiased results were obtained, including under MNAR-predictable, when CCA and MI are both valid. Since the missingness mechanism cannot be identified from observed data, investigators should compare results from MI and CCA when both are plausibly valid, followed by MNAR sensitivity analysis.

Ward Ralph C, Axon Robert Neal, Gebregziabher Mulugeta

2020-Jan-20

logistic regression, missing covariates, multiple imputation, predictable missingness sensitivity analysis
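
The three missingness mechanisms the abstract contrasts (MCAR, MAR, MNAR) can be made concrete with a toy simulation. This is not the authors' simulation design; it only illustrates, for a simple covariate mean, why complete case analysis stays unbiased under MCAR but drifts once missingness depends on the outcome or on the covariate itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(0.0, 1.0, n)            # covariate with true mean 0
y = (x + rng.normal(0, 1, n) > 0)      # binary outcome driven by x

# Three mechanisms for deleting x:
miss_mcar = rng.random(n) < 0.3                        # independent of everything
miss_mar = rng.random(n) < np.where(y, 0.5, 0.1)       # depends on observed y
miss_mnar = rng.random(n) < np.where(x > 0, 0.5, 0.1)  # depends on x itself

for name, miss in [("MCAR", miss_mcar), ("MAR", miss_mar), ("MNAR", miss_mnar)]:
    cca_mean = x[~miss].mean()  # complete-case estimate of E[x]
    print(f"{name}: complete-case mean of x = {cca_mean:+.3f}")
```

As the abstract notes, the picture for regression coefficients is more nuanced than for this marginal mean: CCA can remain asymptotically unbiased under some MNAR conditions, which is exactly what motivates the paper's sensitivity analyses.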

Ophthalmology Ophthalmology

Validation of Deep Convolutional Neural Network-based algorithm for detection of diabetic retinopathy - Artificial intelligence versus clinician for screening.

In Indian journal of ophthalmology ; h5-index 0.0

Purpose : Deep learning is a newer, advanced subfield of artificial intelligence (AI). The aim of our study is to validate a machine-based algorithm developed using deep convolutional neural networks as a screening tool to detect referable diabetic retinopathy (DR).

Methods : An AI algorithm to detect DR was validated at our hospital using an internal dataset consisting of 1,533 macula-centered fundus images collected retrospectively and an external validation set using Methods to Evaluate Segmentation and Indexing Techniques in the field of Retinal Ophthalmology (MESSIDOR) dataset. Images were graded by two retina specialists as any DR, prompt referral (moderate nonproliferative diabetic retinopathy (NPDR) or above or presence of macular edema) and sight-threatening DR/STDR (severe NPDR or above) and compared with AI results. Sensitivity, specificity, and area under curve (AUC) for both internal and external validation sets for any DR detection, prompt referral, and STDR were calculated. Interobserver agreement using kappa value was calculated for both the sets and two out of three agreements for DR grading was considered as ground truth to compare with AI results.

Results : In the internal validation set, the overall sensitivity and specificity were 99.7% and 98.5% for any DR detection and 98.9% and 94.84% for prompt referral, respectively. The AUC was 0.991 and 0.969 for any DR detection and prompt referral, respectively. The agreement between the two observers was 99.5% and 99.2% for any DR detection and prompt referral, with kappa values of 0.94 and 0.96, respectively. In the external validation set (MESSIDOR 1), the overall sensitivity and specificity were 90.4% and 91.0% for any DR detection and 94.7% and 97.4% for prompt referral, respectively. The AUC was 0.907 and 0.960 for any DR detection and prompt referral, respectively. The agreement between the two observers was 98.5% and 97.8% for any DR detection and prompt referral, with kappa values of 0.971 and 0.980, respectively.

Conclusion : With increasing diabetic population and growing demand supply gap in trained resources, AI is the future for early identification of DR and reducing blindness. This can revolutionize telescreening in ophthalmology, especially where people do not have access to specialized health care.

Shah Payal, Mishra Divyansh K, Shanmugam Mahesh P, Doshi Bindiya, Jayaraj Hariprasad, Ramanjulu Rajesh

2020-Feb

Deep convolutional neural networks, diabetic retinopathy screening, machine learning, validation of artificial intelligence
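
The sensitivity, specificity, and kappa values reported above come from standard confusion-matrix arithmetic. A self-contained sketch with illustrative grades (not the study's data):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels (1 = referable DR)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary graders."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                # observed agreement
    pa, pb = a.mean(), b.mean()         # each grader's rate of label 1
    pe = pa * pb + (1 - pa) * (1 - pb)  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy example: 8 images, ground-truth grade vs. an algorithm's output.
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
sens, spec = sens_spec(truth, pred)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, "
      f"kappa {cohens_kappa(truth, pred):.2f}")
```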

Surgery Surgery

Medios- An offline, smartphone-based artificial intelligence algorithm for the diagnosis of diabetic retinopathy.

In Indian journal of ophthalmology ; h5-index 0.0

Purpose : An observational study to assess the sensitivity and specificity of the Medios smartphone-based offline deep learning artificial intelligence (AI) software to detect diabetic retinopathy (DR) compared with the image diagnosis of ophthalmologists.

Methods : Patients attending the outpatient services of a tertiary center for diabetes care underwent 3-field dilated retinal imaging using the Remidio NM FOP 10. Two fellowship-trained vitreoretinal specialists separately graded anonymized images and a patient-level diagnosis was reached based on grading of the worse eye. The images were subjected to offline grading using the Medios integrated AI-based software on the same smartphone used to acquire images. The sensitivity and specificity of the AI in detecting referable DR (moderate non-proliferative DR (NPDR) or worse disease) was compared to the gold standard diagnosis of the retina specialists.

Results : Images from 297 patients were analyzed, of which 176 (59.2%) had no DR, 35 (11.7%) had mild NPDR, 41 (13.8%) had moderate NPDR, and 33 (11.1%) had severe NPDR. In addition, 12 (4%) patients had PDR and 36 (20.4%) had macular edema. The sensitivity and specificity of the AI in detecting referable DR were 98.84% (95% confidence interval [CI], 97.62-100%) and 86.73% (95% CI, 82.87-90.59%), respectively. The area under the curve was 0.92. The sensitivity for vision-threatening DR (VTDR) was 100%.

Conclusion : The AI-based software had high sensitivity and specificity in detecting referable DR. Integration with the smartphone-based fundus camera with offline image grading has the potential for widespread applications in resource-poor settings.

Sosale Bhavana, Sosale Aravind R, Murthy Hemanth, Sengupta Sabyasachi, Naveenam Muralidhar

2020-Feb

Artificial intelligence, deep learning, diabetic retinopathy
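
The abstract quotes sensitivities with 95% confidence intervals, but the exact interval method is not stated here. One common choice for a binomial proportion is the Wilson score interval, sketched below with hypothetical counts (not the study's tallies):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Illustrative: 85 of 86 referable-DR eyes flagged by the software.
lo, hi = wilson_ci(85, 86)
print(f"sensitivity {85/86:.2%}, 95% CI ({lo:.2%}, {hi:.2%})")
```

Unlike the simple Wald interval, the Wilson interval behaves sensibly near 0% and 100%, which matters for the extreme sensitivities screening studies report.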

General General

Machine learning and big scientific data.

In Philosophical transactions. Series A, Mathematical, physical, and engineering sciences ; h5-index 0.0

This paper reviews some of the challenges posed by the huge growth of experimental data generated by the new generation of large-scale experiments at UK national facilities at the Rutherford Appleton Laboratory (RAL) site at Harwell near Oxford. Such 'Big Scientific Data' comes from the Diamond Light Source and Electron Microscopy Facilities, the ISIS Neutron and Muon Facility and the UK's Central Laser Facility. Increasingly, scientists are now required to use advanced machine learning and other AI technologies both to automate parts of the data pipeline and to help find new scientific discoveries in the analysis of their data. For commercially important applications, such as object recognition, natural language processing and automatic translation, deep learning has made dramatic breakthroughs. Google's DeepMind has now used the deep learning technology to develop their AlphaFold tool to make predictions for protein folding. Remarkably, it has been able to achieve some spectacular results for this specific scientific problem. Can deep learning be similarly transformative for other scientific problems? After a brief review of some initial applications of machine learning at the RAL, we focus on challenges and opportunities for AI in advancing materials science. Finally, we discuss the importance of developing some realistic machine learning benchmarks using Big Scientific Data coming from several different scientific domains. We conclude with some initial examples of our 'scientific machine learning' benchmark suite and of the research challenges these benchmarks will enable. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

Hey Tony, Butler Keith, Jackson Sam, Thiyagalingam Jeyarajan

2020-Mar-06

AI benchmarks, atmospheric science, electron microscopy, image processing, machine learning, materials science

General General

Evolution of Machine Learning Algorithms in the Prediction and Design of Anticancer Peptides.

In Current protein & peptide science ; h5-index 0.0

Peptides act as promising anticancer agents due to their ease of synthesis and modification, enhanced tumor penetration, and low systemic toxicity. However, only limited success has been achieved so far, as experimental design and synthesis of anticancer peptides (ACPs) are prohibitively costly and time-consuming. Furthermore, the sequential increase in protein sequence data via high-throughput sequencing makes it difficult to identify ACPs through experimentation alone, which often involves months or years of speculation and failure. These limitations could be overcome by applying machine learning (ML), a field of artificial intelligence that automates analytical model building for rapid and accurate outcome prediction. ML approaches hold great promise for the rapid discovery of ACPs, as witnessed by the growing number of ML-based anticancer prediction tools. In this review, we aim to provide a comprehensive view of the existing ML approaches for ACP prediction. We first briefly discuss the currently available ACP databases. In the main text, the working principles of state-of-the-art ML approaches and their performances are reviewed by algorithm. Lastly, we discuss the limitations and future directions of ML methods for the prediction of ACPs.

Basith Mail Shaherin, Manavalan Balachandran, Shin Tae Hwan, Lee DaeYeon, Lee Gwang

2020-Jan-17

Cancer, anticancer peptides, machine learning, random forest, support vector machine

Oncology Oncology

Progress in the development of antimicrobial peptide prediction tools.

In Current protein & peptide science ; h5-index 0.0

Antimicrobial peptides (AMPs) are natural polypeptides with antimicrobial activities and are found in most organisms. AMPs are evolutionarily conserved components of the innate immune system and show potent activity against bacteria, fungi, and viruses, and in some cases display antitumor activity. Thus, AMPs are major candidates in the development of new antibacterial agents. In the last few decades, AMPs have attracted significant attention from the research community. During the early stages of this research field, AMPs were identified experimentally, an expensive and time-consuming procedure. Research and development (R&D) of fast, highly efficient computational tools for predicting AMPs has therefore enabled the rapid identification and analysis of new AMPs from a wide range of organisms. Moreover, these computational tools have allowed researchers to better understand the activities of AMPs, which has promoted R&D of antibacterial drugs. In this review, we systematically summarize AMP prediction tools and the algorithms they use.

Ao Chunyan, Zhang Yu, Li Dapeng, Zhao Yuming, Zou Quan

2020-Jan-17

Antimicrobial peptides, machine learning, support vector machine, random forest, artificial neural network

General General

Comprehensive Review and Comparison for Anticancer Peptides Identification Models.

In Current protein & peptide science ; h5-index 0.0

Anticancer peptides (ACPs) eliminate pathogenic bacteria and kill tumor cells while causing no hemolysis and no damage to normal human cells. This unique ability suggests the potential of ACPs for therapeutic delivery and their applications in clinical therapy. Identifying ACPs is one of the most fundamental and central problems in new antitumor drug research. During the past decades, a number of machine learning-based prediction tools have been developed for this important task. However, the predictions produced by the various tools are difficult to quantify and compare. Therefore, in this article, we provide a comprehensive review of existing machine learning methods for ACP prediction and a fair comparison of the predictors. To evaluate current prediction tools, we conducted a comparative study and analyzed the existing ACP predictors from 10 published studies. The comparative results suggest that a Support Vector Machine-based model with combined features provided a significant improvement in overall performance compared with the other machine learning-based prediction models.

Song Xiao, Zhuang Yuanying, Lan Yihua, Lin Yinglai, Min Xiaoping

2020-Jan-17

anticancer peptides, machine learning, feature representation

General General

An Overview on Predicting Protein Subchloroplast Localization by using Machine Learning Methods.

In Current protein & peptide science ; h5-index 0.0

The chloroplast is a type of subcellular organelle in green plants and eukaryotic algae that plays an important role in photosynthesis. Since the function of a protein correlates with its location, knowing a protein's subchloroplast localization is helpful for elucidating its functions. However, due to the large number of chloroplast proteins, it is costly and time-consuming to design biological experiments to determine the subchloroplast localizations of these proteins. To address this problem, twelve computational prediction methods have been developed over the past ten years to predict protein subchloroplast localization. This review summarizes the research progress in this area. We hope the review provides an important guide for further computational study of protein subchloroplast localization.

Liu Meng-Lu, Su Wei, Guan Zheng-Xing, Zhang Dan, Chen Wei, Liu Li, Ding Hui

2020-Jan-17

protein, subchloroplast localization, machine learning method, protein sequence properties, feature selection

General General

PredictMed: A logistic regression-based model to predict health conditions in cerebral palsy.

In Health informatics journal ; h5-index 25.0

Logistic regression-based predictive models are widely used in healthcare but have only recently been used to predict comorbidities in children with cerebral palsy. This article presents a logistic regression approach to predicting health conditions in children with cerebral palsy, along with a few examples from recent research. The model, named PredictMed, was trained, tested, and validated for predicting the development of scoliosis, intellectual disabilities, autistic features, and, in the present study, feeding disorders needing gastrostomy. This was a multinational, cross-sectional descriptive study. Data from 130 children (aged 12-18 years) with cerebral palsy were collected between June 2005 and June 2015. The logistic regression-based model uses an algorithm implemented in the R programming language. After splitting the patients into training and testing sets, logistic regressions are performed on every possible subset (tuple) of independent variables. The tuple that shows the best predictive performance in terms of accuracy, sensitivity, and specificity is chosen as the set of independent variables in another logistic regression that calculates the probability of developing the specific health condition (e.g. the need for gastrostomy). The average accuracy, sensitivity, and specificity score was 90%. Our model represents a novelty in the treatment of some cerebral palsy-related health outcomes, and it should significantly help doctors' decision-making regarding patient prognosis.

Bertoncelli Carlo M, Altamura Paola, Vieira Edgar Ramos, Iyengar Sundaraja Sitharama, Solla Federico, Bertoncelli Domenico

2020-Jan-20

clinical decision-making, data mining, databases, decision-support systems, machine learning
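
PredictMed's exhaustive search over predictor subsets can be sketched from scratch. The original is implemented in R; this Python illustration uses synthetic data and a plain Newton (IRLS) logistic fit, so it shows the search strategy rather than the authors' code:

```python
import itertools
import numpy as np

def fit_logistic(X, y, iters=25):
    """Plain Newton (IRLS) logistic regression with an intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        W = p * (1 - p) + 1e-9  # IRLS weights, jittered for stability
        w += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return w

def accuracy(w, X, y):
    X = np.column_stack([np.ones(len(X)), X])
    return np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y)

# Synthetic cohort: only features 0 and 3 actually drive the outcome.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 5))
logit = 3.0 * X[:, 0] - 3.0 * X[:, 3]
y = (rng.random(600) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
train, test = slice(0, 400), slice(400, None)

# Exhaustive search over every subset (tuple) of candidate predictors.
best = None
for k in range(1, X.shape[1] + 1):
    for cols in itertools.combinations(range(X.shape[1]), k):
        w = fit_logistic(X[train][:, list(cols)], y[train])
        acc = accuracy(w, X[test][:, list(cols)], y[test])
        if best is None or acc > best[0]:
            best = (acc, cols)
print("best test accuracy %.3f with feature subset %s" % best)
```

With d candidate predictors this fits 2^d - 1 models, so the approach is feasible only for the modest variable counts typical of a clinical dataset like the study's.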

Pathology Pathology

Determination of Causes of Death via Spectrochemical Analysis of Forensic autopsies-based Pulmonary Edema Fluid Samples with Deep Learning Algorithm.

In Journal of biophotonics ; h5-index 0.0

This study investigated whether infrared spectroscopy combined with a deep learning algorithm could be a useful tool for determining causes of death by analyzing pulmonary edema fluid from forensic autopsies. A newly designed convolutional neural network-based deep learning framework, named DeepIR, and 8 popular machine learning algorithms were used to construct classifiers. The prediction performances of these classifiers demonstrated that DeepIR outperformed the machine learning algorithms in establishing classifiers to determine the causes of death. Moreover, DeepIR was generally less dependent on preprocessing procedures than the machine learning algorithms; it achieved validation accuracy within a narrow range (0.9661-0.9856) and test accuracy ranging from 0.8774 to 0.9167 on the raw pulmonary edema fluid spectral dataset and the 9 preprocessing protocol-based datasets in our study. In conclusion, this study demonstrates that the deep learning-equipped Fourier transform infrared spectroscopy technique has the potential to be an effective aid for determining causes of death.

Lin Hancheng, Luo Yiwen, Sun Qiran, Deng Kaifei, Chen Yijiu, Wang Zhenyuan, Huang Ping

2020-Jan-19

chemometrics, deep learning, forensic science, infrared spectroscopy, pulmonary edema fluid

Pathology Pathology

The cytopathologist's role in developing and evaluating artificial intelligence in cytopathology practice.

In Cytopathology : official journal of the British Society for Clinical Cytology ; h5-index 0.0

Artificial intelligence (AI) technologies have the potential to transform cytopathology practice and it is important for cytopathologists to embrace this and place themselves at the forefront of implementing these technologies in cytopathology. This review illustrates an archetypal AI workflow from project conception to implementation in a diagnostic setting and illustrates the cytopathologist's role and level of involvement at each stage of the process. Cytopathologists need to develop and maintain a basic understanding of AI, drive decisions regarding the development and implementation of AI in cytopathology, participate in the generation of datasets used to train and evaluate AI algorithms, understand how the performance of these algorithms is assessed, participate in the validation of these algorithms (either at a regulatory level or in the laboratory setting) and ensure continuous quality assurance of algorithms deployed in a diagnostic setting. In addition, cytopathologists should ensure that these algorithms are developed, trained, tested and deployed in an ethical manner. Cytopathologists need to become informed consumers of these AI algorithms by understanding their workings and limitations, how their performance is assessed and how to validate and verify their output in clinical practice.

McAlpine Ewen D, Michelow Pamela

2020-Jan-19

Artificial Intelligence, Cytopathology, Digital Pathology

General General

Identification of Psychoactive Metabolites from Cannabis sativa, Its Smoke, and Other Phytocannabinoids Using Machine Learning and Multivariate Methods.

In ACS omega ; h5-index 0.0

Cannabis sativa is a medicinal plant with a very complex matrix composed mainly of cannabinoids and terpenoids. Numerous reports in the literature indicate that tetrahydrocannabinol (THC) is the only major psychoactive metabolite in C. sativa. It is important to explore other metabolites that may exhibit varying degrees of psychoactive character, and to identify metabolites targeting other receptors, such as opioid, γ-aminobutyric acid (GABA), glycine, serotonin, and nicotine receptors, present in C. sativa, the smoke of C. sativa, and other phytocannabinoid matrices. This article aims to achieve this goal by applying a battery of computational tools, including machine learning and multivariate methods, to the physiochemical and absorption, distribution, metabolism, excretion, and toxicity (ADMET) descriptors of 468 metabolites from C. sativa, its smoke, and other phytocannabinoids. The structure-activity relationship (SAR) analysis showed that 54 metabolites from C. sativa have high scaffold homology with THC. Its implications for the route of administration and factors affecting the SAR are discussed. C. sativa smoke contains metabolites that may interact with GABA and glycine receptors.

Jagannathan Ramesh

2020-Jan-14

General General

Toward point-of-care ultrasound estimation of fetal gestational age from the trans-cerebellar diameter using CNN-based ultrasound image analysis.

In Journal of medical imaging (Bellingham, Wash.) ; h5-index 0.0

Obstetric ultrasound is a fundamental ingredient of modern prenatal care with many applications including accurate dating of a pregnancy, identifying pregnancy-related complications, and diagnosis of fetal abnormalities. However, despite its many benefits, two factors currently prevent wide-scale uptake of this technology for point-of-care clinical decision-making in low- and middle-income country (LMIC) settings. First, there is a steep learning curve for scan proficiency, and second, there has been a lack of easy-to-use, affordable, and portable ultrasound devices. We introduce a framework toward addressing these barriers, enabled by recent advances in machine learning applied to medical imaging. The framework is designed to be realizable as a point-of-care ultrasound (POCUS) solution with an affordable wireless ultrasound probe, a smartphone or tablet, and automated machine-learning-based image processing. Specifically, we propose a machine-learning-based algorithm pipeline designed to automatically estimate the gestational age of a fetus from a short fetal ultrasound scan. We present proof-of-concept evaluation of accuracy of the key image analysis algorithms for automatic head transcerebellar plane detection, automatic transcerebellar diameter measurement, and estimation of gestational age on conventional ultrasound data simulating the POCUS task and discuss next steps toward translation via a first application on clinical ultrasound video from a low-cost ultrasound probe.

Maraci Mohammad A, Yaqub Mohammad, Craik Rachel, Beriwal Sridevi, Self Alice, von Dadelszen Peter, Papageorghiou Aris, Noble J Alison

2020-Jan

gestational age, global health, machine learning, point-of-care ultrasound, prenatal health

Ophthalmology Ophthalmology

Current applications of machine learning in the screening and diagnosis of glaucoma: a systematic review and Meta-analysis.

In International journal of ophthalmology ; h5-index 0.0

AIM : To compare the effectiveness of two well described machine learning modalities, ocular coherence tomography (OCT) and fundal photography, in terms of diagnostic accuracy in the screening and diagnosis of glaucoma.

METHODS : A systematic search of Embase and PubMed databases was undertaken up to 1st of February 2019. Articles were identified alongside their reference lists and relevant studies were aggregated. A Meta-analysis of diagnostic accuracy in terms of area under the receiver operating curve (AUROC) was performed. For the studies which did not report an AUROC, reported sensitivity and specificity values were combined to create a summary ROC curve which was included in the Meta-analysis.

RESULTS : A total of 23 studies were deemed suitable for inclusion in the Meta-analysis. This included 10 papers from the OCT cohort and 13 from the fundal photo cohort. Random effects Meta-analysis gave a pooled AUROC of 0.957 (95%CI=0.917 to 0.997) for fundal photos and 0.923 (95%CI=0.889 to 0.957) for the OCT cohort. The slightly higher accuracy of fundal photo methods is likely attributable to the much larger database of images used to train the models (59 788 vs 1743).

CONCLUSION : No demonstrable difference is shown between the diagnostic accuracy of the two modalities. The ease of access and lower cost associated with fundal photo acquisition make that the more appealing option in terms of screening on a global scale, however further studies need to be undertaken, owing largely to the poor study quality associated with the fundal photography cohort.

Murtagh Patrick, Greene Garrett, O’Brien Colm

2020

Meta-analysis, diagnosis, fundal photography, glaucoma, machine learning, ocular coherence tomography
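
The pooled AUROCs above come from a random-effects meta-analysis. A common estimator for this is DerSimonian-Laird; the sketch below applies it to illustrative AUROC values and variances (not the review's data, and not necessarily the exact estimator the authors used):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) of study-level estimates."""
    effects, variances = map(np.asarray, (effects, variances))
    w = 1 / variances                        # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = 1 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, se, tau2

# Illustrative study AUROCs and variances
aurocs = [0.96, 0.91, 0.93, 0.89, 0.95]
variances = [0.0004, 0.0009, 0.0006, 0.0012, 0.0005]
pooled, se, tau2 = dersimonian_laird(aurocs, variances)
print(f"pooled AUROC {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```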

General General

Model-Informed Artificial Intelligence: Reinforcement Learning for Precision Dosing.

In Clinical pharmacology and therapeutics ; h5-index 0.0

The availability of multi-dimensional data together with the development of modern techniques for data analysis represents an exceptional opportunity for clinical pharmacology. Data science - defined in this special issue as the novel approaches to the collection, aggregation and analysis of data - can significantly contribute to characterizing drug-response variability at the individual level, thus enabling clinical pharmacology to become a critical contributor to personalized healthcare through precision dosing. We propose a mini-review of methodologies for achieving precision dosing, with a focus on an artificial intelligence technique called reinforcement learning, which is currently used for individualizing dosing regimens in patients with life-threatening diseases. We highlight the interplay of such techniques with conventional pharmacokinetic/pharmacodynamic approaches and discuss their applicability in drug research and early development.

Ribba Benjamin, Dudal Sherri, Lavé Thierry, Peck Richard W

2020-Jan-19
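
Reinforcement learning for dose individualization is often introduced with a tabular toy problem: states are discretized drug concentrations, actions are dose levels, and the reward favors a therapeutic window. A deliberately simplified Q-learning sketch (the dynamics, bins, and reward are invented for illustration, not a pharmacokinetic model from the review):

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 6  # concentration bins 0..9, dose levels 0..5
TARGET = range(4, 7)         # therapeutic window: bins 4-6

def step(state, dose):
    """Toy one-compartment-like dynamics: half the drug clears, dose adds."""
    nxt = int(np.clip(round(0.5 * state + dose), 0, N_STATES - 1))
    reward = 1.0 if nxt in TARGET else -0.1 * abs(nxt - 5)
    return nxt, reward

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(20_000):              # episodes of 10 dosing decisions each
    s = int(rng.integers(N_STATES))
    for _ in range(10):
        # epsilon-greedy action selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print("greedy dose per concentration bin:", policy)
```

A real precision-dosing agent would learn from patient covariates and a fitted PK/PD simulator rather than this hand-made grid, but the update rule is the same.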

General General

An application of machine learning in pharmacovigilance: estimating likely patient genotype from phenotypical manifestations of fluoropyrimidine toxicity.

In Clinical pharmacology and therapeutics ; h5-index 0.0

Dihydropyrimidine dehydrogenase (DPD)-deficient patients might only become aware of their genotype after exposure to dihydropyrimidines, if testing is performed. Case reports in pharmacovigilance databases might contain only phenotypical manifestations of DPD deficiency, without information on the genotype. This makes it difficult to estimate the number of cases due to DPD deficiency. Automated machine learning models were developed to learn patterns of phenotypical manifestations of toxicity, which were then used as a surrogate to estimate the number of cases of DPD-related toxicity. Results indicate that between 8,878 (7.0%) and 16,549 (13.1%) patients have a profile similar to DPD-deficient status. Feature importance matches the known end-organ damage of DPD-related toxicity; however, accuracies in the range of 90% suggest the presence of overfitting, so results need to be interpreted carefully. This study shows the potential for machine learning in the regulatory context, but additional studies are required.

Pinheiro Luis Correia, Durand Julie, Dogné Jean-Michel

2020-Jan-19

Adverse Drug Reactions, Adverse Events, Bioinformatics, Estimation methods, Pharmacovigilance

Surgery Surgery

Mitral valve flattening and parameter mapping for patient-specific valve diagnosis.

In International journal of computer assisted radiology and surgery ; h5-index 0.0

PURPOSE : Intensive planning and analysis from echocardiography are a crucial step before reconstructive surgeries are applied to malfunctioning mitral valves. Volume visualizations of echocardiographic data are often used in clinical routine. However, they lack a clear visualization of the crucial factors for decision making.

METHODS : We build upon patient-specific mitral valve surface models segmented from echocardiography that represent the valve's geometry but suffer from self-occlusions due to their complex 3D shape. We transfer these to 2D maps by unfolding their geometry, resulting in a novel 2D representation that maintains anatomical resemblance to the 3D geometry. It can be visualized together with color mappings and presented to physicians to diagnose the pathology at a glance, without the need for further scene interaction. Furthermore, it facilitates the computation of a Pathology Score, which can be used for diagnosis support.

RESULTS : Quality and effectiveness of the proposed methods were evaluated through a user survey conducted with domain experts. We assessed pathology detection accuracy using 3D valve models in comparison with the novel visualizations. Classification accuracy increased by 5.3% across all tested valves and by 10.0% for prolapsed valves. Further, the participants' understanding of the relation between 3D and 2D views was evaluated. The Pathology Score is found to have potential to support discriminating pathologic valves from normal valves.

CONCLUSIONS : In summary, our survey shows that pathology detection can be improved in comparison with simple 3D surface visualizations of the mitral valve. The correspondence between the 2D and 3D representations is comprehensible, and color-coded pathophysiological magnitudes further support the clinical assessment.

Lichtenberg Nils, Eulzer Pepe, Romano Gabriele, Brčić Andreas, Karck Matthias, Lawonn Kai, De Simone Raffaele, Engelhardt Sandy

2020-Jan-18

Mitral valve, Parameterization, Quantification, Visualization

General General

Oxytocin effects on the resting-state mentalizing brain network.

In Brain imaging and behavior ; h5-index 0.0

Oxytocin (OT) has modulatory effects on both human behavior and the brain, effects that are not limited to specific brain areas but may also extend to connectivity with other brain regions. Evidence indicates that OT's effects on human behavior are multifaceted, including increased trust, decreased anxiety, and enhanced empathy and bonding behavior. Given the vital role of mentalizing in understanding others, here we examine whether OT has a general effect on the mentalizing brain network that is associated with related social behaviors and personality traits. Using a randomized, double-blind, placebo-controlled group design, we investigated resting-state functional magnetic resonance imaging after intranasal OT or placebo administration. Functional connectivity (FC) maps seeded in the left/right temporoparietal junction (lTPJ/rTPJ) showed that OT significantly increased connectivity between the rTPJ and the default attention network (DAN) but decreased FC between the lTPJ and the medial prefrontal network (MPN). Using a machine learning approach, we show that the identified altered FCs of the TPJ can classify the OT and placebo (PL) groups. Moreover, individual empathy traits modulated the FC between the left TPJ and the right rectus (RECT), which showed a positive correlation with empathic concern in the PL group but a negative correlation in the OT group. These results demonstrate that OT has a significant effect on FC with the lTPJ and rTPJ, brain regions that are critical for mentalizing, and that empathic concern modulates this FC. These findings advance our understanding of the neural mechanisms by which OT modulates social behaviors, especially in social interactions involving mentalizing.

Wu Haiyan, Feng Chunliang, Lu Xiaping, Liu Xun, Liu Quanying

2020-Jan-18

Functional connectivity, Mentalizing network, Oxytocin, Temporoparietal junction, fMRI

General General

A review of thyroid gland segmentation and thyroid nodule segmentation methods for medical ultrasound images.

In Computer methods and programs in biomedicine ; h5-index 0.0

Background and objective : Thyroid image segmentation is an indispensable part of computer-aided diagnosis systems and medical image diagnosis of thyroid diseases. There have been dozens of studies on thyroid gland segmentation and thyroid nodule segmentation in ultrasound images. The aim of this work is to categorize and review the thyroid gland and thyroid nodule segmentation methods used in medical ultrasound.

Methods : This work proposes a categorization of thyroid gland and thyroid nodule segmentation methods according to their theoretical bases. The methods are categorized into four groups: contour- and shape-based methods, region-based methods, machine and deep learning methods, and hybrid methods. Representative articles are reviewed with detailed descriptions of the methods and analyses of the correlations between them. The evaluation metrics for the reviewed segmentation methods are named uniformly in this work, and segmentation performance is compared using these uniformly named metrics.

Results : After careful investigation, 28 representative papers were selected for comprehensive analysis and comparison in this review. The dominant thyroid gland segmentation methods are machine and deep learning methods. Training on massive datasets gives these models better segmentation performance and robustness, but deep learning models usually require large amounts of labeled training data and long training times. For thyroid nodule segmentation, the most common methods are contour- and shape-based methods, which achieve good segmentation performance; however, most of them have been tested only on small datasets.

Conclusions : Based on comprehensive consideration of the application scenario, image features, method practicability and segmentation performance, an appropriate segmentation method can be selected for a specific situation. Furthermore, several limitations of current thyroid ultrasound image segmentation methods are presented that may be overcome in future studies, such as segmentation of pathological or abnormal thyroid glands, identification of specific nodular diseases, and standard thyroid ultrasound image datasets.

Chen Junying, You Haijun, Li Kai

2020-Jan-09

Gland segmentation method, Nodule segmentation method, Segmentation performance analysis, Thyroid ultrasound image

General General

Computational approaches for detection of cardiac rhythm abnormalities: Are we there yet?

In Journal of electrocardiology ; h5-index 0.0

The analysis of an electrocardiogram (ECG) can provide vital information on the electrical activity of the heart and is crucial for the accurate diagnosis of cardiac arrhythmias. Due to the nature of some arrhythmias, this can be a time-consuming and difficult process. The advent of novel machine learning technologies in this field has the potential to revolutionise the use of the ECG. In this review, we outline key advances in ECG analysis for atrial, ventricular and complex multiform arrhythmias, and discuss the current limitations of the technology and the barriers that must be overcome before clinical integration is feasible.

Zhang Kevin, Aleexenko Vadim, Jeevaratnam Kamalan

2019-Dec-17

Algorithms, Arrhythmia, Artificial intelligence, Atrial fibrillation, Computational, Electrocardiogram, Risk prediction

General General

How much are we exposed to alcohol in electronic media? Development of the Alcoholic Beverage Identification Deep Learning Algorithm (ABIDLA).

In Drug and alcohol dependence ; h5-index 64.0

BACKGROUND : Evidence demonstrates that seeing alcoholic beverages in electronic media increases alcohol initiation and frequent and excessive drinking, particularly among young people. To efficiently assess this exposure, the aim was to develop the Alcoholic Beverage Identification Deep Learning Algorithm (ABIDLA) to automatically identify beer, wine and champagne/sparkling wine from images.

METHODS : Using a specifically developed software, three coders annotated 57,186 images downloaded from Google. Supplemented by 10,000 images from ImageNet, images were split randomly into training data (70 %), validation data (10 %) and testing data (20 %). For retest reliability, a fourth coder re-annotated a random subset of 2004 images. Algorithms were trained using two state-of-the-art convolutional neural networks, Resnet (with different depths) and Densenet-121.

RESULTS : With a correct classification (accuracy) of 73.75 % when using six beverage categories (beer glass, beer bottle, beer can, wine, champagne, and other images), 84.09 % with three (beer, wine/champagne, others) and 85.22 % with two (beer/wine/champagne, others), Densenet-121 slightly outperformed all Resnet models. The highest accuracy was obtained for wine (78.91 %) followed by beer can (77.43 %) and beer cup (73.56 %). Interrater reliability was almost perfect between the coders and the expert (Kappa = .903) and substantial between Densenet-121 and the coders (Kappa = .681).

CONCLUSIONS : Free from any response or coding burden and with a relatively high accuracy, the ABIDLA offers the possibility to screen all kinds of electronic media for images of alcohol. Providing more comprehensive evidence on exposure to alcoholic beverages is important because exposure instigates alcohol initiation and frequent and excessive drinking.
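The interrater reliabilities above are reported as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the statistic; the category ids and toy labels are hypothetical, not the study's annotations:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labelled independently at their own rates.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy annotations: two coders labelling six images
# (0 = beer, 1 = wine/champagne, 2 = other).
coder_1 = [0, 0, 1, 1, 2, 2]
coder_2 = [0, 0, 1, 2, 2, 2]
kappa = cohens_kappa(coder_1, coder_2)
```

Values near 1 indicate near-perfect agreement (as for the coders vs. the expert), while values in the 0.6-0.8 range, like the model-vs-coder figure, count as substantial.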

Kuntsche Emmanuel, Bonela Abraham Albert, Caluzzi Gabriel, Miller Mia, He Zhen

2020-Jan-09

Alcohol exposure, Beverage recognition, Deep learning algorithms

General General

Prediction of peptide binding to MHC using machine learning with sequence and structure-based feature sets.

In Biochimica et biophysica acta. General subjects ; h5-index 0.0

Selecting peptides that bind strongly to the major histocompatibility complex (MHC) for inclusion in a vaccine has therapeutic potential for infections and tumors. Machine learning models trained on sequence data exist for peptide:MHC (p:MHC) binding predictions. Here, we train support vector machine classifier (SVMC) models on physicochemical sequence-based and structure-based descriptor sets to predict peptide binding to a well-studied model mouse MHC I allele, H-2Db. Recursive feature elimination and two-way forward feature selection were also performed. Although low on sensitivity compared to the current state-of-the-art algorithms, models based on physicochemical descriptor sets achieve specificity and precision comparable to the most popular sequence-based algorithms. The best-performing model uses a hybrid descriptor set containing both sequence-based and structure-based descriptors. Interestingly, close to half of the physicochemical sequence-based descriptors remaining in the hybrid model were properties of the anchor positions, residues 5 and 9 in the peptide sequence. In contrast, residues flanking position 5 make little to no residue-specific contribution to the binding affinity prediction. The results suggest that machine-learned models incorporating both sequence-based descriptors and structural data may provide information on specific physicochemical properties determining binding affinities.
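Recursive feature elimination with a linear-kernel SVM, one of the selection schemes mentioned above, can be sketched in scikit-learn; the synthetic descriptor matrix and feature counts below are illustrative, not the study's data:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for physicochemical descriptors: 200 peptides x 20
# features, where only the first 4 features carry the binding signal.
X = rng.normal(size=(200, 20))
y = (X[:, :4].sum(axis=1) > 0).astype(int)

# A linear-kernel SVC exposes coef_, which RFE uses to rank features and
# iteratively prune the least informative one (step=1).
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1).fit(X, y)
kept = np.flatnonzero(selector.support_)
```

On real descriptor sets, inspecting `kept` (and `selector.ranking_`) is what reveals which positions, such as the anchor residues, survive elimination.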

Aranha Michelle P, Spooner Catherine, Demerdash Omar, Czejdo Bogdan, Smith Jeremy C, Mitchell Julie C

2020-Jan-16

Binding affinity, MHC-peptide, Machine learning

General General

From Summary Statistics to Gene Trees: Methods for Inferring Positive Selection.

In Trends in genetics : TIG ; h5-index 0.0

Methods to detect signals of natural selection from genomic data have traditionally emphasized the use of simple summary statistics. Here, we review a new generation of methods that consider combinations of conventional summary statistics and/or richer features derived from inferred gene trees and ancestral recombination graphs (ARGs). We also review recent advances in methods for population genetic simulation and ARG reconstruction. Finally, we describe opportunities for future work on a variety of related topics, including the genetics of speciation, estimation of selection coefficients, and inference of selection on polygenic traits. Together, these emerging methods offer promising new directions in the study of natural selection.

Hejase Hussein A, Dukler Noah, Siepel Adam

2020-Jan-15

ancestral recombination graph, machine learning, simulation

General General

Discovering hidden information in biosignals from patients by artificial intelligence.

In Korean journal of anesthesiology ; h5-index 0.0

Biosignals such as the electrocardiogram or photoplethysmogram have been widely used for monitoring and determining the status of patients. However, it has recently been discovered that these biosignals contain more information than has traditionally been used, once artificial intelligence (AI) is applied. The most meaningful advancement in current AI has been deep learning. Deep learning-based models currently show the best performance in most areas, owing to their distinguishing ability to extract important features from raw data: deep learning extracts features by itself, without human feature engineering, provided the amount of data is sufficient. These AI-derived features offer the opportunity to see novel information that has been hidden for many decades. Such information could be used as a digital biomarker for detecting or predicting clinical outcomes or events without further, more invasive evaluation. However, because deep learning models are black boxes, they can be difficult to interpret for users who hold the traditional view of biosignals. To make proper use of the novel information being discovered by AI, and to adopt it in real clinical practice, clinicians need basic knowledge of AI and machine learning. This review covers the basics of AI and machine learning for clinicians and their feasibility in real practice in the near future.

Yoon Dukyong, Jang Jong-Hwan, Choi Byung Jin, Kim Tae Young, Han Chang Ho

2020-Jan-16

Artificial intelligence, Biomarker, Computer-Assisted Diagnosis, Deep Learning, Electrocardiography, Electrodiagnosis

Pathology Pathology

DNA-Methylation-based Classification of Paediatric Brain Tumours.

In Neuropathology and applied neurobiology ; h5-index 39.0

DNA-methylation based machine learning algorithms represent powerful diagnostic tools that are currently emerging for several fields of tumour classification. For various reasons, paediatric brain tumours have been the main driving forces behind this rapid development and brain tumour classification tools are likely further advanced than in any other field of cancer diagnostics. In this review, we will discuss the main characteristics that were important for this rapid advance, namely the high clinical need for improvement of paediatric brain tumour diagnostics, the robustness of methylated DNA and the consequential possibility to generate high quality molecular data from archival formalin-fixed paraffin-embedded pathology specimens, the implementation of a single array platform by most laboratories allowing data exchange and data pooling to an unprecedented extent, as well as the high suitability of the data format for machine learning. We will further discuss the four most central output qualities of DNA methylation profiling in a diagnostic setting (tumour classification, tumour sub-classification, copy number analysis and guidance for additional molecular testing) individually for the most frequent types of paediatric brain tumours. Lastly, we will discuss DNA methylation profiling as a tool for the detection of new paediatric brain tumour classes and will give an overview of the rapidly growing family of new tumours identified with the aid of this technique.

Perez Eilís, Capper David

2020-Jan-19

Oncology Oncology

An Artificial Neural Network to model Response of a Radiotherapy Beam Monitoring System.

In Medical physics ; h5-index 59.0

PURPOSE : The Integral Quality Monitor (IQM) is a real-time radiotherapy beam monitoring system, which consists of a spatially sensitive large-area ion chamber, mounted at the collimator of the linear accelerator (linac), and a calculation algorithm that predicts the detector signal for each beam segment. By comparing the measured and predicted signals, the system validates the beam delivery. The current commercial version of IQM uses an analytic method to predict the signal, which requires a semi-empirical approach to determine and optimize various calculation parameters. The process of developing the calculation model is complex and time-consuming; moreover, the model cannot be easily generalized across beam delivery platforms with different combinations of beam energy, beam flattening, beam shaping elements, and linac models. Therefore, as an alternative solution, we investigated the feasibility of developing a machine learning (ML) method, using an artificial neural network (ANN), to predict the ion chamber signal. In developing an ANN, it is not necessary to explicitly account for each element of the beam's interactions with the various structures in its path to the ion chamber.

METHODS : The ANN was designed as a multi-layer perceptron (MLP). The input layer consisted of multiple features derived from the geometrical characteristics of beam segments. The gradient descent error backpropagation technique was used to train the ANN. The combined training dataset included 270 rectangular fields and 801 clinical IMRT fields delivered using 6 MV beams on Varian TrueBeam™ and Elekta Infinity™ linacs. Each of 12 different ANN configurations (3 different sets of input features × 4 different numbers of hidden nodes) was simulated 10 times, with a randomly selected 80% of the data used for training and the remainder for validation.

RESULTS : ANNs with one hidden layer of 10 nodes and 10 input features provided optimal results. Once the feature sets were extracted, the time required for network training was on the order of a few minutes, and the time required to perform an output calculation per field was only a fraction of a second. More than 95% of clinical IMRT segments were calculated within ±3.0% modelling error for the Varian TrueBeam (90% within ±3.3% for the Elekta Infinity). A total of 3320 VMAT segments from the TrueBeam were calculated using the ANN trained with IMRT fields. More than 95% of the cumulative VMAT beam segments were within 3.6% modelling error, similar to the performance for IMRT segments. In general, the modelling error was found to be inversely proportional to the size and intensity of the beam segment.

CONCLUSION : A prototype ANN has been developed for predicting the signals of the IQM system, with substantially less effort than the analytic model required. The performance of the ANN was found to be at least equivalent to that of the analytic method, in terms of average and maximum error, for 6 MV beams on both the Varian TrueBeam and Elekta Infinity platforms.
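The reported optimum, one hidden layer of 10 nodes over 10 geometric input features, can be sketched with scikit-learn's MLPRegressor; the features and target below are synthetic stand-ins, not IQM data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# Hypothetical geometric features per beam segment (area, extents, offset
# from axis, etc.): 500 segments x 10 features, as in the optimal ANN.
X = rng.uniform(size=(500, 10))
# Stand-in target: the detector signal grows with segment "area"
# (feature 0) plus a smaller contribution and noise.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.05, size=500)

# One hidden layer of 10 nodes, mirroring the configuration reported above.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(X, y)
r2 = model.score(X, y)
```

With a trained model, per-segment prediction is a single fast forward pass, consistent with the sub-second calculation time reported.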

Cho Young-Bin, Farrokhkish Makan, Norrlinger Bern, Heaton Robert, Jaffray David, Islam Mohammad

2020-Jan-19

Artificial Neural Network, Artificial intelligence in numerical model, Integral Quality Monitoring (IQM) system, Machine learning for quality assurance, Radiation therapy

General General

Fully Automated Segmentation of Left Ventricular Scar from 3D Late Gadolinium Enhancement Magnetic Resonance Imaging Using a Cascaded Multi-Planar U-Net (CMPU-Net).

In Medical physics ; h5-index 59.0

PURPOSE : 3D late gadolinium enhancement magnetic resonance (LGE-MR) imaging enables the quantification of myocardial scar at high resolution with unprecedented volumetric visualization. Automated segmentation of myocardial scar is critical for potential clinical translation of this technique given the number of tomographic images acquired.

METHODS : In this paper, we describe the development of a cascaded multi-planar U-Net (CMPU-Net) to efficiently segment the boundary of the left ventricle (LV) myocardium and scar from 3D LGE-MR images. In this approach, two subnets, each containing three U-Nets, were cascaded to first segment the LV myocardium and then segment the scar within the pre-segmented LV myocardium. The U-Nets were trained separately using 2D slices extracted from axial, sagittal, and coronal slices of 3D LGE-MR images. We used 3D LGE-MR images from 34 subjects with chronic ischemic cardiomyopathy. The U-Nets were trained using 8430 slices, extracted in three orthogonal directions from 18 images. In the testing phase, the outputs of the U-Nets of each subnet were combined using a majority voting system for the final label prediction of each voxel in the image. The developed method was tested for accuracy by comparing its results to manual segmentations of LV myocardium and LV scar from 7250 slices extracted from 16 3D LGE-MR images. Our method was also compared to numerous alternative methods based on machine learning, energy minimization, and intensity thresholds.

RESULTS : Our algorithm reported a mean Dice similarity coefficient (DSC), absolute volume difference (AVD), and Hausdorff distance (HD) of 85.14% ± 3.36%, 43.72 ± 27.18 cm3 , and 19.21 ± 4.74 mm for determining the boundaries of LV myocardium from LGE-MR images. Our method also yielded a mean DSC, AVD, and HD of 88.61% ± 2.54%, 9.33 ± 7.24 cm3 , and 17.04 ± 9.93 mm for LV scar segmentation on the unobserved test dataset. Our method significantly outperformed the alternative techniques in segmentation accuracy (p-value<0.05).

CONCLUSION : The CMPU-Net method provided fully automated segmentation of LV scar from 3D LGE-MR images and outperformed the alternative techniques.
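The per-voxel majority vote over the axial, sagittal and coronal U-Net outputs can be sketched as follows; the toy 2×2×2 volumes are illustrative stand-ins for the three predicted masks:

```python
import numpy as np

def majority_vote(*masks):
    """Combine binary segmentation masks by per-voxel majority vote."""
    stacked = np.stack(masks).astype(int)
    # A voxel is foreground if more than half of the masks mark it.
    return (stacked.sum(axis=0) * 2 > stacked.shape[0]).astype(int)

# Toy volumes standing in for axial/sagittal/coronal U-Net outputs.
axial    = np.array([1, 1, 0, 0, 1, 0, 1, 1]).reshape(2, 2, 2)
sagittal = np.array([1, 0, 0, 0, 1, 1, 1, 0]).reshape(2, 2, 2)
coronal  = np.array([1, 1, 1, 0, 0, 1, 1, 0]).reshape(2, 2, 2)
fused = majority_vote(axial, sagittal, coronal)
```

With three voters, a voxel is labelled scar (or myocardium) when at least two of the orthogonal U-Nets agree, which suppresses direction-specific errors.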

Zabihollahy Fatemeh, Rajchl Martin, White A James, Ukwatta Eranga

2020-Jan-18

Late gadolinium enhancement magnetic resonance imaging, U-Net, convolutional neural network, left ventricle myocardium, left ventricular scar

General General

Loop expansion around the Bethe solution for the random magnetic field Ising ferromagnets at zero temperature.

In Proceedings of the National Academy of Sciences of the United States of America ; h5-index 0.0

We apply to the random-field Ising model at zero temperature ([Formula: see text]) the perturbative loop expansion around the Bethe solution. A comparison with the standard ϵ expansion is made, highlighting the key differences that make the expansion around the Bethe solution much more appropriate to correctly describe strongly disordered systems, especially those controlled by a [Formula: see text] renormalization group (RG) fixed point. The latter loop expansion produces an effective theory with cubic vertices. We compute the one-loop corrections due to cubic vertices, finding additional terms that are absent in the ϵ expansion. However, these additional terms are subdominant with respect to the standard, supersymmetric ones; therefore, dimensional reduction is still valid at this order of the loop expansion.

Angelini Maria Chiara, Lucibello Carlo, Parisi Giorgio, Ricci-Tersenghi Federico, Rizzo Tommaso

2020-Jan-17

Bethe lattices, Ising model, critical exponents, disordered systems, perturbative expansion

General General

Application of machine learning to predict monomer retention of therapeutic proteins after long term storage.

In International journal of pharmaceutics ; h5-index 67.0

An important aspect of initial developability assessments, as well as of formulation development and selection of therapeutic proteins, is the evaluation of data obtained under accelerated stress conditions, i.e. at elevated temperatures. We propose the application of artificial neural networks (ANNs) to predict long-term stability under real storage conditions from accelerated stability studies and other high-throughput biophysical properties, e.g. the first apparent temperature of unfolding (Tm). Our models were trained on therapeutically relevant proteins, including monoclonal antibodies, in various pharmaceutically relevant formulations. Further, we developed network architectures with good predictive power using the smallest number of input features, i.e. the least experimental effort, needed to train the network. This provides an empirical means of highlighting the most important parameters in the prediction of real-time protein stability. Several models were also developed with a different validation scheme (i.e. leave-one-protein-out cross-validation) to test the robustness and the limitations of our approach. Finally, we apply surrogate machine learning algorithms (e.g. linear regression) to build trust in the ANNs' decision-making procedure and to highlight the connection between the leading inputs and the outputs.
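Leave-one-protein-out cross-validation, used above to probe robustness, amounts to grouping formulation rows by protein and holding out one protein per fold; a minimal sketch with hypothetical protein labels:

```python
import numpy as np

def leave_one_protein_out(protein_ids):
    """Yield (protein, train_idx, test_idx), holding out one protein at a time."""
    ids = np.asarray(protein_ids)
    for protein in np.unique(ids):
        test = np.flatnonzero(ids == protein)
        train = np.flatnonzero(ids != protein)
        yield protein, train, test

# Each formulation row is tagged with the protein it belongs to
# (names are hypothetical, not the study's panel).
proteins = ["mAb1", "mAb1", "mAb2", "mAb2", "mAb2", "BSA"]
splits = {p: (train.tolist(), test.tolist())
          for p, train, test in leave_one_protein_out(proteins)}
```

Scoring a model only on the held-out protein tests whether the network generalizes to molecules it has never seen, a stricter bar than random row-wise splits.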

Gentiluomo Lorenzo, Roessner Dierk, Frieß Wolfgang

2020-Jan-14

artificial neural network, biopharmaceutics, drug development, machine learning, protein aggregation, protein formulation, protein stability

Oncology Oncology

Dynamic readmission prediction using routine postoperative laboratory results after radical cystectomy.

In Urologic oncology ; h5-index 0.0

OBJECTIVE : To determine if the addition of electronic health record data enables better risk stratification and readmission prediction after radical cystectomy. Despite efforts to reduce their frequency and severity, complications and readmissions following radical cystectomy remain common. Leveraging readily available, dynamic information such as laboratory results may allow for improved prediction and targeted interventions for patients at risk of readmission.

METHODS : We used an institutional electronic medical records database to obtain demographic, clinical, and laboratory data for patients undergoing radical cystectomy. We characterized the trajectory of common postoperative laboratory values during the index hospital stay using support vector machine learning techniques. We compared models with and without laboratory results to assess predictive ability for readmission.

RESULTS : Among 996 patients who underwent radical cystectomy, 259 patients (26%) experienced a readmission within 30 days. During the first week after surgery, median daily values for white blood cell count, urea nitrogen, bicarbonate, and creatinine differentiated readmitted and nonreadmitted patients. Inclusion of laboratory results greatly increased the ability of models to predict 30-day readmissions after cystectomy.

CONCLUSIONS : Common postoperative laboratory values may have discriminatory power to help identify patients at higher risk of readmission after radical cystectomy. Dynamic sources of physiological data such as laboratory values could enable more accurate identification and targeting of patients at greatest readmission risk after cystectomy. This is a proof of concept study that suggests further exploration of these techniques is warranted.
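Feeding daily postoperative laboratory values into a classifier requires summarizing each trajectory as a fixed-length feature vector; one plausible sketch (the feature choice and WBC values below are assumptions, not the study's feature set):

```python
import numpy as np

def trajectory_features(daily_values):
    """Summarize a postoperative lab trajectory as (last value, mean, slope)."""
    days = np.arange(len(daily_values))
    # Slope of a least-squares line through the daily values.
    slope = np.polyfit(days, daily_values, 1)[0]
    return np.array([daily_values[-1], np.mean(daily_values), slope])

# Hypothetical daily white-blood-cell counts for one patient's index stay.
wbc = [12.0, 11.0, 9.5, 9.0, 8.0]
feats = trajectory_features(wbc)
```

Vectors like this, one per lab test per patient, are the kind of dynamic input that could then be passed to a support vector machine alongside static demographic and clinical variables.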

Kirk Peter S, Liu Xiang, Borza Tudor, Li Benjamin Y, Sessine Michael, Zhu Kevin, Lesse Opal, Qin Yongmei, Jacobs Bruce, Urish Ken, Helm Jonathan, Gilbert Scott, Weizer Alon, Montgomery Jeffrey, Hollenbeck Brent K, Lavieri Mariel, Skolarus Ted A

2020-Jan-14

Cystectomy, Electronic health records, Patient readmission, Support vector machine

General General

Resting-state EEG reveals global network deficiency in dyslexic children.

In Neuropsychologia ; h5-index 0.0

Developmental dyslexia is known to involve dysfunctions in multiple brain regions; however, a clear understanding of the brain networks behind this disorder is still lacking. The present study examined functional network connectivity in Chinese dyslexic children with resting-state electroencephalography (EEG) recordings. EEG data were recorded from 27 dyslexic children and 40 age-matched controls, and a minimum spanning tree (MST) analysis was performed to examine the network connectivity in the delta, theta, alpha, and beta frequency bands. The results show that, compared to age-matched controls, Chinese dyslexic children had global network deficiencies in the beta band, and the network topology was more path-like. Moderate correlations were observed between the MST degree metric and tests of rapid automatized naming and morphological awareness. These observations, together with findings in alphabetic languages, show that brain network deficiency is a common neural underpinning of dyslexia across writing systems.
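An MST over a functional-connectivity matrix keeps only the strongest links joining all channels; since SciPy's routine minimizes edge weights, connectivity values are inverted first. A toy sketch, with an illustrative 4-channel matrix rather than real EEG data:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy symmetric functional-connectivity matrix for 4 EEG channels.
fc = np.array([
    [0.0, 0.8, 0.2, 0.1],
    [0.8, 0.0, 0.6, 0.3],
    [0.2, 0.6, 0.0, 0.7],
    [0.1, 0.3, 0.7, 0.0],
])

# Invert weights so the *minimum* spanning tree keeps the *strongest* links.
inv = np.where(fc > 0, 1.0 / fc, 0.0)
mst = minimum_spanning_tree(inv).toarray()
edges = np.transpose(np.nonzero(mst))

# Degree per node in the (undirected) tree; leaves have degree 1.
adj = (mst + mst.T) > 0
degree = adj.sum(axis=1)
```

Here the tree is a pure path (every non-leaf node has degree 2), the "more path-like" topology the study associates with dyslexic children; metrics such as degree, leaf fraction and diameter are computed from exactly this structure.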

Xue Huidong, Wang Zhiguo, Tan Yufei, Yang Hang, Fu Wanlu, Xue Licheng, Zhao Jing

2020-Jan-14

Developmental dyslexia, Functional network connectivity, Graph theory, Minimum spanning tree (MST), Resting-state electroencephalography (EEG)

General General

Prediction and targeting of GPCR oligomer interfaces.

In Progress in molecular biology and translational science ; h5-index 0.0

GPCR oligomerization has emerged as a hot topic in the GPCR field in recent years. Receptors that are part of these oligomers can influence each other's function, although it is not yet entirely understood how these interactions work. The existence of such a highly complex network of interactions between GPCRs opens up alternative targets for new therapeutic approaches. However, challenges remain in the characterization of these complexes, especially at the interface level. Different experimental approaches, such as FRET or BRET, are usually combined to study GPCR oligomer interactions. Computational methods have been applied as useful tools for retrieving information from GPCR sequences and from the few X-ray-resolved oligomeric structures that are available, as well as for predicting new and trustworthy GPCR oligomeric interfaces. Machine learning (ML) approaches have recently helped overcome some of the limitations of other methods. By joining and evaluating multiple structure-, sequence- and co-evolution-based features in the same algorithm, it is possible to dilute the issues that particular structures and residues raise for any single experimental methodology into all-encompassing algorithms capable of accurately predicting GPCR-GPCR interfaces. All these methods, used singly or in combination, provide useful information about GPCR oligomerization and its role in GPCR function and dynamics. Altogether, we present the experimental, computational and machine learning methods used to study oligomer interfaces, as well as strategies that have been used to target these dynamic complexes.

Barreto Carlos A V, Baptista Salete J, Preto António José, Matos-Filipe Pedro, Mourão Joana, Melo Rita, Moreira Irina

2020

Co-evolution, Dimerization, GPCRs, Hot spots, Interface prediction, Interface targeting, Machine learning, Sequence-based models, Structure-based models

Oncology Oncology

Digital governance in tax-funded European healthcare systems: from the Back office to patient empowerment.

In Israel journal of health policy research ; h5-index 0.0

Digital healthcare promises to achieve cost-efficiency gains, improve clinical effectiveness, support better public sector governance by enhancing transparency and accountability, and increase confidence in medical diagnoses, especially in the field of oncology. This article aims to discuss the benefits offered by digital technologies in tax-based European healthcare systems against the backdrop of structural bureaucratic rigidities and a slow pace of implementation.

Artificial intelligence (AI) will transform the existing delivery of healthcare services, inducing a redesign of public accountability systems and the traditional relationships between professionals and patients. Despite legitimate ethical and accountability concerns, which call for clearer guidance and regulation, digital governance of healthcare is a powerful means of empowering patients and improving their medical treatment in terms of quality and effectiveness. On the path to better health, the use of digital technologies has moved beyond the back office of administrative processes and procedures, and is now being applied to clinical activities and direct patient engagement.

Mattei Paola

2020-Jan-17

Accountability, Artificial intelligence, Digital health, Health care organizations, Patients’ engagement, Tax-funded health care systems

Public Health Public Health

Using cluster analysis to reconstruct dengue exposure patterns from cross-sectional serological studies in Singapore.

In Parasites & vectors ; h5-index 57.0

BACKGROUND : Dengue is a mosquito-borne viral disease caused by one of four serotypes (DENV1-4). Infection provides long-term homologous immunity against reinfection with the same serotype. Plaque reduction neutralization test (PRNT) is the gold standard to assess serotype-specific antibody levels. We analysed serotype-specific antibody levels obtained by PRNT in two serological surveys conducted in Singapore in 2009 and 2013 using cluster analysis, a machine learning technique that was used to identify the most common histories of DENV exposure.

METHODS : We explored the use of five distinct clustering methods (i.e. agglomerative hierarchical, divisive hierarchical, K-means, K-medoids and model-based clustering) with varying numbers of clusters (from 4 to 10) for each method. Weighted rank aggregation, a technique that combines a set of internal validity metrics into a single ranking, was adopted to determine the optimal algorithm, comprising the optimal clustering method and the optimal number of clusters.

RESULTS : The K-means algorithm with six clusters was selected as the algorithm with the highest weighted rank aggregation. The six clusters were characterised by (i) dominant DENV2 PRNT titres; (ii) co-dominant DENV1 and DENV2 titres with average DENV2 titre > average DENV1 titre; (iii) co-dominant DENV1 and DENV2 titres with average DENV1 titre > average DENV2 titre; (iv) low PRNT titres against DENV1-4; (v) intermediate PRNT titres against DENV1-4; and (vi) dominant DENV1-3 titres. Analyses of the relative size and age-stratification of the clusters by year of sample collection and the application of cluster analysis to the 2009 and 2013 datasets considered separately revealed the epidemic circulation of DENV2 and DENV3 between 2009 and 2013.

CONCLUSION : Cluster analysis is an unsupervised machine learning technique that can be applied to analyse PRNT antibody titres (without pre-established cut-off thresholds to indicate protection) to explore common patterns of DENV infection and infer the likely history of dengue exposure in a population.
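K-means clustering of PRNT titre profiles, the algorithm selected above, can be sketched with scikit-learn; for brevity this toy example uses three synthetic exposure patterns rather than the study's six clusters, and the titre values are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic log-titres against DENV1-4 for 90 sera, drawn from three
# illustrative exposure patterns: DENV2-dominant, naive/low, and
# DENV1+DENV2 co-dominant.
denv2_dom = rng.normal([1.0, 3.5, 1.0, 1.0], 0.3, size=(30, 4))
low       = rng.normal([0.5, 0.5, 0.5, 0.5], 0.3, size=(30, 4))
codom     = rng.normal([3.0, 3.0, 1.0, 1.0], 0.3, size=(30, 4))
titres = np.vstack([denv2_dom, low, codom])

# Fit K-means; each cluster centroid is then interpreted as a common
# history of DENV exposure, as in the analysis above.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(titres)
labels = km.labels_
```

Interpreting the fitted centroids (`km.cluster_centers_`), rather than applying fixed protection cut-offs, is what lets the unsupervised analysis recover exposure patterns directly from the titre data.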

Sangkaew Sorawat, Tan Li Kiang, Ng Lee Ching, Ferguson Neil M, Dorigatti Ilaria

2020-Jan-17

Cluster analysis, Dengue exposures, Serological survey

General General

Neurophysiological Vigilance Characterisation and Assessment: Laboratory and Realistic Validations Involving Professional Air Traffic Controllers.

In Brain sciences ; h5-index 0.0

Vigilance degradation usually causes significant performance decrements. It is also considered the major factor in the occurrence of the out-of-the-loop (OOTL) phenomenon. OOTL is strongly related to high levels of automation in operational contexts such as Air Traffic Management (ATM), and it can negatively impact Air Traffic Controllers' (ATCOs') engagement. Consequently, being able to monitor ATCOs' vigilance would be very important for preventing risky situations. In this context, the present study aimed to characterise and assess vigilance levels using electroencephalographic (EEG) measures. A first study, involving 13 participants in laboratory settings, allowed us to identify the neurophysiological features most closely related to vigilance decrements. Those results were then confirmed in realistic ATM settings with 10 professional ATCOs. The results demonstrated that (i) there was a significant performance decrement related to vigilance reduction; (ii) there were no substantial differences between the neurophysiological features identified in controlled and ecological settings, and the EEG-channel configuration defined in the laboratory was able to discriminate and classify changes in ATCOs' vigilance with high accuracy (up to 84%); and (iii) the derived two-EEG-channel configuration was able to assess vigilance variations with only a slight reduction in accuracy.

Sebastiani Marika, Di Flumeri Gianluca, Aricò Pietro, Sciaraffa Nicolina, Babiloni Fabio, Borghini Gianluca

2020-Jan-15

ATM, air traffic controllers, high-resolution EEG, machine learning, mental states assessment, out-of-the-loop, psychomotor vigilance task, stepwise linear discriminant analysis, vigilance

General General

On Assessing Driver Awareness of Situational Criticalities: Multi-modal Bio-Sensing and Vision-Based Analysis, Evaluations, and Insights.

In Brain sciences ; h5-index 0.0

Automobiles on our roadways increasingly use advanced driver assistance systems. The adoption of such new technologies requires us to develop novel perception systems not only for accurately understanding the situational context of these vehicles, but also for inferring the driver's awareness in differentiating between safe and critical situations. This manuscript focuses on the specific problem of inferring driver awareness in the context of attention analysis and hazardous incident activity. Even after the development of wearable and compact multi-modal bio-sensing systems in recent years, their application in the driver-awareness context has been scarcely explored. The capability of simultaneously recording different kinds of bio-sensing data, in addition to traditionally employed computer vision systems, provides exciting opportunities to explore the limitations of these sensor modalities. In this work, we explore the applications of three different bio-sensing modalities, namely electroencephalogram (EEG), photoplethysmogram (PPG) and galvanic skin response (GSR), along with a camera-based vision system in the driver-awareness context. We assess the information from these sensors independently and together using both signal processing- and deep learning-based tools. We show that our methods outperform previously reported studies in classifying driver attention and detecting hazardous/non-hazardous situations over short time scales of two seconds. We use EEG and vision data for high-resolution temporal classification (two seconds) while additionally employing PPG and GSR over longer time periods. We evaluate our methods by collecting user data from twelve subjects for two real-world driving datasets, one publicly available (KITTI dataset) and the other collected by us (LISA dataset) with the vehicle driven in autonomous mode.
This work presents an exhaustive evaluation of multiple sensor modalities on two different datasets for attention monitoring and hazardous events classification.

Siddharth Siddharth, Trivedi Mohan M

2020-Jan-15

EEG, GSR, PPG, bio-sensing, brain-computer interfaces, deep learning, human-machine interaction

Radiology Radiology

Retraining an open-source pneumothorax detecting machine learning algorithm for improved performance to medical images.

In Clinical imaging ; h5-index 0.0

PURPOSE : To validate a machine learning model trained on an open-source dataset and subsequently optimize it for chest X-rays with large pneumothoraces from our institution.

METHODS : The study was retrospective in nature. The open-source chest X-ray (CXR8) dataset was dichotomized to cases with pneumothorax (PTX) and all other cases (non-PTX), resulting in 41,946 non-PTX and 4696 PTX cases for the training set and 11,120 non-PTX and 541 PTX cases for the validation set. A limited supervision machine learning model was constructed to incorporate both localized and unlocalized pathology. Cases were then queried from our health system from 2013 to 2017. A total of 159 pneumothorax and 682 non-pneumothorax cases were available for the training set. For the validation set, 48 pneumothorax and 1287 non-pneumothorax cases were available. The model was trained, a receiver operator curve (ROC) was created, and output metrics, including area under the curve (AUC), sensitivity and specificity were calculated.

RESULTS : Initial training of the model using the CXR8 dataset resulted in an AUC of 0.90 for pneumothorax detection. Naively inferring our own validation dataset on the CXR8 trained model output an AUC of 0.59. After re-training the model with our own training dataset, the validation dataset inference output an AUC of 0.90.

CONCLUSION : Our study showed that even when a model achieves excellent results on an open-source dataset, it may not translate well to real-world data without an intervening retraining process.

Kitamura Gene, Deible Christopher

2020-Jan-08

Artificial intelligence, Chest X-ray, Machine learning, Neural network, Pneumothorax

General General

Automated Parkinson's disease recognition based on statistical pooling method using acoustic features.

In Medical hypotheses ; h5-index 0.0

Parkinson's disease is one of the most common neurological diseases. It affects the nervous system and hinders vital activities. The majority of Parkinson's patients lose their ability to speak, write and balance. Many machine learning methods have been proposed to automatically diagnose Parkinson's disease using acoustic, handwriting and gait data. In this study, a statistical pooling method is proposed to recognize Parkinson's disease using vowels. The Parkinson's disease dataset used contains features of vowels. In the proposed method, the features of the dataset are increased by applying the statistical pooling method. Then, the most heavily weighted features are selected from the enlarged feature vector using ReliefF. Classification is applied using the resulting feature vector with Support Vector Machine (SVM) and K Nearest Neighbor (KNN) algorithms. The success rate was calculated as 91.25% and 91.23% using SVM and KNN, respectively. The proposed method has two main contributions. The first is obtaining new features from the Parkinson's acoustic dataset using the statistical pooling method. The second is the selection of the most significant features from the many feature vectors obtained. Thus, successful results were obtained for both KNN and SVM algorithms. The comparative results clearly show that the proposed method achieved the best success rate among the selected state-of-the-art methods. Considering the method and the results obtained, the proposed method is successful for Parkinson's disease recognition.
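The feature-expansion step described above can be sketched in plain NumPy. The exact pooling statistics and window size are assumptions for illustration (the abstract does not specify them); here the original vector is augmented with min, max, mean, standard deviation and median pooled over fixed windows:

```python
import numpy as np

def statistical_pooling(features, window=4):
    """Expand a 1-D feature vector with statistics pooled over
    non-overlapping windows. Window size and the chosen statistics
    are illustrative assumptions, not values from the paper."""
    feats = np.asarray(features, dtype=float)
    n = len(feats) - len(feats) % window     # drop the ragged tail
    blocks = feats[:n].reshape(-1, window)
    pooled = np.concatenate([
        blocks.min(axis=1), blocks.max(axis=1),
        blocks.mean(axis=1), blocks.std(axis=1),
        np.median(blocks, axis=1),
    ])
    # original features followed by the pooled statistics
    return np.concatenate([feats, pooled])
```

A ReliefF-style feature selector would then rank the enlarged vector before classification with SVM or KNN.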

Yaman Orhan, Ertam Fatih, Tuncer Turker

2019-Nov-11

Acoustic features, KNN, Parkinson’s disease recognition, SVM, Statistical pooling

Radiology Radiology

Denoising arterial spin labeling perfusion MRI with deep machine learning.

In Magnetic resonance imaging ; h5-index 0.0

PURPOSE : Arterial spin labeling (ASL) perfusion MRI is a noninvasive technique for measuring cerebral blood flow (CBF) in a quantitative manner. A technical challenge in ASL MRI is data processing because of the inherently low signal-to-noise ratio (SNR). Deep learning (DL) is an emerging machine learning technique that can learn a nonlinear transform from acquired data without any explicit hypothesis. Such high flexibility may be particularly beneficial for ASL denoising. In this paper, we proposed and validated a DL-based ASL MRI denoising algorithm (DL-ASL).

METHODS : The DL-ASL network was constructed using convolutional neural networks (CNNs) with dilated convolution and wide activation residual blocks to explicitly take the inter-voxel correlations into account, and preserve spatial resolution of input image during model learning.
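The dilated convolutions mentioned in METHODS enlarge a network's receptive field without downsampling. A toy 1-D version in NumPy illustrates the indexing; the actual DL-ASL network applies learned multi-channel kernels to image volumes:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """1-D dilated convolution with valid padding: kernel taps are
    spaced `dilation` samples apart, widening the receptive field."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # input samples each output sees
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```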

RESULTS : DL-ASL substantially improved the quality of ASL CBF in terms of SNR. Based on retrospective analyses, DL-ASL showed a high potential of reducing 75% of the original acquisition time without sacrificing CBF measurement quality.

CONCLUSION : DL-ASL achieved improved denoising performance for ASL MRI as compared with current routine methods in terms of higher PSNR, SSIM and Radiologic scores. With the help of DL-ASL, much fewer repetitions may be prescribed in ASL MRI, resulting in a great reduction of the total acquisition time.

Xie Danfeng, Li Yiran, Yang Hanlu, Bai Li, Wang Tianyao, Zhou Fuqing, Zhang Lei, Wang Ze

2020-Jan-15

Arterial spin labeling, Deep learning, Denoising, Machine learning, Perfusion MRI

Dermatology Dermatology

Attitudes towards artificial intelligence within dermatology: an international online survey.

In The British journal of dermatology ; h5-index 0.0

Artificial intelligence (AI) has emerged as a hot topic within dermatology, and in recent years several studies have demonstrated its benefits in a research setting. While this development is unfolding rapidly and has also been made available to consumers, little is known about attitudes towards AI among dermatologists. To increase our understanding of dermatologists' attitudes towards AI within dermatology, we prepared an anonymous, voluntary online survey of 29 questions and distributed it to dermatologists through several online channels, including mailing lists to members of the International Dermoscopy Society.

Polesie S, Gillstedt M, Kittler H, Lallas A, Tschandl P, Zalaudek I, Paoli J

2020-Jan-17

Public Health Public Health

Predictive Modeling for Metabolomics Data.

In Methods in molecular biology (Clifton, N.J.) ; h5-index 0.0

In recent years, mass spectrometry (MS)-based metabolomics has been extensively applied to characterize biochemical mechanisms, and study physiological processes and phenotypic changes associated with disease. Metabolomics has also been important for identifying biomarkers of interest suitable for clinical diagnosis. For the purpose of predictive modeling, in this chapter, we will review various supervised learning algorithms such as random forest (RF), support vector machine (SVM), and partial least squares-discriminant analysis (PLS-DA). In addition, we will also review feature selection methods for identifying the best combination of metabolites for an accurate predictive model. We conclude with best practices for reproducibility by including internal and external replication, reporting metrics to assess performance, and providing guidelines to avoid overfitting and to deal with imbalanced classes. An analysis of an example data will illustrate the use of different machine learning methods and performance metrics.
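One of the performance metrics the chapter recommends, the area under the ROC curve, can be computed directly from rank statistics of the predicted scores. A minimal sketch in plain NumPy (no external ML library assumed), using the Mann-Whitney identity:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the rank-sum identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random ranking, making the metric robust to the imbalanced classes discussed above.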

Ghosh Tusharkanti, Zhang Weiming, Ghosh Debashis, Kechris Katerina

2020

Mass spectrometry, Metabolomics, Performance Metrics, Predictive Modeling, Supervised learning

Public Health Public Health

Development and validation of machine learning models to predict gastrointestinal leak and venous thromboembolism after weight loss surgery: an analysis of the MBSAQIP database.

In Surgical endoscopy ; h5-index 65.0

BACKGROUND : Postoperative gastrointestinal leak and venous thromboembolism (VTE) are devastating complications of bariatric surgery. The performance of currently available predictive models for these complications remains wanting, while machine learning has shown promise to improve on traditional modeling approaches. The purpose of this study was to compare the ability of two machine learning strategies, artificial neural networks (ANNs), and gradient boosting machines (XGBs) to conventional models using logistic regression (LR) in predicting leak and VTE after bariatric surgery.

METHODS : ANN, XGB, and LR prediction models for leak and VTE among adults undergoing initial elective weight loss surgery were trained and validated using preoperative data from 2015 to 2017 from Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program database. Data were randomly split into training, validation, and testing populations. Model performance was measured by the area under the receiver operating characteristic curve (AUC) on the testing data for each model.

RESULTS : The study cohort contained 436,807 patients. The incidences of leak and VTE were 0.70% and 0.46%. ANN (AUC 0.75, 95% CI 0.73-0.78) was the best-performing model for predicting leak, followed by XGB (AUC 0.70, 95% CI 0.68-0.72) and then LR (AUC 0.63, 95% CI 0.61-0.65; p < 0.001 for all comparisons). In detecting VTE, ANN, XGB, and LR achieved similar AUCs of 0.65 (95% CI 0.63-0.68), 0.67 (95% CI 0.64-0.70), and 0.64 (95% CI 0.61-0.66), respectively; the performance difference between XGB and LR was statistically significant (p = 0.001).

CONCLUSIONS : ANN and XGB outperformed traditional LR in predicting leak. These results suggest that ML has the potential to improve risk stratification for bariatric surgery, especially as techniques to extract more granular data from medical records improve. Further studies investigating the merits of machine learning to improve patient selection and risk management in bariatric surgery are warranted.

Nudel Jacob, Bishara Andrew M, de Geus Susanna W L, Patil Prasad, Srinivasan Jayakanth, Hess Donald T, Woodson Jonathan

2020-Jan-17

Anastomotic leak, Bariatric surgery, Deep learning, Machine learning, Postoperative complications, Venous thromboembolism

Radiology Radiology

Dual-energy CT-based deep learning radiomics can improve lymph node metastasis risk prediction for gastric cancer.

In European radiology ; h5-index 62.0

OBJECTIVES : To build a dual-energy CT (DECT)-based deep learning radiomics nomogram for lymph node metastasis (LNM) prediction in gastric cancer.

MATERIALS AND METHODS : Preoperative DECT images were retrospectively collected from 204 pathologically confirmed cases of gastric adenocarcinoma (mean age, 58 years; range, 28-81 years; 157 men [mean age, 60 years; range, 28-81 years] and 47 women [mean age, 54 years; range, 28-79 years]) between November 2011 and October 2018. They were divided into training (n = 136) and test (n = 68) sets. Radiomics features were extracted from monochromatic images at the arterial phase (AP) and venous phase (VP). Clinical information, CT parameters, and follow-up data were collected. A radiomics nomogram for LNM prediction was built using a deep learning approach and evaluated in the test set using ROC analysis. Its prognostic performance was determined with Harrell's concordance index (C-index) based on patients' outcomes.

RESULTS : The dual-energy CT radiomics signature was associated with LNM in both sets (Mann-Whitney U test, p < 0.001) and achieved an area under the ROC curve (AUC) of 0.71 for AP and 0.76 for VP in the test set. The nomogram incorporating the two radiomics signatures and CT-reported lymph node status exhibited AUCs of 0.84 in the training set and 0.82 in the test set. The C-indices of the nomogram for progression-free survival and overall survival prediction were 0.64 (p = 0.004) and 0.67 (p = 0.002).
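Harrell's C-index reported above measures how often a higher predicted risk coincides with an earlier event among comparable pairs. A minimal pairwise sketch (handling only the basic comparability rule for right-censored data; production survival libraries handle further edge cases):

```python
def c_index(times, events, risk):
    """Harrell's concordance index. A pair (i, j) is comparable when
    subject i has an observed event before time j; it is concordant
    when i also has the higher predicted risk. Ties count half."""
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A value of 0.5 indicates no discriminative ability, matching the interpretation of the 0.64 and 0.67 values above as modest but significant.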

CONCLUSION : The DECT-based deep learning radiomics nomogram showed good performance in predicting LNM in gastric cancer. Furthermore, it was significantly associated with patients' prognosis.

KEY POINTS : • This study investigated the value of deep learning dual-energy CT-based radiomics in predicting lymph node metastasis in gastric cancer. • The dual-energy CT-based radiomics nomogram outweighed the single-energy model and the clinical model. • The nomogram also exhibited a significant prognostic ability for patient survival and enriched radiomics studies.

Li Jing, Dong Di, Fang Mengjie, Wang Rui, Tian Jie, Li Hailiang, Gao Jianbo

2020-Jan-17

Deep learning, Gastric cancer, Lymph node, Radiomics, Tomography, X-ray computed

General General

Deep neural networks for automated detection of marine mammal species.

In Scientific reports ; h5-index 158.0

Deep neural networks have advanced the field of detection and classification and allowed for effective identification of signals in challenging data sets. Numerous time-critical conservation needs may benefit from these methods. We developed and empirically studied a variety of deep neural networks to detect the vocalizations of endangered North Atlantic right whales (Eubalaena glacialis). We compared the performance of these deep architectures to that of traditional detection algorithms for the primary vocalization produced by this species, the upcall. We show that deep-learning architectures are capable of producing false-positive rates that are orders of magnitude lower than alternative algorithms while substantially increasing the ability to detect calls. We demonstrate that a deep neural network trained with recordings from a single geographic region recorded over a span of days is capable of generalizing well to data from multiple years and across the species' range, and that the low false positives make the output of the algorithm amenable to quality control for verification. The deep neural networks we developed are relatively easy to implement with existing software, and may provide new insights applicable to the conservation of endangered species.

Shiu Yu, Palmer K J, Roch Marie A, Fleishman Erica, Liu Xiaobai, Nosal Eva-Marie, Helble Tyler, Cholewiak Danielle, Gillespie Douglas, Klinck Holger

2020-Jan-17

General General

Monitoring canid scent marking in space and time using a biologging and machine learning approach.

In Scientific reports ; h5-index 158.0

For canid species, scent marking plays a critical role in territoriality, social dynamics, and reproduction. However, due in part to human dependence on vision as our primary sensory modality, research on olfactory communication is hampered by a lack of tractable methods. In this study, we leverage a powerful biologging approach, using accelerometers in concert with GPS loggers to monitor and describe scent-marking events in time and space. We performed a validation experiment with domestic dogs, monitoring them by video concurrently with the novel biologging approach. We attached an accelerometer to the pelvis of 31 dogs (19 males and 12 females), detecting raised-leg and squat posture urinations by monitoring the change in device orientation. We then deployed this technique to describe the scent-marking activity of 3 guardian dogs as they defended livestock from coyote depredation in California, providing an example use case for the technique. During validation, the algorithm correctly classified 92% of accelerometer readings. High performance was partly due to the conspicuous signatures of archetypal raised-leg postures in the accelerometer data. Accuracy did not vary with the weight, age, or sex of the dogs, resulting in a method that is broadly applicable across canid species' morphologies. We also used models trained on each individual to detect the scent marking of others, to emulate the use of captive surrogates for model training. We observed no relationship between the similarity in body weight between dog pairs and the overall accuracy of predictions, although models performed best when trained and tested on the same individual. We discuss how existing methods in the field of movement ecology can be extended to use this exciting new data type. This paper represents an important first step in opening new avenues of research by leveraging the power of modern technologies and machine learning in this field.
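Detecting posture from a change in device orientation, as described above, reduces to estimating tilt from the static gravity components of the accelerometer signal. A minimal sketch; the axis convention and the 30-degree cut-off are illustrative assumptions, not values from the paper:

```python
import math

def device_pitch(ax, ay, az):
    """Pitch angle (degrees) of a body-mounted accelerometer from its
    static gravity components (axis convention is an assumption)."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

def is_raised_leg(ax, ay, az, threshold_deg=30.0):
    """Flag a raised-leg posture when device tilt exceeds a threshold;
    the default cut-off is illustrative only."""
    return abs(device_pitch(ax, ay, az)) > threshold_deg
```

In practice the raw signal would be low-pass filtered first so that dynamic motion does not masquerade as a tilt change.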

Bidder Owen R, di Virgilio Agustina, Hunter Jennifer S, McInturff Alex, Gaynor Kaitlyn M, Smith Alison M, Dorcy Janelle, Rosell Frank

2020-Jan-17

Oncology Oncology

Machine learning can identify newly diagnosed patients with CLL at high risk of infection.

In Nature communications ; h5-index 260.0

Infections have become the major cause of morbidity and mortality among patients with chronic lymphocytic leukemia (CLL) due to immune dysfunction and cytotoxic CLL treatment. Yet, predictive models for infection are missing. In this work, we develop the CLL Treatment-Infection Model (CLL-TIM) that identifies patients at risk of infection or CLL treatment within 2 years of diagnosis as validated on both internal and external cohorts. CLL-TIM is an ensemble algorithm composed of 28 machine learning algorithms based on data from 4,149 patients with CLL. The model is capable of dealing with heterogeneous data, including the high rates of missing data to be expected in the real-world setting, with a precision of 72% and a recall of 75%. To address concerns regarding the use of complex machine learning algorithms in the clinic, for each patient with CLL, CLL-TIM provides explainable predictions through uncertainty estimates and personalized risk factors.

Agius Rudi, Brieghel Christian, Andersen Michael A, Pearson Alexander T, Ledergerber Bruno, Cozzi-Lepri Alessandro, Louzoun Yoram, Andersen Christen L, Bergstedt Jacob, von Stemann Jakob H, Jørgensen Mette, Tang Man-Hung Eric, Fontes Magnus, Bahlo Jasmin, Herling Carmen D, Hallek Michael, Lundgren Jens, MacPherson Cameron Ross, Larsen Jan, Niemann Carsten U

2020-Jan-17


Week 35

General General

Decoupling and decomposition analysis of industrial sulfur dioxide emissions from the industrial economy in 30 Chinese provinces.

In Journal of environmental management ; h5-index 0.0

As one of the largest emitters of sulfur dioxide (SO2), China faces increasing pressure to achieve sustainable development. This study investigates the decoupling relationship between industrial SO2 emissions and the industrial economy in China during 1996-2015. According to the decoupling results, the study period is divided into four stages: 1996-2001, 2001-2006, 2006-2010, and 2010-2015. These four stages are closely aligned with major adjustments of national socio-economic policies. The logarithmic mean Divisia index (LMDI) decomposition method is then used to analyze the driving factors of industrial SO2 emissions. The results demonstrate that SO2 generation intensity and SO2 abatement are the major contributors to reducing industrial SO2 emissions, while the economic activity effect is the primary inhibitory factor. Moreover, the provincial results show that most provinces in a weak decoupling state since 2006 are less developed provinces with energy-intensive industries. In addition, the economic structure and SO2 generation intensity contribute negatively to reducing industrial SO2 emissions in some of these regions. Based on the results, attention should be focused on cleaner production to further reduce industrial SO2 emissions, and environmental policies should be tailored to local conditions.
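The additive LMDI decomposition used above attributes the change in total emissions to each driving factor via the logarithmic mean. A minimal Python sketch; the two-factor identity (activity x intensity) is an illustrative assumption, whereas the paper decomposes emissions into several factors:

```python
import math

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b)."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(e0, e1, factors0, factors1):
    """Additive LMDI-I: split the change e1 - e0 into per-factor
    contributions. The factor lists are assumed to multiply out to
    the emission totals (a Kaya-style identity)."""
    L = logmean(e1, e0)
    return [L * math.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]
```

By construction the per-factor contributions sum exactly to the observed emission change, which is why LMDI leaves no unexplained residual.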

Qian Yuan, Cao Hui, Huang Simin

2020-Jan-16

Decomposition analysis, Decoupling analysis, Industrial sulfur dioxide emissions, LMDI method

General General

Diverse approaches to predicting drug-induced liver injury using gene-expression profiles.

In Biology direct ; h5-index 0.0

BACKGROUND : Drug-induced liver injury (DILI) is a serious concern during drug development and the treatment of human disease. The ability to accurately predict DILI risk could yield significant improvements in drug attrition rates during drug development, in drug withdrawal rates, and in treatment outcomes. In this paper, we outline our approach to predicting DILI risk using gene-expression data from Build 02 of the Connectivity Map (CMap) as part of the 2018 Critical Assessment of Massive Data Analysis CMap Drug Safety Challenge.

RESULTS : First, we used seven classification algorithms independently to predict DILI based on gene-expression values for two cell lines. Similar to what other challenge participants observed, none of these algorithms predicted liver injury on a consistent basis with high accuracy. In an attempt to improve accuracy, we aggregated predictions for six of the algorithms (excluding one that had performed exceptionally poorly) using a soft-voting method. This approach also failed to generalize well to the test set. We investigated alternative approaches, including a multi-sample normalization method, dimensionality-reduction techniques, a class-weighting scheme, and expanding the number of hyperparameter combinations used as inputs to the soft-voting method. We met with limited success with each of these solutions.
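The soft-voting aggregation described above averages the class-probability outputs of the member classifiers and picks the most probable class. A minimal NumPy sketch (the paper's six member algorithms are not reproduced here; any models exposing class probabilities would do):

```python
import numpy as np

def soft_vote(prob_matrix, weights=None):
    """Soft voting: average per-class probabilities across models.

    prob_matrix has shape (n_models, n_samples, n_classes); returns
    the predicted class index for each sample."""
    probs = np.asarray(prob_matrix, dtype=float)
    avg = np.average(probs, axis=0, weights=weights)  # mean over models
    return avg.argmax(axis=1)
```

scikit-learn's `VotingClassifier` with `voting="soft"` implements the same idea for fitted estimators.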

CONCLUSIONS : We conclude that alternative methods and/or datasets will be necessary to effectively predict DILI in patients based on RNA expression levels in cell lines.

REVIEWERS : This article was reviewed by Paweł P Labaj and Aleksandra Gruca (both nominated by David P Kreil).

Sumsion G Rex, Bradshaw Michael S, Beales Jeremy T, Ford Emi, Caryotakis Griffin R G, Garrett Daniel J, LeBaron Emily D, Nwosu Ifeanyichukwu O, Piccolo Stephen R

2020-Jan-15

Cell lines, Classification, Drug development, Machine learning, Precision medicine

General General

PRAP: Pan Resistome analysis pipeline.

In BMC bioinformatics ; h5-index 0.0

BACKGROUND : Antibiotic resistance genes (ARGs) can spread among pathogens via horizontal gene transfer, resulting in imparities in their distribution even within the same species. Therefore, a pan-genome approach to analyzing resistomes is necessary for thoroughly characterizing patterns of ARGs distribution within particular pathogen populations. Software tools are readily available for either ARGs identification or pan-genome analysis, but few exist to combine the two functions.

RESULTS : We developed Pan Resistome Analysis Pipeline (PRAP) for the rapid identification of antibiotic resistance genes from various formats of whole genome sequences based on the CARD or ResFinder databases. Detailed annotations were used to analyze pan-resistome features and characterize distributions of ARGs. The contribution of different alleles to antibiotic resistance was predicted by a random forest classifier. Results of analysis were presented in browsable files along with a variety of visualization options. We demonstrated the performance of PRAP by analyzing the genomes of 26 Salmonella enterica isolates from Shanghai, China.

CONCLUSIONS : PRAP was effective for identifying ARGs and visualizing pan-resistome features, therefore facilitating pan-genomic investigation of ARGs. This tool has the ability to further excavate potential relationships between antibiotic resistance genes and their phenotypic traits.

He Yichen, Zhou Xiujuan, Chen Ziyan, Deng Xiangyu, Gehring Andrew, Ou Hongyu, Zhang Lida, Shi Xianming

2020-Jan-15

Identification, Machine learning, Pan-resistome, Visualization

General General

Semi-Automated Evidence Synthesis in Health Psychology: Current Methods and Future Prospects.

In Health psychology review ; h5-index 0.0

The evidence base in health psychology is vast and growing rapidly. These factors make it difficult (and sometimes practically impossible) to consider all available evidence when making decisions about the state of knowledge on a given phenomenon (e.g., associations of variables, effects of interventions on particular outcomes). Systematic reviews, meta-analyses, and other rigorous syntheses of the research mitigate this problem by providing concise, actionable summaries of knowledge in a given area of study. Yet, conducting these syntheses has grown increasingly laborious owing to the fast accumulation of new evidence; existing, manual methods for synthesis do not scale well. In this article, we discuss how semi-automation via machine learning and natural language processing methods may help researchers and practitioners to review evidence more efficiently. We outline concrete examples in health psychology, highlighting practical, open-source technologies available now. We indicate the potential of more advanced methods and discuss how to avoid the pitfalls of automated reviews.

Marshall Iain J, Johnson Blair T, Wang Zigeng, Rajasekaran Sanguthevar, Wallace Byron C

2020-Jan-15

Machine learning, evidence synthesis, health psychology, natural language processing, semi-automation, systematic review

General General

Identifying playing talent in professional football using artificial neural networks.

In Journal of sports sciences ; h5-index 52.0

The aim of the current study was to objectively identify position-specific key performance indicators in professional football that predict out-field players' league status. The sample consisted of 966 out-field players who completed the full 90 minutes in a match during the 2008/09 or 2009/10 season in the Football League Championship. Players were assigned to one of three categories (0, 1 and 2) based on where they completed most of their match time in the following season, and then split based on five playing positions. 340 performance, biographical and esteem variables were analysed using a Stepwise Artificial Neural Network approach. The models correctly predicted between 72.7% and 100% of test cases (mean prediction of models = 85.9%), and the test error ranged from 1.0% to 9.8% (mean test error of models = 6.3%). Variables related to passing, shooting, regaining possession and international appearances were key factors in the predictive models. This is highly significant as objective position-specific predictors of players' league status have not previously been published. The method could be used to aid the identification and comparison of transfer targets as part of the due diligence process in professional football.

Barron Donald, Ball Graham, Robins Matthew, Sunderland Caroline

2020-Jan-15

Premier League, Soccer, artificial intelligence, championship, talent identification

General General

Analytical classical density functionals from an equation learning network.

In The Journal of chemical physics ; h5-index 0.0

We explore the feasibility of using machine learning methods to obtain an analytic form of the classical free energy functional for two model fluids, hard rods and Lennard-Jones, in one dimension. The equation learning network proposed by Martius and Lampert [e-print arXiv:1610.02995 (2016)] is suitably modified to construct free energy densities which are functions of a set of weighted densities and which are built from a small number of basis functions with flexible combination rules. This setup considerably enlarges the functional space used in the machine learning optimization as compared to the previous work [S.-C. Lin and M. Oettel, SciPost Phys. 6, 025 (2019)] where the functional is limited to a simple polynomial form. As a result, we find a good approximation for the exact hard rod functional and its direct correlation function. For the Lennard-Jones fluid, we let the network learn (i) the full excess free energy functional and (ii) the excess free energy functional related to interparticle attractions. Both functionals show a good agreement with simulated density profiles for thermodynamic parameters inside and outside the training region.

Lin S-C, Martius G, Oettel M

2020-Jan-14

Radiology Radiology

One-slice CT image based kernelized radiomics model for the prediction of low/mid-grade and high-grade HNSCC.

In Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society ; h5-index 0.0

An accurate grade prediction can help to appropriate treatment strategy and effective diagnosis to Head and neck squamous cell carcinoma (HNSCC). Radiomics has been studied for the prediction of carcinoma characteristics in medical images. The success of previous researches in radiomics is attributed to the availability of annotated all-slice medical images. However, it is very challenging to annotate all slices, as annotating biomedical images is not only tedious, laborious, and time consuming, but also demanding of costly, specialty-oriented skills, which are not easily accessible. To address this problem, this paper presents a model to integrate radiomics and kernelized dimension reduction into a single framework, which maps handcrafted radiomics features to a kernelized space where they are linearly separable and then reduces the dimension of features through principal component analysis. Three methods including baseline radiomics models, proposed kernelized model and convolutional neural network (CNN) model were compared in experiments. Results suggested proposed kernelized model best fit in one-slice data. We reached AUC of 95.91 % on self-made one-slice dataset, 67.33 % in predicting localregional recurrence on H&N dataset and 64.33 % on H&N1 dataset. While all other models were <76 %, <65 %, and <62 %. Though CNN model reached an incredible performance when predicting distant metastasis on H&N (AUC 0.88), model faced serious problem of overfitting in small datasets. When changing all-slice data to one-slice on both H&N and H&N1, proposed model suffered less loss on AUC (<1.3 %) than any other models (>3 %). These proved our proposed model is efficient to deal with the one-slice problem and makes using one-slice data to reduce annotation cost practical. 
This is attributed to several advantages of the proposed kernelized radiomics model: (1) the prior radiomics features reduce the demand for large amounts of data and help avoid overfitting; (2) the kernelized method mines latent information that contributes to prediction; and (3) generating principal components from the kernelized features removes redundant features.
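The core idea of the framework, mapping handcrafted features into a kernel space and then applying principal component analysis there, is standard kernel PCA. The sketch below is a minimal numpy-only illustration of that pipeline, not the authors' implementation; the RBF kernel, the `gamma` value, and the synthetic 20x50 feature matrix are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # pairwise RBF similarities between rows of X
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=0.5):
    K = rbf_kernel(X, gamma)
    n = len(K)
    one = np.ones((n, n)) / n
    # double-centre the kernel matrix (centring in feature space)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    # projections = eigenvectors scaled by sqrt of eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))   # e.g. 20 patients x 50 handcrafted radiomics features
Z = kernel_pca(X, n_components=3, gamma=0.01)
```

After this step, a simple linear classifier on `Z` would play the role of the grade predictor described in the abstract.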

Ye Junyong, Luo Jin, Xu Shengsheng, Wu Wenli

2019-Dec-23

Annotation cost reduction, Feature decomposition, Head and neck squamous cell carcinoma, Machine learning, Radiomics

Radiology Radiology

Performance of a deep learning algorithm for the evaluation of CAD-RADS classification with CCTA.

In Atherosclerosis ; h5-index 71.0

BACKGROUND AND AIMS : Artificial intelligence (AI) is playing an increasing role in the diagnosis of patients with suspected coronary artery disease. The aim of this manuscript is to develop a deep convolutional neural network (CNN) to classify coronary computed tomography angiography (CCTA) into the correct Coronary Artery Disease Reporting and Data System (CAD-RADS) category.

METHODS : Two hundred eighty-eight patients who underwent clinically indicated CCTA were included in this single-center retrospective study. The CCTAs were stratified by CAD-RADS scores by expert readers and considered the reference standard. A deep CNN was designed, tested on the CCTA dataset, and compared to on-site reading. The diagnostic accuracy of the deep CNN was analyzed for the following three models based on the CAD-RADS classification: Model A (CAD-RADS 0 vs CAD-RADS 1-2 vs CAD-RADS 3,4,5), Model 1 (CAD-RADS 0 vs CAD-RADS>0), and Model 2 (CAD-RADS 0-2 vs CAD-RADS 3-5). Time of analysis for both physicians and the CNN was recorded.

RESULTS : Model A showed a sensitivity, specificity, negative predictive value, positive predictive value and accuracy of 47%, 74%, 77%, 46% and 60%, respectively. Model 1 showed a sensitivity, specificity, negative predictive value, positive predictive value and accuracy of 66%, 91%, 92%, 63% and 86%, respectively. Conversely, Model 2 demonstrated a sensitivity, specificity, negative predictive value, positive predictive value and accuracy of 82%, 58%, 74%, 69% and 71%, respectively. Time of analysis was significantly lower using the CNN as compared to on-site reading (530.5 ± 179.1 vs 104.3 ± 1.4 sec, p=0.01).

CONCLUSIONS : Deep CNN yielded accurate automated classification of patients with CAD-RADS.
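The five figures reported for each model are all simple functions of a 2x2 confusion matrix. A minimal sketch of those definitions, using hypothetical counts rather than the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    # sensitivity, specificity, NPV, PPV and accuracy from a 2x2 confusion matrix
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "npv": tn / (tn + fn),
            "ppv": tp / (tp + fp),
            "accuracy": (tp + tn) / (tp + fp + tn + fn)}

m = binary_metrics(tp=66, fp=9, tn=91, fn=34)   # hypothetical counts, not study data
```

With these counts, sensitivity is 66/100 = 0.66 and specificity 91/100 = 0.91, mirroring how the per-model percentages above are derived.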

Muscogiuri Giuseppe, Chiesa Mattia, Trotta Michela, Gatti Marco, Palmisano Vitanio, Dell’Aversana Serena, Baessato Francesca, Cavaliere Annachiara, Cicala Gloria, Loffreno Antonella, Rizzon Giulia, Guglielmo Marco, Baggiano Andrea, Fusini Laura, Saba Luca, Andreini Daniele, Pepi Mauro, Rabbat Mark G, Guaricci Andrea I, De Cecco Carlo N, Colombo Gualtiero, Pontone Gianluca

2019-Dec-23

Artificial intelligence, CADRADS, Convolutional neural network, Coronary artery disease, Plaque characterization

General General

AdvKin: Adversarial Convolutional Network for Kinship Verification.

In IEEE transactions on cybernetics ; h5-index 0.0

Kinship verification in the wild is an interesting and challenging problem. The goal of kinship verification is to determine whether a pair of faces are blood relatives or not. Most previous methods for kinship verification can be divided into handcrafted-feature-based shallow learning methods and convolutional neural network (CNN)-based deep-learning methods. Nevertheless, these methods still face the challenging task of recognizing kinship cues from facial images, because family ID information and the distribution difference of pairwise kin-faces are rarely considered in kinship verification tasks. To this end, a family ID-based adversarial convolutional network (AdvKin) method focused on discriminative kin features is proposed for both small-scale and large-scale kinship verification in this article. The merits of this article are four-fold: 1) for kin-relation discovery, a simple yet effective self-adversarial mechanism based on a negative maximum mean discrepancy (NMMD) loss is formulated as attacks in the first fully connected layer; 2) a pairwise contrastive loss and a family ID-based softmax loss are jointly formulated in the second and third fully connected layers, respectively, for supervised training; 3) a two-stream network architecture with residual connections is proposed in AdvKin; and 4) for more fine-grained deep kin-feature augmentation, an ensemble of patch-wise AdvKin networks (E-AdvKin) is proposed. Extensive experiments on four small-scale benchmark KinFace datasets and one large-scale Families in the Wild (FIW) dataset from the first Large-Scale Kinship Recognition Data Challenge show the superiority of our proposed AdvKin model over other state-of-the-art approaches.
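The NMMD loss mentioned above is the negative of the maximum mean discrepancy, a kernel-based distance between two sample distributions. Below is a generic numpy estimator of the (biased) squared MMD with an RBF kernel, as a sketch of the quantity AdvKin negates, not the paper's exact loss; the `gamma` bandwidth is an assumption.

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    # biased estimate of squared maximum mean discrepancy with an RBF kernel
    def k(A, B):
        d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

MMD is zero for identical samples and grows as the two feature distributions diverge; an adversarial objective that maximizes it (minimizes its negative) pushes paired kin features apart or together depending on the sign convention.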

Zhang Lei, Duan Qingyan, Zhang David, Jia Wei, Wang Xizhao

2020-Jan-14

General General

Robust Cumulative Crowdsourcing Framework Using New Incentive Payment Function and Joint Aggregation Model.

In IEEE transactions on neural networks and learning systems ; h5-index 0.0

In recent years, crowdsourcing has gained tremendous attention in the machine learning community due to the increasing demand for labeled data. However, the labels collected by crowdsourcing are usually unreliable and noisy. This issue is mainly caused by: 1) inflexible data collection mechanisms; 2) non-incentivizing payment functions; and 3) inexpert crowd workers. We propose a new robust crowdsourcing framework as a comprehensive solution to all of these challenging problems. Our unified framework consists of three novel components. First, we introduce a new flexible data collection mechanism based on the cumulative voting system, allowing crowd workers to express their confidence in each option of multiple-choice questions. Second, we design a novel payment function tailored to the settings of our data collection mechanism. The payment function is theoretically proved to be incentive-compatible, encouraging crowd workers to truthfully disclose their beliefs in order to receive the maximum payment. Third, we propose efficient aggregation models that are compatible with both single-option and multi-option crowd labels. We define a new aggregation model, called simplex constrained majority voting (SCMV), and enhance it using a probabilistic generative model. Furthermore, fast optimization algorithms are derived for the proposed aggregation models. Experimental results indicate higher quality for the crowd labels collected by our proposed mechanism without increasing the cost. Our aggregation models also outperform state-of-the-art models on multiple crowdsourcing datasets in terms of accuracy and convergence speed.
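To make the cumulative-voting idea concrete, here is a deliberately simplified reading of simplex-constrained majority voting: each worker's cumulative vote is normalized onto the probability simplex, the normalized votes are summed, and the option with the highest total wins. This is an illustrative sketch only; the paper's actual SCMV is formulated as an optimization problem and extended with a generative model.

```python
import numpy as np

def scmv(votes):
    # votes: one confidence vector per worker over the options of one question
    V = np.asarray(votes, dtype=float)
    V = V / V.sum(axis=1, keepdims=True)   # project each worker's vote onto the simplex
    scores = V.sum(axis=0)                 # aggregate normalized confidence
    return int(np.argmax(scores)), scores

# three workers, three options; worker 3 puts all confidence on option 2
label, scores = scmv([[3, 1, 0], [2, 2, 0], [0, 0, 4]])
```

Normalization prevents a worker who spends more total "voting mass" from dominating the aggregate.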

Dizaji Kamran Ghasedi, Gao Hongchang, Yang Yanhua, Huang Heng, Deng Cheng

2020-Jan-14

General General

An Efficient Group Recommendation Model With Multiattention-Based Neural Networks.

In IEEE transactions on neural networks and learning systems ; h5-index 0.0

Group recommendation research has recently received much attention in the recommender system community. Currently, several deep-learning-based methods are used in group recommendation to learn the preferences of groups on items and predict the next items in which groups may be interested. However, their recommendation effectiveness is disappointing. To address this challenge, this article proposes a novel model called the multiattention-based group recommendation model (MAGRM). It leverages multiattention-based deep neural network structures to achieve accurate group recommendation. We train its two closely related modules: vector representation of group features and preference learning for groups on items. The former learns to accurately represent each group's deep semantic features. It integrates four kinds of subfeatures: group co-occurrence, group description, and external and internal social features. In particular, we employ multiattention networks to learn to capture internal social features for groups. The latter employs a neural attention mechanism to model preference interactions between each group and its members, and then combines group and item features to accurately learn group preferences on items. Through extensive experiments on two real-world databases, we show that MAGRM remarkably outperforms the state-of-the-art methods in solving the group recommendation problem.
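The neural attention mechanism over group members can be pictured as a scaled dot-product attention that weights each member's embedding by its relevance to a group context vector. The sketch below is a generic illustration with made-up shapes, not MAGRM's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract max for numerical stability
    return e / e.sum()

def group_embedding(members, query):
    # members: (n, d) member embeddings; query: (d,) group context vector
    scores = members @ query / np.sqrt(members.shape[1])   # scaled dot-product
    return softmax(scores) @ members                       # attention-weighted sum

# member 1 aligns with the group context, so it dominates the aggregate
g = group_embedding(np.array([[10.0, 0.0], [0.0, 10.0]]), np.array([1.0, 0.0]))
```

In a trained model the query and member embeddings would be learned jointly with the item-scoring head.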

Huang Zhenhua, Xu Xin, Zhu Honghao, Zhou MengChu

2020-Jan-15

General General

Protein Family Classification from Scratch: A CNN based Deep Learning Approach.

In IEEE/ACM transactions on computational biology and bioinformatics ; h5-index 0.0

Next-generation sequencing techniques provide us with an opportunity to generate sequenced proteins and identify the biological families and functions of these proteins. However, compared with identified proteins, uncharacterized proteins constitute a notable percentage of the overall proteins in the bioinformatics research field. Traditional family classification methods often focus on extracting N-gram features from sequences while ignoring motif information as well as affinity information between motifs and adjacent amino acids. Previous clustering-based algorithms have typically been used to define protein features with domain knowledge and annotate protein families based on extensive data samples. In this paper, we apply CNN-based amino acid representation learning with limited characterized proteins to explore the performance of annotating protein families by taking amino acid location information into account. Additionally, we apply the method to all reviewed protein sequences and their families retrieved from the UniProt database to evaluate our approach. Last but not least, we verify our model using unreviewed protein records, which are typically ignored by other methods.
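For reference, the N-gram features that the traditional baselines extract are simply counts of overlapping length-n substrings of the amino acid sequence; the abstract's point is that these drop positional information. A minimal example (the sequence is made up):

```python
from collections import Counter

def ngram_features(seq, n=3):
    # counts of overlapping length-n substrings of an amino acid sequence
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

feats = ngram_features("MKVLAVLA", n=3)   # toy 8-residue sequence
```

Here "VLA" occurs twice, but the counter cannot say where, which is exactly the location information the CNN representation retains.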

Zhang Da, Kabuka Mansur

2020-Jan-14

General General

Assessment of Balance Control Subsystems by Artificial Intelligence.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society ; h5-index 0.0

Recent studies have shown that balance performance assessment based on artificial intelligence (AI) is feasible. However, balance control is very complex and involves different subsystems, which have not yet been evaluated individually. Furthermore, these studies only classified an individual's balance performance across limited grades. Therefore, in this study we attempted to implement AI to precisely evaluate different types of balance control subsystems (BCSes). First, a total of 224 commonly used and newly developed features were extracted from the center of pressure (CoP) data for each participant. Then, regressors were employed to map these features to the evaluation scores given by physical therapists, which include the total score of the Mini-Balance-Evaluation-Systems-Test (Mini-BESTest) and its sub-scores on BCSes, namely anticipatory postural adjustments (APA), reactive postural control (RPC), sensory orientation (SO), and dynamic gait (DG). Their scoring ranges are 0-28, 0-6, 0-6, 0-6, and 0-10, respectively. The results show that the minimum mean absolute errors of the AI estimates were 2.658, 0.827, 0.970, 0.642, and 0.98, respectively. In sum, this is a preliminary study of assessing BCSes based on AI, which shows its potential for future clinical use.
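Classic CoP posturography features of the kind fed to such regressors include sway path length, mean velocity, and sway range. The sketch below computes a few of them from a (T, 2) trace; the sampling rate and the unit-square toy trajectory are assumptions, and the paper's 224 features are of course far richer.

```python
import numpy as np

def cop_features(cop, fs=100.0):
    # cop: (T, 2) array of (x, y) centre-of-pressure samples at fs Hz
    steps = np.diff(cop, axis=0)
    path = float(np.sum(np.linalg.norm(steps, axis=1)))
    duration = (len(cop) - 1) / fs
    return {"path_length": path,
            "mean_velocity": path / duration,
            "range_x": float(np.ptp(cop[:, 0])),
            "range_y": float(np.ptp(cop[:, 1]))}

# toy trace around the unit square: path length 4, ranges 1 x 1
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
f = cop_features(square, fs=100.0)
```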

Ren Peng, Huang Sunpei, Feng Yukun, Chen Jinying, Wang Qing, Guo Yanbo, Yuan Qi, Yao Dezhong, Ma Dan

2020-Jan-15

General General

Connecting Image Denoising and High-Level Vision Tasks via Deep Learning.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society ; h5-index 0.0

Image denoising and high-level vision tasks are usually handled independently in conventional computer vision practice, and their connection is fragile. In this paper, we address the two jointly and explore their mutual influence, focusing on two questions: (1) how image denoising can help improve high-level vision tasks, and (2) how semantic information from high-level vision tasks can be used to guide image denoising. First, for image denoising, we propose a convolutional neural network in which convolutions are conducted at various spatial resolutions via downsampling and upsampling operations in order to fuse and exploit contextual information at different scales. Second, we propose a deep neural network solution that cascades two modules for image denoising and various high-level tasks, respectively, and uses the joint loss to update only the denoising network via backpropagation. We experimentally show that, on the one hand, the proposed denoiser has the generality to overcome the performance degradation of different high-level vision tasks. On the other hand, with the guidance of high-level vision information, the denoising network produces more visually appealing results. Extensive experiments demonstrate the benefit of exploiting image semantics simultaneously for image denoising and high-level vision tasks via deep learning. The code is available online: https://github.com/Ding-Liu/DeepDenoising.

Liu Ding, Wen Bihan, Jiao Jianbo, Liu Xianming, Wang Zhangyang, Huang Thomas S

2020-Jan-15

General General

Multi-modal Diagnosis of Infectious Diseases in the Developing World.

In IEEE journal of biomedical and health informatics ; h5-index 0.0

In low- and middle-income countries, infectious diseases continue to have a significant impact, particularly amongst the poorest in society. Tetanus and hand, foot and mouth disease (HFMD) are two such diseases, and in both, death is associated with autonomic nervous system dysfunction (ANSD). Currently, photoplethysmogram or electrocardiogram monitoring is used to detect deterioration in these patients; however, expensive clinical monitors are often required. In this study, we employ low-cost, mobile wearable devices to collect patient vital signs unobtrusively, and we develop machine learning algorithms for automatic and rapid triage of patients that make efficient use of clinical resources. Existing methods are mainly dependent on the prior detection of clinical features, with limited exploitation of multi-modal physiological data. Moreover, the latest developments in deep learning (e.g., cross-domain transfer learning) have not been sufficiently applied to infectious disease diagnosis. In this paper, we present a fusion of multi-modal physiological data to predict the severity of ANSD with a hierarchy of resource-aware decision making. First, an on-site triage process is performed using a simple classifier. Second, personalised longitudinal modelling is employed that takes the previous states of the patient into consideration. We have also employed a spectrogram representation of the physiological waveforms to exploit existing networks for cross-domain transfer learning, which avoids the laborious and data-intensive process of training a network from scratch. Results show that the proposed framework has promising potential in supporting severity grading of infectious diseases in low-resource settings, such as the developing world.

Abebe Tadesse Girmaw, Javed Hamza, Thanh Nhan Le Nguyen, Ha Thai Hai Duong, Le Van Tan, Thwaites Louise, Clifton David, Zhu Tingting

2020-Jan-09

General General

Validation of a Deep Learning System for the Full Automation of Bite and Meal Duration Analysis of Experimental Meal Videos.

In Nutrients ; h5-index 86.0

Eating behavior can have an important effect on, and be correlated with, obesity and eating disorders. Eating behavior is usually estimated through self-reporting measures, despite their limited reliability, because of their ease of collection and analysis. A better and widely used alternative is the objective analysis of eating during meals based on human annotations of in-meal behavioral events (e.g., bites). However, this methodology is time-consuming and often affected by human error, limiting its scalability and cost-effectiveness for large-scale research. To remedy the latter, a novel "Rapid Automatic Bite Detection" (RABiD) algorithm that extracts and processes skeletal features from videos was trained on a video meal dataset (59 individuals; 85 meals; three different foods) to automatically measure meal duration and bites. In these settings, RABiD achieved near-perfect agreement between algorithmic and human annotations (Cohen's kappa κ = 0.894; F1-score: 0.948). Moreover, RABiD was used to analyze an independent eating behavior experiment (18 female participants; 45 meals; three different foods), and the results showed excellent correlation between algorithmic and human annotations. The analyses revealed that, despite the change in food (hash vs. meatballs), the total meal duration remained the same, while the number of bites was significantly reduced. Finally, a descriptive meal-progress analysis revealed that different types of food affect bite frequency, although overall bite patterns remain similar (the outcomes were the same for RABiD and manual annotation). Subjects took bites more frequently at the beginning and the end of meals but were slower in between. On a methodological level, RABiD offers a valid, fully automatic alternative to human meal-video annotations for the experimental analysis of human eating behavior, at a fraction of the cost and the required time, without any loss of information and data fidelity.
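The agreement statistic reported above, Cohen's kappa, corrects raw agreement for the agreement expected by chance. A minimal stdlib implementation of the standard formula (the toy label lists are made up):

```python
def cohens_kappa(y1, y2):
    # observed vs chance-expected agreement between two annotation sequences
    labels = sorted(set(y1) | set(y2))
    n = len(y1)
    po = sum(a == b for a, b in zip(y1, y2)) / n                      # observed agreement
    pe = sum((y1.count(l) / n) * (y2.count(l) / n) for l in labels)   # chance agreement
    return (po - pe) / (1 - pe)
```

Perfect agreement gives kappa = 1, while agreement at chance level gives kappa = 0, which is why a value of 0.894 indicates near-perfect algorithm/human agreement.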

Konstantinidis Dimitrios, Dimitropoulos Kosmas, Langlet Billy, Daras Petros, Ioakimidis Ioannis

2020-Jan-13

bite-rate, deep learning, eating behavior, eating patterns, meal analysis, meal duration, mouthfuls, skeletal feature extraction

General General

When Gaussian Process Meets Big Data: A Review of Scalable GPs.

In IEEE transactions on neural networks and learning systems ; h5-index 0.0

The vast quantity of information brought by big data, as well as evolving computer hardware, encourages success stories in the machine learning community. Meanwhile, it poses challenges for Gaussian process regression (GPR), a well-known nonparametric and interpretable Bayesian model, which suffers from cubic complexity in the data size. To improve scalability while retaining desirable prediction quality, a variety of scalable GPs have been presented. However, they have not yet been comprehensively reviewed and analyzed so as to be well understood by both academia and industry. A review of scalable GPs in the GP community is timely and important due to the explosion of data size. To this end, this article is devoted to reviewing state-of-the-art scalable GPs involving two main categories: global approximations that distill the entire data and local approximations that divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations, comprising prior approximations that modify the prior but perform exact inference, posterior approximations that retain the exact prior but perform approximate inference, and structured sparse approximations that exploit specific structures in the kernel matrix; for local approximations, we highlight the mixture/product of experts that conducts model averaging over multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and capability of scalable GPs are reviewed. Finally, the extensions and open issues of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.
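A concrete example of the sparse (prior) approximations the review covers is the subset-of-regressors predictive mean, which replaces the O(n^3) exact GP solve with an O(n m^2) solve over m inducing points. The sketch below is a numpy toy on 1-D data with assumed kernel length-scale, noise level, and inducing-point count; it is one member of the family surveyed, not the review's own method.

```python
import numpy as np

def k(a, b, ls=1.0):
    # squared-exponential kernel between 1-D input arrays
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def sor_gp_mean(Xtr, ytr, Xu, Xte, noise=0.1):
    # subset-of-regressors predictive mean via m inducing points Xu
    Kun, Kuu = k(Xu, Xtr), k(Xu, Xu)
    A = Kun @ Kun.T / noise ** 2 + Kuu + 1e-8 * np.eye(len(Xu))  # m x m system
    w = np.linalg.solve(A, Kun @ ytr) / noise ** 2
    return k(Xte, Xu) @ w

Xtr = np.linspace(0.0, 6.0, 100)          # 100 noisy-free sine observations
pred = sor_gp_mean(Xtr, np.sin(Xtr),
                   np.linspace(0.0, 6.0, 15),   # 15 inducing points
                   np.array([3.0]))
```

With only 15 inducing points the prediction at x = 3 stays close to sin(3), illustrating the scalability/accuracy trade-off the review analyzes.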

Liu Haitao, Ong Yew-Soon, Shen Xiaobo, Cai Jianfei

2020-Jan-07

General General

Sleep Spindle Detection using RUSBoost and Synchrosqueezed Wavelet Transform.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society ; h5-index 0.0

Sleep spindles are important electroencephalographic (EEG) waveforms in sleep medicine; however, detecting spindles is burdensome even for experts, so automatic spindle detection methodologies have been investigated. Conventional methods utilize waveform template matching or machine learning for detecting spindles. In the former approach, it is necessary to tune thresholds for individual adaptation, while the latter approach suffers from the imbalanced data problem because the number of sleep spindles is small compared with the entire EEG record. The present work proposes a sleep spindle detection method that combines the synchrosqueezed wavelet transform (SST) and random under-sampling boosting (RUSBoost). SST is a time-frequency analysis method suitable for extracting features of spindle waveforms. RUSBoost is a framework for coping with the imbalanced data problem. The proposed SST-RUS can deal with the imbalanced data in spindle detection and does not require threshold tuning because RUSBoost uses majority voting of weak classifiers for discrimination. The performance of SST-RUS was validated using an open-access database called the Montreal archives of sleep studies cohort 1 (MASS-C1), which showed an F-measure of 0.70 with a sensitivity of 76.9% and a positive predictive value of 61.2%. The proposed method can reduce the burden of polysomnography (PSG) scoring.
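The random under-sampling at the heart of RUSBoost balances each boosting round's training set by keeping all minority-class (spindle) samples and subsampling the majority class to the same size. A stdlib sketch of that single step (the full algorithm then fits a weak learner per round and combines them by weighted vote):

```python
import random

def undersample(minority, majority, seed=0):
    # keep every minority sample; randomly subsample the majority to match
    rng = random.Random(seed)
    return list(minority), rng.sample(list(majority), len(minority))

spindles = list(range(5))        # few spindle epochs (toy data)
background = list(range(1000))   # many non-spindle epochs
pos, neg = undersample(spindles, background)
```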

Kinoshita Takafumi, Fujiwara Koichi, Kano Manabu, Ogawa Keiko, Sumi Yukiyoshi, Matsuo Masahiro, Kadotani Hiroshi

2020-Jan-07

General General

PWStableNet: Learning Pixel-wise Warping Maps for Video Stabilization.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society ; h5-index 0.0

As videos captured by hand-held cameras are often perturbed by high-frequency jitter, stabilization of these videos is an essential task. Many video stabilization methods have been proposed to stabilize shaky videos. However, most methods estimate one global homography or several homographies based on fixed meshes to warp the shaky frames into their stabilized views. Due to the existence of parallax, a single homography or a few homographies cannot handle depth variation well. In contrast to these traditional methods, we propose a novel video stabilization network, called PWStableNet, which produces pixel-wise warping maps, i.e., potentially different warping for different pixels, and stabilizes each pixel into its stabilized view. To the best of our knowledge, this is the first deep-learning-based pixel-wise video stabilization. The proposed method is built upon a multi-stage cascade encoder-decoder architecture and learns pixel-wise warping maps from consecutive unstable frames. Inter-stage connections are also introduced to add feature maps of a former stage to the corresponding feature maps of a later stage, which enables the later stage to learn the residual from the feature maps of former stages. This cascade architecture produces more precise warping maps at later stages. To ensure the correct learning of pixel-wise warping maps, we use a well-designed loss function to guide the training procedure of the proposed PWStableNet. The proposed stabilization method achieves performance comparable to traditional methods, with stronger robustness and much faster processing speed. Moreover, it outperforms some typical CNN-based stabilization methods, especially on videos with strong parallax. Code will be provided at https://github.com/mindazhao/pix-pix-warping-video-stabilization.
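Applying a pixel-wise warping map amounts to sampling each output pixel from a per-pixel displaced location, typically with bilinear interpolation. The numpy sketch below illustrates that sampling step on a single grayscale frame; the (dy, dx) map convention and border clipping are assumptions, and the real network of course predicts `flow` rather than taking it as input.

```python
import numpy as np

def warp(img, flow):
    # img: (H, W) frame; flow: (H, W, 2) per-pixel (dy, dx) warping map
    H, W = img.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    ys = np.clip(yy + flow[..., 0], 0, H - 1)
    xs = np.clip(xx + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = ys - y0, xs - x0
    # bilinear blend of the four neighbouring source pixels
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

img = np.arange(12, dtype=float).reshape(3, 4)
identity = warp(img, np.zeros((3, 4, 2)))   # zero map leaves the frame unchanged
```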

Zhao Minda, Ling Qiang

2020-Jan-07

General General

Deepzzle: Solving Visual Jigsaw Puzzles with Deep Learning and Shortest Path Optimization.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society ; h5-index 0.0

We tackle the image reassembly problem with wide spaces between the fragments, such that continuity of patterns and colors is mostly unusable. The spacing emulates the erosion from which archaeological fragments suffer. We crop the fragment borders into squares to compel our algorithm to learn from the content of the fragments. We also complicate the image reassembly by removing fragments and adding pieces from other sources. We use a two-step method to obtain the reassemblies: 1) a neural network predicts the positions of the fragments despite the gaps between them; 2) a graph that leads to the best reassemblies is built from these predictions. In this paper, we notably investigate the effect of branch-cuts in the graph of reassemblies. We also provide a comparison with the literature, solve complex image reassemblies, explore the dataset at length, and propose a new metric that suits its specificities.
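The second step, picking the best reassembly from a graph of position predictions, is a shortest-path search. A standard Dijkstra sketch over a toy graph (the node names and costs are made up; Deepzzle's graph encodes fragment placements and prediction scores, which this example does not model):

```python
import heapq

def shortest_path(graph, src, dst):
    # graph: {node: [(neighbour, cost), ...]}; returns (path, total cost)
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
path, cost = shortest_path(graph, "a", "c")
```

Branch-cuts, as investigated in the paper, amount to pruning edges of such a graph before the search.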

Paumard Marie-Morgane, Picard David, Tabia Hedi

2020-Jan-07

Surgery Surgery

Deep learning for US image quality assessment based on femoral cartilage boundaries detection in autonomous knee arthroscopy.

In IEEE transactions on ultrasonics, ferroelectrics, and frequency control ; h5-index 0.0

Knee arthroscopy is a complex minimally invasive surgery that can cause unintended injuries to femoral cartilage and/or post-operative complications. Autonomous robotic systems using real-time volumetric ultrasound (US) imaging guidance hold potential for significantly reducing these issues and for improving patient outcomes. To enable the robotic system to navigate autonomously in the knee joint, the imaging system should provide the robot with a real-time comprehensive map of the surgical site. To this end, the first step is automatic image quality assessment, to ensure that the boundaries of the relevant knee structures are defined well enough to be detected, outlined and then tracked. In this paper, a recently developed one-class classifier deep learning algorithm was used to discriminate between US images, acquired in a simulated surgical scenario, on which the femoral cartilage either could or could not be outlined. 38,656 2D US images were extracted from 151 3D US volumes collected from 6 volunteers, and were labelled as '1' or '0' when an expert was or was not able to outline the cartilage on the image, respectively. The algorithm was evaluated using the expert labels as ground truth with a 5-fold cross validation, where each fold was trained and tested on average with 15,640 and 6,246 labelled images, respectively. The algorithm reached a mean accuracy of 78.4 % ± 5.0, mean specificity of 72.5 % ± 9.4, mean sensitivity of 82.8 % ± 5.8 and mean area under the curve of 85 % ± 4.4. In addition, inter- and intra-observer tests involving two experts were performed on a subset of 1,536 2D US images. Percent agreement values of 0.89 and 0.93 were achieved between the two experts (i.e., inter-observer) and by each expert (i.e., intra-observer), respectively. These results show the feasibility of the first essential step in the development of automatic US image acquisition and interpretation systems for autonomous robotic knee arthroscopy.

Antico Maria, Fontanarosa Davide, Carneiro Gustavo, Vukovic Damjan, Camps Saskia M, Sasazawa Fumio, Takeda Yu, Le Anh T H, Jaiprakash Anjali T, Roberts Jonathan, Crawford Ross

2020-Jan-09

General General

A Deep Learning approach to Photoacoustic Wavefront Localization in Deep-Tissue Medium.

In IEEE transactions on ultrasonics, ferroelectrics, and frequency control ; h5-index 0.0

Optical photons undergo strong scattering when propagating beyond 1 mm deep inside biological tissue. Finding the origin of these diffused optical wavefronts is a challenging task. Breaking through the optical diffusion limit, photoacoustic imaging (PAI) provides high-resolution and label-free images of human vasculature with high contrast due to the optical absorption of hemoglobin. In real-time PAI, an ultrasound transducer array detects photoacoustic (PA) signals, and B-mode images are formed via delay-and-sum or frequency-domain beamforming. Fundamentally, the strength of a PA signal is proportional to the local optical fluence, which decreases with increasing depth due to depth-dependent optical attenuation. This limits the visibility of deep-tissue vasculature and other light-absorbing photoacoustic targets. To address this practical challenge, an encoder-decoder convolutional neural network architecture was constructed with custom modules and trained to identify the origin of photoacoustic wavefronts inside an optically scattering deep-tissue medium. A comprehensive ablation study provides strong evidence that each module improves localization accuracy. The network was trained on model-based simulated photoacoustic signals produced by 16,240 blood vessel targets subjected to both optical scattering and Gaussian noise. Test results on 4,600 simulated and five experimental PA signals collected under various scattering conditions show that the network can localize the targets with a mean error less than 30 μm (standard deviation 20.9 μm) for targets below 40 mm imaging depth, and 1.06 mm (standard deviation 2.68 mm) for targets at depths between 40 mm and 60 mm. The proposed work has broad applications such as diffused optical wavefront shaping, circulating melanoma cell detection, and real-time vascular surgeries (e.g., deep vein thrombosis).
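The delay-and-sum beamforming mentioned above sums each channel's sample at the acoustic time-of-flight from a candidate focus to that element (one-way for photoacoustics, since only the return path is acoustic). The numpy sketch below demonstrates this on a synthetic point source; the array geometry, sound speed, and sampling rate are assumed values, not the paper's setup.

```python
import numpy as np

def delay_and_sum(rf, element_x, focus, c=1540.0, fs=40e6):
    # rf: (n_elements, n_samples) received PA signals; element_x: element positions (m)
    fx, fz = focus
    total = 0.0
    for ch, ex in enumerate(element_x):
        delay = np.hypot(fx - ex, fz) / c       # one-way acoustic path to the focus
        idx = int(round(delay * fs))
        if idx < rf.shape[1]:
            total += rf[ch, idx]                # coherent sum when delays align
    return total

# synthetic point source at 20 mm depth: a unit pulse at each channel's true delay
element_x = np.linspace(-0.01, 0.01, 8)
rf = np.zeros((8, 2048))
for ch, ex in enumerate(element_x):
    idx = int(round(np.hypot(0.0 - ex, 0.02) / 1540.0 * 40e6))
    rf[ch, idx] = 1.0
focused = delay_and_sum(rf, element_x, (0.0, 0.02))   # coherent at the true source
```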

Johnstonbaugh Kerrick, Agrawal Sumit, Durairaj Deepit Abhishek, Fadden Christopher, Dangi Ajay, Karri Sri Phani Krishna, Kothapalli Sri-Rajasekhar

2020-Jan-07

Pathology Pathology

Explainable Anatomical Shape Analysis through Deep Hierarchical Generative Models.

In IEEE transactions on medical imaging ; h5-index 74.0

Quantification of anatomical shape changes currently relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of many conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled left ventricles when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set, and on hippocampi from healthy controls and patients with Alzheimer's disease when tested on ADNI data. More importantly, it enabled the visualisation in three dimensions of both global and regional anatomical features which better discriminate between the conditions under examination. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging.

Biffi Carlo, Doumou Georgia, Duan Jinming, Prasad Sanjay K, Cook Stuart A, O Regan Declan P, Rueckert Daniel, Cerrolaza Juan J, Tarroni Giacomo, Bai Wenjia, De Marvao Antonio, Oktay Ozan, Ledig Christian, Le Folgoc Loic, Kamnitsas Konstantinos

2020-Jan-06

General General

Radon Inversion via Deep Learning.

In IEEE transactions on medical imaging ; h5-index 74.0

The Radon transform is widely used in the physical and life sciences, and one of its major applications is medical X-ray computed tomography (CT), which is of great importance in disease screening and diagnosis. In this paper, we propose a novel reconstruction framework for Radon inversion with deep learning (DL) techniques. For simplicity, the proposed framework is denoted iRadonMAP, i.e., an inverse Radon transform approximation. Specifically, we construct an interpretable neural network that contains three dedicated components. The first component is a fully connected filtering (FCF) layer along the rotation-angle direction in the sinogram domain, and the second is a sinusoidal back-projection (SBP) layer, which back-projects the filtered sinogram data into the spatial domain. Next, a common network structure is added to further improve the overall performance. iRadonMAP is first pretrained on a large number of generic images from the ImageNet database and then fine-tuned with clinical patient data. The experimental results demonstrate the feasibility of the proposed iRadonMAP framework for Radon inversion.
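The back-projection step that the SBP layer learns has a simple classical counterpart: for each view angle, smear the (filtered) detector profile back across the image along that angle and average. The numpy sketch below implements plain unfiltered back-projection of a parallel-beam sinogram; the detector geometry and linear interpolation are assumptions, and the sketch omits the filtering that iRadonMAP's FCF layer replaces.

```python
import numpy as np

def back_project(sino, thetas, size):
    # sino: (n_angles, n_detectors) parallel-beam sinogram; thetas in radians
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size] - c
    for s, th in zip(sino, thetas):
        # detector coordinate of each pixel for this view, linearly interpolated
        t = xx * np.cos(th) + yy * np.sin(th) + (len(s) - 1) / 2.0
        t0 = np.clip(t.astype(int), 0, len(s) - 2)
        w = t - t0
        recon += (1 - w) * s[t0] + w * s[t0 + 1]
    return recon / len(thetas)

# a uniform sinogram back-projects to a uniform image
sino = np.ones((12, 16))
recon = back_project(sino, np.linspace(0, np.pi, 12, endpoint=False), 8)
```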

He Ji, Wang Yongbo, Ma Jianhua

2020-Jan-06

General General

Enabling a Single Deep Learning Model for Accurate Gland Instance Segmentation: A Shape-aware Adversarial Learning Framework.

In IEEE transactions on medical imaging ; h5-index 74.0

Segmenting gland instances in histology images is highly challenging as it requires not only detecting glands from a complex background but also separating each individual gland instance with accurate boundary detection. However, due to the boundary uncertainty problem in manual annotations, pixel-to-pixel matching based loss functions are too restrictive for simultaneous gland detection and boundary detection. State-of-the-art approaches adopted multi-model schemes, resulting in unnecessarily high model complexity and difficulties in the training process. In this paper, we propose to use one single deep learning model for accurate gland instance segmentation. To address the boundary uncertainty problem, instead of pixel-to-pixel matching, we propose a segment-level shape similarity measure to calculate the curve similarity between each annotated boundary segment and the corresponding detected boundary segment within a fixed searching range. As the segment-level measure allows location variations within a fixed range for shape similarity calculation, it has better tolerance to boundary uncertainty and is more effective for boundary detection. Furthermore, by adjusting the radius of the searching range, the segment-level shape similarity measure is able to deal with different levels of boundary uncertainty. Therefore, in our framework, images of different scales are down-sampled and integrated to provide both global and local contextual information for training, which is helpful in segmenting gland instances of different sizes. To reduce the variations of multi-scale training images, by referring to adversarial domain adaptation, we propose a pseudo domain adaptation framework for feature alignment. By constructing loss functions based on the segment-level shape similarity measure, combining with the adversarial loss function, the proposed shape-aware adversarial learning framework enables one single deep learning model for gland instance segmentation. 
Experimental results on the 2015 MICCAI Gland Challenge dataset demonstrate that the proposed framework achieves state-of-the-art performance with one single deep learning model. As the boundary uncertainty problem widely exists in medical image segmentation, it is broadly applicable to other applications.
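
The core idea of tolerating boundary-location variation within a fixed searching range can be illustrated with a minimal numpy sketch. The function below is a deliberately simplified stand-in for the paper's segment-level curve similarity: it only checks point-wise proximity within the radius, not curve shape.

```python
import numpy as np

def tolerant_boundary_score(annotated, detected, radius):
    # Fraction of annotated boundary points that have a detected boundary
    # point within `radius` pixels -- a simplified stand-in for a
    # segment-level similarity with a fixed searching range.
    if len(annotated) == 0:
        return 1.0
    d = np.linalg.norm(annotated[:, None, :] - detected[None, :, :], axis=2)
    return float((d.min(axis=1) <= radius).mean())

# a detected boundary shifted by one pixel from the annotation
ann = np.array([[0, 0], [0, 1], [0, 2], [0, 3]], dtype=float)
det = ann + np.array([1.0, 0.0])
score_tight = tolerant_boundary_score(ann, det, radius=0.5)   # -> 0.0
score_loose = tolerant_boundary_score(ann, det, radius=2.0)   # -> 1.0
```

Enlarging the radius makes the measure forgiving of annotation uncertainty, which mirrors how the paper adjusts the searching range for different uncertainty levels.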

Yan Zengqiang, Yang Xin, Cheng Kwang-Ting

2020-Jan-14

Radiology Radiology

Prediction of pathological complete response to neoadjuvant chemotherapy in breast cancer using a deep learning (DL) method.

In Thoracic cancer ; h5-index 0.0

BACKGROUND : The aim of the study was to develop a deep learning (DL) algorithm to evaluate the pathological complete response (pCR) to neoadjuvant chemotherapy in breast cancer.

METHODS : A total of 302 breast cancer patients in this retrospective study were randomly divided into a training set (n = 244) and a validation set (n = 58). Tumor regions were manually delineated on each slice by two expert radiologists on enhanced T1-weighted images. Pathological results were used as ground truth. The deep learning network contained five repetitions of convolution and max-pooling layers and ended with three dense layers. The pre-NAC and post-NAC models took six phases of pre-NAC and post-NAC images, respectively, as input. The combined model used 12 channels from six phases of pre-NAC and six phases of post-NAC images. All models above included three indexes of molecular type as one additional input channel.

RESULTS : The training set contained 137 non-pCR and 107 pCR participants. The validation set contained 33 non-pCR and 25 pCR participants. The area under the receiver operating characteristic (ROC) curve (AUC) of three models was 0.553 for pre-NAC, 0.968 for post-NAC and 0.970 for the combined data, respectively. A significant difference was found in AUC between using pre-NAC data alone and combined data (P < 0.001). The positive predictive value of the combined model was greater than that of the post-NAC model (100% vs. 82.8%, P = 0.033).

CONCLUSION : This study established a deep learning model to predict pCR status after neoadjuvant therapy by combining pre-NAC and post-NAC MRI data. The model performed better than using pre-NAC data only, and also performed better than using post-NAC data only.

KEY POINTS : Significant findings of the study: it achieved an AUC of 0.968 for pCR prediction and showed a significantly greater AUC than using pre-NAC data only. What this study adds: a deep learning model to predict pCR status after neoadjuvant therapy by combining pre-NAC and post-NAC MRI data.

Qu Yu-Hong, Zhu Hai-Tao, Cao Kun, Li Xiao-Ting, Ye Meng, Sun Ying-Shi

2020-Jan-16

Breast cancer, DCE-MRI, deep learning, pathologic complete response

General General

Analysis of Relevant Features from Photoplethysmographic Signals for Atrial Fibrillation Classification.

In International journal of environmental research and public health ; h5-index 73.0

Atrial Fibrillation (AF) is the most common cardiac arrhythmia found in clinical practice. It affects an estimated 33.5 million people, representing approximately 0.5% of the world's population. Electrocardiogram (ECG) is the main diagnostic criterion for AF. Recently, photoplethysmography (PPG) has emerged as a simple and portable alternative for AF detection. However, it is not completely clear which are the most important features of the PPG signal to perform this process. The objective of this paper is to determine which are the most relevant features for PPG signal analysis in the detection of AF. This study is divided into two stages: (a) a systematic review carried out following the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA) statement in six databases, in order to identify the features of the PPG signal reported in the literature for the detection of AF, and (b) an experimental evaluation of them, using machine learning, in order to determine which have the greatest influence on the process of detecting AF. Forty-four features were found when analyzing the signal in the time, frequency, or time-frequency domains. From those 44 features, 27 were implemented, and through machine learning, it was found that only 11 are relevant in the detection process. An algorithm was developed for the detection of AF based on these 11 features, which obtained an optimal performance in terms of sensitivity (98.43%), specificity (99.52%), and accuracy (98.97%).
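
As an illustration of the feature-relevance step, the sketch below ranks candidate features by absolute correlation with a binary AF label. The feature names and the simple ranking criterion are invented for illustration; the paper's actual selection used machine-learning models over the 27 implemented features.

```python
import numpy as np

def rank_features(X, y):
    # Rank features by absolute correlation with the binary label.
    # A simple stand-in for a machine-learning-based relevance analysis.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
    return np.argsort(-np.abs(corr))

# toy data: one label-linked feature, one pure-noise feature (both hypothetical)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
irregular_rr = y + 0.1 * rng.normal(size=200)   # strongly label-linked
noise_feat = rng.normal(size=200)               # irrelevant
X = np.column_stack([noise_feat, irregular_rr])
order = rank_features(X, y)   # the label-linked feature should rank first
```

A ranking of this kind, followed by re-training on the top-ranked subset, is how a 27-feature pool can be reduced to the 11 relevant features the paper reports.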

Millán César A, Girón Nathalia A, Lopez Diego M

2020-Jan-13

AF, PPG, atrial fibrillation, feature selection, photoplethysmography

General General

Cyber-Physiochemical Interfaces.

In Advanced materials (Deerfield Beach, Fla.) ; h5-index 0.0

Living things rely on various physical, chemical, and biological interfaces, e.g., somatosensation, olfactory/gustatory perception, and nervous system response. They help organisms to perceive the world, adapt to their surroundings, and maintain internal and external balance. Interfacial information exchanges are complicated but efficient, delicate but precise, and multimodal but unisonous, which has driven researchers to study the science of such interfaces and develop techniques with potential applications in health monitoring, smart robotics, future wearable devices, and cyber physical/human systems. To better understand the issues in these interfaces, a cyber-physiochemical interface (CPI) that is capable of extracting biophysical and biochemical signals, and closely relating them to electronic, communication, and computing technology, to provide the core for the aforementioned applications, is proposed. The scientific and technical progress in CPI is summarized, and the challenges to and strategies for building stable interfaces, including materials, sensor development, system integration, and data processing techniques, are discussed. It is hoped that this will result in an unprecedented multi-disciplinary network of scientific collaboration in CPI to explore much uncharted territory for progress, providing technical inspiration for the development of next-generation personal healthcare technology, smart sports technology, adaptive prosthetics and augmentation of human capability, etc.

Wang Ting, Wang Ming, Yang Le, Li Zhuyun, Loh Xian Jun, Chen Xiaodong

2020-Jan-15

artificial intelligence, healthcare, physiochemical interfaces, stretchable sensors

General General

Automatic Classification of Bloodstain Patterns Caused by Gunshot and Blunt Impact at Various Distances.

In Journal of forensic sciences ; h5-index 0.0

The forensics discipline of bloodstain pattern analysis plays an important role in crime scene analysis and reconstruction. One reconstruction question is whether the blood has been spattered via gunshot or blunt impact such as beating or stabbing. This paper proposes an automated framework to classify bloodstain spatter patterns generated under controlled conditions into either gunshot or blunt impact classes. Classification is performed using machine learning. The study uses 94 blood spatter patterns which are available as public data sets, designs a set of features with possible relevance to classification, and applies the random forests method to rank the most useful features and perform classification. The study shows that classification accuracy decreases with increasing distance between the target surface collecting the stains and the blood source. Based on the data set used in this study, the model achieves 99% accuracy in classifying spatter patterns at distances of 30 cm, 93% accuracy at distances of 60 cm, and 86% accuracy at distances of 120 cm. Results with 10 additional backspatter patterns also show that the presence of muzzle gases can reduce classification accuracy.

Liu Yu, Attinger Daniel, De Brabanter Kris

2020-Jan-16

bloodstain pattern analysis, classification, feature engineering, forensic science, gunshot spatters, image analysis, impact spatters, machine learning, random forests, spatter pattern

General General

Deep learning-based automated speech detection as a marker of social functioning in late-life depression.

In Psychological medicine ; h5-index 82.0

BACKGROUND : Late-life depression (LLD) is associated with poor social functioning. However, previous research uses bias-prone self-report scales to measure social functioning and a more objective measure is lacking. We tested a novel wearable device to measure speech that participants encounter as an indicator of social interaction.

METHODS : Twenty-nine participants with LLD and 29 age-matched controls wore a wrist-worn device continuously for seven days, which recorded their acoustic environment. Acoustic data were automatically analysed using deep learning models that had been developed and validated on an independent speech dataset. Total speech activity and the proportion of speech produced by the device wearer were both detected whilst maintaining participants' privacy. Participants underwent a neuropsychological test battery and clinical and self-report scales to measure severity of depression, and general and social functioning.

RESULTS : Compared to controls, participants with LLD showed poorer self-reported social and general functioning. Total speech activity was much lower for participants with LLD than controls, with no overlap between groups. The proportion of speech produced by the participants was smaller for LLD than controls. In LLD, both speech measures correlated with attention and psychomotor speed performance but not with depression severity or self-reported social functioning.

CONCLUSIONS : Using this device, LLD was associated with lower levels of speech than controls and speech activity was related to psychomotor retardation. We have demonstrated that speech activity measured by wearable technology differentiated LLD from controls with high precision and, in this study, provided an objective measure of an aspect of real-world social functioning in LLD.

Little Bethany, Alshabrawy Ossama, Stow Daniel, Ferrier I Nicol, McNaney Roisin, Jackson Daniel G, Ladha Karim, Ladha Cassim, Ploetz Thomas, Bacardit Jaume, Olivier Patrick, Gallagher Peter, O’Brien John T

2020-Jan-16

Ageing, deep learning, late-life depression, social functioning, speech, wearable technology

Radiology Radiology

Predicting Survival after Transarterial Chemoembolization for Hepatocellular Carcinoma Using a Neural Network: A Pilot Study.

In Liver international : official journal of the International Association for the Study of the Liver ; h5-index 0.0

BACKGROUND & AIMS : Deciding when to repeat and when to stop transarterial chemoembolization (TACE) in patients with hepatocellular carcinoma (HCC) can be difficult even for experienced investigators. Our aim was to develop a survival prediction model for such patients undergoing TACE using novel machine learning algorithms and to compare it to the conventional prediction scores ART, ABCR, and SNACOR.

METHODS : For this retrospective analysis, 282 patients who underwent TACE for HCC at our tertiary referral center between January 2005 and December 2017 were included in the final analysis. We built an artificial neural network (ANN) including all parameters used by the aforementioned risk scores and other clinically meaningful parameters. Following an 80:20 split, the first 225 patients were used for training; the more recently treated 20% were used for validation.

RESULTS : The ANN had a promising performance at predicting 1-year survival, with an area under the ROC curve (AUC) of 0.77±0.13. Internal validation yielded an AUC of 0.83±0.06, a positive predictive value of 87.5%, and a negative predictive value of 68.0%. The sensitivity was 77.8% and specificity 81.0%. In a head-to-head comparison, the ANN outperformed the aforementioned scoring systems, which yielded lower AUCs (SNACOR 0.73±0.07, ABCR 0.70±0.07, and ART 0.54±0.08). This difference reached significance for ART (p<0.001); for ABCR and SNACOR significance was not reached (p=0.143 and p=0.201).

CONCLUSIONS : ANNs could be better at predicting patient survival after TACE for HCC than traditional scoring systems. Once established, such prediction models could easily be deployed in clinical routine and help determine optimal patient care.

Mähringer-Kunz Aline, Wagner Franziska, Hahn Felix, Weinmann Arndt, Brodehl Sebastian, Schotten Sebastian, Hinrichs Jan Bernd, Düber Christoph, Galle Peter Robert, Pinto Dos Santos Daniel, Kloeckner Roman

2020-Jan-13

Hepatocellular carcinoma, chemoembolization, diagnostic accuracy study, neural network

General General

Prediction of progression from pre-diabetes to diabetes: Development and validation of a machine learning model.

In Diabetes/metabolism research and reviews ; h5-index 0.0

AIMS : Identification, a priori, of those at high risk of progression from pre-diabetes to diabetes may enable targeted delivery of interventional programmes while avoiding the burden of prevention and treatment in those at low risk. We studied whether the use of a machine-learning model can improve the prediction of incident diabetes utilizing patient data from electronic medical records.

METHODS : A machine-learning model predicting the progression from pre-diabetes to diabetes was developed using a gradient boosted trees model. The model was trained on data from The Health Improvement Network (THIN) database cohort, internally validated on THIN data not used for training, and externally validated on the Canadian AppleTree and the Israeli Maccabi Health Services (MHS) data sets. The model's predictive ability was compared with that of a logistic-regression model within each data set.

RESULTS : A cohort of 852 454 individuals with pre-diabetes (glucose ≥ 100 mg/dL and/or HbA1c ≥ 5.7) was used for model training including 4.9 million time points using 900 features. The full model was eventually implemented using 69 variables, generated from 11 basic signals. The machine-learning model demonstrated superiority over the logistic-regression model, which was maintained at all sensitivity levels - comparing AUC [95% CI] between the models; in the THIN data set (0.865 [0.860,0.869] vs 0.778 [0.773,0.784] P < .05), the AppleTree data set (0.907 [0.896, 0.919] vs 0.880 [0.867, 0.894] P < .05) and the MHS data set (0.925 [0.923, 0.927] vs 0.876 [0.872, 0.879] P < .05).
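
The model comparisons above are reported as AUCs; for reference, the AUC can be computed directly from scores and labels via the rank-based Mann-Whitney identity (the probability that a random positive outscores a random negative).

```python
import numpy as np

def auc(scores, labels):
    # Area under the ROC curve via the Mann-Whitney identity:
    # P(random positive outscores random negative); ties count one half.
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])   # -> 1.0
chance = auc([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])    # -> 0.5
```

This pairwise formulation makes it clear why AUC is insensitive to any monotone rescaling of the model's output scores.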

CONCLUSIONS : Machine-learning models preserve their performance across populations in diabetes prediction, and can be integrated into large clinical systems, leading to judicious selection of persons for interventional programmes.

Cahn Avivit, Shoshan Avi, Sagiv Tal, Yesharim Rachel, Goshen Ran, Shalev Varda, Raz Itamar

2020-Jan-14

electronic medical records, machine learning, pre-diabetes

Radiology Radiology

Top 10 Reviewer Critiques of Radiology Artificial Intelligence (AI) Articles: Qualitative Thematic Analysis of Reviewer Critiques of Machine Learning/Deep Learning Manuscripts Submitted to JMRI.

In Journal of magnetic resonance imaging : JMRI ; h5-index 0.0

BACKGROUND : Classical machine learning (ML) and deep learning (DL) articles have rapidly captured the attention of the radiology research community and comprise an increasing proportion of articles submitted to JMRI, of variable reporting and methodological quality.

PURPOSE : To identify the most frequent reviewer critiques of classical ML and DL articles submitted to JMRI.

STUDY TYPE : Qualitative thematic analysis.

POPULATION : In all, 1396 manuscripts submitted to JMRI for consideration in 2018, with thematic analysis performed on reviewer critiques of 38 artificial intelligence (AI) articles (24 ML and 14 DL), submitted from January 9, 2018 to June 2, 2018.

FIELD STRENGTH/SEQUENCE : N/A.

ASSESSMENT : After identifying and sampling ML and DL articles, and collecting all reviews, qualitative thematic analysis was performed to identify major and minor themes of reviewer critiques.

STATISTICAL TESTS : Descriptive statistics provided of article characteristics, and thematic review of major and minor themes.

RESULTS : Thirty-eight articles were sampled for thematic review: 24 (63.2%) focused on classical ML and 14 (36.8%) on DL. The overall acceptance rate of classical ML/DL articles was 28.9%, similar to the overall 2017-2019 acceptance rate of 23.1-28.1%. These articles resulted in 72 reviews analyzed, yielding a total of 713 critiques that underwent formal thematic analysis consensus encoding. Ten major themes of critiques were identified, the most frequent being Lack of Information, which comprised 268 (37.6%) of all critiques. Frequent minor themes of critiques concerning ML/DL-specific recommendations included performing basic clinical statistics, such as ensuring similarity of training and test groups (N = 26), emphasizing strong clinical gold standards as the basis of training labels (N = 19), and ensuring strong radiological relevance of the topic and task performed (N = 16).

DATA CONCLUSION : Standardized reporting of ML and DL methods could help address nearly one-third of all reviewer critiques made.

LEVEL OF EVIDENCE : 4 Technical Efficacy Stage: 1 J. Magn. Reson. Imaging 2020.

Gregory Jules, Welliver Sara, Chong Jaron

2020-Jan-13

artificial intelligence, machine learning, thematic analysis

Radiology Radiology

Rapid dealiasing of undersampled, non-Cartesian cardiac perfusion images using U-net.

In NMR in biomedicine ; h5-index 41.0

Compressed sensing (CS) is a promising method for accelerating cardiac perfusion MRI to achieve clinically acceptable image quality with high spatial resolution (1.6 × 1.6 × 8 mm3) and extensive myocardial coverage (6-8 slices per heartbeat). A major disadvantage of CS is its relatively lengthy processing time (~8 min per slice with 64 frames using a graphics processing unit), thereby making it impractical for clinical translation. The purpose of this study was to implement and test whether an image reconstruction pipeline including a neural network is capable of reconstructing 6.4-fold accelerated, non-Cartesian (radial) cardiac perfusion k-space data at least 10 times faster than CS, without significant loss in image quality. We implemented a 3D (2D + time) U-Net and trained it with 132 2D + time datasets (coil combined, zero filled as input; CS reconstruction as reference) with 64 time frames from 28 patients (8448 2D images in total). For testing, we used 56 2D + time coil-combined, zero-filled datasets (3584 2D images in total) from 12 different patients as input to our trained U-Net, and compared the resulting images with CS reconstructed images using quantitative metrics of image quality and visual scores (conspicuity of wall enhancement, noise, artifacts; each score ranging from 1 (worst) to 5 (best), with 3 defined as clinically acceptable) evaluated by readers. Including pre- and post-processing steps, compared with CS, U-Net significantly reduced the reconstruction time by 14.4-fold (32.1 ± 1.4 s for U-Net versus 461.3 ± 16.9 s for CS, p < 0.001), while maintaining high data fidelity (structural similarity index = 0.914 ± 0.023, normalized root mean square error = 1.7 ± 0.3%, identical mean edge sharpness of 1.2 mm). The median visual summed score was not significantly different (p = 0.053) between CS (14; interquartile range (IQR) = 0.5) and U-Net (12; IQR = 0.5). 
This study shows that the proposed pipeline with a U-Net is capable of reconstructing 6.4-fold accelerated, non-Cartesian cardiac perfusion k-space data 14.4 times faster than CS, without significant loss in data fidelity or image quality.

Fan Lexiaozi, Shen Daming, Haji-Valizadeh Hassan, Naresh Nivedita K, Carr James C, Freed Benjamin H, Lee Daniel C, Kim Daniel

2020-Jan-14

U-net, cardiac perfusion, compressed sensing, deep learning

Surgery Surgery

Expression scoring of a small-nucleolar-RNA signature identified by machine learning serves as a prognostic predictor for head and neck cancer.

In Journal of cellular physiology ; h5-index 0.0

Head and neck squamous cell carcinoma (HNSCC) is a common malignancy with high mortality and poor prognosis due to a lack of predictive markers. Increasing evidence has demonstrated that small nucleolar RNAs (snoRNAs) play an important role in tumorigenesis. The aim of this study was to identify a prognostic snoRNA signature of HNSCC. Survival-related snoRNAs were screened by Cox regression analysis (univariate, least absolute shrinkage and selection operator, and multivariate). The predictive value was validated in different subgroups. The biological functions were explored by coexpression analysis and gene set enrichment analysis (GSEA). One hundred and thirteen survival-related snoRNAs were identified, and a five-snoRNA signature predicted prognosis with high sensitivity and specificity. Furthermore, the signature was applicable to patients of different sexes, ages, stages, grades, and anatomic subdivisions. Coexpression analysis and GSEA revealed that the five snoRNAs are involved in regulating the malignant phenotype and DNA/RNA editing. This five-snoRNA signature is not only a promising predictor of prognosis and survival but also a potential biomarker for patient stratification management.

Xing Lu, Zhang Xiaoqi, Zhang Xiaoqian, Tong Dongdong

2020-Jan-14

biomarker, head and neck squamous cell carcinoma, noncoding RNA, prognosis, snoRNA, survival

Public Health Public Health

Dynamic dengue haemorrhagic fever calculators as clinical decision support tools in adult dengue.

In Transactions of the Royal Society of Tropical Medicine and Hygiene ; h5-index 31.0

BACKGROUND : The objective of this study was to develop multiple prediction tools that calculate the risk of developing dengue haemorrhagic fever.

METHODS : Training data consisted of 1771 individuals admitted with dengue fever from 2006-2008, of whom 304 developed dengue haemorrhagic fever during hospitalisation. Least absolute shrinkage and selection operator (LASSO) regression was used to construct three types of calculators: static admission calculators, and dynamic calculators that predict the risk of developing dengue haemorrhagic fever for a subsequent day (DAily Risk Tomorrow [DART]) or for any future point after a specific day since fever onset (DAily Risk Ever [DARE]).

RESULTS : From 119 admission covariates, 35 were in at least one of the calculators, which reported area under the curve (AUC) values of at least 0.72. Addition of person-time data for DART improved AUC to 0.76. DARE calculators displayed a large increase in AUC to 0.79 past day 7 with the inclusion of a strong predictor, maximum temperature on day 6 since onset, indicative of a saddleback fever.

CONCLUSIONS : All calculators performed well when validated with 2005 data. Addition of daily variables further improved the accuracy. These calculators can be used in tandem to assess the risk of dengue haemorrhagic fever upon admission and updated daily to obtain more precise risk estimates.
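
The narrowing from 119 admission covariates to 35 is characteristic of LASSO's soft-thresholding, which sets weakly relevant coefficients exactly to zero. A minimal coordinate-descent sketch with invented toy data follows; it is illustrative only, not the study's implementation, and it assumes roughly standardised covariates.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding: shrink toward zero; small coefficients become exactly 0.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=200):
    # Minimal coordinate-descent LASSO for a linear model.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # residual excluding feature j
            beta[j] = soft_threshold(X[:, j] @ r / n, lam) / (X[:, j] @ X[:, j] / n)
    return beta

# toy data: 5 candidate covariates, only the first two truly predictive
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
beta = lasso_coordinate_descent(X, y, lam=0.5)   # beta[2:] shrink to exactly 0
```

The study's calculators use the Cox-regression form of this penalty, but the selection mechanism, i.e. the coefficients of uninformative covariates being driven exactly to zero, is the same.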

Tan Ken Wei, Tan Ben, Thein Tun L, Leo Yee-Sin, Lye David C, Dickens Borame L, Wong Joshua Guo Xian, Cook Alex R

2020-Jan-06

Clinical Decision Support Tools, Dengue, Machine Learning

General General

A review of electronic skin: soft electronics and sensors for human health.

In Journal of materials chemistry. B ; h5-index 0.0

This article reviews several categories of electronic skins (e-skins) for monitoring signals involved in human health. It covers advanced candidate materials, compositions, structures, and integration strategies of e-skins, focusing on stretchable and wearable electronics. In addition, this article further discusses the potential applications and expected development of e-skins. E-skins may enable a new generation of sensors that bring artificial intelligence into the clinic and daily healthcare.

Zhang Songyue, Li Shunbo, Xia Zengzilu, Cai Kaiyong

2020-Jan-16

General General

Single-Neuron Adaptive Hysteresis Compensation of Piezoelectric Actuator Based on Hebb Learning Rules.

In Micromachines ; h5-index 0.0

This paper presents an adaptive hysteresis compensation approach for a piezoelectric actuator (PEA) using single-neuron adaptive control. For a given desired trajectory, the control input to the PEA is dynamically adjusted by the error between the actual and desired trajectories using Hebb learning rules. A single neuron with self-learning and self-adaptive capabilities is a non-linear processing unit, which is ideal for time-variant systems. Based on the single-neuron control, the compensation of the PEA's hysteresis can be regarded as a process of transmitting biological neuron information. Through the error information between the actual and desired trajectories, the control input is adjusted via the weight adjustment method of neuron learning. In addition, this paper also integrates the combination of Hebb learning rules and supervised learning as teacher signals, which can quickly respond to control signals. The weights of the single-neuron controller can be constantly adjusted online to improve the control performance of the system. Experimental results show that the proposed single-neuron adaptive hysteresis compensation method can track continuous and discontinuous trajectories well. The single-neuron adaptive controller has better adaptive and self-learning performance against the rate-dependence of the PEA's hysteresis.
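
The scheme described can be sketched as a toy numpy simulation: a single neuron whose inputs are the tracking error and its increment, with weights adapted online by a supervised Hebb rule. The first-order plant, gains, and weight clipping below are assumptions for illustration; a real piezoelectric actuator with rate-dependent hysteresis is far more complex.

```python
import numpy as np

def single_neuron_track(ref, K=0.3, eta=0.05):
    # Single-neuron adaptive controller with a supervised Hebb rule.
    # Neuron inputs x = [error, error increment]; the control input is
    # adjusted incrementally through normalised weights, and the weights
    # are adapted online as w += eta * e * u * x (teacher signal = error).
    w = np.array([0.5, 0.5])
    y, u, e_prev = 0.0, 0.0, 0.0
    errs = []
    for r in ref:
        e = r - y
        x = np.array([e, e - e_prev])
        u = u + K * np.dot(w / (np.abs(w).sum() + 1e-9), x)
        w = np.clip(w + eta * e * u * x, 0.0, 5.0)   # clipped for robustness
        y = 0.8 * y + 0.2 * u      # assumed toy first-order plant, not a PEA model
        e_prev = e
        errs.append(abs(e))
    return np.array(errs)

ref = np.sin(0.05 * np.arange(400))
errs = single_neuron_track(ref)    # tracking error stays well below |ref|
```

The incremental control law with inputs [e, Δe] behaves like a PI controller whose gain balance is retuned online by the Hebb updates, which is what gives the scheme its adaptivity to time-variant behaviour.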

Qin Yanding, Duan Heng

2020-Jan-12

Hebb learning rules, hysteresis compensation, piezoelectric actuator, single-neuron adaptive control, supervised learning

General General

10th World Orphan Drug Congress (WODC) (November 12-14, 2019 - Barcelona, Spain).

In Drugs of today (Barcelona, Spain : 1998) ; h5-index 0.0

The 10th World Orphan Drug Congress (WODC), now recognized as the largest and most established European orphan drug event, took place once again November 12-14, 2019, in Barcelona, Spain. As in previous years, the more than 600 attendees comprised government authorities, payers, industry and patient advocacy groups, as well as biotech start-ups and investors. The 2019 congress aimed to address the strategic and commercial aspects of bringing new treatments to rare disease patients. The more than 200 speakers discussed many different rare disease aspects of clinical and product development, market access and pricing, manufacture, science and strategy, and precision medicine. A co-conference on Cell and Gene Therapy was also organized. This rare disease conference addressed many different challenges in the field, with numerous discussions on how to improve cross-border communications, how to better identify patients and shorten their diagnosis time using new tools such as artificial intelligence and machine learning, and how to improve usage of patient data and patient empowerment.

Künnemann K

2019-Dec

AVR-RD-01, AVR-RD-02, AVR-RD-03, AVR-RD-04, Artificial intelligence, Gaboxadol, HMI-102, Machine learning, Patient advocacy groups

General General

Efficient Ultrasound Image Analysis Models with Sonographer Gaze Assisted Distillation.

In Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention ; h5-index 0.0

Recent automated medical image analysis methods have attained state-of-the-art performance but have relied on memory and compute-intensive deep learning models. Reducing model size without significant loss in performance metrics is crucial for time and memory-efficient automated image-based decision-making. Traditional deep learning based image analysis only uses expert knowledge in the form of manual annotations. Recently, there has been interest in introducing other forms of expert knowledge into deep learning architecture design. This is the approach considered in this paper, where we propose to combine ultrasound video with the point of gaze tracked for expert sonographers as they scan, in order to train memory-efficient ultrasound image analysis models. Specifically, we develop teacher-student knowledge transfer models for the exemplar task of frame classification for the fetal abdomen, head, and femur. The best performing memory-efficient models attain performance within 5% of conventional models that are 1000× larger in size.
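
Teacher-student knowledge transfer of this kind is commonly implemented with a temperature-softened KL divergence between the large teacher's outputs and the small student's outputs. The generic sketch below shows that loss only; the paper's gaze-assisted variant additionally distils sonographer gaze information, which is not modelled here.

```python
import numpy as np

def softened_softmax(logits, T):
    # Softmax over logits divided by temperature T (numerically stabilised).
    z = np.asarray(logits, float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 (the standard distillation loss).
    p = softened_softmax(teacher_logits, T)
    q = softened_softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])   # -> 0.0
diff = distillation_loss([0.1, 0.1, 0.1], [2.0, 0.5, -1.0])    # > 0
```

A high temperature exposes the teacher's relative confidence across wrong classes ("dark knowledge"), which is the signal that lets a far smaller student approach the teacher's accuracy.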

Patra Arijit, Cai Yifan, Chatelain Pierre, Sharma Harshita, Drukker Lior, Papageorghiou Aris, Noble J Alison

2019

Expert knowledge, Gaze tracking, Model compression

General General

A distributional code for value in dopamine-based reinforcement learning.

In Nature ; h5-index 368.0

Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain1-3. According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning4-6. We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
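
The hypothesis can be illustrated with a toy numpy simulation: value predictors that scale positive and negative prediction errors by different learning rates converge to different expectiles of the reward distribution, so the population spans the distribution instead of collapsing to its mean. The reward values and learning rates below are invented for illustration, and this is not the authors' analysis code.

```python
import numpy as np

def train_value_predictors(rewards, alpha_pos, alpha_neg, n_steps=20000, seed=1):
    # Each predictor scales positive prediction errors by alpha+ and
    # negative ones by alpha-; the asymmetry drives V_i toward an
    # expectile of the reward distribution rather than its mean.
    V = np.zeros(len(alpha_pos))
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        r = rng.choice(rewards)          # sample a stochastic reward
        delta = r - V                    # per-predictor prediction error
        V += np.where(delta > 0, alpha_pos, alpha_neg) * delta
    return V

rewards = np.array([0.0, 10.0])          # bimodal reward, mean 5
V = train_value_predictors(rewards,
                           alpha_pos=np.array([0.002, 0.01, 0.018]),
                           alpha_neg=np.array([0.018, 0.01, 0.002]))
# "optimistic" predictors (large alpha+ relative to alpha-) settle above
# the mean, "pessimistic" ones below it
```

The fixed point satisfies alpha+ · E[(r − V)+] = alpha− · E[(V − r)+], so the ratio alpha+/(alpha+ + alpha−) determines which expectile each predictor encodes, which is the empirical signature tested in the dopamine recordings.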

Dabney Will, Kurth-Nelson Zeb, Uchida Naoshige, Starkweather Clara Kwon, Hassabis Demis, Munos Rémi, Botvinick Matthew

2020-Jan-15

General General

Improved protein structure prediction using potentials from deep learning.

In Nature ; h5-index 368.0

Protein structure prediction can be used to determine the three-dimensional shape of a protein from its amino acid sequence1. This problem is of fundamental importance as the structure of a protein largely determines its function2; however, protein structures can be difficult to determine experimentally. Considerable progress has recently been made by leveraging genetic information. It is possible to infer which amino acid residues are in contact by analysing covariation in homologous sequences, which aids in the prediction of protein structures3. Here we show that we can train a neural network to make accurate predictions of the distances between pairs of residues, which convey more information about the structure than contact predictions. Using this information, we construct a potential of mean force4 that can accurately describe the shape of a protein. We find that the resulting potential can be optimized by a simple gradient descent algorithm to generate structures without complex sampling procedures. The resulting system, named AlphaFold, achieves high accuracy, even for sequences with fewer homologous sequences. In the recent Critical Assessment of Protein Structure Prediction5 (CASP13)-a blind assessment of the state of the field-AlphaFold created high-accuracy structures (with template modelling (TM) scores6 of 0.7 or higher) for 24 out of 43 free modelling domains, whereas the next best method, which used sampling and contact information, achieved such accuracy for only 14 out of 43 domains. AlphaFold represents a considerable advance in protein-structure prediction. We expect this increased accuracy to enable insights into the function and malfunction of proteins, especially in cases for which no structures for homologous proteins have been experimentally determined7.

Senior Andrew W, Evans Richard, Jumper John, Kirkpatrick James, Sifre Laurent, Green Tim, Qin Chongli, Žídek Augustin, Nelson Alexander W R, Bridgland Alex, Penedones Hugo, Petersen Stig, Simonyan Karen, Crossan Steve, Kohli Pushmeet, Jones David T, Silver David, Kavukcuoglu Koray, Hassabis Demis

2020-Jan-15

Radiology Radiology

Improved small blob detection in 3D images using jointly constrained deep learning and Hessian analysis.

In Scientific reports ; h5-index 158.0

Imaging biomarkers are being rapidly developed for early diagnosis and staging of disease. The development of these biomarkers requires advances in both image acquisition and analysis. Detecting and segmenting objects from images are often the first steps in quantitative measurement of these biomarkers. The challenges of detecting objects in images, particularly small objects known as blobs, include low image resolution, image noise and overlap between the blobs. The Difference of Gaussian (DoG) detector has been used to overcome these challenges in blob detection. However, the DoG detector is susceptible to over-detection and must be refined for robust, reproducible detection in a wide range of medical images. In this research, we propose a jointly constrained blob detector that combines U-Net, a deep learning model, with Hessian analysis to overcome these problems and identify true blobs in noisy medical images. We evaluate this approach, UH-DoG, using a public 2D fluorescence dataset for cell nucleus detection and a 3D kidney magnetic resonance imaging dataset for glomerulus detection, and compare it to methods in the literature. While comparable to four competing methods on recall, UH-DoG outperforms them on both precision and F-score.
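As background, the Difference-of-Gaussian detector that the paper refines subtracts two blurred copies of the image; bright blobs at the matched scale produce local maxima in the response. A minimal 2D NumPy sketch (the `sigma` and scale ratio `k` are illustrative, and this omits the paper's U-Net and Hessian constraints):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D normalized Gaussian kernel, truncated at ~3 sigma."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth2d(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_response(img, sigma=2.0, k=1.6):
    """DoG response: bright blobs of scale ~sigma give local maxima."""
    return smooth2d(img, sigma) - smooth2d(img, k * sigma)

# A point-like bright blob produces a response peak at its location.
img = np.zeros((41, 41))
img[20, 20] = 1.0
resp = dog_response(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

On real, noisy images this raw detector over-fires, which is exactly the failure mode the joint U-Net and Hessian constraints are meant to suppress.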

Xu Yanzhe, Wu Teresa, Gao Fei, Charlton Jennifer R, Bennett Kevin M

2020-Jan-15

General General

Rumor Propagation is Amplified by Echo Chambers in Social Media.

In Scientific reports ; h5-index 158.0

Spreading rumors on the Internet has become increasingly pervasive due to the proliferation of online social media. This paper investigates how rumors are amplified by a group of users who share similar interests or views, dubbed as an echo chamber. To this end, we identify and analyze 'rumor' echo chambers, each of which is a group of users who have participated in propagating common rumors. By collecting and analyzing 125 recent rumors from six popular fact-checking sites, and their associated 289,202 tweets/retweets generated by 176,362 users, we find that the rumors that are spread by rumor echo chamber members tend to be more viral and quickly propagated than those that are not spread by echo chamber members. We propose the notion of an echo chamber network that represents relations among rumor echo chambers. By identifying the hub rumor echo chambers (in terms of connectivity to other rumor echo chambers) in the echo chamber network, we show that the top 10% of hub rumor echo chambers contribute to propagation of 24% rumors by eliciting more than 36% of retweets, implying that core rumor echo chambers significantly contribute to rumor spreads.
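The notion of hub echo chambers can be made concrete with a small sketch: treat each chamber as a set of users, link two chambers when they share users, and rank chambers by connectivity (degree). This is an illustrative reconstruction, not the authors' pipeline; the shared-user linking rule and the `top_frac` cut-off are assumptions:

```python
from itertools import combinations

def echo_chamber_network(chambers):
    """chambers: dict mapping chamber id -> set of user ids.
    Two chambers are linked when they share at least one user."""
    edges = set()
    for a, b in combinations(chambers, 2):
        if chambers[a] & chambers[b]:
            edges.add((a, b))
    return edges

def hub_chambers(chambers, top_frac=0.1):
    """Rank chambers by degree in the echo chamber network and
    return the top fraction as the hub chambers."""
    edges = echo_chamber_network(chambers)
    degree = {c: 0 for c in chambers}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    ranked = sorted(chambers, key=lambda c: degree[c], reverse=True)
    n = max(1, int(top_frac * len(chambers)))
    return ranked[:n]
```

With real data, the retweet counts elicited by these hub chambers would then be compared against the rest, as in the 10%-of-hubs finding above.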

Choi Daejin, Chun Selin, Oh Hyunchul, Han Jinyoung, Kwon Ted Taekyoung

2020-Jan-15

General General

Machine Learning Classifies Core and Outer Fucosylation of N-Glycoproteins Using Mass Spectrometry.

In Scientific reports ; h5-index 158.0

Protein glycosylation is known to be involved in biological processes such as cell recognition, growth, differentiation, and apoptosis. Fucosylation of glycoproteins plays an important role in the structural stability and function of N-linked glycoproteins. Although many biological and clinical studies of protein fucosylation by fucosyltransferases have been reported, structural classification of fucosylated N-glycoproteins into core or outer isoforms remains a challenge. Here, we report for the first time the classification of N-glycopeptides as core- and outer-fucosylated types using tandem mass spectrometry (MS/MS) and machine learning algorithms such as the deep neural network (DNN) and support vector machine (SVM). Training and test sets of more than 800 MS/MS spectra of N-glycopeptides from the immunoglobulin gamma and alpha-1-acid glycoprotein standards were selected for classification of the fucosylation types using supervised learning models. The best-performing model had an accuracy of more than 99% against manual characterization and area under the curve values greater than 0.99, which were calculated from probability scores on target and decoy datasets. Finally, this model was applied to classify fucosylated N-glycoproteins from human plasma. A total of 82 N-glycopeptides, with 54 core-, 24 outer-, and 4 dual-fucosylation types derived from 54 glycoproteins, were classified as the same type by both the DNN and SVM. Specifically, outer fucosylation was dominant in tri- and tetra-antennary N-glycopeptides, while core fucosylation was dominant in the mono-, bi-antennary and hybrid types of N-glycoproteins in human plasma. Thus, machine learning methods can be combined with MS/MS to distinguish between different isoforms of fucosylated N-glycopeptides.

Hwang Heeyoun, Jeong Hoi Keun, Lee Hyun Kyoung, Park Gun Wook, Lee Ju Yeon, Lee Soo Youn, Kang Young-Mook, An Hyun Joo, Kang Jeong Gu, Ko Jeong-Heon, Kim Jin Young, Yoo Jong Shin

2020-Jan-15

General General

A multimodal neuroimaging classifier for alcohol dependence.

In Scientific reports ; h5-index 158.0

With progress in magnetic resonance imaging technology and a broader dissemination of state-of-the-art imaging facilities, the acquisition of multiple neuroimaging modalities is becoming increasingly feasible. One particular hope associated with multimodal neuroimaging is the development of reliable data-driven diagnostic classifiers for psychiatric disorders, yet previous studies have often failed to find a benefit of combining multiple modalities. As a psychiatric disorder with established neurobiological effects at several levels of description, alcohol dependence is particularly well-suited for multimodal classification. To this aim, we developed a multimodal classification scheme and applied it to a rich neuroimaging battery (structural, functional task-based and functional resting-state data) collected in a matched sample of alcohol-dependent patients (N = 119) and controls (N = 97). We found that our classification scheme yielded 79.3% diagnostic accuracy, which outperformed the strongest individual modality - grey-matter density - by 2.7%. We found that this moderate benefit of multimodal classification depended on a number of critical design choices: a procedure to select optimal modality-specific classifiers, a fine-grained ensemble prediction based on cross-modal weight matrices and continuous classifier decision values. We conclude that the combination of multiple neuroimaging modalities is able to moderately improve the accuracy of machine-learning-based diagnostic classification in alcohol dependence.
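The fine-grained ensemble over continuous classifier decision values can be sketched as a weighted vote across modality-specific classifiers, with weights derived from validation performance. The weighting scheme below is an assumption for illustration only; it is not the paper's cross-modal weight matrices:

```python
def modality_weights(val_accuracies):
    """Turn per-modality validation accuracies into ensemble weights,
    shifted so chance-level (0.5) modalities get zero weight."""
    shifted = [max(a - 0.5, 0.0) for a in val_accuracies]
    total = sum(shifted) or 1.0
    return [s / total for s in shifted]

def ensemble_predict(decision_values, weights):
    """Weighted sum of continuous per-modality decision values;
    a positive combined score votes patient (+1), else control (-1)."""
    score = sum(w * d for w, d in zip(weights, decision_values))
    return 1 if score > 0 else -1
```

Using the continuous decision values rather than each modality's hard label is what lets a confident modality outvote several uncertain ones.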

Guggenmos Matthias, Schmack Katharina, Veer Ilya M, Lett Tristram, Sekutowicz Maria, Sebold Miriam, Garbusow Maria, Sommer Christian, Wittchen Hans-Ulrich, Zimmermann Ulrich S, Smolka Michael N, Walter Henrik, Heinz Andreas, Sterzer Philipp

2020-Jan-15

General General

Long-lead Prediction of ENSO Modoki Index using Machine Learning algorithms.

In Scientific reports ; h5-index 158.0

The focus of this study is to evaluate the efficacy of Machine Learning (ML) algorithms in the long-lead prediction of El Niño (La Niña) Modoki (ENSO Modoki) index (EMI). We evaluated two widely used non-linear ML algorithms namely Support Vector Regression (SVR) and Random Forest (RF) to forecast the EMI at various lead times, viz. 6, 12, 18 and 24 months. The predictors for the EMI are identified using Kendall's tau correlation coefficient between the monthly EMI index and the monthly anomalies of the slowly varying climate variables such as sea surface temperature (SST), sea surface height (SSH) and soil moisture content (SMC). The importance of each of the predictors is evaluated using the Supervised Principal Component Analysis (SPCA). The results indicate both SVR and RF to be capable of forecasting the phase of the EMI realistically at both 6-months and 12-months lead times though the amplitude of the EMI is underestimated for the strong events. The analysis also indicates the SVR to perform better than the RF method in forecasting the EMI.
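Predictor screening via Kendall's tau, as used here to select SST, SSH, and SMC anomalies, can be sketched in plain Python. The O(n²) tau-a below (which ignores tie corrections) and the `threshold` value are simplifications, not the authors' exact procedure:

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def screen_predictors(target, candidates, threshold=0.3):
    """Keep candidate series whose |tau| with the target exceeds threshold."""
    return [name for name, series in candidates.items()
            if abs(kendall_tau(target, series)) > threshold]
```

For lagged forecasting, each candidate series would be shifted by the lead time (6 to 24 months) before computing tau against the EMI.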

Pal Manali, Maity Rajib, Ratnam J V, Nonaka Masami, Behera Swadhin K

2020-Jan-15

Ophthalmology Ophthalmology

Quantification of Retinal Nerve Fibre Layer Thickness on Optical Coherence Tomography with a Deep Learning Segmentation-Free Approach.

In Scientific reports ; h5-index 158.0

This study describes a segmentation-free deep learning (DL) algorithm for measuring retinal nerve fibre layer (RNFL) thickness on spectral-domain optical coherence tomography (SDOCT). The study included 25,285 B-scans from 1,338 eyes of 706 subjects. Training was done to predict RNFL thickness from raw unsegmented scans using conventional RNFL thickness measurements from good-quality images as targets, forcing the DL algorithm to learn its own representation of the RNFL. The algorithm was tested in three different sets: (1) images without segmentation errors or artefacts, (2) low-quality images with segmentation errors, and (3) images with other artefacts. In test set 1, segmentation-free RNFL predictions were highly correlated with conventional RNFL thickness (r = 0.983, P < 0.001). In test set 2, segmentation-free predictions had higher correlation with the best available estimate (good-quality tests taken on the same date) than those from the conventional algorithm (r = 0.972 vs. r = 0.829, respectively; P < 0.001). Segmentation-free predictions were also better in test set 3 (r = 0.940 vs. r = 0.640, P < 0.001). In conclusion, a novel segmentation-free algorithm to extract RNFL thickness performed similarly to the conventional method on good-quality images and better on images with errors or other artefacts.

Mariottoni Eduardo B, Jammal Alessandro A, Urata Carla N, Berchuck Samuel I, Thompson Atalie C, Estrela Tais, Medeiros Felipe A

2020-Jan-15

General General

Predicting high-risk opioid prescriptions before they are given.

In Proceedings of the National Academy of Sciences of the United States of America ; h5-index 0.0

Misuse of prescription opioids is a leading cause of premature death in the United States. We use state government administrative data and machine learning methods to examine whether the risk of future opioid dependence, abuse, or poisoning can be predicted in advance of an initial opioid prescription. Our models accurately predict these outcomes and identify particular prior nonopioid prescriptions, medical history, incarceration, and demographics as strong predictors. Using our estimates, we simulate a hypothetical policy which restricts new opioid prescriptions to only those with low predicted risk. The policy's potential benefits likely outweigh costs across demographic subgroups, even for lenient definitions of "high risk." Our findings suggest new avenues for prevention using state administrative data, which could aid providers in making better, data-informed decisions when weighing the medical benefits of opioid therapy against the risks.

Hastings Justine S, Howison Mark, Inman Sarah E

2020-Jan-14

administrative data, evidence-based policy, machine learning, opioids, predictive modeling

General General

Tracking activity patterns of a multispecies community of gymnotiform weakly electric fish in their neotropical habitat without tagging.

In The Journal of experimental biology ; h5-index 0.0

Field studies on freely behaving animals commonly require tagging and often are focused on single species. Weakly electric fish generate a species- and individual-specific electric organ discharge (EOD) and therefore provide a unique opportunity for individual tracking without tagging. We here present and test tracking algorithms based on recordings with submerged electrode arrays. Harmonic structures extracted from power spectra provide fish identity. Localization of fish based on weighted averages of their EOD amplitudes is found to be more robust than fitting a dipole model. We apply these techniques to monitor a community of three species, Apteronotus rostratus, Eigenmannia humboldtii, and Sternopygus dariensis, in their natural habitat in Darién, Panamá. We found consistent upstream movements after sunset followed by downstream movements in the second half of the night. Extrapolations of these movements and estimates of fish density obtained from additional transect data suggest that some fish cover at least several hundreds of meters of the stream per night. Most fish, including Eigenmannia, were traversing the electrode array solitarily. From in-situ measurements of the decay of the EOD amplitude with distance of individual animals we estimated that fish can detect conspecifics at distances of up to 2 m. Our recordings also emphasize the complexity of natural electrosensory scenes resulting from the interactions of the EODs of different species. Electrode arrays thus provide an unprecedented window into the so-far hidden nocturnal activities of multispecies communities of weakly electric fish at an unmatched level of detail.
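The amplitude-weighted localization the authors found more robust than dipole fitting reduces, in its simplest form, to a weighted centroid of the electrode positions. A minimal NumPy sketch (a real pipeline would first extract per-fish EOD amplitudes at each electrode from the harmonic structures):

```python
import numpy as np

def localize_fish(electrode_positions, eod_amplitudes):
    """Estimate a fish's position as the EOD-amplitude-weighted average
    of the electrode positions in the submerged array."""
    pos = np.asarray(electrode_positions, dtype=float)
    amp = np.asarray(eod_amplitudes, dtype=float)
    w = amp / amp.sum()   # normalize amplitudes into weights
    return w @ pos        # weighted centroid (works in 2-D or 3-D)
```

Because the estimate never leaves the convex hull of the electrodes, it degrades gracefully with noise, which is one plausible reason it outperforms dipole fitting in the field.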

Henninger Jörg, Krahe Rüdiger, Sinz Fabian, Benda Jan

2020-Jan-14

Animal behavior, Electrosensory scenes, Localization, Movements, Nocturnal, Tracking, Weakly electric fish

General General

Prediction of Myopia in Adolescents through Machine Learning Methods.

In International journal of environmental research and public health ; h5-index 73.0

According to the literature, myopia has become the second most common eye disease in China; its incidence is increasing year by year and is appearing at ever-younger ages. Previous research has shown that the occurrence of myopia is mainly determined by poor eye habits (such as reading and writing posture), eye length, and parental heredity. To better prevent myopia in adolescents, this paper studies the influence of related factors on myopia incidence in adolescents using machine learning methods. A feature selection method based on both univariate and multivariate correlation analysis is used to construct a feature subset for model training. A GBRT-based method is provided to fill in missing items in the original data. The prediction model is built on an SVM, and data transformation is used to improve prediction accuracy. Results show that our method achieves reasonable performance and accuracy.

Yang Xu, Chen Guo, Qian Yunchong, Wang Yuhan, Zhai Yisong, Fan Debao, Xu Yang

2020-Jan-10

artificial intelligence, correlation analysis, machine learning, myopia in adolescents

General General

Insights from a Bibliometric Analysis of Vividness and Its Links with Consciousness and Mental Imagery.

In Brain sciences ; h5-index 0.0

We performed a bibliometric analysis of the peer-reviewed literature on vividness between 1900 and 2019 indexed by the Web of Science and compared it with the same analysis of publications on consciousness and mental imagery. While we observed a similarity between the citation growth rates for publications about each of these three subjects, our analysis shows that these concepts rarely overlap (co-occur) in the literature, revealing a surprising paucity of research about these concepts taken together. A disciplinary analysis shows that the field of Psychology dominates the topic of vividness, even though the total number of publications containing that term is small and the concept occurs in several other disciplines such as Computer Science and Artificial Intelligence. The present findings suggest that without a coherent unitary framework for the use of vividness in research, important opportunities for advancing the field might be missed. In contrast, we suggest that an evidence-based framework (such as the bibliometric analytic methods as exemplified here) will help to guide research from all disciplines that are concerned with vividness and help to resolve the challenge of epistemic incommensurability amongst published research in multidisciplinary fields.

Haustein Stefanie, Vellino André, D’Angiulli Amedeo

2020-Jan-10

bibliometrics, consciousness, map of science, mental imagery, term co-occurrence, vividness

General General

Forecast of Dengue Cases in 20 Chinese Cities Based on the Deep Learning Method.

In International journal of environmental research and public health ; h5-index 73.0

Dengue fever (DF) is one of the most rapidly spreading diseases in the world, and accurate forecasts of dengue in a timely manner might help local governments implement effective control measures. To obtain accurate forecasts of DF cases, it is crucial to model the long-term dependency in time series data, which is difficult for a typical machine learning method. This study aimed to develop a timely, accurate forecasting model of dengue based on long short-term memory (LSTM) recurrent neural networks while considering only monthly dengue cases and climate factors. The performance of LSTM models was compared with other previously published models when predicting DF cases one month into the future. Our results showed that the LSTM model reduced the average root mean squared error (RMSE) of the predictions by 12.99% to 24.91%, and reduced the average RMSE of the predictions in the outbreak period by 15.09% to 26.82%, as compared with other candidate models. The LSTM model achieved superior performance in predicting dengue cases as compared with other previously published forecasting models. Moreover, transfer learning (TL) can improve the generalization ability of the model in areas with fewer dengue incidences. The findings provide a more precise dengue forecasting model that could be used for other dengue-like infectious diseases.
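The reported comparisons rest on the root mean squared error and its relative reduction between models; as a quick reference, both fit in a few lines of plain Python:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted case counts."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def rmse_reduction(baseline_rmse, model_rmse):
    """Percent reduction in RMSE relative to a baseline model, the form
    in which the LSTM's improvement over candidate models is reported."""
    return 100.0 * (baseline_rmse - model_rmse) / baseline_rmse
```

For example, a model with RMSE 8 against a baseline of 10 corresponds to a 20% reduction, the same scale as the 12.99% to 24.91% figures above.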

Xu Jiucheng, Xu Keqiang, Li Zhichao, Meng Fengxia, Tu Taotian, Xu Lei, Liu Qiyong

2020-Jan-10

deep learning, dengue fever, forecast model, long short-term memory, transfer learning

General General

Machine-Learning-Assisted De Novo Design of Organic Molecules and Polymers: Opportunities and Challenges.

In Polymers ; h5-index 0.0

Organic molecules and polymers have a broad range of applications in the biomedical, chemical, and materials science fields. Traditional design approaches for organic molecules and polymers are mainly experimentally driven, guided by experience, intuition, and conceptual insights. Though they have been successfully applied to discover many important materials, these methods face significant challenges due to the tremendous demand for new materials and the vast design space of organic molecules and polymers. Accelerated and inverse materials design is an ideal solution to these challenges. With advancements in high-throughput computation, artificial intelligence (especially machine learning, ML), and the growth of materials databases, ML-assisted materials design is emerging as a promising tool to foster breakthroughs in many areas of materials science and engineering. To date, using ML-assisted approaches, quantitative structure-property/activity relations for material property prediction can be established more accurately and efficiently. In addition, materials design can be revolutionized and accelerated much faster than ever through ML-enabled molecular generation and inverse molecular design. In this perspective, we review recent progress in ML-guided design of organic molecules and polymers, highlight several successful examples, and examine future opportunities in the biomedical, chemical, and materials science fields. We further discuss the challenges that must be solved to fully realize the potential of ML-assisted materials design for organic molecules and polymers. In particular, this study summarizes publicly available materials databases, feature representations for organic molecules, open-source tools for feature generation, methods for molecular generation, and ML models for the prediction of material properties, serving as a tutorial for researchers who have little prior experience with ML and want to apply it to various applications. Last but not least, it draws insights into the current limitations of ML-guided design of organic molecules and polymers. We anticipate that ML-assisted materials design for organic molecules and polymers will be a driving force in the near future, meeting the tremendous demand for new materials with tailored properties in different fields.

Chen Guang, Shen Zhiqiang, Iyer Akshay, Ghumman Umar Farooq, Tang Shan, Bi Jinbo, Chen Wei, Li Ying

2020-Jan-08

data-driven algorithm, de novo materials design, machine learning, materials database, organic molecules, polymers

General General

Detecting Suspected Pump Thrombosis in Left Ventricular Assist Devices via Acoustic Analysis.

In IEEE journal of biomedical and health informatics ; h5-index 0.0

OBJECTIVE : Left ventricular assist devices (LVADs) fail in up to 10% of patients due to the development of pump thrombosis. Remote monitoring of patients with LVADs can enable early detection and, subsequently, treatment and prevention of pump thrombosis. We assessed whether acoustical signals measured on the chest of patients with LVADs, combined with machine learning algorithms, can be used for detecting pump thrombosis.

METHODS : 13 centrifugal pump (HVAD) recipients were enrolled in the study. When hospitalized for suspected pump thrombosis, clinical data and acoustical recordings were obtained at admission, prior to and after administration of thrombolytic therapy, and every 24 hours until laboratory and pump parameters normalized. First, we selected the most important features among our feature set using LDH-based correlation analysis. Then using these features, we trained a logistic regression model and determined our decision threshold to differentiate between thrombosis and non-thrombosis episodes.

RESULTS : Accuracy, sensitivity and precision were calculated to be 88.9%, 90.9% and 83.3%, respectively. When tested on the post-thrombolysis data, our algorithm suggested possible pump abnormalities that were not identified by the reference pump power or biomarker abnormalities.

SIGNIFICANCE : We showed that the acoustical signatures of LVADs can be an index of mechanical deterioration and, when combined with machine learning algorithms, provide clinical decision support regarding the presence of pump thrombosis.
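The decision stage described in the Methods, a logistic regression over selected acoustic features with a tuned decision threshold, can be sketched as follows. The weights, bias, and threshold here are placeholders for illustration, not the fitted values from the study:

```python
import math

def logistic_score(features, weights, bias=0.0):
    """Probability-like score for a thrombosis episode from acoustic
    features under a logistic model (sigmoid of a linear combination)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def classify(features, weights, bias=0.0, threshold=0.6):
    """Flag thrombosis when the score clears the tuned decision threshold;
    the threshold trades sensitivity against precision."""
    return logistic_score(features, weights, bias) >= threshold
```

Raising the threshold above 0.5, as sketched here, is one way to favor precision when false thrombosis alarms are costly.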

Semiz Beren, Inan Omer, Hersek Sinan, Pouyan Maziyar Baran, Partida Cynthia, Arroyo Leticia Blazquez, Selby Van, Wieselthaler Georg, Rehg James, Klein Liviu

2020-Jan-13

General General

Objective analysis of neck muscle boundaries for cervical dystonia using ultrasound imaging and deep learning.

In IEEE journal of biomedical and health informatics ; h5-index 0.0

OBJECTIVE : To provide objective visualization and pattern analysis of neck muscle boundaries to inform and monitor treatment of cervical dystonia.

METHODS : We recorded transverse cervical ultrasound (US) images and whole-body motion analysis of sixty-one standing participants (35 cervical dystonia, 26 age matched controls). We manually annotated 3,272 US images sampling posture and the functional range of pitch, yaw, and roll head movements. Using previously validated methods, we used 60-fold cross validation to train, validate and test a deep neural network (U-net) to classify pixels to 13 categories (five paired neck muscles, skin, ligamentum nuchae, vertebra). For all participants for their normal standing posture, we segmented US images and classified condition (Dystonia/Control), sex and age (higher/lower) from segment boundaries. We performed an explanatory, visualization analysis of dystonia muscle-boundaries.

RESULTS : For all segments, agreement with manual labels was Dice Coefficient (64±21%) and Hausdorff Distance (5.7±4 mm). For deep muscle layers, boundaries predicted central injection sites with average precision 94±3%. Using leave-one-out cross-validation, a support-vector-machine classified condition, sex, and age from predicted muscle boundaries at accuracy 70.5%, 67.2%, 52.4% respectively, exceeding classification by manual labels. From muscle boundaries, Dystonia clustered optimally into three sub-groups. These sub-groups are visualized and explained by three eigen-patterns which correlate significantly with truncal and head posture.

CONCLUSION : Using US, neck muscle shape alone discriminates dystonia from healthy controls.

SIGNIFICANCE : Using deep learning, US imaging allows online, automated visualization, and diagnostic analysis of cervical dystonia and segmentation of individual muscles for targeted injection. The dataset is available (DOI: 10.23634/MMUDR.00624643).
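For reference, the Dice coefficient used above to report agreement with manual labels measures overlap between two binary masks; a minimal NumPy version:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between binary masks: 2|A ∩ B| / (|A| + |B|),
    ranging from 0 (disjoint) to 1 (identical)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

Unlike pixel accuracy, Dice is insensitive to the large background region, which is why it is the standard report for thin structures such as neck muscles.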

Loram Ian, Siddique Abdul, Sanchez Puccini Maria Beatriz Beatriz, Harding Peter, Silverdale Monty, Kobylecki Christopher, Cunningham Ryan James

2020-Jan-09

General General

Practices and Trends of Machine Learning Application in Nanotoxicology.

In Nanomaterials (Basel, Switzerland) ; h5-index 0.0

Machine Learning (ML) techniques have been applied in the field of nanotoxicology with very encouraging results. Adverse effects of nanoforms are affected by multiple features described by theoretical descriptors, nano-specific measured properties, and experimental conditions. ML has proven very helpful in this field for gaining insight into the features affecting toxicity, predicting possible adverse effects as part of proactive risk analysis, and informing safe design. At this juncture, it is important to document and categorize the work that has been carried out. This study investigates and bookmarks ML methodologies used to predict nano (eco-)toxicological outcomes in nanotoxicology during the last decade. It provides a review of the sequenced steps involved in implementing an ML model, from data pre-processing to model implementation, model validation, and the applicability domain. The review gathers and presents step-wise information on techniques and procedures of existing models that can be used readily to assemble new nanotoxicological in silico studies, and it accelerates the regulation of in silico tools in nanotoxicology. ML applications in nanotoxicology comprise an active and diverse collection of ongoing efforts, although the field is still in its early stages toward a scientific accord, subsequent guidelines, and regulatory adoption. This study is an important bookend to a decade of ML applications to nanotoxicology and serves as a useful guide for further in silico applications.

Furxhi Irini, Murphy Finbarr, Mullins Martin, Arvanitis Athanasios, Poland Craig A

2020-Jan-08

computational, in silico, machine learning, nanoforms, nanoparticle, nanotoxicology

General General

Protein Fold Recognition by Combining Support Vector Machines and Pairwise Sequence Similarity Scores.

In IEEE/ACM transactions on computational biology and bioinformatics ; h5-index 0.0

Protein fold recognition is one of the most essential steps for protein structure prediction, aiming to classify proteins into known protein folds. There are two main computational approaches: one is template-based method based on the alignment scores between query-template protein pairs and the other is machine learning method based on the feature representation and machine learning classifier. Can we combine these methods to establish more accurate predictors for protein fold recognition? In this study, we made an initial attempt and proposed two novel algorithms: TSVM-fold and ESVM-fold. TSVM-fold was based on the Support Vector Machines (SVMs), which utilizes a set of pairwise sequence similarity scores generated by three complementary template-based methods, including HHblits, SPARKS-X, and DeepFR. These scores measured the global relationships between query sequences and templates. The comprehensive features of the attributes of the sequences were fed into the SVMs for the prediction. Then the TSVM-fold was further combined with the HHblits algorithm so as to improve its generalization ability. The combined method is called ESVM-fold. Experimental results in two rigorous benchmark datasets (LE and YK datasets) showed that the proposed methods outperform some state-of-the-art methods, indicating that the TSVM-fold and ESVM-fold are efficient predictors for protein fold recognition.

Yan Ke, Wen Jie, Liu Jin-Xing, Xu Yong, Liu Bin

2020-Jan-13

General General

A Deep Learning Framework for Assessing Physical Rehabilitation Exercises.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society ; h5-index 0.0

Computer-aided assessment of physical rehabilitation entails evaluation of patient performance in completing prescribed rehabilitation exercises, based on processing movement data captured with a sensory system. Despite the essential role of rehabilitation assessment toward improved patient outcomes and reduced healthcare costs, existing approaches lack versatility, robustness, and practical relevance. In this paper, we propose a deep learning-based framework for automated assessment of the quality of physical rehabilitation exercises. The main components of the framework are metrics for quantifying movement performance, scoring functions for mapping the performance metrics into numerical scores of movement quality, and deep neural network models for generating quality scores of input movements via supervised learning. The proposed performance metric is defined based on the log-likelihood of a Gaussian mixture model, and encodes low-dimensional data representation obtained with a deep autoencoder network. The proposed deep spatio-temporal neural network arranges data into temporal pyramids, and exploits the spatial characteristics of human movements by using sub-networks to process joint displacements of individual body parts. The presented framework is validated using a dataset of ten rehabilitation exercises. The significance of this work is that it is the first that implements deep neural networks for assessment of rehabilitation performance.
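The proposed performance metric, the log-likelihood of a movement's low-dimensional (autoencoded) representation under a Gaussian mixture fitted to correct executions, can be sketched in one dimension. This is an illustrative simplification: the paper scores learned encodings, and the mixture parameters would come from fitting (e.g. via EM) rather than being hand-set as here:

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of x under a 1-D Gaussian mixture; higher values mean
    the encoded movement looks more like the reference movements."""
    # log-sum-exp over mixture components for numerical stability
    terms = [math.log(w) + gaussian_logpdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    hi = max(terms)
    return hi + math.log(sum(math.exp(t - hi) for t in terms))
```

A scoring function would then map these log-likelihoods onto an interpretable 0-to-1 quality scale, as the framework describes.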

Liao Yalin, Vakanski Aleksandar, Xian Min

2020-Jan-13

General General

Biomarker Localization from Deep Learning Regression Networks.

In IEEE transactions on medical imaging ; h5-index 74.0

Biomarker estimation methods from medical images have traditionally followed a segment-and-measure strategy. Deep-learning regression networks have changed such a paradigm, enabling the direct estimation of biomarkers in databases where segmentation masks are not present. While such methods achieve high performance, they operate as a black box. In this work, we present a novel deep learning network structure that, when trained with only the value of the biomarker, can perform biomarker regression and the generation of an accurate localization mask simultaneously, thus enabling a qualitative assessment of the image locus that relates to the quantitative result. We showcase the proposed method with three different network structures and compare their performance against direct regression networks in four different problems: pectoralis muscle area (PMA), subcutaneous fat area (SFA), liver mass area in single-slice computed tomography (CT), and Agatston score estimated from non-contrast thoracic CT images (CAC). Our results show that the proposed method improves the performance with respect to direct biomarker regression methods (correlation coefficients of 0.978, 0.998, and 0.950 for the proposed method, compared to 0.971, 0.982, and 0.936 for the reference regression methods on PMA, SFA, and CAC, respectively) while achieving good localization (Dice coefficients of 0.875 and 0.914 for PMA and SFA, respectively; p < 0.05 for all pairs). We observe the same improvement in regression results when comparing the proposed method with those obtained by quantifying the outputs of a U-Net segmentation network (0.989 and 0.951, respectively). We therefore conclude that it is possible to obtain simultaneously good biomarker regression and localization when training biomarker regression networks using only the biomarker value.

Cano-Espinosa Carlos, Gonzalez German, Washko George R, Cazorla Miguel, Estepar Raul San Jose

2020-Jan-09

General General

Multi-target dopamine D3 receptor modulators: Actionable knowledge for drug design from molecular dynamics and machine learning.

In European journal of medicinal chemistry ; h5-index 72.0

Local changes in the structure of G-protein coupled receptor (GPCR) binders largely affect their pharmacological profile. While the sought efficacy can be empirically obtained by introducing local modifications, the underlying structural explanation can remain elusive. Here, molecular dynamics (MD) simulations of the eticlopride-bound inactive state of the Dopamine D3 Receptor (D3DR) have been clustered using a machine learning-based approach in an attempt to rationalize the efficacy change in four congeneric modulators. Accumulating extended MD trajectories of receptor-ligand complexes, we observed how the increase in ligand flexibility progressively destabilized the crystal structure of the inactivated receptor. To prospectively validate this model, a partial agonist was rationally designed based on structural insights and computational modeling, and eventually synthesized and tested. Results turned out to be in line with the predictions. This case study suggests that the investigation of ligand flexibility in the framework of extended MD simulations can assist and inform drug design strategies, highlighting its potential role as a powerful in silico counterpart to functional assays.

Ferraro Mariarosaria, Decherchi Sergio, De Simone Alessio, Recanatini Maurizio, Cavalli Andrea, Bottegoni Giovanni

2019-Dec-23

Dopamine D3 receptor, Drug design, GPCR, MTDL, Machine learning, Molecular dynamics, Molecular recognition, Multi-target, Polypharmacology

Pathology Pathology

Mouse Ovarian Cancer Models Recapitulate the Human Tumor Microenvironment and Patient Response to Treatment.

In Cell reports ; h5-index 119.0

Although there are many prospective targets in the tumor microenvironment (TME) of high-grade serous ovarian cancer (HGSOC), pre-clinical testing is challenging, especially as there is limited information on the murine TME. Here, we characterize the TME of six orthotopic, transplantable syngeneic murine HGSOC lines established from genetic models and compare these to patient biopsies. We identify significant correlations between the transcriptome, host cell infiltrates, matrisome, vasculature, and tissue modulus of mouse and human TMEs, with several stromal and malignant targets in common. However, each model shows distinct differences and potential vulnerabilities that enabled us to test predictions about response to chemotherapy and an anti-IL-6 antibody. Using machine learning, the transcriptional profiles of the mouse tumors that differed in chemotherapy response are able to classify chemotherapy-sensitive and -refractory patient tumors. These models provide useful pre-clinical tools and may help identify subgroups of HGSOC patients who are most likely to respond to specific therapies.

Maniati Eleni, Berlato Chiara, Gopinathan Ganga, Heath Owen, Kotantaki Panoraia, Lakhani Anissa, McDermott Jacqueline, Pegrum Colin, Delaine-Smith Robin M, Pearce Oliver M T, Hirani Priyanka, Joy Joash D, Szabova Ludmila, Perets Ruth, Sansom Owen J, Drapkin Ronny, Bailey Peter, Balkwill Frances R

2020-Jan-14

matrisome, mouse model, ovarian cancer, serous, tumor microenvironment

General General

Machine learning models for identifying preterm infants at risk of cerebral hemorrhage.

In PloS one ; h5-index 176.0

Intracerebral hemorrhage in preterm infants is a major cause of brain damage and cerebral palsy. The pathogenesis of cerebral hemorrhage is multifactorial. Among the risk factors are impaired cerebral autoregulation, infections, and coagulation disorders. Machine learning methods allow the identification of combinations of clinical factors to best differentiate preterm infants with intra-cerebral bleeding and the development of models for patients at risk of cerebral hemorrhage. In the current study, a Random Forest approach is applied to develop such models for extremely and very preterm infants (23-30 weeks gestation) based on data collected from a cohort of 229 individuals. The constructed models exhibit good prediction accuracy and might be used in clinical practice to reduce the risk of cerebral bleeding in prematurity.
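
A minimal sketch of such a Random Forest risk model, on synthetic data standing in for the clinical factors (the feature values and label rule here are fabricated for illustration; only the cohort size is taken from the abstract):

```python
# Illustrative Random Forest risk model with cross-validated AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 229                                          # cohort size reported in the abstract
X = rng.random((n, 5))                           # hypothetical clinical factors
y = (X[:, 0] + 0.3 * rng.random(n) > 0.8).astype(int)  # synthetic bleeding label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```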

Turova Varvara, Sidorenko Irina, Eckardt Laura, Rieger-Fackeldey Esther, Felderhoff-Müser Ursula, Alves-Pinto Ana, Lampe Renée

2020

General General

Evaluating risk prediction models for adults with heart failure: A systematic literature review.

In PloS one ; h5-index 176.0

BACKGROUND : The ability to predict risk allows healthcare providers to propose which patients might benefit most from certain therapies, and is relevant to payers' demands to justify clinical and economic value. To understand the robustness of risk prediction models for heart failure (HF), we conducted a systematic literature review to (1) identify HF risk-prediction models, (2) assess statistical approach and extent of validation, (3) identify common variables, and (4) assess risk of bias (ROB).

METHODS : Literature databases were searched from March 2013 to May 2018 to identify risk prediction models conducted in an out-of-hospital setting in adults with HF. Distinct risk prediction variables were ranked according to outcomes assessed and incorporation into the studies. ROB was assessed using Prediction model Risk Of Bias ASsessment Tool (PROBAST).

RESULTS : Of 4720 non-duplicated citations, 40 risk-prediction publications were deemed relevant. Within the 40 publications, 58 models assessed 55 (co)primary outcomes, including all-cause mortality (n = 17), cardiovascular death (n = 9), HF hospitalizations (n = 15), and composite endpoints (n = 14). Few publications reported detail on handling missing data (n = 11; 28%). The discriminatory ability for predicting all-cause mortality, cardiovascular death, and composite endpoints was generally better than for HF hospitalization. 105 distinct predictor variables were identified. Predictors included in >5 publications were: N-terminal prohormone brain-natriuretic peptide, creatinine, blood urea nitrogen, systolic blood pressure, sodium, NYHA class, left ventricular ejection fraction, heart rate, and characteristics including male sex, diabetes, age, and BMI. Only 11/58 (19%) models had overall low ROB, based on our application of PROBAST. In total, 26/58 (45%) models discussed internal validation, and 14/58 (24%) external validation.

CONCLUSIONS : The majority of the 58 identified risk-prediction models for HF present particular concerns according to ROB assessment, mainly due to lack of validation and calibration. The potential utility of novel approaches such as machine learning tools is yet to be determined.

REGISTRATION NUMBER : The SLR was registered in Prospero (ID: CRD42018100709).

Di Tanna Gian Luca, Wirtz Heidi, Burrows Karen L, Globe Gary

2020

General General

Clinical state tracking in serious mental illness through computational analysis of speech.

In PloS one ; h5-index 176.0

Individuals with serious mental illness experience changes in their clinical states over time that are difficult to assess and that result in increased disease burden and care utilization. It is not known if features derived from speech can serve as a transdiagnostic marker of these clinical states. This study evaluates the feasibility of collecting speech samples from people with serious mental illness and explores the potential utility for tracking changes in clinical state over time. Patients (n = 47) were recruited from a community-based mental health clinic with diagnoses of bipolar disorder, major depressive disorder, schizophrenia or schizoaffective disorder. Patients used an interactive voice response system for at least 4 months to provide speech samples. Clinic providers (n = 13) reviewed responses and provided global assessment ratings. We computed features of speech and used machine learning to create models of outcome measures trained using either population data or an individual's own data over time. The system was feasible to use, recording 1101 phone calls and 117 hours of speech. Most (92%) of the patients agreed that it was easy to use. The individually-trained models demonstrated the highest correlation with provider ratings (rho = 0.78, p<0.001). Population-level models demonstrated statistically significant correlations with provider global assessment ratings (rho = 0.44, p<0.001), future provider ratings (rho = 0.33, p<0.05), BASIS-24 summary score, depression sub score, and self-harm sub score (rho = 0.25,0.25, and 0.28 respectively; p<0.05), and the SF-12 mental health sub score (rho = 0.25, p<0.05), but not with other BASIS-24 or SF-12 sub scores. This study brings together longitudinal collection of objective behavioral markers along with a transdiagnostic, personalized approach for tracking of mental health clinical state in a community-based clinical setting.
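
The reported correlations between model output and provider ratings are Spearman rank correlations, which can be illustrated on synthetic ratings (an approximation of the study's setup, not its data or code):

```python
# Illustrative Spearman correlation between model predictions and provider ratings.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
provider = rng.integers(1, 8, size=50).astype(float)   # hypothetical global assessment ratings
predicted = provider + rng.normal(0, 1.0, size=50)     # model output tracking the ratings

rho, p = spearmanr(predicted, provider)                # rank correlation and p-value
```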

Arevian Armen C, Bone Daniel, Malandrakis Nikolaos, Martinez Victor R, Wells Kenneth B, Miklowitz David J, Narayanan Shrikanth

2020

General General

BrainIAK tutorials: User-friendly learning materials for advanced fMRI analysis.

In PLoS computational biology ; h5-index 0.0

Advanced brain imaging analysis methods, including multivariate pattern analysis (MVPA), functional connectivity, and functional alignment, have become powerful tools in cognitive neuroscience over the past decade. These tools are implemented in custom code and separate packages, often requiring different software and language proficiencies. Although these tools are usable by expert researchers, novice users face a steep learning curve. These difficulties stem from the use of new programming languages (e.g., Python), learning how to apply machine-learning methods to high-dimensional fMRI data, and minimal documentation and training materials. Furthermore, most standard fMRI analysis packages (e.g., AFNI, FSL, SPM) focus on preprocessing and univariate analyses, leaving a gap in how to integrate with advanced tools. To address these needs, we developed BrainIAK (brainiak.org), an open-source Python software package that seamlessly integrates several cutting-edge, computationally efficient techniques with other Python packages (e.g., Nilearn, Scikit-learn) for file handling, visualization, and machine learning. To disseminate these powerful tools, we developed user-friendly tutorials (in Jupyter format; https://brainiak.org/tutorials/) for learning BrainIAK and advanced fMRI analysis in Python more generally. These materials cover techniques including: MVPA (pattern classification and representational similarity analysis); parallelized searchlight analysis; background connectivity; full correlation matrix analysis; inter-subject correlation; inter-subject functional connectivity; shared response modeling; event segmentation using hidden Markov models; and real-time fMRI. For long-running jobs or large memory needs we provide detailed guidance on high-performance computing clusters. These notebooks were successfully tested at multiple sites, including as problem sets for courses at Yale and Princeton universities and at various workshops and hackathons. These materials are freely shared, with the hope that they become part of a pool of open-source software and educational materials for large-scale, reproducible fMRI analysis and accelerated discovery.
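
As a rough sketch of the MVPA-style pattern classification covered by these tutorials, the following uses scikit-learn directly rather than the BrainIAK API, with random arrays standing in for voxel patterns:

```python
# Illustrative MVPA pattern classification: decode two conditions from voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
X = rng.normal(size=(n_trials, n_voxels))        # stand-in voxel patterns
y = rng.integers(0, 2, size=n_trials)            # two experimental conditions
X[y == 1, :20] += 1.0                            # inject a weak signal in 20 voxels

# Cross-validated decoding accuracy; above chance indicates condition information.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```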

Kumar Manoj, Ellis Cameron T, Lu Qihong, Zhang Hejia, Capotă Mihai, Willke Theodore L, Ramadge Peter J, Turk-Browne Nicholas B, Norman Kenneth A

2020-Jan

General General

AI, Machine Learning, and Ethics in Health Care.

In The Journal of legal medicine ; h5-index 0.0


Johnson Sandra L J

General General

Deep Learning Techniques for Automatic Detection of Embryonic Neurodevelopmental Disorders.

In Diagnostics (Basel, Switzerland) ; h5-index 0.0

The increasing rates of neurodevelopmental disorders (NDs) are a growing concern for pregnant women, parents, and clinicians caring for healthy infants and children. NDs can originate during embryonic development for several reasons. Up to three in 1000 pregnant women have embryos with brain defects; hence, early detection of embryonic neurodevelopmental disorders (ENDs) is necessary. Related work on embryonic ND classification is very limited and is based on conventional machine learning (ML) methods for the feature extraction and classification processes. Feature extraction in these methods is handcrafted and has several drawbacks. Deep learning methods have the ability to deduce an optimal representation from the raw images without image enhancement, segmentation, and feature extraction processes, leading to an effective classification process. This article proposes a new framework based on deep learning methods for the detection of END. To the best of our knowledge, this is the first study that uses deep learning techniques for detecting END. The framework consists of four stages: transfer learning, deep feature extraction, feature reduction, and classification. The framework depends on feature fusion. The results showed that the proposed framework was capable of identifying END from embryonic MRI images of various gestational ages. To verify the efficiency of the proposed framework, the results were compared with related work that used embryonic images. The performance of the proposed framework was competitive. This means that the proposed framework can be successfully used for detecting END.
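
The four-stage pipeline can be sketched as follows, with random arrays standing in for features extracted from a pretrained (transfer-learned) network; the feature-reduction and classification stages use PCA and an SVM purely for illustration, since the abstract does not specify the framework's exact components.

```python
# Illustrative sketch: deep features -> feature reduction (PCA) -> classification (SVM).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(120, 512))         # stand-ins for pretrained-CNN features
labels = rng.integers(0, 2, size=120)            # synthetic normal-vs-END labels
deep_feats[labels == 1, :30] += 1.0              # injected class signal for the demo

pipe = make_pipeline(PCA(n_components=20), SVC())
acc = cross_val_score(pipe, deep_feats, labels, cv=5).mean()
```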

Attallah Omneya, Sharkas Maha A, Gadelkarim Heba

2020-Jan-07

MRI imaging, convolution neural networks (CNNs), machine learning, deep learning, embryonic neurodevelopment disorders

General General

Building Highly Reliable Quantitative Structure-Activity Relationship Classification Models Using the Rivality Index Neighborhood Algorithm with Feature Selection.

In Journal of chemical information and modeling ; h5-index 0.0

Dimensionality reduction of the data set representation for the construction of quantitative structure-activity relationship classification models is an important research subject for the interpretability of the models and the computational cost efficiency of the classification algorithms. Feature selection techniques are appropriate because only a small number of relevant features should be used in the classification process; irrelevant, redundant, and noninterpretable features should be discarded. In this paper, we propose an embedded feature selection technique for the construction of classification models using the rivality index neighborhood (RINH) algorithm. This technique uses a filter selection in the preprocessing stage, considering the selectivity of the features as a selection criterion, and a wrapper technique in the processing stage, based on the improvement of the accuracy and reliability of the models generated using the RINH algorithm with LTN and GTN functions. The results obtained using the RINH algorithm with and without the selection of features, compared with the results obtained using 14 machine learning algorithms, demonstrate that the feature selection technique proposed in this paper is capable of building clearly more accurate and reliable models, reducing the data dimensionality by around 90%, and generating highly robust and interpretable models.

Ruiz Irene Luque, Gómez-Nieto Miguel Ángel

2020-Jan-15

Radiology Radiology

Machine Learning to Predict the Rapid Growth of Small Abdominal Aortic Aneurysm.

In Journal of computer assisted tomography ; h5-index 0.0

OBJECTIVE : The purpose of this study was to determine whether computed tomography (CT) angiography with machine learning (ML) can be used to predict the rapid growth of abdominal aortic aneurysm (AAA).

MATERIALS AND METHODS : This retrospective study was approved by our institutional review board. Fifty consecutive patients (45 men, 5 women, 73.5 years) with small AAA (38.5 ± 6.2 mm) had undergone CT angiography. To be included, patients required at least 2 CT scans a minimum of 6 months apart. Abdominal aortic aneurysm growth, estimated by change per year, was compared between patients with baseline infrarenal aortic minor axis. For each axial image, major axis of AAA, minor axis of AAA, major axis of lumen without intraluminal thrombi (ILT), minor axis of lumen without ILT, AAA area, lumen area without ILT, ILT area, maximum ILT area, and maximum ILT thickness were measured. We developed a prediction model using an ML method (to predict expansion >4 mm/y) and calculated the area under the receiver operating characteristic curve of this model via 10-fold cross-validation.
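
A minimal sketch of the 10-fold cross-validated expansion classifier on synthetic measurements (a logistic regression stands in for the study's unspecified ML method, and all values are fabricated except the patient count and baseline size taken from the abstract):

```python
# Illustrative 10-fold CV AUC for predicting rapid AAA expansion (>4 mm/y analogue).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 50                                               # patient count from the abstract
major_axis = rng.normal(38.5, 6.2, size=n)           # mm, synthetic baseline AAA size
ilt_area = rng.random(n)                             # synthetic intraluminal thrombus area
X = np.column_stack([major_axis, ilt_area])
growth = 0.2 * major_axis + rng.normal(0, 1, n)      # synthetic growth rate, mm/y
y = (growth > np.median(growth)).astype(int)         # "rapid" expanders

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
```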

RESULTS : The median aneurysm expansion was 3.0 mm/y. Major axis of AAA and AAA area correlated significantly with future AAA expansion (r = 0.472 and 0.416, respectively; all P < 0.01). Machine learning and major axis of AAA were strong predictors of significant AAA expansion (>4 mm/y) (areas under the receiver operating characteristic curve of 0.86 and 0.78, respectively).

CONCLUSIONS : Machine learning is an effective method for the prediction of expansion risk of AAA. Abdominal aortic aneurysm area and major axis of AAA are the important factors to reflect AAA expansion.

Hirata Kenichiro, Nakaura Takeshi, Nakagawa Masataka, Kidoh Masafumi, Oda Seitaro, Utsunomiya Daisuke, Yamashita Yasuyuki

General General

Artificial Intelligence in Anesthesiology: Current Techniques, Clinical Applications, and Limitations.

In Anesthesiology ; h5-index 71.0

Artificial intelligence has been advancing in fields including anesthesiology. This scoping review of the intersection of artificial intelligence and anesthesia research identified and summarized six themes of applications of artificial intelligence in anesthesiology: (1) depth of anesthesia monitoring, (2) control of anesthesia, (3) event and risk prediction, (4) ultrasound guidance, (5) pain management, and (6) operating room logistics. Based on papers identified in the review, several topics within artificial intelligence were described and summarized: (1) machine learning (including supervised, unsupervised, and reinforcement learning), (2) techniques in artificial intelligence (e.g., classical machine learning, neural networks and deep learning, Bayesian methods), and (3) major applied fields in artificial intelligence. The implications of artificial intelligence for the practicing anesthesiologist are discussed, as are its limitations and the role of clinicians in further developing artificial intelligence for use in clinical care. Artificial intelligence has the potential to impact the practice of anesthesiology in aspects ranging from perioperative support to critical care delivery to outpatient pain management.

Hashimoto Daniel A, Witkowski Elan, Gao Lei, Meireles Ozanan, Rosman Guy

2020-Feb

General General

Generating Medical Assessments Using a Neural Network Model: Algorithm Development and Validation.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Since its inception, artificial intelligence has aimed to use computers to help make clinical diagnoses. Evidence-based medical reasoning is important for patient care. Inferring clinical diagnoses is a crucial step during the patient encounter. Previous works mainly used expert systems or machine learning-based methods to predict the International Classification of Diseases - Clinical Modification codes based on electronic health records. We report an alternative approach: inference of clinical diagnoses from patients' reported symptoms and physicians' clinical observations.

OBJECTIVE : We aimed to report a natural language processing system for generating medical assessments based on patient information described in the electronic health record (EHR) notes.

METHODS : We processed EHR notes into the Subjective, Objective, Assessment, and Plan sections. We trained a neural network model for medical assessment generation (N2MAG). Our N2MAG is an innovative deep neural model that uses the Subjective and Objective sections of an EHR note to automatically generate an "expert-like" assessment of the patient. N2MAG can be trained in an end-to-end fashion and does not require feature engineering and external knowledge resources.

RESULTS : We evaluated N2MAG and the baseline models both quantitatively and qualitatively. Evaluated by both the Recall-Oriented Understudy for Gisting Evaluation metrics and domain experts, our results show that N2MAG outperformed the existing state-of-the-art baseline models.

CONCLUSIONS : N2MAG could generate a medical assessment from the Subjective and Objective section descriptions in EHR notes. Future work will assess its potential for providing clinical decision support.

Hu Baotian, Bajracharya Adarsha, Yu Hong

2020-Jan-15

artificial intelligence, deep neural network model, electronic health record note, medical assessment generation, natural language processing

Oncology Oncology

CNApp, a tool for the quantification of copy number alterations and integrative analysis revealing clinical implications.

In eLife ; h5-index 0.0

Somatic copy number alterations (CNAs) are a hallmark of cancer, but their role in tumorigenesis and clinical relevance remain largely unclear. Here we developed CNApp, a web-based tool that allows a comprehensive exploration of CNAs by using purity-corrected segmented data from multiple genomic platforms. CNApp generates genome-wide profiles, computes CNA scores for broad, focal and global CNA burdens, and uses machine learning-based predictions to classify samples. We applied CNApp to the TCGA pan-cancer dataset of 10,635 genomes showing that CNAs classify cancer types according to their tissue-of-origin, and that each cancer type shows specific ranges of broad and focal CNA scores. Moreover, CNApp reproduces recurrent CNAs in hepatocellular carcinoma, and predicts colon cancer molecular subtypes and microsatellite instability based on broad CNA scores and discrete genomic imbalances. In summary, CNApp facilitates CNA-driven research by providing a unique framework to identify relevant clinical implications. CNApp is hosted at https://tools.idibaps.org/CNApp/.

Franch-Expósito Sebastià, Bassaganyas Laia, Vila-Casadesús Maria, Hernández-Illán Eva, Esteban-Fabró Roger, Díaz-Gay Marcos, Lozano Juan José, Castells Antoni, Llovet Josep Maria, Castellvi-Bel Sergi, Camps Jordi

2020-Jan-15

computational biology, genetics, genomics, human, systems biology

General General

Data-Driven Identification of Reaction Network in Oxidative Coupling of Methane Reaction via Experimental Data.

In The journal of physical chemistry letters ; h5-index 129.0

Identifying the details of chemical reactions is a challenging matter for both experiments and computations. Here, the reaction pathway in the oxidative coupling of methane (OCM) is investigated using a series of experimental data and data science techniques, where the data are analyzed using a variety of visualization techniques. Data visualization, pairwise correlation, and machine learning unveil the relationships between experimental conditions and the selectivities of CO, CO2, C2H4, C2H6, and H2 in the OCM reaction. More importantly, the reaction network for the OCM reaction is constructed based on the scores given by machine learning and the experimental data. In particular, the proposed reaction map contains not only the chemical compounds but also the experimental conditions. Thus, data-driven identification of chemical reactions can in principle be achieved via a series of experimental data, leading toward more efficient experimental design and catalyst development.
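
The pairwise-correlation step can be illustrated on synthetic data standing in for the OCM records (the condition and selectivity values below are fabricated for illustration; only the condition-vs-selectivity framing comes from the abstract):

```python
# Illustrative pairwise correlation between an experimental condition and selectivities.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(700, 900, size=60)              # hypothetical reaction temperature, K
c2h4_sel = 0.10 * temperature + rng.normal(0, 5, 60)      # synthetic C2H4 selectivity trend
co2_sel = -0.08 * temperature + rng.normal(0, 5, 60)      # synthetic CO2 selectivity trend

# Pairwise Pearson correlation matrix across all variables.
corr = np.corrcoef(np.vstack([temperature, c2h4_sel, co2_sel]))
```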

Miyazato Itsuki, Nishimura Shun, Takahashi Lauren, Ohyama Junya, Takahashi Keisuke

2020-Jan-15

General General

Rapid Escherichia coli (E. coli) Trapping and Retrieval from Bodily Fluids via a Three-Dimensional (3D) Beads Stacked Nano-Device.

In ACS applied materials & interfaces ; h5-index 147.0

A novel micro- and nano-fluidic device stacked with magnetic beads has been developed to efficiently trap, concentrate, and retrieve Escherichia coli (E. coli) from bacterial suspension and pig plasma. The small voids between the magnetic beads are used to physically isolate the bacteria in the device. We used computational fluid dynamics (CFD), 3D tomography technology, and machine learning to probe and explain the bead stacking in a small 3D space with various flow rates. A combination of beads with different sizes is utilized to achieve a high capture efficiency (~86%) with a flow rate of 50 µL/min. Leveraging the high deformability of this device, E. coli sample can be retrieved from the designated bacterial suspension by applying a higher flow rate, followed by rapid magnetic separation. This unique function is also utilized to concentrate E. coli cells from the original bacterial suspension. An on-chip concentration factor of ~11× is achieved by inputting 1,300 µL of the E. coli sample and then concentrating it in 100 µL of buffer. Importantly, this multiplexed, miniaturized, inexpensive, and transparent device is easy to fabricate and operate, making it ideal for pathogen separation in both laboratory and point-of-care (POC) settings.

Chen Xinye, Miller Abbi, Cao Shengting, Gan Yu, Zhang Jie, He Qian, Wang Ruo-Qian, Yong Xin, Qin Peiwu, Lapizco-Encinas Blanca H, Du Ke

2020-Jan-15

Surgery Surgery

Non-intrusive Monitoring of Mental Fatigue Status Using Epidermal Electronic Systems and Machine-learning Algorithms.

In ACS sensors ; h5-index 0.0

Mental fatigue, characterized by subjective feelings of "tiredness" and "lack of energy", can degrade individual performance in a variety of situations, for example in motor vehicle driving or executing surgery. Thus, a method for non-intrusive monitoring of mental fatigue status is urgently needed. Recent research shows that physiological signal-based fatigue classification methods using wearable electronics can be sufficiently accurate; by contrast, rigid, bulky devices constrain the behavior of those wearing them, potentially interfering with test signals. Recently, wearable electronics such as epidermal electronic systems (EES) and electronic tattoos (E-tattoos) have been developed to meet the requirements for comfortable measurement of various physiological signals. However, comfortable, effective, and non-intrusive monitoring of mental fatigue levels has yet to be achieved. In this work, an EES is established to simultaneously detect multiple physiological signals in a comfortable and non-intrusive way. Machine-learning algorithms are employed to determine mental fatigue levels, and a predictive accuracy of up to 89% is achieved based on six different kinds of physiological features using decision tree algorithms. Furthermore, the EES with the trained predictive model is applied to monitor in situ human mental fatigue levels during several routine research jobs, as well as the effect of relaxation methods in relieving fatigue.
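
A minimal decision-tree sketch on synthetic stand-ins for the six physiological features (not the authors' data, labels, or code):

```python
# Illustrative decision tree classifying fatigue state from six features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 6))                  # six synthetic physiological features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic fatigued/rested label

clf = DecisionTreeClassifier(random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
```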

Zeng Zhikang, Huang Zhao, Leng Kangmin, Han Wuxiao, Niu Hao, Yu Yan, Ling Qing, Liu Jihong, Wu Zhigang, Zang Jianfeng

2020-Jan-15

General General

AI-assisted exploration of superionic glass-type Li+ conductors with aromatic structures.

In Journal of the American Chemical Society ; h5-index 236.0

It has long remained challenging to predict the properties of complex chemical systems, such as polymer-based materials and their composites. We constructed the largest database to date of lithium-conducting solid polymer electrolytes (10⁴ entries) and employed a transfer-learned graph neural network to accurately predict their conductivity (mean absolute error of less than 1 on a logarithmic scale). The bias-free prediction by the network helped us to find superionic conductors composed of charge-transfer complexes of aromatic polymers (ionic conductivity of around 10⁻³ S/cm at room temperature). The glassy design runs against the traditional rubbery concept of polymer electrolytes, but was found appropriate for achieving fast motion of ionic species decoupled from the polymer chains, and for enhancing thermal and mechanical stability. Unbiased suggestions by machine learning models are helpful for researchers to discover unexpected chemical phenomena, which could also induce a paradigm shift in energy-related functional materials.
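
The reported error metric, the mean absolute error of the conductivity on a logarithmic scale, can be computed as follows (the conductivity values here are synthetic, for illustration only):

```python
# Illustrative log-scale MAE between predicted and measured ionic conductivities.
import numpy as np

true_sigma = np.array([1e-3, 1e-5, 1e-4])          # S/cm, synthetic "measured" values
pred_sigma = np.array([2e-3, 5e-6, 3e-4])          # S/cm, synthetic "predicted" values

# Error is averaged over |log10(predicted) - log10(measured)|,
# so an MAE below 1 means predictions are within one order of magnitude on average.
mae_log = np.mean(np.abs(np.log10(pred_sigma) - np.log10(true_sigma)))
```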

Hatakeyama-Sato Kan, Tezuka Toshiki, Umeki Momoka, Oyaizu Kenichi

2020-Jan-15

General General

Deep Learning of Markov Model-Based Machines for Determination of Better Treatment Option Decisions for Infertile Women.

In Reproductive sciences (Thousand Oaks, Calif.) ; h5-index 0.0

In this technical article, we propose ideas that we have been developing on how machine learning and deep learning techniques could assist obstetricians/gynecologists in clinical decision-making, using treatment options for infertile women, combined with mathematical modeling in pregnant women, as examples.

Srinivasa Rao Arni S R, Diamond Michael P

2020-Jan-14

AI in medicine, Machine learning, State spaces

Surgery Surgery

The Digital Health Revolution and People with Disabilities: Perspective from the United States.

In International journal of environmental research and public health ; h5-index 73.0

This article serves as the introduction to this special issue on Mobile Health and Mobile Rehabilitation for People with Disabilities. Social, technological and policy trends are reviewed. Needs, opportunities and challenges for the emerging fields of mobile health (mHealth, aka eHealth) and mobile rehabilitation (mRehab) are discussed. Healthcare in the United States (U.S.) is at a critical juncture characterized by: (1) a growing need for healthcare and rehabilitation services; (2) maturing technological capabilities to support more effective and efficient health services; (3) evolving public policies designed, by turns, to contain cost and support new models of care; and (4) a growing need to ensure acceptance and usability of new health technologies by people with disabilities and chronic conditions, clinicians and health delivery systems. Discussion of demographic and population health data, healthcare service delivery and a public policy primarily focuses on the U.S. However, trends identified (aging populations, growing prevalence of chronic conditions and disability, labor shortages in healthcare) apply to most countries with advanced economies and others. Furthermore, technologies that enable mRehab (wearable sensors, in-home environmental monitors, cloud computing, artificial intelligence) transcend national boundaries. Remote and mobile healthcare delivery is needed and inevitable. Proactive engagement is critical to ensure acceptance and effectiveness for all stakeholders.

Jones Mike, DeRuyter Frank, Morris John

2020-Jan-07

chronic conditions, digital health, disability, health disparities, information and communication technology, mRehab, medical rehabilitation, mobile rehabilitation

Pathology Pathology

Automated Cardiovascular Pathology Assessment Using Semantic Segmentation and Ensemble Learning.

In Journal of digital imaging ; h5-index 0.0

Cardiac magnetic resonance imaging provides high spatial resolution, enabling improved extraction of important functional and morphological features for cardiovascular disease staging. Segmentation of ventricular cavities and myocardium in cardiac cine sequencing provides a basis to quantify cardiac measures such as ejection fraction. A method is presented that curtails the expense and observer bias of manual cardiac evaluation by combining semantic segmentation and disease classification into a fully automatic processing pipeline. The initial processing element consists of a robust dilated convolutional neural network architecture for voxel-wise segmentation of the myocardium and ventricular cavities. The resulting comprehensive volumetric feature matrix captures diagnostic clinical procedure data and is utilized by the final processing element to model a cardiac pathology classifier. Our approach evaluated anonymized cardiac images from a training data set of 100 patients (4 pathology groups, 1 healthy group, 20 patients per group) examined at the University Hospital of Dijon. The top average Dice index scores achieved were 0.940, 0.886, and 0.849 for structure segmentation of the left ventricle (LV), myocardium, and right ventricle (RV), respectively. A 5-ary pathology classification accuracy of 90% was recorded on an independent test set using the trained model. Performance results demonstrate the potential for advanced machine learning methods to deliver accurate, efficient, and reproducible cardiac pathological assessment.
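The Dice index used above to score segmentation overlap can be sketched in a few lines (a minimal illustration on hypothetical toy masks, not the authors' voxel-wise pipeline):

```python
import numpy as np

def dice_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 1-D "masks" standing in for voxel-wise LV segmentations (hypothetical)
pred = np.array([1, 1, 1, 0, 0])
target = np.array([1, 1, 0, 0, 0])
print(dice_index(pred, target))  # 0.8
```

For 3-D cardiac volumes the same formula applies unchanged to flattened voxel arrays.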

Lindsey Tony, Lee Jin-Ju

2020-Jan-14

2D U-Net, Cardiac cine-MRI, Classification, Feature selection, Semantic segmentation

General General

Toward automatic quantification of knee osteoarthritis severity using improved Faster R-CNN.

In International journal of computer assisted radiology and surgery ; h5-index 0.0

PURPOSE : Knee osteoarthritis (OA) is a common disease that impairs knee function and causes pain. Radiologists usually review knee X-ray images and grade the severity of the impairments according to the Kellgren-Lawrence grading scheme. However, this approach becomes inefficient in hospitals with high throughput as it is time-consuming, tedious and also subjective. This paper introduces a model for automatic diagnosis of knee OA based on an end-to-end deep learning method.

METHOD : In order to process the input images with localization and classification simultaneously, we use Faster R-CNN as the baseline, which consists of a region proposal network (RPN) and Fast R-CNN. The RPN is trained to generate region proposals, which contain the knee joint and are then used by Fast R-CNN for classification. Due to the localized classification via CNNs, irrelevant information in the X-ray images can be filtered out and clinically relevant features can be extracted. To further improve the model's performance, we use a novel loss function whose weighting scheme allows us to address the class imbalance. In addition, larger anchors are used to overcome the problem of anchors not matching the object when the input size of the X-ray images is increased.
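The keyword list identifies the weighted loss as focal loss; a minimal sketch of its binary form follows, with illustrative α and γ values (the paper's exact weighting scheme may differ):

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss for a single prediction.
    p: predicted probability of the positive class; y: true label (0 or 1)."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)**gamma factor down-weights well-classified examples,
    # which is how the loss counters class imbalance.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)   # confidently correct: tiny loss
hard = focal_loss(0.1, 1)   # confidently wrong: large loss
print(easy < hard)  # True
```

With gamma = 0 and alpha = 1 the expression reduces to ordinary cross-entropy.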

RESULT : The performance of the proposed model is thoroughly assessed using various measures. The results show that our adjusted model outperforms the Faster R-CNN baseline, achieving a mean average precision of nearly 0.82, with a sensitivity above 78% and a specificity above 94%. Testing takes 0.33 s per image, achieving a good trade-off between accuracy and speed.

CONCLUSION : The proposed end-to-end fully automatic model which is computationally efficient has the potential to achieve the real automatic diagnosis of knee OA and be used as computer-aided diagnosis tools in clinical applications.

Liu Bin, Luo Jianxu, Huang Huan

2020-Jan-14

Deep learning, Faster R-CNN, Focal loss, Knee osteoarthritis, X-ray

Surgery Surgery

Automated multi-model deep neural network for sleep stage scoring with unfiltered clinical data.

In Sleep & breathing = Schlaf & Atmung ; h5-index 0.0

PURPOSE : To develop an automated framework for sleep stage scoring from polysomnography (PSG) via a deep neural network.

METHODS : An automated deep neural network was proposed by using a multi-model integration strategy with multiple signal channels as input. All of the data were collected from a single medical center from July 2017 to April 2019. Model performance was evaluated by overall classification accuracy, precision, recall, weighted F1 score, and Cohen's Kappa.

RESULTS : Two hundred ninety-four sleep studies were included in this study; 122 composed the training dataset, 20 composed the validation dataset, and 152 were used in the testing dataset. The network achieved human-level annotation performance with an average accuracy of 0.8181, weighted F1 score of 0.8150, and Cohen's Kappa of 0.7276. Top-2 accuracy (the proportion of test samples for which the true label is among the two most probable labels given by the model) was significantly improved compared to the overall classification accuracy, with the average being 0.9602. The number of arousals affected the model's performance.
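The top-2 accuracy defined in the abstract can be computed as follows (a toy 3-class sketch with made-up probabilities, not the study's five sleep stages or data):

```python
def top_k_accuracy(probs, labels, k=2):
    """Fraction of samples whose true label is among the k most probable classes."""
    hits = 0
    for p, y in zip(probs, labels):
        top_k = sorted(range(len(p)), key=lambda c: p[c], reverse=True)[:k]
        hits += y in top_k
    return hits / len(labels)

# Hypothetical 3-class predictions
probs = [[0.7, 0.2, 0.1],   # true class 0 -> top-1 hit
         [0.5, 0.4, 0.1],   # true class 1 -> only a top-2 hit
         [0.6, 0.3, 0.1]]   # true class 2 -> miss even at k=2
labels = [0, 1, 2]
print(top_k_accuracy(probs, labels, k=1))  # ≈ 0.333
print(top_k_accuracy(probs, labels, k=2))  # ≈ 0.667
```

Top-2 accuracy is always at least as high as top-1, which is why the reported 0.9602 exceeds the overall accuracy of 0.8181.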

CONCLUSION : This research provides a robust and reliable model with the inter-rater agreement nearing that of human experts. Determining the most appropriate evaluation parameters for sleep staging is a direction for future research.

Zhang Xiaoqing, Xu Mingkai, Li Yanru, Su Minmin, Xu Ziyao, Wang Chunyan, Kang Dan, Li Hongguang, Mu Xin, Ding Xiu, Xu Wen, Wang Xingjun, Han Demin

2020-Jan-14

Deep learning, Obstructive sleep apnea (OSA), Polysomnography (PSG), Sleep staging

Cardiology Cardiology

Physiological Assessment of Coronary Lesions in 2020.

In Current treatment options in cardiovascular medicine ; h5-index 0.0

PURPOSE OF REVIEW : Physiological assessment of coronary artery disease (CAD) is an essential component of the interventional cardiology toolbox. However, despite long-term data demonstrating improved outcomes, physiology-guided percutaneous coronary intervention (PCI) remains underutilized in current practice. This review outlines the indications and technical aspects involved in evaluating coronary stenosis physiology, focusing on the latest developments in the field.

RECENT FINDINGS : Beyond fractional flow reserve (FFR), non-hyperemic pressure ratios (NHPR) that assess coronary physiology at rest without hyperemia now abound. Additional advances in other alternative FFR approaches, including non-invasive coronary CT (FFRCT), invasive angiography (FFRangio), and optical coherence tomography (FFROCT), are being realized. Artificial intelligence algorithms and robust tools that enable detailed pre-procedure "virtual" intervention are also emerging. The benefits of coronary physiological assessment to determine lesion functional significance are well established. In addition to stable CAD, coronary physiology can be especially helpful in clinical scenarios such as left main and multivessel CAD, serial lesions, non-infarct-related arteries in acute coronary syndromes, and residual ischemia post-PCI. Today, coronary physiological assessment remains an indispensable tool in the catheterization laboratory, with an exciting technological future that will further refine clinical practice and improve patient care.

Chowdhury Mohsin, Osborn Eric A

2020-Jan-15

Angiography, Coronary artery disease, Fractional flow reserve, Hyperemia, Instantaneous wave-free ratio, Physiology

Cardiology Cardiology

Of Machines and Men: Intelligent Diagnosis and the Shape of Things to Come.

In Current hypertension reports ; h5-index 0.0

Artificial Intelligence (AI), although well established in many areas of everyday life, has only recently been trialed in the diagnosis and management of common clinical conditions. This editorial review highlights progress to date and suggests further improvements in and trials of AI in the management of conditions such as hypertension.

Cockcroft John, Avolio Alberto

2020-Jan-14

Aortic pulse wave velocity, Artificial intelligence, Blood pressure, Coronary artery disease, Haemodynamic parameters, Machine learning

General General

Breeding habitat and nest-site selection by an obligatory "nest-cleptoparasite", the Amur Falcon Falco amurensis.

In Ecology and evolution ; h5-index 0.0

The selection of a nest site is crucial for successful reproduction of birds. Animals which re-use or occupy nest sites constructed by other species often have limited choice. Little is known about the criteria of nest-stealing species to choose suitable nesting sites and habitats. Here, we analyze breeding-site selection of an obligatory "nest-cleptoparasite", the Amur Falcon Falco amurensis. We collected data on nest sites at Muraviovka Park in the Russian Far East, where the species breeds exclusively in nests of the Eurasian Magpie Pica pica. We sampled 117 Eurasian Magpie nests, 38 of which were occupied by Amur Falcons. Nest-specific variables were assessed, and a recently developed habitat classification map was used to derive landscape metrics. We found that Amur Falcons chose a wide range of nesting sites, but significantly preferred nests with a domed roof. Breeding pairs of Eurasian Hobby Falco subbuteo and Eurasian Magpie were often found to breed near the nest in about the same distance as neighboring Amur Falcon pairs. Additionally, the occurrence of the species was positively associated with bare soil cover, forest cover, and shrub patches within their home range and negatively with the distance to wetlands. Areas of wetlands and fallow land might be used for foraging since Amur Falcons mostly depend on an insect diet. Additionally, we found that rarely burned habitats were preferred. Overall, the effect of landscape variables on the choice of actual nest sites appeared to be rather small. We used different classification methods to predict the probability of occurrence, of which the Random forest method showed the highest accuracy. The areas determined as suitable habitat showed a high concordance with the actual nest locations. We conclude that Amur Falcons prefer to occupy newly built (domed) nests to ensure high nest quality, as well as nests surrounded by available feeding habitats.

Frommhold Martin, Heim Arend, Barabanov Mikhail, Maier Franziska, Mühle Ralf-Udo, Smirenski Sergei M, Heim Wieland

2019-Dec

cleptoparasitism, fire, habitat use, machine learning, magpie, nest‐site selection, random forest

General General

A machine learning based prediction system for the Indian Ocean Dipole.

In Scientific reports ; h5-index 158.0

The Indian Ocean Dipole (IOD) is a mode of climate variability observed in the Indian Ocean sea surface temperature anomalies, with one pole off Sumatra and the other pole near East Africa. An IOD event starts sometime in May-June, peaks in September-October and ends in November. Through atmospheric teleconnections, it affects the climate of many parts of the world, especially that of East Africa, Australia, India, Japan, and Europe. Owing to its large impacts, previous studies have addressed the predictability of the IOD using state-of-the-art coupled climate models. Here, for the first time, we predict the IOD using machine learning techniques, in particular artificial neural networks (ANNs). The IOD forecasts are generated for May to November from February-April conditions. The attributes for the ANNs are derived from sea surface temperature, 850 hPa and 200 hPa geopotential height anomalies, using a correlation analysis for the period 1949-2018. An ensemble of ANN forecasts is generated using a jackknife-style approach with 500 samples drawn with replacement. The ensemble mean of the IOD forecasts indicates that the machine learning based ANN models are capable of forecasting the IOD index well in advance with excellent skill. The forecast skill is much superior to that of persistence forecasts derived from the observed data. The ANN models also perform far better than the models of the North American Multi-Model Ensemble (NMME), with higher correlation coefficients and lower root mean square errors (RMSE) for all the target months of May-November.
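The skill comparison against persistence rests on correlation and RMSE; a minimal sketch follows (the IOD index values below are hypothetical, purely for illustration):

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def rmse(pred, obs):
    """Root mean square error of a forecast against observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical monthly IOD index values (not real data)
observed = [0.1, 0.4, 0.8, 0.6, 0.2]
ann_fcst = [0.2, 0.5, 0.7, 0.5, 0.1]   # a skilful forecast
persist  = [0.1] * 5                   # persistence of the initial state
print(pearson_r(ann_fcst, observed), rmse(ann_fcst, observed))
print(rmse(persist, observed))
```

In this toy case the forecast's RMSE is well below that of persistence, mirroring the kind of skill comparison described in the abstract.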

Ratnam J V, Dijkstra H A, Behera Swadhin K

2020-Jan-14

Public Health Public Health

Deep learning, computer-aided radiography reading for tuberculosis: a diagnostic accuracy study from a tertiary hospital in India.

In Scientific reports ; h5-index 158.0

In general, chest radiographs (CXR) have high sensitivity and moderate specificity for active pulmonary tuberculosis (PTB) screening when interpreted by human readers. However, they are challenging to scale due to hardware costs and the dearth of professionals available to interpret CXR in low-resource, high PTB burden settings. Recently, several computer-aided detection (CAD) programs have been developed to facilitate automated CXR interpretation. We conducted a retrospective case-control study to assess the diagnostic accuracy of a CAD software (qXR, Qure.ai, Mumbai, India) using microbiologically-confirmed PTB as the reference standard. To assess overall accuracy of qXR, receiver operating characteristic (ROC) analysis was used to determine the area under the curve (AUC), along with 95% confidence intervals (CI). Kappa coefficients, and associated 95% CI, were used to investigate inter-rater reliability of the radiologists for detection of specific chest abnormalities. In total, 317 cases and 612 controls were included in the analysis. The AUC for qXR for the detection of microbiologically-confirmed PTB was 0.81 (95% CI: 0.78, 0.84). Using the threshold that maximized sensitivity and specificity of qXR simultaneously, the software achieved a sensitivity and specificity of 71% (95% CI: 66%, 76%) and 80% (95% CI: 77%, 83%), respectively. The sensitivity and specificity of radiologists for the detection of microbiologically-confirmed PTB was 56% (95% CI: 50%, 62%) and 80% (95% CI: 77%, 83%), respectively. For detection of key PTB-related abnormalities 'pleural effusion' and 'cavity', qXR achieved an AUC of 0.94 (95% CI: 0.92, 0.96) and 0.84 (95% CI: 0.82, 0.87), respectively. For the other abnormalities, the AUC ranged from 0.75 (95% CI: 0.70, 0.80) to 0.94 (95% CI: 0.91, 0.96). The controls had a high prevalence of other lung diseases which can cause radiological manifestations similar to PTB (e.g., 26% had pneumonia, 15% had lung malignancy, etc.). 
In a tertiary hospital in India, qXR demonstrated moderate sensitivity and specificity for the detection of PTB. There is likely a larger role for CAD software as a triage test for PTB at the primary care level in settings where access to radiologists is limited. Larger prospective studies that can better assess heterogeneity in important subgroups are needed.
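Choosing the threshold that maximizes sensitivity and specificity simultaneously is commonly done via Youden's J statistic (J = sensitivity + specificity − 1); a minimal sketch with hypothetical scores, not actual qXR outputs:

```python
def youden_threshold(scores, labels):
    """Return the cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical abnormality scores; 1 = microbiologically-confirmed PTB
scores = [0.9, 0.8, 0.7, 0.4, 0.35, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,    0,   0,   0]
print(youden_threshold(scores, labels))
```

On real data the same search would be run over the ROC curve's operating points rather than raw scores.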

Nash Madlen, Kadavigere Rajagopal, Andrade Jasbon, Sukumar Cynthia Amrutha, Chawla Kiran, Shenoy Vishnu Prasad, Pande Tripti, Huddart Sophie, Pai Madhukar, Saravu Kavitha

2020-Jan-14

Surgery Surgery

Early Recognition of Burn- and Trauma-Related Acute Kidney Injury: A Pilot Comparison of Machine Learning Techniques.

In Scientific reports ; h5-index 158.0

Severely burned and non-burned trauma patients are at risk for acute kidney injury (AKI). The study objective was to assess the theoretical performance of artificial intelligence (AI)/machine learning (ML) algorithms to augment AKI recognition using the novel biomarker, neutrophil gelatinase associated lipocalin (NGAL), combined with contemporary biomarkers such as N-terminal pro B-type natriuretic peptide (NT-proBNP), urine output (UOP), and plasma creatinine. Machine learning approaches including logistic regression (LR), k-nearest neighbor (k-NN), support vector machine (SVM), random forest (RF), and deep neural networks (DNN) were used in this study. The AI/ML algorithm helped predict AKI 61.8 (32.5) hours faster than the Kidney Disease and Improving Global Disease Outcomes (KDIGO) criteria for burn and non-burned trauma patients. NGAL was analytically superior to traditional AKI biomarkers such as creatinine and UOP. With ML, the AKI predictive capability of NGAL was further enhanced when combined with NT-proBNP or creatinine. The use of AI/ML could be employed with NGAL to accelerate detection of AKI in at-risk burn and non-burned trauma patients.

Rashidi Hooman H, Sen Soman, Palmieri Tina L, Blackmon Thomas, Wajda Jeffery, Tran Nam K

2020-Jan-14

General General

A bioinspired analogous nerve towards artificial intelligence.

In Nature communications ; h5-index 260.0

A bionic artificial device commonly integrates various distributed functional units to mimic the functions of a biological sensory neural system, bringing intricate interconnections, complicated structure, and interference in signal transmission. Here we show an all-in-one bionic artificial nerve based on a separate electrical double-layers structure that integrates the functions of perception, recognition, and transmission. The bionic artificial nerve features flexibility, rapid response (<21 ms), high robustness, excellent durability (>10,000 tests), personalized cutability, and no energy consumption when no mechanical stimulation is being applied. The response signals are highly regionally differentiated for the mechanical stimulations, which enables the bionic artificial nerve to mimic the spatiotemporally dynamic logic of a biological neural network. Multifunctional touch interactions demonstrate the enormous potential of the bionic artificial nerve for human-machine hybrid perceptual enhancement. By incorporating the spatiotemporal resolution function and algorithmic analysis, we hope that bionic artificial nerves will promote further development of sophisticated neuroprosthetics and intelligent robotics.

Liao Xinqin, Song Weitao, Zhang Xiangyu, Yan Chaoqun, Li Tianliang, Ren Hongliang, Liu Cunzhi, Wang Yongtian, Zheng Yuanjin

2020-Jan-14

Radiology Radiology

Toward Addiction Prediction: An Overview of Cross-Validated Predictive Modeling Findings and Considerations for Future Neuroimaging Research.

In Biological psychiatry. Cognitive neuroscience and neuroimaging ; h5-index 0.0

Substance use is a leading cause of disability and death worldwide. Despite the existence of evidence-based treatments, clinical outcomes are highly variable across individuals, and relapse rates following treatment remain high. Within this context, methods to identify individuals at particular risk for unsuccessful treatment (i.e., limited within-treatment abstinence), or for relapse following treatment, are needed to improve outcomes. Cumulatively, the literature generally supports the hypothesis that individual differences in brain function and structure are linked to differences in treatment outcomes, although anatomical loci and directions of associations have differed across studies. However, this work has almost entirely used methods that may overfit the data, leading to inflated effect size estimates and reduced likelihood of reproducibility in novel clinical samples. In contrast, cross-validated predictive modeling (i.e., machine learning) approaches are designed to overcome limitations of traditional approaches by focusing on individual differences and generalization to novel subjects (i.e., cross-validation), thereby increasing the likelihood of replication and potential translation to novel clinical settings. Here, we review recent studies using these approaches to generate brain-behavior models of treatment outcomes in addictions and provide recommendations for further work using these methods.

Yip Sarah W, Kiluk Brian, Scheinost Dustin

2019-Nov-12

Abstinence, Biomarker, Classification, Connectivity, Regression, Substance use disorders

Oncology Oncology

Effect of Radiation Doses to the Heart on Survival for Stereotactic Ablative Radiotherapy for Early-stage Non-Small-cell Lung Cancer: An Artificial Neural Network Approach.

In Clinical lung cancer ; h5-index 0.0

INTRODUCTION : The cardiac radiation dose is an important predictor of cardiac toxicity and overall survival (OS) for patients with locally advanced non-small-cell lung cancer (NSCLC). However, radiation-induced cardiac toxicity among patients with early-stage NSCLC who have undergone stereotactic ablative radiotherapy (SABR) has been less well-characterized. Our objective was to assess the associations between cardiac radiation dosimetry and OS in patients with early-stage NSCLC undergoing SABR.

MATERIALS AND METHODS : From 2009 to 2014, 153 patients with early-stage NSCLC had undergone SABR at a single institution. The maximum dose, mean dose, V10Gy, V25Gy, and V50Gy to 15 cardiac substructures and the whole heart were analyzed for their association with OS using the Kaplan-Meier method. An artificial neural network (ANN) analysis was performed to modulate confounding behaviors of dosimetric variables to predict for OS.

RESULTS : A total of 112 patients were included in the present analysis. The right ventricle (RV) V10Gy most negatively predicted for OS, such that patients who had received an RV V10Gy dose < 4% had significantly longer OS than patients who had received an RV V10Gy dose > 4% (5.3 years vs. 2.4 years). On ANN analysis, 74 input features, including cardiac dosimetry parameters, predicted for survival with a test accuracy of 64.7%. A repeat ANN analysis using dosimetry to a dose-neutral structure confirmed the predictive power of cardiac dosimetry.

CONCLUSION : Cardiac dosimetry to subvolumes of the heart was associated with decreased OS in patients with early-stage NSCLC undergoing SABR. These data support the importance of minimizing the radiation dose to cardiac substructures. Further prioritizing the heart as an organ at risk might be warranted. Additionally, cardiac follow-up should be considered.

Chan Shawna T, Ruan Dan, Shaverdian Narek, Raghavan Govind, Cao Minsong, Lee Percy

2019-Oct-21

Cardiac substructure dosimetry, Deep learning, Early-stage lung cancer, Stereotactic body radiotherapy, Survivorship

General General

Excess brain age in the sleep electroencephalogram predicts reduced life expectancy.

In Neurobiology of aging ; h5-index 69.0

The brain age index (BAI) measures the difference between an individual's apparent "brain age" (BA; estimated by comparing EEG features during sleep from an individual with age norms), and their chronological age (CA); that is BAI = BA-CA. Here, we evaluate whether BAI predicts life expectancy. Brain age was quantified using a previously published machine learning algorithm for a cohort of participants ≥40 years old who underwent an overnight sleep electroencephalogram (EEG) as part of the Sleep Heart Health Study (n = 4877). Excess brain age (BAI >0) was associated with reduced life expectancy (adjusted hazard ratio: 1.12, [1.03, 1.21], p = 0.002). Life expectancy decreased by -0.81 [-1.44, -0.24] years per standard-deviation increase in BAI. Our findings show that BAI, a sleep EEG-based biomarker of the deviation of sleep microstructure from patterns normal for age, is an independent predictor of life expectancy.

Paixao Luis, Sikka Pooja, Sun Haoqi, Jain Aayushee, Hogan Jacob, Thomas Robert, Westover M Brandon

2019-Dec-23

Biomarker, Brain age, EEG, Life expectancy, Mortality, Sleep

Public Health Public Health

Predicting opioid misuse at the population level is different from identifying opioid misuse in individual patients.

In Preventive medicine ; h5-index 62.0

Tumin and Bhalla mentioned challenges associated with applying population-based survey and machine learning (ML) results on adolescent opioid misuse to clinical settings. In a clinical setting, medical providers know the patient's identity, so it is not surprising that drug misuse is rarely identified through patient self-report, especially if it involves illicit drugs. Even though self-report is susceptible to bias, it is a valid and affordable tool for gathering data on illicit drug use at the population level. The use of audio computer-assisted self-interviewing (ACASI) and computer-assisted personal interviewing (CAPI) in NSDUH provides the respondent with a highly private and confidential mode for responding to questions, which helps increase the level of honest reporting of illicit drug use and other sensitive behaviors. As acknowledged in the paper, opioid misuse should not be inferred at the individual level from our ML models; such interpretations may lead to the ecological fallacy. Predicting opioid misuse at the population level is different from identifying opioid misuse in individual patients. Nonetheless, we believe that coordinated multisectoral collaborations that leverage the expertise and resources of both the public health and clinical sectors would offer a promising model for addressing the opioid crisis.

Seo Dong-Chul, Han Dae-Hee, Lee Shieun

2020-Feb

Adolescents, Machine learning, Opioid misuse

General General

Combining lexical and context features for automatic ontology extension.

In Journal of biomedical semantics ; h5-index 23.0

BACKGROUND : Ontologies are widely used across biology and biomedicine for the annotation of databases. Ontology development is often a manual, time-consuming, and expensive process. Automatic or semi-automatic identification of classes that can be added to an ontology can make ontology development more efficient.

RESULTS : We developed a method that uses machine learning and word embeddings to identify words and phrases that are used to refer to an ontology class in biomedical Europe PMC full-text articles. Once the labels and synonyms of a class are known, we use machine learning to identify its super-classes. For this purpose, we identify lexical term variants, use word embeddings to capture context information, rely on automated reasoning over ontologies to generate features, and use an artificial neural network as the classifier. We demonstrate the utility of our approach in identifying terms that refer to diseases in the Human Disease Ontology and in distinguishing between different types of diseases.
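One way word embeddings support such synonym discovery is cosine similarity between term vectors; a toy sketch with made-up 3-dimensional embeddings (the actual method trains an ANN on much richer lexical, context, and reasoning features):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d embeddings; real models use hundreds of dimensions
embeddings = {
    "diabetes mellitus": [0.9, 0.1, 0.2],
    "sugar diabetes":    [0.8, 0.2, 0.3],   # candidate synonym
    "fracture":          [0.1, 0.9, 0.1],   # unrelated term
}
query = "diabetes mellitus"
for term, vec in embeddings.items():
    if term != query:
        print(term, round(cosine(embeddings[query], vec), 3))
```

Terms whose vectors sit close to an existing class label become candidates for new labels or synonyms of that class.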

CONCLUSIONS : Our method is capable of discovering labels that refer to a class in an ontology but are not present in an ontology, and it can identify whether a class should be a subclass of some high-level ontology classes. Our approach can therefore be used for the semi-automatic extension and quality control of ontologies. The algorithm, corpora and evaluation datasets are available at https://github.com/bio-ontology-research-group/ontology-extension.

Althubaiti Sara, Kafkas Şenay, Abdelhakim Marwa, Hoehndorf Robert

2020-Jan-13

Disease ontology, Embeddings, Neural network

General General

Identification and transfer of spatial transcriptomics signatures for cancer diagnosis.

In Breast cancer research : BCR ; h5-index 0.0

BACKGROUND : Distinguishing ductal carcinoma in situ (DCIS) from invasive ductal carcinoma (IDC) regions in clinical biopsies constitutes a diagnostic challenge. Spatial transcriptomics (ST) is an in situ capturing method, which allows quantification and visualization of transcriptomes in individual tissue sections. Previous studies have shown that the transcriptomes of breast cancer samples can be studied with spatial resolution in individual tissue sections. Supervised machine learning methods have previously been used in clinical studies to predict clinical outcomes for various cancer types.

METHODS : We used four publicly available ST breast cancer datasets from breast tissue sections annotated by pathologists as non-malignant, DCIS, or IDC. We trained and tested a machine learning method (support vector machine) based on the expert annotation as well as based on automatic selection of cell types by their transcriptome profiles.

RESULTS : We identified expression signatures for expert-annotated regions (non-malignant, DCIS, and IDC) and built machine learning models. Classification results for 798 expression signature transcripts showed high concordance with the expert pathologist annotation for DCIS (100%) and IDC (96%). Extending our analysis to include all 25,179 expressed transcripts resulted in an accuracy of 99% for DCIS and 98% for IDC. Further, classification based on an automatically identified expression signature covering all ST spots of tissue sections resulted in a prediction accuracy of 95% for DCIS and 91% for IDC.
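The authors trained a support vector machine; as a simplified stand-in, a nearest-centroid classifier illustrates how per-spot signature expression can be mapped to region labels (all expression values below are hypothetical):

```python
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length expression vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_label(x, centroids):
    """Assign the label whose centroid is closest in Euclidean distance."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(centroids, key=lambda lbl: dist(x, centroids[lbl]))

# Toy per-spot expression of a 3-transcript "signature" (hypothetical)
train = {
    "non-malignant": [[1.0, 0.1, 0.1], [0.9, 0.2, 0.1]],
    "DCIS":          [[0.2, 1.0, 0.2], [0.1, 0.9, 0.3]],
    "IDC":           [[0.1, 0.2, 1.0], [0.2, 0.1, 0.9]],
}
centroids = {lbl: centroid(rows) for lbl, rows in train.items()}
print(nearest_centroid_label([0.15, 0.95, 0.25], centroids))  # DCIS
```

An SVM additionally learns maximum-margin boundaries, but the spot-by-spot labeling workflow is the same.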

CONCLUSIONS : This concept study suggests that ST signatures learned from expert-selected breast cancer tissue sections can be used to identify breast cancer regions in whole tissue sections, including regions not trained on. Furthermore, the identified expression signatures can classify cancer regions in tissue sections not used for training with high accuracy. Expert-generated, and even automatically generated, cancer signatures from ST data might be able to classify breast cancer regions and provide clinical decision support for pathologists in the future.

Yoosuf Niyaz, Navarro José Fernández, Salmén Fredrik, Ståhl Patrik L, Daub Carsten O

2020-Jan-13

Breast cancer, Cancer diagnosis, Expression signature, Machine learning, Spatial transcriptomics

Radiology Radiology

Gated recurrent unit-based heart sound analysis for heart failure screening.

In Biomedical engineering online ; h5-index 0.0

BACKGROUND : Heart failure (HF) is a type of cardiovascular disease caused by abnormal cardiac structure and function. Early screening of HF has important implications for timely treatment. Heart sound (HS) conveys relevant information related to HF; this study is therefore based on the analysis of HS signals. The objective is to develop an efficient tool to automatically identify subjects as normal, HF with preserved ejection fraction, or HF with reduced ejection fraction.

METHODS : We proposed a novel HF screening framework based on a gated recurrent unit (GRU) model in this study. A logistic regression-based hidden semi-Markov model was adopted to segment HS frames. Normalized frames were taken as the input of the proposed model, which can automatically learn deep features and complete the HF screening without de-noising or hand-crafted feature extraction.
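The GRU update at the heart of such a model can be sketched in NumPy (a single-cell illustration with random weights and hypothetical frame sizes, not the trained screening network):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)           # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)           # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde          # new hidden state

# Hypothetical sizes: 4 features per heart-sound frame, hidden size 3
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
params = (rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h)), np.zeros(n_h),
          rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h)), np.zeros(n_h),
          rng.normal(size=(n_h, n_in)), rng.normal(size=(n_h, n_h)), np.zeros(n_h))

h = np.zeros(n_h)
for frame in rng.normal(size=(5, n_in)):   # a toy 5-frame sequence
    h = gru_cell(frame, h, params)
print(h.shape)  # (3,)
```

The final hidden state summarizes the frame sequence; a classifier head on top of it would produce the screening decision.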

RESULTS : To evaluate the performance of the proposed model, three methods were used for comparison. The results show that the GRU model gives a satisfactory performance, with an average accuracy of 98.82%, which is better than the comparison models.

CONCLUSION : The proposed GRU model can learn features from HS directly, which means it can be independent of expert knowledge. In addition, the good performance demonstrates the effectiveness of HS analysis for HF early screening.

Gao Shan, Zheng Yineng, Guo Xingming

2020-Jan-13

Deep learning, Gated recurrent unit, Heart failure screening, Heart sound

Pathology Pathology

Artificial intelligence in digital breast pathology: Techniques and applications.

In Breast (Edinburgh, Scotland) ; h5-index 0.0

Breast cancer is the most common cancer and second leading cause of cancer-related death worldwide. The mainstay of breast cancer workup is histopathological diagnosis - which guides therapy and prognosis. However, emerging knowledge about the complex nature of cancer and the availability of tailored therapies have exposed opportunities for improvements in diagnostic precision. In parallel, advances in artificial intelligence (AI) along with the growing digitization of pathology slides for the primary diagnosis are a promising approach to meet the demand for more accurate detection, classification and prediction of behaviour of breast tumours. In this article, we cover the current and prospective uses of AI in digital pathology for breast cancer, review the basics of digital pathology and AI, and outline outstanding challenges in the field.

Ibrahim Asmaa, Gamble Paul, Jaroensri Ronnachai, Abdelsamea Mohammed M, Mermel Craig H, Chen Po-Hsuan Cameron, Rakha Emad A

2019-Dec-19

Artificial intelligence, Deep learning, Machine learning, Whole slide image, AI, Applications, Breast cancer, Breast pathology, DL, Digital, ML, Pathology, WSI

General General

Design and evaluation of a context-aware model based on psychophysiology.

In Computer methods and programs in biomedicine ; h5-index 0.0

BACKGROUND AND OBJECTIVE : Psychotherapy is one of the most common pathways to help individuals address mental disorders. However, the traditional method of assessing mental health leaves room for improvement. Recent advances in digital technology (e.g., smartphones and wearables) and machine learning techniques can support psychotherapy through the addition of psychophysiology. This paper presents RevitalMe, a context-aware model for assisting a psychotherapeutic understanding of human behavior by providing psychophysiological insights from real life.

METHODS : Five volunteers used RevitalMe's prototype in natural environments for eight days each. Ecological Momentary Assessment was used to collect individuals' stressful states and to label real-life data. The Wilcoxon Signed-Rank Test was performed to verify a significant difference between the labeled states. Then, RevitalMe classified psychological states based on physiological measurements through machine learning, associating them with the individual's behavior. After that, visual insights were generated through context processing and presented to psychotherapists as evidence of an individual's daily behavior and psychological state. Twelve psychotherapists evaluated the clinical acceptability of RevitalMe, answering six quantitative statements and two qualitative questions. Furthermore, a t-Test was performed to investigate whether clinical acceptability differed by therapy field and years of clinical experience.

RESULTS : The Wilcoxon Signed-Rank Test confirmed that the difference between the labeled states was statistically significant, and RevitalMe achieved an F1-score of 75% in the binary classification of stressed states in natural environments. The evaluation showed clinical acceptability of 90%, composed of 62% partial agreement and 28% total agreement. In this regard, the t-Test showed that cognitive-behavior therapists' level of interest in psychophysiological insights was higher than that of psychodynamic therapists.

CONCLUSIONS : The psychophysiological insights bring cognitive-behavior psychotherapy closer to the individual's behavior and daily events, supporting assistance in mental healthcare.
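The Wilcoxon Signed-Rank comparison of labeled states used in this study can be sketched with SciPy on hypothetical paired data. The EMA ratings below are invented for illustration (the study's data and rating scale are not public); the point is only the shape of the test: paired samples, one pair per sampled moment.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Hypothetical paired EMA stress ratings for 30 sampled moments, each
# moment rated by a volunteer both at a self-labeled "calm" time and a
# self-labeled "stressed" time (illustrative data only).
calm = rng.integers(0, 5, size=30).astype(float)
stressed = calm + rng.integers(1, 5, size=30)  # systematically higher

stat, p_value = wilcoxon(calm, stressed)
print(f"W = {stat}, p = {p_value:.3g}")
```

A small p-value, as here, is what would justify using the labels as ground truth for the downstream stress classifier.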

Bavaresco Rodrigo, Barbosa Jorge, Vianna Henrique, Büttenbender Paulo, Dias Lucas

2019-Dec-27

Context awareness, Machine learning, Psychophysiology, Psychotherapy, Ubiquitous computing

General General

PiNN: A Python Library for Building Atomic Neural Networks of Molecules and Materials.

In Journal of chemical information and modeling ; h5-index 0.0

Atomic neural networks (ANNs) constitute a class of machine learning methods for predicting potential energy surfaces and physico-chemical properties of molecules and materials. Despite many successes, developing interpretable ANN architectures and implementing existing ones efficiently are still challenging. This calls for reliable, general-purpose and open-source codes. Here, we present a python library named PiNN as a solution toward this goal. In PiNN, we designed a new interpretable and high-performing graph convolutional neural network variant, PiNet, as well as implemented the established Behler-Parrinello high-dimensional neural network. These implementations were tested using datasets of isolated small molecules, crystalline materials, liquid water and an aqueous alkaline electrolyte. PiNN comes with a visualizer called PiNNBoard to extract chemical insight "learned" by ANNs, provides analytical stress tensor calculations, and interfaces to both the Atomic Simulation Environment and a development version of the Amsterdam Modeling Suite. Moreover, PiNN is highly modularized, which makes it useful not only as a standalone package but also as a chain of tools to develop and implement novel ANNs. The code is distributed under a permissive BSD license and is freely accessible at https://github.com/Teoroo-CMC/PiNN/ with full documentation and tutorials.

Shao Yunqi, Hellström Matti, Mitev Pavlin D, Knijff Lisanne, Zhang Chao

2020-Jan-14

Public Health Public Health

Environmental mixtures and children's health: identifying appropriate statistical approaches.

In Current opinion in pediatrics ; h5-index 0.0

PURPOSE OF REVIEW : Biomonitoring studies have shown that children are constantly exposed to complex patterns of chemical and nonchemical exposures. Here, we briefly summarize the rationale for studying multiple exposures, also called a mixture, in relation to child health, and key statistical approaches that can be used. We discuss their advantages over traditional methods, their limitations, and the contexts in which they are appropriate.

RECENT FINDINGS : New approaches allow pediatric researchers to answer increasingly complex questions related to environmental mixtures. We present methods to identify the most relevant exposures among a multitude of variables, via shrinkage and variable selection techniques, and to identify the overall mixture effect, via Weighted Quantile Sum and Bayesian Kernel Machine regressions. We then describe novel extensions that handle high-dimensional exposure data and allow identification of critical exposure windows.

SUMMARY : Recent advances in statistics and machine learning enable researchers to identify important mixture components, estimate joint mixture effects and pinpoint critical windows of exposure. Despite many advantages over single chemical approaches, measurement error and biases may be amplified in mixtures research, requiring careful study planning and design. Future research requires increased collaboration between epidemiologists, statisticians and data scientists, and further integration with causal inference methods.
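The Weighted Quantile Sum idea mentioned above can be sketched on simulated data: each exposure is scored by its quartile, and the outcome is regressed on a weighted sum of those scores, with weights constrained to be non-negative and sum to one. The softmax reparameterization, simulated exposures, and effect sizes below are illustrative choices, not the method of any specific study (real WQS implementations also use bootstrap ensembles and a held-out validation split).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, p = 300, 4
X = rng.standard_normal((n, p))          # hypothetical exposure biomarkers
true_w = np.array([0.6, 0.3, 0.1, 0.0])  # only some components drive the effect

# Step 1: score each exposure by its quartile (0..3).
cuts = [np.quantile(x, [0.25, 0.5, 0.75]) for x in X.T]
Q = np.stack([np.digitize(x, c) for x, c in zip(X.T, cuts)], axis=1).astype(float)
y = 1.0 + 0.8 * (Q @ true_w) + 0.3 * rng.standard_normal(n)  # simulated outcome

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Step 2: fit y = b0 + b1 * (Q @ w) with w >= 0 and sum(w) = 1,
# enforcing the constraints via a softmax reparameterization of w.
def loss(theta):
    b0, b1, w = theta[0], theta[1], softmax(theta[2:])
    resid = y - (b0 + b1 * (Q @ w))
    return resid @ resid

res = minimize(loss, np.zeros(2 + p), method="Nelder-Mead",
               options={"maxiter": 50000, "fatol": 1e-10, "xatol": 1e-10})
b1, w_hat = res.x[1], softmax(res.x[2:])
print("overall mixture effect:", round(b1, 2))
print("estimated weights:", np.round(w_hat, 2))
```

The single coefficient b1 summarizes the joint mixture effect, while the estimated weights indicate which exposures contribute most, which is exactly the interpretability advantage the review highlights over fitting each chemical separately.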

Tanner Eva, Lee Alison, Colicino Elena

2020-Jan-11

Radiology Radiology

Prioritization of Cognitive Assessments in Alzheimer's Disease via Learning to Rank using Brain Morphometric Data.

In ... IEEE-EMBS International Conference on Biomedical and Health Informatics. IEEE-EMBS International Conference on Biomedical and Health Informatics ; h5-index 0.0

We propose an innovative machine learning paradigm enabling precision medicine for prioritizing cognitive assessments according to their relevance to Alzheimer's disease at the individual patient level. The paradigm tailors the cognitive biomarker discovery and cognitive assessment selection process to the brain morphometric characteristics of each individual patient. We implement this paradigm using a newly developed learning-to-rank method PLTR. Our empirical study on the ADNI data yields promising results to identify and prioritize individual-specific cognitive biomarkers as well as cognitive assessment tasks based on the individual's structural MRI data. The resulting top ranked cognitive biomarkers and assessment tasks have the potential to aid personalized diagnosis and disease subtyping.

Peng Bo, Yao Xiaohui, Risacher Shannon L, Saykin Andrew J, Shen Li, Ning Xia

2019-May

Surgery Surgery

Photoplethysmography based atrial fibrillation detection: a review.

In NPJ digital medicine ; h5-index 0.0

Atrial fibrillation (AF) is a cardiac rhythm disorder associated with increased morbidity and mortality. It is the leading risk factor for cardioembolic stroke and its early detection is crucial in both primary and secondary stroke prevention. Continuous monitoring of cardiac rhythm is today possible thanks to consumer-grade wearable devices, enabling transformative diagnostic and patient management tools. Such monitoring is possible using low-cost easy-to-implement optical sensors that today equip the majority of wearables. These sensors record blood volume variations (a technology known as photoplethysmography, or PPG) from which the heart rate and other physiological parameters can be extracted to inform about user activity, fitness, sleep, and health. Recently, new wearable devices were introduced as being capable of AF detection, evidenced by large prospective trials in some cases. Such devices would allow for early screening of AF and initiation of therapy to prevent stroke. This review is a summary of a body of work on AF detection using PPG. A thorough account of the signal processing, machine learning, and deep learning approaches used in these studies is presented, followed by a discussion of their limitations and challenges towards clinical applications.
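Many of the classical signal-processing approaches this review covers reduce to quantifying the irregularity of inter-beat intervals extracted from the PPG waveform, since AF presents as an "irregularly irregular" rhythm. A toy sketch using normalized RMSSD on synthetic interval series follows; the interval statistics and the decision threshold are purely illustrative, and real detectors combine richer features with learned decision rules.

```python
import numpy as np

rng = np.random.default_rng(3)

def rr_irregularity(ibis):
    """Normalized RMSSD of inter-beat intervals (seconds):
    root-mean-square of successive differences, scaled by the mean interval."""
    d = np.diff(ibis)
    return np.sqrt(np.mean(d ** 2)) / np.mean(ibis)

# Synthetic inter-beat interval series: sinus rhythm is nearly periodic,
# AF intervals vary widely beat to beat (illustrative parameters only).
sinus = 0.8 + 0.02 * rng.standard_normal(120)
af = np.clip(0.8 + 0.25 * rng.standard_normal(120), 0.3, 1.6)

THRESH = 0.1  # illustrative decision threshold on normalized RMSSD
for name, ibis in [("sinus", sinus), ("af", af)]:
    score = rr_irregularity(ibis)
    print(name, round(score, 3), "AF suspected" if score > THRESH else "likely sinus")
```

In practice, the peak detection that produces the intervals is itself a major challenge on wrist-worn PPG, which is why motion artifacts and signal-quality assessment feature prominently among the limitations the review discusses.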

Pereira Tania, Tran Nate, Gadhoumi Kais, Pelter Michele M, Do Duc H, Lee Randall J, Colorado Rene, Meisel Karl, Hu Xiao

2020

Diagnosis, Risk factors

General General

Deep neural networks for human microRNA precursor detection.

In BMC bioinformatics ; h5-index 0.0

BACKGROUND : MicroRNAs (miRNAs) play important roles in a variety of biological processes by regulating gene expression at the post-transcriptional level. The discovery of new miRNAs has therefore become a popular task in biological research. Since the experimental identification of miRNAs is time-consuming, many computational tools have been developed to identify miRNA precursors (pre-miRNAs). Most of these computational methods are based on traditional machine learning, and their performance depends heavily on the selected features, which are usually determined by domain experts. To develop easily implemented methods with better performance, we investigated different deep learning architectures for pre-miRNA identification.

RESULTS : In this work, we applied convolutional neural networks (CNN) and recurrent neural networks (RNN) to predict human pre-miRNAs. We combined the sequences with the predicted secondary structures of pre-miRNAs as input features of our models, avoiding hand-crafted feature extraction and selection. The models were easily trained on the training dataset with low generalization error, and therefore had satisfactory performance on the test dataset. The prediction results on the same benchmark dataset showed that our models outperformed or were highly comparable to other state-of-the-art methods in this area. Furthermore, our CNN model trained on a human dataset had high prediction accuracy on data from other species.

CONCLUSIONS : Deep neural networks (DNN) could be utilized for human pre-miRNA detection with high performance. Complex features of RNA sequences could be automatically extracted by CNN and RNN and used for pre-miRNA prediction. Through proper regularization, our deep learning models, although trained on a comparatively small dataset, had strong generalization ability.
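The combined input described above (nucleotide sequence plus predicted dot-bracket secondary structure) can be sketched as a per-position one-hot encoding: four nucleotide channels and three structure channels per position. The channel layout and padding length below are hypothetical choices for illustration, not the paper's exact scheme.

```python
import numpy as np

NUCS = "ACGU"       # nucleotide alphabet
BRACKETS = "(.)"    # dot-bracket structure alphabet

def encode(seq, structure, max_len=100):
    """One-hot encode an RNA sequence with its predicted dot-bracket
    structure into a (max_len, 7) matrix: 4 nucleotide + 3 structure channels.
    Rows beyond len(seq) stay zero, acting as padding for a CNN/RNN."""
    assert len(seq) == len(structure)
    x = np.zeros((max_len, len(NUCS) + len(BRACKETS)), dtype=np.float32)
    for i, (n, s) in enumerate(zip(seq[:max_len], structure[:max_len])):
        x[i, NUCS.index(n)] = 1.0
        x[i, len(NUCS) + BRACKETS.index(s)] = 1.0
    return x

# Hypothetical 10-nt fragment with a predicted hairpin-like structure.
x = encode("GUAGGUAGUU", "((((...)))", max_len=12)
print(x.shape)  # (12, 7)
```

Feeding both channels jointly lets the network learn correlations between base identity and pairing state, which is the information hand-crafted pre-miRNA features (stem length, loop size, minimum free energy) try to capture manually.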

Zheng Xueming, Fu Xingli, Wang Kaicheng, Wang Meng

2020-Jan-13

DNN, Detection, miRNAs

Surgery Surgery

The Impact of Artificial Intelligence on Quality and Safety.

In Global spine journal ; h5-index 0.0

As exponential expansion of computing capacity converges with unsustainable health care spending, a hopeful opportunity has emerged: the use of artificial intelligence to enhance health care quality and safety. These computer-based algorithms can perform the intricate and extremely complex mathematical operations of classification or regression on immense amounts of data to detect intricate and potentially previously unknown patterns in that data, with the end result of creating predictive models that can be utilized in clinical practice. Such mod