Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.


Public Health

Occurrence, predictors and hazards of elevated groundwater arsenic across India through field observations and regional-scale AI-based modeling.

In The Science of the total environment

Widespread elevated concentrations of groundwater arsenic (As) across South Asia, including India, endanger a huge population that depends on groundwater for drinking water. Here, using high-spatial-resolution As field observations (~3 million groundwater sources) across India, we delineated the regional-scale occurrence of elevated groundwater As (≥10 μg/L), along with the possible geologic, geomorphologic, hydrologic and human-sourced predictors that influence the spatial distribution of the contaminant. Using statistical and machine learning methods, we also modeled the probability of elevated groundwater As concentrations at 1 km resolution, along with probabilistic delineation of high As-hazard zones across India. The observed occurrence of groundwater As was most strongly influenced by geology-tectonics, groundwater-fed irrigated area (%) and elevation. Pervasive As contamination is observed in major parts of the Himalayan mega-river Indus-Ganges-Brahmaputra basins; however, it also occurs in several more localized pockets, mostly related to ancient tectonic zones, igneous provinces, aquifers in modern deltas and chalcophile mineralized regions. The model results suggest As-hazard potential in yet-undetected areas. Our model performed well in predicting groundwater arsenic, with accuracies of 82% and 84% and areas under the curve (AUC) of 0.89 and 0.88 for the test and validation datasets, respectively. An estimated ~90 million people across India are exposed to high groundwater As according to the field-observed data, with the five states with the highest hazard being West Bengal (28 million), Bihar (21 million), Uttar Pradesh (15 million), Assam (8.6 million) and Punjab (6 million). However, the exposed population could be much larger (>250 million) if the modeled hazard is considered. Thus, our study provides a detailed, quantitative assessment of high groundwater As across India, with delineation of possible intrinsic influences and exogenous forcings. The predictive model is helpful for predicting As-hazard zones in areas with limited measurements.
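The hazard-modeling step described above, predicting whether a groundwater source exceeds the 10 μg/L threshold from predictors such as irrigated area and elevation, can be illustrated with a minimal logistic-regression sketch. The feature values, labels and training setup below are invented for illustration only; the authors' model used many more predictors and a 1 km grid.

```python
import math

# Invented records: (groundwater-fed irrigated area %, elevation in m)
# -> 1 if As >= 10 ug/L. NOT the study's data or predictor set.
X = [(80, 30), (70, 50), (90, 20), (60, 40), (10, 900), (20, 700), (5, 1200), (15, 800)]
y = [1, 1, 1, 1, 0, 0, 0, 0]

def standardize(col):
    # Center and scale one feature so gradient descent behaves well.
    m = sum(col) / len(col)
    s = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
    return [(v - m) / s for v in col]

cols = list(zip(*X))
Xs = list(zip(*(standardize(list(c)) for c in cols)))

w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):  # stochastic gradient descent on the log-loss
    for xi, yi in zip(Xs, y):
        p = 1 / (1 + math.exp(-(w[0] * xi[0] + w[1] * xi[1] + b)))
        g = p - yi
        w[0] -= lr * g * xi[0]
        w[1] -= lr * g * xi[1]
        b -= lr * g

def predict_proba(xi):
    # Modeled probability that this source exceeds the 10 ug/L threshold.
    return 1 / (1 + math.exp(-(w[0] * xi[0] + w[1] * xi[1] + b)))

preds = [1 if predict_proba(xi) >= 0.5 else 0 for xi in Xs]
accuracy = sum(p == t for p, t in zip(preds, y)) / len(y)
```

On this tiny separable toy set the fitted model classifies every source correctly; the real model's accuracy (82%-84%) reflects far noisier field data.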

Mukherjee Abhijit, Sarkar Soumyajit, Chakraborty Madhumita, Duttagupta Srimanti, Bhattacharya Animesh, Saha Dipankar, Bhattacharya Prosun, Mitra Adway, Gupta Saibal

2020-Nov-13

Arsenic, Groundwater contamination, India, Machine learning, Public health, Tectonics

Cardiology

Neural collaborative filtering for unsupervised mitral valve segmentation in echocardiography.

In Artificial intelligence in medicine ; h5-index 34.0

The segmentation of the mitral valve annulus and leaflets constitutes a crucial first step in establishing a machine learning pipeline that can support physicians in performing multiple tasks, e.g. diagnosis of mitral valve diseases, surgical planning, and intraoperative procedures. Current methods for mitral valve segmentation on 2D echocardiography videos require extensive interaction with annotators and perform poorly on low-quality and noisy videos. We propose an automated and unsupervised method for mitral valve segmentation based on a low-dimensional embedding of the echocardiography videos using neural network collaborative filtering. The method is evaluated on a collection of echocardiography videos of patients with a variety of mitral valve diseases, and additionally on an independent test cohort. It outperforms state-of-the-art unsupervised and supervised methods on low-quality videos or in the case of sparse annotations.

Corinzia Luca, Laumer Fabian, Candreva Alessandro, Taramasso Maurizio, Maisano Francesco, Buhmann Joachim M

2020-Nov

Collaborative filtering, Mitral valve, Neural network, Segmentation

Surgery

Prediction of breast cancer distant recurrence using natural language processing and knowledge-guided convolutional neural network.

In Artificial intelligence in medicine ; h5-index 34.0

Distant recurrence of breast cancer results in high lifetime risks and low 5-year survival rates. Early prediction of distant recurrence could facilitate intervention and improve patients' quality of life. In this study, we designed an EHR-based predictive model to estimate the probability of distant recurrence in breast cancer patients. We studied the pathology reports and progress notes of 6,447 patients who were diagnosed with breast cancer at Northwestern Memorial Hospital between 2001 and 2015. Clinical notes were mapped to Concept Unique Identifiers (CUIs) using natural language processing tools. Bag-of-words and pre-trained embeddings were employed to vectorize words and CUI sequences. These features, integrated with clinical features from structured data, were fed to conventional machine learning classifiers and a Knowledge-guided Convolutional Neural Network (K-CNN). The best configuration of our model yielded an AUC of 0.888 and an F1-score of 0.5. Our work provides an automated method to predict breast cancer distant recurrence using natural language processing and deep learning approaches. We expect that through advanced feature engineering, better predictive performance could be achieved.
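The bag-of-words vectorization step mentioned above can be sketched in a few lines; the note snippets and vocabulary here are invented stand-ins, not the study's clinical data.

```python
from collections import Counter

# Invented snippets standing in for de-identified clinical notes.
notes = [
    "invasive ductal carcinoma left breast node positive",
    "no evidence of recurrence routine follow up",
]

# Build a vocabulary over the corpus, then map each note to a count vector.
vocab = sorted({tok for note in notes for tok in note.split()})
index = {tok: i for i, tok in enumerate(vocab)}

def bag_of_words(note):
    # One fixed-length vector per note: token counts over the shared vocabulary.
    vec = [0] * len(vocab)
    for tok, n in Counter(note.split()).items():
        vec[index[tok]] = n
    return vec

vectors = [bag_of_words(n) for n in notes]
```

In the actual pipeline such vectors (and CUI-sequence embeddings) feed downstream classifiers; real systems would also lowercase, strip punctuation and prune rare tokens.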

Wang Hanyin, Li Yikuan, Khan Seema A, Luo Yuan

2020-Nov

Breast cancer, Distant recurrence, Entity embeddings, Knowledge-guided convolutional neural network, Word embeddings

General

Autoencoded DNA methylation data to predict breast cancer recurrence: Machine learning models and gene-weight significance.

In Artificial intelligence in medicine ; h5-index 34.0

Breast cancer is the most frequent cancer in women and the second most frequent overall after lung cancer. Although the 5-year survival rate of breast cancer is relatively high, recurrence is also common and often involves metastasis, with its consequent threat to patients. DNA methylation-derived databases have become an interesting primary source for supervised knowledge extraction regarding breast cancer. Unfortunately, the study of DNA methylation involves the processing of hundreds of thousands of features for every patient. DNA methylation data are characterized by high dimension and low sample size, a setting with well-known issues regarding feature selection and generation. Autoencoders (AEs) appear as a specific technique for conducting nonlinear feature fusion. Our main objective in this work is to design a procedure to summarize DNA methylation by taking advantage of AEs. Our proposal is able to generate new features from the values of CpG sites of patients with and without recurrence. Then, a limited set of relevant genes to characterize breast cancer recurrence is proposed through survival analysis and a weighted ranking of genes according to the distribution of their CpG sites. To test our proposal, we selected a dataset from The Cancer Genome Atlas data portal and an AE with a single hidden layer. The literature and enrichment analysis (based on genomic context and functional annotation) conducted on the genes obtained in our experiment confirmed that all of these genes were related to breast cancer recurrence.
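As a toy illustration of the single-hidden-layer autoencoder idea, the sketch below squeezes 2-D correlated inputs (standing in for CpG beta values, which in reality number in the hundreds of thousands) through a 1-unit linear bottleneck and checks that reconstruction error falls during training. Everything here, the data, the sizes and the linear activation, is an invented simplification of the authors' setup.

```python
# Tiny linear autoencoder: 2-D input -> 1-unit bottleneck -> 2-D reconstruction.
# Invented points lying roughly on a line stand in for correlated CpG values.
data = [(1.0, 1.1), (0.5, 0.4), (-1.0, -0.9), (-0.5, -0.6), (0.8, 0.7), (-0.8, -0.9)]

enc = [0.1, 0.2]   # encoder weights (input -> bottleneck)
dec = [0.1, 0.2]   # decoder weights (bottleneck -> reconstruction)
lr = 0.05

def recon_error(points):
    # Sum of squared reconstruction errors over the dataset.
    err = 0.0
    for x0, x1 in points:
        h = enc[0] * x0 + enc[1] * x1
        err += (dec[0] * h - x0) ** 2 + (dec[1] * h - x1) ** 2
    return err

initial_err = recon_error(data)
for _ in range(300):  # stochastic gradient descent on the squared error
    for x0, x1 in data:
        h = enc[0] * x0 + enc[1] * x1
        r0, r1 = dec[0] * h - x0, dec[1] * h - x1
        dh = 2 * (r0 * dec[0] + r1 * dec[1])
        dec[0] -= lr * 2 * r0 * h
        dec[1] -= lr * 2 * r1 * h
        enc[0] -= lr * dh * x0
        enc[1] -= lr * dh * x1
final_err = recon_error(data)
```

The 1-D bottleneck activation `h` is the "new feature" the paper's procedure would pass on to survival analysis; a real AE would use nonlinear activations and a far wider layer.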

Macías-García Laura, Martínez-Ballesteros María, Luna-Romera José María, García-Heredia José M, García-Gutiérrez Jorge, Riquelme-Santos José C

2020-Nov

Autoencoder, Breast cancer, DNA methylation, Feature generation, Machine learning

General

Decoding working memory task condition using MEG source level long-range phase coupling patterns.

In Journal of neural engineering ; h5-index 52.0

OBJECTIVE : The objective of the study is to identify phase coupling patterns that are shared across subjects via a machine learning approach that utilises source space MEG phase coupling data from a Working Memory (WM) task. Indeed, phase coupling of neural oscillations is putatively a key factor for communication between distant brain areas and it is therefore crucial in performing cognitive tasks, including WM. Previous studies investigating phase coupling during cognitive tasks have often focused on a few a priori selected brain areas or a specific frequency band and the need for data-driven approaches has been recognised. Machine learning techniques have emerged as valuable tools for the analysis of neuroimaging data since they catch fine-grained differences in the multivariate signal distribution. Here, we expect that these techniques applied to MEG phase couplings can reveal WM related processes that are shared across individuals.

APPROACH : We analysed WM data collected as part of the Human Connectome Project. The MEG data were collected while subjects (N=83) performed N-back WM tasks in two different conditions, namely 2-back (WM condition) and 0-back (control condition). We estimated phase coupling patterns (Multivariate Phase Slope Index) for both conditions and for the theta, alpha, beta, and gamma bands. The obtained phase coupling data were then used to train a linear support vector machine to classify which task condition the subject was performing, with an across-subject cross-validation approach. The classification was performed separately on the data from individual frequency bands and with all bands combined (multiband). Finally, we evaluated the relative importance of the different features (phase couplings) for the classification by means of feature selection probability.

MAIN RESULTS : The WM condition and control condition were successfully classified based on the phase coupling patterns in the theta (62% accuracy) and alpha (60% accuracy) bands separately. Importantly, the multiband classification showed that phase coupling patterns not only in the theta and alpha bands but also in the gamma band are related to WM processing, as evidenced by the improvement in classification performance (71%).

SIGNIFICANCE : Our study successfully decoded working memory tasks using MEG source space functional connectivity. Our approach, combining across-subject classification and a multidimensional metric recently developed by our group, is able to detect patterns of connectivity that are shared across individuals. In other words, the results generalise to new individuals and allow meaningful interpretation of the task-relevant phase coupling patterns.
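The across-subject cross-validation scheme described in the approach, where each test fold contains only subjects never seen during training, can be sketched as a subject-wise split; the subject IDs and fold count below are invented.

```python
# Subject-wise folds: each subject's data appear in exactly one test fold,
# so the classifier is always evaluated on unseen individuals.
subjects = [f"S{i:02d}" for i in range(12)]  # stand-in subject IDs
n_folds = 4

# Deal subjects round-robin into n_folds disjoint test folds.
folds = [subjects[i::n_folds] for i in range(n_folds)]

splits = []
for k in range(n_folds):
    test = set(folds[k])
    train = [s for s in subjects if s not in test]
    splits.append((train, sorted(test)))

# No subject may sit in both the train and test side of the same split.
leakage = any(set(tr) & set(te) for tr, te in splits)
```

Trial-wise (rather than subject-wise) splitting would leak subject identity into training and inflate accuracy, which is exactly what this scheme guards against.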

Syrjälä Jaakko Johannes, Basti Alessio, Guidotti Roberto, Marzetti Laura, Pizzella Vittorio

2020-Nov-30

machine learning, magnetoencephalography, neural oscillations, phase coupling, working memory

Pathology

The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis.

In Computers in biology and medicine

Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocyte and mitosis counting), and segmentation (e.g., nucleus and gland segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage which is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance when compared to the network by itself. The aim of this review is to provide an overview of the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
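One classic post-processing technique in this area is removing spurious small connected components from a network's binary segmentation mask. The sketch below shows the idea on an invented toy mask using a 4-connected flood fill; it illustrates the general technique and is not code from the review.

```python
# Post-processing sketch: drop connected components smaller than a cutoff
# from a binary segmentation mask (a common cleanup of network output).
# The mask is an invented 2-D example; 1 = foreground (e.g. nucleus pixel).
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
]

def remove_small_objects(mask, min_size):
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Flood-fill one 4-connected component.
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    a, b = stack.pop()
                    comp.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and mask[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            stack.append((na, nb))
                # Erase components below the size cutoff.
                if len(comp) < min_size:
                    for a, b in comp:
                        out[a][b] = 0
    return out

cleaned = remove_small_objects(mask, min_size=2)
```

Production pipelines would use a library routine (e.g. connected-component labeling in an image library) rather than a hand-rolled flood fill, but the effect is the same: the 2x2 blob survives and the two isolated pixels are erased.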

Salvi Massimo, Acharya U Rajendra, Molinari Filippo, Meiburger Kristen M

2020-Nov-21

Deep learning, Digital pathology, Histology, Image analysis, Post-processing, Pre-processing

General

Development and validation of a real-time artificial intelligence-assisted system for detecting early gastric cancer: A multicentre retrospective diagnostic study.

In EBioMedicine

BACKGROUND : We aimed to develop and validate a real-time deep convolutional neural network (DCNN) system for detecting early gastric cancer (EGC).

METHODS : All 45,240 endoscopic images from 1,364 patients were divided into a training dataset (35,823 images from 1,085 patients) and a validation dataset (9,417 images from 279 patients). Another 1,514 images from three other hospitals were used for external validation. We compared the diagnostic performance of the DCNN system with that of endoscopists, and then evaluated the performance of endoscopists with and without reference to the system. Thereafter, we evaluated the diagnostic ability of the DCNN system on video streams. Accuracy, sensitivity, specificity, positive predictive value, negative predictive value and Cohen's kappa coefficient were measured to assess detection performance.
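All of the listed evaluation metrics derive from the 2x2 confusion matrix; a minimal sketch with invented counts (not the study's results) shows how each is computed, including Cohen's kappa.

```python
# Invented confusion-matrix counts for illustration only.
tp, fp, fn, tn = 90, 10, 15, 85
n = tp + fp + fn + tn

accuracy = (tp + tn) / n
sensitivity = tp / (tp + fn)   # recall on truly cancerous frames
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

# Cohen's kappa: agreement beyond chance between model and reference labels.
p_obs = accuracy
p_pos = ((tp + fp) / n) * ((tp + fn) / n)   # chance agreement on positives
p_neg = ((fn + tn) / n) * ((fp + tn) / n)   # chance agreement on negatives
p_exp = p_pos + p_neg
kappa = (p_obs - p_exp) / (1 - p_exp)
```

With these toy counts accuracy is 0.875 and kappa is 0.75; unlike accuracy, kappa discounts the agreement expected by chance, which matters when class prevalence is skewed.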

FINDINGS : The DCNN system showed good performance in EGC detection in the validation datasets, with accuracy of 85.1%-91.2%, sensitivity of 85.9%-95.5%, specificity of 81.7%-90.3%, and AUC of 0.887-0.940. The DCNN system showed better diagnostic performance than endoscopists and improved the performance of endoscopists. The DCNN system was able to process oesophagogastroduodenoscopy (OGD) video streams to detect EGC lesions in real time.

INTERPRETATION : We developed a real-time DCNN system for EGC detection with high accuracy and stability. Multicentre prospective validation is needed to acquire high-level evidence for its clinical application.

FUNDING : This work was supported by the National Natural Science Foundation of China (grant nos. 81672935 and 81871947), Jiangsu Clinical Medical Center of Digestive System Diseases and Gastrointestinal Cancer (grant no. YXZXB2016002), and Nanjing Science and Technology Development Foundation (grant no. 2017sb332019).

Tang Dehua, Wang Lei, Ling Tingsheng, Lv Ying, Ni Muhan, Zhan Qiang, Fu Yiwei, Zhuang Duanming, Guo Huimin, Dou Xiaotan, Zhang Wei, Xu Guifang, Zou Xiaoping

2020-Nov-27

Artificial intelligence, Convolutional neural network, Detection, Early gastric cancer

General

ENNAACT is a novel tool which employs neural networks for anticancer activity classification for therapeutic peptides.

In Biomedicine & pharmacotherapy = Biomedecine & pharmacotherapie

The prevalence of cancer as a threat to human life, responsible for 9.6 million deaths worldwide in 2018, motivates the search for new anticancer agents. While many options are currently available for treatment, these are often expensive and impact the human body unfavourably. Anticancer peptides represent a promising emerging field of anticancer therapeutics, characterized by a favourable toxicity profile. The development of accurate in silico methods for anticancer peptide prediction is of paramount importance, as the amount of available sequence data is growing each year. This study leverages advances in machine learning research to produce a novel sequence-based deep neural network classifier for anticancer peptide activity. The classifier achieves performance comparable to the best in class, with a cross-validated accuracy of 98.3%, a Matthews correlation coefficient of 0.91 and an area under the curve of 0.95. This classifier is available as a web server at https://research.timmons.eu/ennaact, facilitating in silico screening and design of new anticancer peptide chemotherapeutics by the research community.
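Sequence-based peptide classifiers generally begin by turning a sequence into a fixed-length numeric vector; amino-acid composition is one common choice. The sketch below is an assumption-laden illustration of that featurization step, since the abstract does not specify ENNAACT's exact feature set, and the example peptide is invented.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def aa_composition(seq):
    # Fraction of each standard residue: a length-20 feature vector
    # that is independent of peptide length.
    counts = Counter(seq.upper())
    n = len(seq)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

features = aa_composition("KWKLFKKIEK")  # invented example peptide
```

Vectors like this (often alongside dipeptide composition or physicochemical descriptors) are what a downstream neural network would consume.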

Timmons Patrick Brendan, Hewage Chandralal M

2020-Nov-27

Anticancer drugs, In silico screening, Machine learning, Neural network, Peptides

General

Clinical Features for Identifying the Possibility of Toileting Independence after Convalescent Inpatient Rehabilitation in Severe Stroke Patients: A Decision Tree Analysis Based on a Nationwide Japan Rehabilitation Database.

In Journal of stroke and cerebrovascular diseases : the official journal of National Stroke Association

BACKGROUND AND PURPOSE : In severe stroke patients, considerable attention should be given to toileting activity in rehabilitative support. Recently, the application of artificial intelligence, including machine learning (ML), has expanded into the field of stroke medicine and could help clarify the factors affecting toileting independence in severe stroke patients. This study aimed to identify the factors affecting toileting independence in severe stroke patients using ML.

METHODS : We used the Japan Rehabilitation Database from 2005 to 2015 to investigate data from 2292 severe stroke patients. We performed the chi-squared automatic interaction detection (CHAID) algorithm with various explanatory variables.

RESULTS : The CHAID model identified the modified Rankin scale (mRS) score as the first discriminator. Among those with an mRS score ≤4, the next discriminator was age (≤72, 73-80, or >80 years). Among those with an mRS score >4, the next discriminator was also age (≤57, 58-72, 73-80, or >80 years). Interestingly, some patients achieved toileting independence even though this study focused on severe stroke patients. In branches based on age, the percentage of patients who achieved toileting independence at discharge decreased progressively with age.
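The reported tree structure, with mRS as the first split and age bands below it, can be written as nested rules. The split variables and cut-points below follow the abstract; the leaf labels are illustrative placeholders rather than the study's outcome probabilities.

```python
def chaid_leaf(mrs, age):
    """Route a patient to a leaf of the reported tree structure.

    Split variables and cut-points follow the abstract (mRS first, then
    age bands); leaf names are illustrative placeholders, not the study's
    toileting-independence rates.
    """
    if mrs <= 4:
        if age <= 72:
            return "mRS<=4, age<=72"
        elif age <= 80:
            return "mRS<=4, age 73-80"
        return "mRS<=4, age>80"
    # The mRS > 4 branch uses a finer age split.
    if age <= 57:
        return "mRS>4, age<=57"
    elif age <= 72:
        return "mRS>4, age 58-72"
    elif age <= 80:
        return "mRS>4, age 73-80"
    return "mRS>4, age>80"
```

This readability, a handful of if/else rules a clinician can inspect, is a key reason decision-tree methods like CHAID are chosen over black-box models in such studies.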

CONCLUSION : We identified the influential factors, including reference values, for achieving toileting independence in convalescent severe stroke patients.

Imura Takeshi, Inoue Yu, Tanaka Ryo, Matsuba Junji, Umayahara Yasutaka

2020-Nov-27

Clinical feature, Decision tree, Severe stroke, Toileting ability

General

The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future.

In Artificial intelligence in medicine ; h5-index 34.0

Medicine is at a disciplinary crossroads. With the rapid integration of Artificial Intelligence (AI) into the healthcare field, the future care of our patients will depend on the decisions we make now. Demographic healthcare inequalities continue to persist worldwide, and the impact of medical biases on different patient groups is still being uncovered by the research community. At a time when clinical AI systems are being scaled up in response to the COVID-19 pandemic, the role of AI in exacerbating health disparities must be critically reviewed. For AI to account for the past and build a better future, we must first unpack the present and create a new baseline on which to develop these tools. The means by which we move forwards will determine whether we project existing inequity into the future, or whether we reflect on what we hold to be true and challenge ourselves to be better. AI is an opportunity and a mirror for all disciplines to improve their impact on society, and for medicine the stakes could not be higher.

Straw Isabel

2020-Nov

Artificial intelligence, Bias, Data science, Digital health, Disparities, Health, Healthcare, Inequality, Medicine

Radiology

BSR 2020 Annual Meeting: Program.

In Journal of the Belgian Society of Radiology

Different times call for different measures. The COVID-19 pandemic has forced us to search for alternative methods to provide an annual meeting of equal interest and quality. For the Belgian Society of Radiology (BSR) 2020 Annual Meeting, the sections on Abdominal Imaging and Thoracic Imaging and the Young Radiologist Section (YRS) joined forces to organise a meeting quite different from the ones we have organised in the past. We have chosen to create a compact - approximately 5-hour - and entirely virtual meeting with the possibility of live interaction with the speakers during the question-and-answer sessions. The meeting kicks off with a message from the BSR president about radiology in 2020, followed by three abdominal talks. The second session combines an abdominal talk with COVID-related talks. We have chosen to include not only thoracic findings in COVID-19, but to take it further and discuss neurological patterns, long-term clinical findings and the progress of artificial intelligence in COVID-19. Lastly, the annual meeting closes with a short movie about the (re)discovery of Röntgen's X-rays, presented to us by the Belgian Museum for Radiology, Military Hospital, Brussels.

Vanhoenacker Anne-Sophie, Grandjean Flavien, Lieven Van Hoe, Snoeckx Annemie, Vanhoenacker Piet, Oyen Raymond

2020-Nov-13

2020, Annual Symposium, BSR

Surgery

Artificial intelligence image assisted knee ligament trauma repair efficacy analysis and postoperative femoral nerve block analgesia effect research.

In World neurosurgery ; h5-index 47.0

OBJECTIVE : This study aimed to analyse the effect of knee ligament injury repair assisted by artificial intelligence (AI) imaging and the analgesic effect of postoperative femoral nerve block.

METHODS : Data-driven and AI methods were adopted to systematically study magnetic resonance imaging (MRI) image reconstruction, image processing, and image analysis. First, the knee ligament reconstruction and femoral arteriography images were studied. Using the prior knowledge that the full width at half maximum of the contrast image does not change with resolution, a constrained data exploration (CDE) algorithm was proposed in combination with an iterative algorithm. The algorithm could reconstruct high-resolution images from the collected low-frequency k-space data. The experimental data and results were simulated with contrast-enhanced knee ligament and femoral nerve angiography images. Exploiting the spatial continuity of the knee ligaments and femoral nerve, a multi-layer-input segmentation network with multi-supervised output was designed in this study; it yielded good segmentation results for the knee ligaments and femoral nerve. On this basis, a multi-parameter image-input detection network was proposed to detect knee ligament injuries.

RESULTS : The area under the receiver operating characteristic (ROC) curve of the constructed model on the test set was 0.824, and the sensitivity and specificity on the test set were 0.800 and 0.836, respectively. The proposed reconstruction outperformed compressed sensing and was more accurate for knee ligament and femoral nerve stenosis. In addition, the network had higher sensitivity for knee joint trauma detection, which could provide hints to clinicians. Detection of the postoperative femoral nerve block also performed well, which could provide important information for clinical analgesia.

CONCLUSION : The AI image-assisted diagnosis system for the analysis and processing of multi-parameter magnetic resonance images helped doctors make clinical decisions, reducing their workload, improving work efficiency, and lowering the rate of misdiagnosis.

Hong Gang, Zhang Le, Kong Xiaochuan, Herbertl Lucien

2020-Nov-27

Analgesic effect analysis, Artificial intelligence image assistance, Femoral nerve block, Multiple ligament trauma repair

Dermatology

Developing and Validating Methods to Assemble Systemic Lupus Erythematosus Births in the Electronic Health Record.

In Arthritis care & research ; h5-index 56.0

OBJECTIVE : Electronic health records (EHRs) represent powerful tools to study rare diseases. We developed and validated EHR algorithms to identify systemic lupus erythematosus (SLE) births across centers.

METHODS : We developed algorithms in a training set using an EHR with over 3 million subjects and validated the algorithms at two other centers. Subjects at all three centers were selected using ≥1 SLE ICD-9 or ICD-10-CM code and ≥1 ICD-9 or ICD-10-CM delivery code. A subject was a case if diagnosed with SLE by a rheumatologist and had a documented birth. We tested algorithms using SLE ICD-9 or ICD-10-CM codes, antimalarial use, a positive antinuclear antibody titer ≥1:160, and whether dsDNA antibodies or complement levels were ever checked, using both rule-based and machine learning methods. Positive predictive values (PPVs) and sensitivities were calculated. We assessed the impact of case definition, coding provider, and subject race on algorithm performance.

RESULTS : The algorithms performed similarly across all three centers. Increasing the number of SLE codes, adding clinical data, and having a rheumatologist use the SLE code all increased the likelihood of identifying true SLE patients. All of the algorithms had higher PPVs in African American than in Caucasian SLE births. In the machine learning models, the total number of SLE codes and an SLE code from a rheumatologist were the most important variables for SLE case status.

CONCLUSION : We developed and validated algorithms that use multiple types of data to identify SLE births in the EHR. The algorithms performed better in African American mothers than in Caucasian mothers.

Barnado April, Eudy Amanda M, Blaske Ashley, Wheless Lee, Kirchoff Katie, Oates Jim C, Clowse Megan E B

2020-Nov-30

birth, delivery, electronic health records, electronic phenotyping, pregnancy, systemic lupus erythematosus

General

Tacrolimus exposure prediction using machine learning.

In Clinical pharmacology and therapeutics

The aim of this work is to estimate the area under the blood concentration-time curve (AUC) of tacrolimus following twice-a-day (BID) or once-a-day (QD) dosing in organ transplant patients, using Xgboost machine learning (ML) models. A total of 4,997 and 1,452 tacrolimus inter-dose AUCs from patients on BID and QD tacrolimus, respectively, sent to our ISBA expert system (www.pharmaco.chu-limoges.fr/) for AUC estimation and dose recommendation based on tacrolimus concentrations measured at a minimum of 3 sampling times (predose and approximately 1 and 3 h after dosing), were used to develop four ML models based on 2 or 3 concentrations. For each model, the data were split into a training set (75%) and a test set (25%). The Xgboost models with the lowest RMSE in a ten-fold cross-validation experiment on the training set were evaluated in the test set and in 6 independent full-PK datasets from renal, liver and heart transplant patients. ML models based on 2 or 3 concentrations, the differences between these concentrations, the relative deviations from the theoretical sampling times, and 4 covariates (dose, type of transplantation, age and time between transplantation and sampling) yielded excellent AUC estimation performance in the test datasets (relative bias <5% and relative RMSE <10%) and better performance than MAP Bayesian estimation in the 6 independent full-PK datasets. The Xgboost ML models described here allow accurate estimation of the tacrolimus interdose AUC and can be used for routine tacrolimus exposure estimation and dose adjustment. They will soon be implemented in a dedicated web interface.
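The quantity the ML models estimate, the interdose area under the concentration-time curve, is conventionally computed from a full profile with the linear trapezoidal rule; below is a sketch with invented concentrations, not patient data.

```python
# Invented 12-h tacrolimus profile: (time after dose in h, blood conc. in ng/mL).
profile = [(0, 5.0), (1, 18.0), (3, 12.0), (6, 8.0), (9, 6.5), (12, 5.5)]

def trapezoidal_auc(points):
    """Linear trapezoidal AUC over the sampled interval (ng*h/mL)."""
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(points, points[1:]):
        # Area of one trapezoid between consecutive samples.
        auc += (t1 - t0) * (c0 + c1) / 2
    return auc

auc_0_12 = trapezoidal_auc(profile)
```

The clinical appeal of the paper's approach is precisely that it estimates this full-profile quantity from only 2-3 sparse samples plus covariates, sparing patients the dense sampling the trapezoidal rule requires.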

Woillard Jean-Baptiste, Labriffe Marc, Debord Jean, Marquet Pierre

2020-Nov-30

Xgboost, machine learning, tacrolimus, therapeutic drug monitoring

General

Transcriptional insights into pathogenesis of cutaneous systemic sclerosis using pathway driven meta-analysis assisted by machine learning methods.

In PloS one ; h5-index 176.0

The pathophysiology of systemic sclerosis (SSc, scleroderma), an autoimmune rheumatic disease, comprises mechanisms that drive vasculopathy, inflammation and fibrosis. Understanding of the disease and its associated clinical heterogeneity has advanced considerably in the past decade, highlighting the need for more specifically targeted therapy. While many of the recent trials in SSc failed to meet primary end points that predominantly relied on changes in the modified Rodnan skin score (MRSS), sub-group analyses, especially those focused on basal skin transcriptomic data, have provided insights into patient subsets that respond to therapies. These findings suggest that a deeper understanding of the molecular changes in pathways is very important to define disease drivers in various patient subgroups. In view of these challenges, we performed a meta-analysis of 9 publicly available SSc microarray studies using a novel pathway-pivoted approach combining consensus clustering and machine-learning-assisted feature selection. Selected pathway modules were further explored through cluster-specific topological network analysis in search of novel therapeutic concepts. In addition, we went beyond the previously described SSc division into 3 clusters (e.g. inflammatory, fibro-proliferative, normal-like) and expanded into a much finer stratification in order to profile SSc patients more accurately. Our analysis unveiled an important signature of 80 pathways that differentiated SSc patients into 8 unique subtypes. The 5 pathway modules derived from this signature successfully defined the 8 SSc subsets and were validated by in-silico cellular deconvolution analysis. The involvement of myeloid cells and fibroblasts in different clusters was confirmed and linked to the corresponding pathway activities. Collectively, our findings revealed more complex disease subtypes in SSc; key gene mediators such as IL6, FGFR1, TLR7, PLCG2 and IRK2 identified by network analysis underscored the scientific rationale for exploring additional targets in the treatment of SSc.

Xu Xiao, Ramanujam Meera, Visvanathan Sudha, Assassi Shervin, Liu Zheng, Li Li

2020

General

Essential gene prediction using limited gene essentiality information-An integrative semi-supervised machine learning strategy.

In PloS one ; h5-index 176.0

Essential gene prediction helps to find the minimal set of genes indispensable for the survival of an organism. Machine learning (ML) algorithms have proven useful for predicting gene essentiality; however, currently available ML pipelines perform poorly for organisms with limited experimental data. The objective here is to develop a new ML pipeline to help annotate essential genes of less explored disease-causing organisms for which minimal experimental data are available. The proposed strategy combines an unsupervised feature selection technique, dimension reduction using the Kamada-Kawai algorithm, and a semi-supervised ML algorithm employing a Laplacian Support Vector Machine (LapSVM) to predict essential and non-essential genes from genome-scale metabolic networks using a very limited labeled dataset. A novel scoring technique, the Semi-Supervised Model Selection Score, equivalent to the area under the ROC curve (auROC), is proposed for selecting the best model when calculating supervised performance metrics is difficult due to lack of data. The unsupervised feature selection followed by dimension reduction revealed a distinct circular pattern in the clustering of essential and non-essential genes. LapSVM then created a curve that dissected this circle for the classification and prediction of essential genes with high accuracy (auROC > 0.85) even with only 1% labeled data for model training. After successful validation of this ML pipeline on both eukaryotes and prokaryotes, showing high accuracy even when the labeled dataset is very limited, the strategy was used to predict essential genes of organisms with inadequate experimental data, such as Leishmania sp. Using a graph-based semi-supervised machine learning scheme, a novel integrative approach is thus proposed for essential gene prediction that applies universally to both prokaryotes and eukaryotes with limited labeled data. The essential genes predicted using the pipeline provide an important lead for the prediction of gene essentiality and the identification of novel therapeutic targets for antibiotic and vaccine development against disease-causing parasites.
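The semi-supervised step above can be illustrated with a minimal sketch. LapSVM has no scikit-learn implementation, so this uses LabelSpreading, a related graph-based semi-supervised classifier, on synthetic data with roughly 1% of labels retained; the data and all parameter values are illustrative, not the paper's.

```python
# Graph-based semi-supervised classification with ~1% labeled data,
# using scikit-learn's LabelSpreading as a stand-in for LapSVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Mask all but ~1% of the labels (-1 marks "unlabeled" for scikit-learn).
y_train = y.copy()
unlabeled = rng.random(len(y)) > 0.01
y_train[unlabeled] = -1

model = LabelSpreading(kernel="knn", n_neighbors=10)
model.fit(X, y_train)

# Evaluate the transductive predictions on the points that were unlabeled.
scores = model.label_distributions_[unlabeled, 1]
print("auROC:", round(roc_auc_score(y[unlabeled], scores), 3))
```

Even with only ~20 labeled points, the label graph propagates class information to the rest of the dataset, which mirrors the paper's setting of very limited ground truth.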

Nandi Sutanu, Ganguli Piyali, Sarkar Ram Rup

2020

General General

A multi-scale cortical wiring space links cellular architecture and functional dynamics in the human brain.

In PLoS biology

The vast net of fibres within and underneath the cortex is optimised to support the convergence of different levels of brain organisation. Here, we propose a novel coordinate system of the human cortex based on an advanced model of its connectivity. Our approach is inspired by seminal, but so far largely neglected, models of cortico-cortical wiring established by postmortem anatomical studies, and capitalises on cutting-edge in vivo neuroimaging and machine learning. The new model expands the currently prevailing diffusion magnetic resonance imaging (MRI) tractography approach by incorporating additional features of cortical microstructure and cortico-cortical proximity. Studying several datasets and different parcellation schemes, we show that our coordinate system robustly recapitulates established sensory-limbic and anterior-posterior dimensions of brain organisation. A series of validation experiments showed that the new wiring space reflects cortical microcircuit features (including pyramidal neuron depth and glial expression) and allowed for competitive simulations of functional connectivity and dynamics based on resting-state functional magnetic resonance imaging (rs-fMRI) and human intracranial electroencephalography (EEG) coherence. Our results advance our understanding of how cell-specific neurobiological gradients produce a hierarchical cortical wiring scheme that is concordant with the increasing functional sophistication of human brain organisation. Our evaluations demonstrate that the cortical wiring space bridges across scales of neural organisation and can be readily translated to single individuals.

Paquola Casey, Seidlitz Jakob, Benkarim Oualid, Royer Jessica, Klimes Petr, Bethlehem Richard A I, Larivière Sara, Vos de Wael Reinder, Rodríguez-Cruces Raul, Hall Jeffery A, Frauscher Birgit, Smallwood Jonathan, Bernhardt Boris C

2020-Nov-30

General General

We need to keep a reproducible trace of facts, predictions, and hypotheses from gene to function in the era of big data.

In PLoS biology

How do we scale biological science to the demand of next generation biology and medicine to keep track of the facts, predictions, and hypotheses? These days, enormous amounts of DNA sequence and other omics data are generated. Since these data contain the blueprint for life, it is imperative that we interpret it accurately. The abundance of DNA is only one part of the challenge. Artificial Intelligence (AI) and network methods routinely build on large screens, single cell technologies, proteomics, and other modalities to infer or predict biological functions and phenotypes associated with proteins, pathways, and organisms. As a first step, how do we systematically trace the provenance of knowledge from experimental ground truth to gene function predictions and annotations? Here, we review the main challenges in tracking the evolution of biological knowledge and propose several specific solutions to provenance and computational tracing of evidence in functional linkage networks.

Kasif Simon, Roberts Richard J

2020-Nov-30

Public Health Public Health

Identifying longevity associated genes by integrating gene expression and curated annotations.

In PLoS computational biology

Aging is a complex process with poorly understood genetic mechanisms. Recent studies have sought to classify genes as pro-longevity or anti-longevity using a variety of machine learning algorithms. However, it is not clear which types of features are best for optimizing classification performance and which algorithms are best suited to this task. Further, performance assessments based on held-out test data are lacking. We systematically compare five popular classification algorithms using gene ontology and gene expression datasets as features to predict the pro-longevity versus anti-longevity status of genes for two model organisms (C. elegans and S. cerevisiae) using the GenAge database as ground truth. We find that elastic net penalized logistic regression performs particularly well at this task. Using elastic net, we make novel predictions of pro- and anti-longevity genes that are not currently in the GenAge database.
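Elastic net penalized logistic regression, the best performer reported above, can be sketched with scikit-learn; the synthetic features below stand in for the gene ontology and expression features used in the study, and the hyperparameter values are illustrative.

```python
# Elastic net penalized logistic regression: L1 + L2 regularization,
# controlled by l1_ratio, via the saga solver.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=100,
                           n_informative=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("held-out AUC:", round(auc, 3))
```

The L1 component drives many coefficients to exactly zero, which is what makes the fitted model useful for nominating a sparse set of candidate pro- or anti-longevity genes.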

Townes F William, Carr Kareem, Miller Jeffrey W

2020-Nov-30

General General

Predicting Cognitive Declines Using Longitudinally Enriched Representations for Imaging Biomarkers.

In IEEE transactions on medical imaging ; h5-index 74.0

A critical challenge in using longitudinal neuroimaging data to study the progression of Alzheimer's Disease (AD) is the varied number of missing records per patient over the course of AD development. To tackle this problem, in this paper we propose a novel formulation to learn an enriched, fixed-length representation for imaging biomarkers, which aims to simultaneously capture the information conveyed by the baseline neuroimaging record and the progressive variations characterized by varying counts of available follow-up records over time. Because the learned biomarker representations are a set of fixed-length vectors, they can be readily used by traditional machine learning models to study AD development. Taking into account that the missing brain scans are not aligned in time across a studied cohort, we develop a new objective that maximizes the ratio of the summations of a number of ℓ1-norm distances for improved robustness, which, however, is difficult to solve efficiently in general. We therefore derive a new efficient and non-greedy iterative solution algorithm and rigorously prove its convergence. We have performed extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort. A clear performance gain is achieved in predicting ten different cognitive scores when comparing the original baseline biomarker representations against the learned representations with longitudinal enrichment. We further observe that the top biomarkers selected by our new method accord with established knowledge in AD studies. These promising results demonstrate the improved performance of our new method and validate its effectiveness.

Lu Lyujian, Elbeleidy Saad, Baker Lauren Zoe, Wang Hua, Nie Feiping

2020-Nov-30

Surgery Surgery

Neural Network Model to Detect Long-Term Skin and Soft Tissue Infection after Hernia Repair.

In Surgical infections

Background: Skin and soft tissue infection (SSTI) after hernia surgery is infrequent yet catastrophic and is associated with mesh infection, interventions, and hernia recurrence. Although hernia repair is one of the most common general surgery procedures, uncertainty persists regarding the incidence of long-term infections. Our goal was to develop a machine learning regression model that detects the occurrence of long-term hernia-associated SSTI. Patients and Methods: The data set consisted of veterans receiving hernia repair with implanted synthetic mesh during 2008-2015. The outcome of interest was the occurrence of SSTI related to the index hernia surgery over a five-year follow-up. A neural network regression was fit on a medical-record-reviewed sample, then applied to the study population. Results: The study population comprised 96,435 surgeries, of which 76,886 (79.7%) were inguinal, 11,177 (11.6%) were umbilical, and 8,372 (8.7%) were ventral. In the training set, 40 patients had an SSTI probability ≥90%, of whom 38 (95%) had a true SSTI. Of 249 patients with an SSTI probability <10%, only five (2%) had a true SSTI. In the testing set, nine patients were assigned a probability >90%, and all were true positives. Of 100 patients with a probability <10%, only two (2%) had a true infection. C-statistics were 0.929 in the training set and 0.901 in the testing set. Conclusions: The model showed excellent discrimination between those with and without infection and had good calibration. The model could be used to reduce the cost of detecting long-term infections.

O’Brien William J, Dipp Ramos Radwan, Gupta Kalpana, Itani Kamal M F

2020-Dec-01

hernia, infection surveillance, machine learning, surgical infection, surgical outcomes

General General

Predicting defibrillation success in out-of-hospital cardiac arrested patients: Moving beyond feature design.

In Artificial intelligence in medicine ; h5-index 34.0

OBJECTIVE : Optimizing the timing of defibrillation by evaluating the likelihood of a successful outcome could significantly enhance resuscitation. Previous studies employed conventional machine learning approaches and hand-crafted features to address this issue, but none has achieved performance sufficient to be widely accepted. This study proposes a novel approach in which predictive features are learned automatically.

METHODS : A raw 4 s VF episode immediately prior to the first defibrillation shock was fed to a 3-stage CNN feature extractor. Each stage was composed of four components: convolution, rectified linear unit activation, dropout, and max-pooling. At the end of the feature extractor, the feature map was flattened and connected to a fully connected multi-layer perceptron for classification. For model evaluation, 10-fold cross-validation was employed. To balance the classes, the SMOTE oversampling method was applied to the minority class.
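A minimal sketch of the described architecture (three conv → ReLU → dropout → max-pool stages feeding a fully connected classifier), assuming PyTorch and an illustrative 250 Hz sampling rate for the 4 s episode; the channel counts, kernel sizes, and dropout rate are guesses for illustration, not the paper's hyperparameters.

```python
# 3-stage 1D CNN feature extractor + MLP head for a binary
# shock-outcome prediction from a raw single-channel VF episode.
import torch
import torch.nn as nn

def stage(in_ch, out_ch):
    # one extractor stage: convolution -> ReLU -> dropout -> max-pooling
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.Dropout(0.3),
        nn.MaxPool1d(2),
    )

model = nn.Sequential(
    stage(1, 16), stage(16, 32), stage(32, 64),
    nn.Flatten(),
    nn.Linear(64 * 125, 32),  # 1000 samples halved by 3 pools -> 125
    nn.ReLU(),
    nn.Linear(32, 2),         # shock success vs. failure
)

x = torch.randn(8, 1, 1000)   # batch of 4 s episodes at an assumed 250 Hz
logits = model(x)
print(logits.shape)           # torch.Size([8, 2])
```

In training, SMOTE would be applied to the minority class of the tabularized episodes before batching, and the whole stack would be evaluated with 10-fold cross-validation as described.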

RESULTS : The obtained results show that the proposed model is highly accurate in predicting defibrillation outcome (Acc = 93.6 %). Since recommendations on classifiers suggest at least 50 % specificity and 95 % sensitivity as safe and useful predictors for defibrillation decision, the reported sensitivity of 98.8 % and specificity of 88.2 %, with the analysis speed of 3 ms/input signal, indicate that the proposed model possesses a good prospective to be implemented in automated external defibrillators.

CONCLUSIONS : The learned features demonstrate superiority over hand-crafted ones on the same dataset. This approach benefits from being fully automatic, fusing feature extraction, selection, and classification into a single learning model. It provides a superior strategy that can be used as a tool to guide the treatment of OHCA patients toward an optimal decision on the course of treatment. Furthermore, to encourage replicability, the dataset has been made publicly available to the research community.

Ivanović Marija D, Hannink Julius, Ring Matthias, Baronio Fabio, Vukčević Vladan, Hadžievski Ljupco, Eskofier Bjoern

2020-Nov

Convolutional neural networks (CNN), Deep learning, Defibrillation, Shock outcome, Ventricular fibrillation (VF)

General General

COVID-CheXNet: hybrid deep learning framework for identifying COVID-19 virus in chest X-rays images.

In Soft computing

The outbreaks of the Coronavirus (COVID-19) epidemic have increased the pressure on healthcare and medical systems worldwide. Timely diagnosis of infected patients is a critical step in limiting the spread of the epidemic, and chest radiography has been shown to be an effective screening technique for diagnosing COVID-19. To reduce the pressure on radiologists and help control the epidemic, a fast and accurate hybrid deep learning framework for diagnosing the COVID-19 virus in chest X-ray images, termed the COVID-CheXNet system, is developed. First, the contrast of the X-ray image is enhanced and the noise level reduced using contrast-limited adaptive histogram equalization and a Butterworth bandpass filter, respectively. This is followed by fusing the results obtained from two different pre-trained deep learning models, based on the incorporation of a ResNet34 and a high-resolution network model, trained using a large-scale dataset. This parallel architecture provides radiologists with a high degree of confidence in discriminating between healthy and COVID-19-infected people. The proposed COVID-CheXNet system correctly and accurately diagnosed COVID-19 patients with a detection accuracy of 99.99%, sensitivity of 99.98%, specificity of 100%, precision of 100%, F1-score of 99.99%, MSE of 0.011%, and RMSE of 0.012%, using a weighted sum rule at the score level. The efficiency and usefulness of the proposed COVID-CheXNet system are established, along with the possibility of using it in real clinical centers for fast diagnosis and treatment support, taking less than 2 s per image to produce a prediction.
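The two preprocessing steps can be sketched as follows, using scikit-image's CLAHE implementation and a hand-rolled frequency-domain Butterworth filter, simplified here to a low-pass (the paper uses a bandpass); the cutoff and order values are illustrative, and a stock grayscale image stands in for a chest X-ray.

```python
# CLAHE contrast enhancement followed by a Butterworth low-pass for
# noise reduction, applied to a stand-in grayscale image.
import numpy as np
from skimage import data, exposure

def butterworth_lowpass(img, cutoff=0.1, order=2):
    # Butterworth transfer function H = 1 / (1 + (D/D0)^(2n)),
    # applied in the 2D frequency domain.
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u ** 2 + v ** 2)
    h = 1.0 / (1.0 + (d / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

xray = data.camera() / 255.0                                  # stand-in image
enhanced = exposure.equalize_adapthist(xray, clip_limit=0.03)  # CLAHE
denoised = butterworth_lowpass(enhanced)
print(denoised.shape)
```

The preprocessed image would then be passed to the two pre-trained backbones, whose class scores are fused with a weighted sum at the score level.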

Al-Waisy Alaa S, Al-Fahdawi Shumoos, Mohammed Mazin Abed, Abdulkareem Karrar Hameed, Mostafa Salama A, Maashi Mashael S, Arif Muhammad, Garcia-Zapirain Begonya

2020-Nov-21

Chest X-ray images, Chest radiography imaging, Coronavirus COVID-19 epidemic, Deep learning, ResNet34 model, Transfer learning

General General

Automated measurement of hip-knee-ankle angle on the unilateral lower limb X-rays using deep learning.

In Physical and engineering sciences in medicine

Significant inherent extra-articular varus angulation is associated with an abnormal postoperative hip-knee-ankle (HKA) angle. At present, the HKA angle is measured manually by orthopedic surgeons, which increases their workload. To determine the HKA angle automatically, a deep learning-based method for measuring it on unilateral lower limb X-rays was developed and validated. This study retrospectively selected 398 double lower limb X-rays taken between 2018 and 2020 at Jilin University Second Hospital. The images (n = 398) were cropped into unilateral lower limb images (n = 796). A deep neural network was used to segment the head of the hip, the knee, and the ankle in the same image. Then, the mean square error of the distance between each internal point of each organ and the organ's boundary was calculated, and the point with the minimum mean square error was set as the central point of the organ. The HKA angle was determined from the coordinates of the three organs' central points according to the law of cosines. In a quantitative analysis, the HKA angle was measured manually by three orthopedic surgeons with high consistency (176.90° ± 12.18°, 176.95° ± 12.23°, 176.87° ± 12.25°), as evidenced by a Kendall's W of 0.999 (p < 0.001); the average of their measurements (176.90° ± 12.22°) served as the ground truth. The HKA angle measured automatically by the proposed method (176.41° ± 12.08°) was close to the ground truth, showing no significant difference, and the intraclass correlation coefficient (ICC) between them was 0.999 (p < 0.001). The average difference between prediction and ground truth was 0.49°. The proposed method indicates high feasibility and reliability in clinical practice.
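The law-of-cosines step is straightforward to reproduce: given the three central points, the HKA angle is the angle at the knee vertex. The coordinates below are invented for illustration.

```python
# HKA angle from the central points of hip head, knee, and ankle,
# via the law of cosines at the knee vertex.
import math

def hka_angle(hip, knee, ankle):
    a = math.dist(knee, ankle)   # side opposite the hip
    b = math.dist(hip, knee)     # side opposite the ankle
    c = math.dist(hip, ankle)    # side opposite the knee vertex
    # law of cosines: c^2 = a^2 + b^2 - 2ab*cos(angle at knee)
    cos_knee = (a ** 2 + b ** 2 - c ** 2) / (2 * a * b)
    return math.degrees(math.acos(cos_knee))

hip, knee, ankle = (100, 50), (105, 400), (95, 750)
print(round(hka_angle(hip, knee, ankle), 2))  # slightly under 180 degrees
```

Perfectly collinear points yield exactly 180°, so the reported mean of about 176.9° corresponds to a small deviation from a straight mechanical axis.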

Pei Yun, Yang Wenzhuo, Wei Shangqing, Cai Rui, Li Jialin, Guo Shuxu, Li Qiang, Wang Jincheng, Li Xueyan

2020-Nov-30

Angle measurement, Deep learning, HKA, X-ray

Radiology Radiology

Deep learning-based thigh muscle segmentation for reproducible fat fraction quantification using fat-water decomposition MRI.

In Insights into imaging

BACKGROUND : Time-efficient and accurate whole volume thigh muscle segmentation is a major challenge in moving from qualitative assessment of thigh muscle MRI to more quantitative methods. This study developed an automated whole thigh muscle segmentation method using deep learning for reproducible fat fraction quantification on fat-water decomposition MRI.

RESULTS : This study was performed using a public reference database (Dataset 1, 25 scans) and a local clinical dataset (Dataset 2, 21 scans). A U-net was trained using 23 scans (16 from Dataset 1, seven from Dataset 2) to automatically segment four functional muscle groups: quadriceps femoris, sartorius, gracilis and hamstring. The segmentation accuracy was evaluated on an independent testing set (3 × 3 repeated scans in Dataset 1 and four scans in Dataset 2). The average Dice coefficients between manual and automated segmentation were > 0.85. The average percent difference (absolute) in volume was 7.57%, and the average difference (absolute) in mean fat fraction (meanFF) was 0.17%. The reproducibility in meanFF was calculated using intraclass correlation coefficients (ICCs) for the repeated scans, and automated segmentation produced overall higher ICCs than manual segmentation (0.921 vs. 0.902). A preliminary quantitative analysis was performed using two-sample t test to detect possible differences in meanFF between 14 normal and 14 abnormal (with fat infiltration) thighs in Dataset 2 using automated segmentation, and significantly higher meanFF was detected in abnormal thighs.
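The Dice coefficient used above to score agreement between manual and automated segmentation can be computed directly from a pair of binary masks; the toy masks below are illustrative.

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

manual = np.zeros((10, 10), dtype=int)
manual[2:8, 2:8] = 1            # 36 pixels
auto = np.zeros((10, 10), dtype=int)
auto[3:9, 3:9] = 1              # 36 pixels, shifted by one

print(round(dice(manual, auto), 3))  # overlap 5x5=25 -> 50/72 ≈ 0.694
```

A Dice score above 0.85, as reported for all four muscle groups, indicates substantially tighter overlap than this deliberately offset toy example.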

CONCLUSIONS : This automated thigh muscle segmentation exhibits excellent accuracy and higher reproducibility in fat fraction estimation compared to manual segmentation, which can be further used for quantifying fat infiltration in thigh muscles.

Ding Jie, Cao Peng, Chang Hing-Chiu, Gao Yuan, Chan Sophelia Hoi Shan, Vardhanabhuti Varut

2020-Nov-30

Deep learning, Fat–water decomposition MRI, Quantitative MRI analysis, Thigh muscle segmentation

Radiology Radiology

Interventional radiology and artificial intelligence in radiology: Is it time to enhance the vision of our medical students?

In Insights into imaging

OBJECTIVES : To assess awareness and knowledge of Interventional Radiology (IR) in a large population of medical students in 2019.

METHODS : An anonymous survey was distributed electronically to 9546 medical students from first to sixth year at three European medical schools. The survey contained 14 questions, including two general questions on diagnostic radiology (DR) and artificial intelligence (AI), and 11 on IR. Responses were analyzed for all students and compared between the preclinical (PCs; first to third year) and clinical (Cs; fourth to sixth year) phases of medical school. Of 9546 students, 1459 (15.3%) answered the survey.

RESULTS : On the DR questions, 34.8% answered that AI is a threat to radiologists (PCs: 246/725 (33.9%); Cs: 248/734 (36%)) and 91.1% thought that radiology has a future (PCs: 668/725 (92.1%); Cs: 657/734 (89.5%)). On the IR questions, 80.8% (1179/1459) of students had already heard of IR; 75.7% (1104/1459) stated that their knowledge of IR was not as good as their knowledge of other specialties, and 80% would like more lectures on IR. Finally, 24.2% (353/1459) indicated an interest in a career in IR, with a majority of women in the preclinical phase, although this trend reversed in the clinical phase.

CONCLUSIONS : Development of new technology supporting advances in artificial intelligence will likely continue to change the landscape of radiology; however, medical students remain confident in the need for specialty-trained human physicians in the future of radiology as a clinical practice. A large majority of medical students would like more information about IR in their medical curriculum; almost a quarter of students would be interested in a career in IR.

Auloge Pierre, Garnon Julien, Robinson Joey Marie, Dbouk Sarah, Sibilia Jean, Braun Marc, Vanpee Dominique, Koch Guillaume, Cazzato Roberto Luigi, Gangi Afshin

2020-Nov-30

Artificial intelligence, Education, Female, Interventional radiology, Radiology

oncology Oncology

Evaluation of Deep Learning to Augment Image-Guided Radiotherapy for Head and Neck and Prostate Cancers.

In JAMA network open

Importance : Personalized radiotherapy planning depends on high-quality delineation of target tumors and surrounding organs at risk (OARs). This process puts additional time burdens on oncologists and introduces variability among both experts and institutions.

Objective : To explore clinically acceptable autocontouring solutions that can be integrated into existing workflows and used in different domains of radiotherapy.

Design, Setting, and Participants : This quality improvement study used a multicenter imaging data set comprising 519 pelvic and 242 head and neck computed tomography (CT) scans from 8 distinct clinical sites and patients diagnosed either with prostate or head and neck cancer. The scans were acquired as part of treatment dose planning from patients who received intensity-modulated radiation therapy between October 2013 and February 2020. Fifteen different OARs were manually annotated by expert readers and radiation oncologists. The models were trained on a subset of the data set to automatically delineate OARs and evaluated on both internal and external data sets. Data analysis was conducted October 2019 to September 2020.

Main Outcomes and Measures : The autocontouring solution was evaluated on external data sets, and its accuracy was quantified with volumetric agreement and surface distance measures. Models were benchmarked against expert annotations in an interobserver variability (IOV) study. Clinical utility was evaluated by measuring time spent on manual corrections and annotations from scratch.

Results : A total of 519 participants' (519 [100%] men; 390 [75%] aged 62-75 years) pelvic CT images and 242 participants' (184 [76%] men; 194 [80%] aged 50-73 years) head and neck CT images were included. The models achieved levels of clinical accuracy within the bounds of expert IOV for 13 of 15 structures (eg, left femur, κ = 0.982; brainstem, κ = 0.806) and performed consistently well across both external and internal data sets (eg, mean [SD] Dice score for left femur, internal vs external data sets: 98.52% [0.50] vs 98.04% [1.02]; P = .04). The correction time of autogenerated contours on 10 head and neck and 10 prostate scans was measured as a mean of 4.98 (95% CI, 4.44-5.52) min/scan and 3.40 (95% CI, 1.60-5.20) min/scan, respectively, to ensure clinically accepted accuracy, whereas contouring from scratch on the same scans was observed to be 73.25 (95% CI, 68.68-77.82) min/scan and 86.75 (95% CI, 75.21-92.29) min/scan, respectively, accounting for a 93% reduction in time.

Conclusions and Relevance : In this study, the models achieved levels of clinical accuracy within expert IOV while reducing manual contouring time and performing consistently well across previously unseen heterogeneous data sets. With the availability of open-source libraries and reliable performance, this creates significant opportunities for the transformation of radiation treatment planning.

Oktay Ozan, Nanavati Jay, Schwaighofer Anton, Carter David, Bristow Melissa, Tanno Ryutaro, Jena Rajesh, Barnett Gill, Noble David, Rimmer Yvonne, Glocker Ben, O’Hara Kenton, Bishop Christopher, Alvarez-Valle Javier, Nori Aditya

2020-Nov-02

Internal Medicine Internal Medicine

Machine Learning Electronic Health Record Identification of Patients with Rheumatoid Arthritis: Algorithm Pipeline Development and Validation Study.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Financial codes are often used to extract diagnoses from electronic health records. This approach is prone to false positives. Alternatively, queries are constructed, but these are highly center and language specific. A tantalizing alternative is the automatic identification of patients by employing machine learning on format-free text entries.

OBJECTIVE : The aim of this study was to develop an easily implementable workflow that builds a machine learning algorithm capable of accurately identifying patients with rheumatoid arthritis from format-free text fields in electronic health records.

METHODS : Two electronic health record data sets were employed: Leiden (n=3000) and Erlangen (n=4771). Using a portion of the Leiden data (n=2000), we compared 6 different machine learning methods and a naïve word-matching algorithm using 10-fold cross-validation. Performances were compared using the area under the receiver operating characteristic curve (AUROC) and the area under the precision recall curve (AUPRC), and F1 score was used as the primary criterion for selecting the best method to build a classifying algorithm. We selected the optimal threshold of positive predictive value for case identification based on the output of the best method in the training data. This validation workflow was subsequently applied to a portion of the Erlangen data (n=4293). For testing, the best performing methods were applied to remaining data (Leiden n=1000; Erlangen n=478) for an unbiased evaluation.

RESULTS : For the Leiden data set, the word-matching algorithm demonstrated mixed performance (AUROC 0.90; AUPRC 0.33; F1 score 0.55), and 4 methods significantly outperformed word-matching, with support vector machines performing best (AUROC 0.98; AUPRC 0.88; F1 score 0.83). Applying this support vector machine classifier to the test data resulted in a similarly high performance (F1 score 0.81; positive predictive value [PPV] 0.94), and with this method, we could identify 2873 patients with rheumatoid arthritis in less than 7 seconds out of the complete collection of 23,300 patients in the Leiden electronic health record system. For the Erlangen data set, gradient boosting performed best (AUROC 0.94; AUPRC 0.85; F1 score 0.82) in the training set, and applied to the test data, resulted once again in good results (F1 score 0.67; PPV 0.97).

CONCLUSIONS : We demonstrate that machine learning methods can extract the records of patients with rheumatoid arthritis from electronic health record data with high precision, allowing research on very large populations for limited costs. Our approach is language and center independent and could be applied to any type of diagnosis. We have developed our pipeline into a universally applicable and easy-to-implement workflow to equip centers with their own high-performing algorithm. This allows the creation of observational studies of unprecedented size covering different countries for low cost from already available data in electronic health record systems.
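The kind of free-text classifier described above can be sketched as a TF-IDF plus linear support vector machine pipeline in scikit-learn; the example notes below are invented stand-ins for real electronic health record text fields.

```python
# TF-IDF features from free-text notes feeding a linear SVM,
# mirroring the best-performing method on the Leiden data set.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

notes = [
    "seropositive rheumatoid arthritis, erosions on xray",
    "RA with synovitis of MCP joints, started methotrexate",
    "osteoarthritis of the knee, no inflammatory signs",
    "gout flare in first MTP joint",
] * 10                                  # tiny toy corpus
labels = [1, 1, 0, 0] * 10              # 1 = rheumatoid arthritis

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(notes, labels)
print(clf.predict(["known RA, active synovitis"])[0])
```

In the study this kind of classifier was thresholded on positive predictive value before being run over the full record collection, which is what makes the sub-7-second extraction of 2873 patients possible.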

Maarseveen Tjardo D, Meinderink Timo, Reinders Marcel J T, Knitza Johannes, Huizinga Tom W J, Kleyer Arnd, Simon David, van den Akker Erik B, Knevel Rachel

2020-Nov-30

Electronic Health Records, Gradient Boosting, Natural Language Processing, Rheumatoid Arthritis, Supervised machine learning, Support Vector Machine

Public Health Public Health

Social Media as a Research Tool (SMaaRT) for Risky Behavior Analytics: Methodological Review.

In JMIR public health and surveillance

BACKGROUND : Modifiable risky health behaviors, such as tobacco use, excessive alcohol use, being overweight, lack of physical activity, and unhealthy eating habits, are some of the major factors in developing chronic health conditions. Social media platforms have become indispensable means of communication in the digital era. They provide an opportunity for individuals to express themselves, as well as share their health-related concerns with peers and health care providers, with respect to risky behaviors. Such peer interactions can be utilized as valuable data sources to better understand inter- and intrapersonal psychosocial mediators and the mechanisms of social influence that drive behavior change.

OBJECTIVE : The objective of this review is to summarize computational and quantitative techniques facilitating the analysis of data generated through peer interactions pertaining to risky health behaviors on social media platforms.

METHODS : We performed a systematic review of the literature in September 2020 by searching three databases-PubMed, Web of Science, and Scopus-using relevant keywords, such as "social media," "online health communities," "machine learning," "data mining," etc. The reporting of the studies was directed by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Two reviewers independently assessed the eligibility of studies based on the inclusion and exclusion criteria. We extracted the required information from the selected studies.

RESULTS : The initial search returned a total of 1554 studies, and after careful analysis of titles, abstracts, and full texts, a total of 64 studies were included in this review. We extracted the following key characteristics from all of the studies: social media platform used for conducting the study, risky health behavior studied, the number of posts analyzed, study focus, key methodological functions and tools used for data analysis, evaluation metrics used, and summary of the key findings. The most commonly used social media platform was Twitter, followed by Facebook, QuitNet, and Reddit. The most commonly studied risky health behavior was nicotine use, followed by drug or substance abuse and alcohol use. Various supervised and unsupervised machine learning approaches were used for analyzing textual data generated from online peer interactions. Few studies utilized deep learning methods for analyzing textual data as well as image or video data. Social network analysis was also performed, as reported in some studies.

CONCLUSIONS : Our review consolidates the methodological underpinnings for analyzing risky health behaviors and has enhanced our understanding of how social media can be leveraged for nuanced behavioral modeling and representation. The knowledge gained from our review can serve as a foundational component for the development of persuasive health communication and effective behavior modification technologies aimed at the individual and population levels.

Singh Tavleen, Roberts Kirk, Cohen Trevor, Cobb Nathan, Wang Jing, Fujimoto Kayo, Myneni Sahiti

2020-Nov-30

data mining, infodemiology, infoveillance, machine learning, natural language processing, online health communities, risky health behaviors, social media, text mining

General General

Target-Specific Drug Design Method Combining Deep Learning and Water Pharmacophore.

In Journal of chemical information and modeling

Following identification of a target protein, hit identification, which finds small organic molecules that bind to the target, is an important first step of a structure-based drug design project. In this study, we demonstrate a target-specific drug design method that can autonomously generate a series of target-favorable compounds. This method utilizes the seq2seq model based on a deep learning algorithm and a water pharmacophore. Water pharmacophore models are used to screen compounds that are favorable to a given target in a large compound database, and seq2seq compound generators are used to train the screened compounds and generate entirely new compounds based on the training model. Our method was tested through binding energy calculation studies of six pharmaceutically relevant targets in the directory of useful decoys (DUD) set with docking. The compounds generated by our method had lower average binding energies than decoy compounds in five out of six cases and included a number of compounds that had lower binding energies than the average binding energies of the active compounds in four cases. The generated compound lists for these four cases featured compounds with lower binding energies than even the most active compounds.

Kim Minsup, Park Kichul, Kim Wonsang, Jung Sangwon, Cho Art E

2020-Nov-30

Pathology Pathology

in situ classification of cell types in human kidney tissue using 3D nuclear staining.

In Cytometry. Part A : the journal of the International Society for Analytical Cytology

To understand the physiology and pathology of disease, capturing the heterogeneity of cell types within their tissue environment is fundamental. In such an endeavor, the human kidney presents a formidable challenge because its complex organizational structure is tightly linked to key physiological functions. Advances in imaging-based cell classification may be limited by the need to incorporate specific markers that can link classification to function. Multiplex imaging can mitigate these limitations but requires cumulative incorporation of markers, which may lead to tissue exhaustion. Furthermore, the application of such strategies in large-scale 3-dimensional (3D) imaging is challenging. Here, we propose that 3D nuclear signatures from a DNA stain, DAPI, which could be incorporated in most experimental imaging, can be used to classify cells in intact human kidney tissue. We developed an unsupervised approach that uses 3D tissue cytometry to generate a large training dataset of nuclei images (NephNuc), where each nucleus is associated with a cell type label. We then devised various supervised machine learning approaches for kidney cell classification and demonstrated that a deep learning approach outperforms classical machine learning or shape-based classifiers. Specifically, a custom 3D convolutional neural network (NephNet3D) trained on nuclei image volumes achieved a balanced accuracy of 80.26%. Importantly, integrating NephNet3D classification with tissue cytometry allowed in situ visualization of cell type classifications in kidney tissue. In conclusion, we present a tissue cytometry and deep learning approach for in situ classification of cell types in human kidney tissue using only a DNA stain. This methodology is generalizable to other tissues and has potential advantages in tissue economy and non-exhaustive classification of different cell types.
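Balanced accuracy, the metric reported for NephNet3D above, is the mean of per-class recalls, which makes it robust to class imbalance; a toy binary example:

```python
# Balanced accuracy = mean of per-class recalls, here on a small
# imbalanced toy label set.
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 1]

# recall of class 0 = 5/6, recall of class 1 = 2/2 -> mean ≈ 0.917
print(round(balanced_accuracy_score(y_true, y_pred), 3))
```

Plain accuracy here would be 7/8 = 0.875 regardless of which class the error fell on, whereas balanced accuracy weights each cell type equally, an appropriate choice when some kidney cell types are much rarer than others.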

Woloshuk Andre, Khochare Suraj, Almulhim Aljohara Fahad, McNutt Andrew T, Dean Dawson, Barwinska Daria, Ferkowicz Michael J, Eadon Michael T, Kelly Katherine J, Dunn Kenneth W, Hasan Mohammad A, El-Achkar Tarek M, Winfree Seth

2020-Nov-30

Deep Learning, Human Kidney, Tissue Cytometry, in situ classification
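The headline metric above, balanced accuracy, is the mean of per-class recalls, which keeps a dominant cell type from inflating the score the way plain accuracy would. A minimal sketch, with made-up cell-type labels:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls. Robust to class imbalance: a
    majority-class guesser scores only ~1/n_classes."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Toy labels: "pt" (proximal tubule) dominates, so plain accuracy would
# reward always predicting "pt"; balanced accuracy does not.
y_true = ["pt", "pt", "pt", "pt", "podocyte", "podocyte"]
y_pred = ["pt", "pt", "pt", "pt", "podocyte", "pt"]
print(balanced_accuracy(y_true, y_pred))  # (4/4 + 1/2) / 2 = 0.75
```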

Pathology Pathology

Deep Learning-Based Spermatogenic Staging Assessment for Hematoxylin and Eosin-Stained Sections of Rat Testes.

In Toxicologic pathology

In preclinical toxicology studies, a "stage-aware" histopathological evaluation of testes is recognized as the most sensitive method to detect effects on spermatogenesis. A stage-aware evaluation requires the pathologist to be able to identify the different stages of the spermatogenic cycle. Classically, this evaluation has been performed using periodic acid-Schiff (PAS)-stained sections to visualize the morphology of the developing spermatid acrosome, but due to the complexity of the rat spermatogenic cycle and the subtlety of the criteria used to distinguish between the 14 stages of the cycle, staging of tubules is not only time consuming but also requires specialized training and practice to become competent. Using different criteria, based largely on the shape and movement of the elongating spermatids within the tubule and pooling some of the stages, it is possible to stage tubules using routine hematoxylin and eosin (H&E)-stained sections, thereby negating the need for a special PAS stain. These criteria have been used to develop an automated method to identify the stages of the rat spermatogenic cycle in digital images of H&E-stained Wistar rat testes. The algorithm identifies the spermatogenic stage of each tubule, thereby allowing the pathologist to quickly evaluate the testis in a stage-aware manner and rapidly calculate the stage frequencies.

Creasy Dianne M, Panchal Satish T, Garg Rohit, Samanta Pranab

2020-Nov-28

automation, deep learning, digital pathology, machine learning, rat, spermatogenesis, staging, testes
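The end product the algorithm enables, stage frequencies across a testis section, is a simple relative-frequency count once every tubule carries a stage label. A sketch with illustrative pooled H&E stage labels (not necessarily the paper's exact pooling):

```python
from collections import Counter

def stage_frequencies(tubule_stages):
    """Relative frequency of each spermatogenic stage across a section,
    given one stage label per tubule."""
    counts = Counter(tubule_stages)
    n = len(tubule_stages)
    return {stage: counts[stage] / n for stage in sorted(counts)}

# Illustrative pooled stages for 8 tubules in one section
stages = ["I-III", "IV-VI", "VII-VIII", "VII-VIII",
          "IX-XI", "XII-XIV", "VII-VIII", "I-III"]
print(stage_frequencies(stages))
```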

Public Health Public Health

Repurposing of FDA approved drugs against Salmonella enterica serovar Typhi by targeting dihydrofolate reductase: an in silico study.

In Journal of biomolecular structure & dynamics

Drug-resistant Salmonella enterica serovar Typhi (S. Typhi) poses a serious public health problem. To overcome drug resistance, effective drugs with novel mechanism(s) of action are required, and drug repurposing is a viable approach to finding them. Therefore, an FDA-approved drug library containing 1930 drugs was analyzed against the dihydrofolate reductase (DHFR) of S. Typhi using deep learning regression algorithms. Initially, a total of 500 compounds were screened, followed by rescreening with molecular docking. From the docking-screened compounds, the top eight were subjected to molecular dynamics (MD) simulation. Analysis of the MD simulations identified four potential compounds, namely Duvelisib, Amenamevir, Lifitegrast and Nilotinib, against the DHFR enzyme. During MD simulation, these four drugs achieved good stability over the 100 ns trajectory period at 300 K. To gain further insight into the stability of the complexes, we calculated RMSF, Rg, SASA and interaction energy for the last 60 ns of the trajectory, since all complexes stabilized after the first 40 ns. MM-PBSA analysis of the last 10 ns of the MD trajectories confirmed the stability of the complexes. From our results, we conclude that these drugs may also be useful for treating typhoid fever and can inhibit S. Typhi by interfering with the function of the DHFR enzyme.

Joshi Tushar, Sharma Priyanka, Joshi Tanuja, Mathpal Shalini, Pande Veena, Chandra Subhash

2020-Nov-30

Salmonella enterica serovar Typhi, FDA approved drug, deep learning, dihydrofolate reductase, drug repurposing, molecular dynamics simulation

General General

Deep learning with attention supervision for automated motion artefact detection in quality control of cardiac T1-mapping.

In Artificial intelligence in medicine ; h5-index 34.0

Cardiac magnetic resonance quantitative T1-mapping is increasingly used for advanced myocardial tissue characterisation. However, cardiac or respiratory motion can significantly affect the diagnostic utility of T1-maps, so motion artefact detection is critical for quality control and clinically robust T1 measurements. Manual quality control of T1-maps may provide reassurance, but is laborious and prone to error. We present a deep learning approach with attention supervision for automated motion artefact detection in quality control of cardiac T1-mapping. Firstly, we customised a multi-stream Convolutional Neural Network (CNN) image classifier to streamline automatic motion artefact detection. Secondly, we imposed attention supervision to guide the CNN to focus on targeted myocardial segments. Thirdly, when the human operator and machine disagreed, a second human validator reviewed and rescored the cases for adjudication and to identify the source of disagreement. The multi-stream neural networks demonstrated 89.8% agreement with the human operator and 87.4% ROC-AUC for motion artefact detection on 2568 T1-maps. Trained with additional supervision on attention, agreement and AUC improved significantly to 91.5% and 89.1%, respectively (p < 0.001). Rescoring of disagreed cases by the second human validator revealed that human operator error was the primary cause of disagreement. Deep learning with attention supervision provides quick and high-quality assurance of clinical images, and outperforms human operators.

Zhang Qiang, Hann Evan, Werys Konrad, Wu Cody, Popescu Iulia, Lukaschuk Elena, Barutcu Ahmet, Ferreira Vanessa M, Piechnik Stefan K

2020-Nov

Attention Mapping, Attention Supervision, Cardiac Magnetic Resonance, Convolutional Neural Network, Quality Control, T1-mapping
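The core idea, an attention-supervision term added to the classification loss so the CNN attends to the targeted myocardial segments, can be sketched as a weighted sum of cross-entropy and an MSE penalty between the network's attention map and a segment mask. The exact loss form and the weight `lam` are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def attention_supervised_loss(logits, label, attn_map, segment_mask, lam=0.5):
    """Cross-entropy on the motion-artefact label plus an MSE term pulling
    the attention map toward a mask of the targeted myocardial segments."""
    # softmax cross-entropy for one sample (numerically stabilised)
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]
    # attention supervision: mean squared error to the segment mask
    attn_mse = np.mean((attn_map - segment_mask) ** 2)
    return ce + lam * attn_mse

logits = np.array([2.0, -1.0])   # scores for [no-artefact, artefact]
attn = np.random.rand(8, 8)      # a hypothetical attention map
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0             # hypothetical myocardial segment mask
print(attention_supervised_loss(logits, 0, attn, mask))
```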

Radiology Radiology

CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images.

In Chaos, solitons, and fractals

Background and Objective : Coronavirus disease 2019, or COVID-19, is a viral disease that causes serious pneumonia and affects different parts of the body, with severity ranging from mild to severe depending on the patient's immune system. The infection was first reported in Wuhan, China, in December 2019 and subsequently became a global pandemic, spreading rapidly around the world. As the virus spreads through human-to-human contact, it has affected our lives in a devastating way, placing heavy pressure on public health systems, the world economy, the education sector, workplaces, and shopping malls. Preventing viral spread requires detecting positive cases early and treating infected patients as quickly as possible. The need for COVID-19 testing kits has increased, and many developing countries are facing shortages as new cases rise day by day. In this situation, radiology imaging techniques (such as X-ray and CT scans) can help detect COVID-19, as X-ray and CT images provide important information about the disease. Machine learning techniques such as Convolutional Neural Networks (CNNs) can be applied to X-ray and CT images of the lungs for accurate and rapid detection of the disease, helping mitigate the scarcity of testing kits.

Methods : Hence, a novel CNN model called CoroDet for automatic detection of COVID-19 from raw chest X-ray and CT scan images is proposed in this study. CoroDet is developed to serve as an accurate diagnostic tool for 2-class classification (COVID and Normal), 3-class classification (COVID, Normal, and non-COVID pneumonia), and 4-class classification (COVID, Normal, non-COVID viral pneumonia, and non-COVID bacterial pneumonia).

Results : The performance of our proposed model was compared with ten existing techniques for COVID-19 detection in terms of accuracy. Our model produced a classification accuracy of 99.1% for 2-class, 94.2% for 3-class, and 91.2% for 4-class classification, which, to the best of our knowledge, is better than the state-of-the-art methods for COVID-19 detection. Moreover, to our knowledge, the X-ray dataset we prepared for evaluating our method is the largest dataset for COVID-19 detection.

Conclusion : The experimental results indicate the superiority of CoroDet over existing state-of-the-art methods. CoroDet may assist clinicians in making appropriate decisions for COVID-19 detection and may also mitigate the problem of scarcity of testing kits.

Hussain Emtiaz, Hasan Mahmudul, Rahman Md Anisur, Lee Ickjai, Tamanna Tasmi, Parvez Mohammad Zavid

2020-Nov-23

Accuracy, COVID-19, Confusion matrix, Convolutional neural network, Deep learning, Pneumonia-bacterial, Pneumonia-viral, X-ray
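Since the keywords list a confusion matrix, it is worth noting how the reported accuracies relate to one: overall accuracy is the trace divided by the total count. The 3-class counts below are invented for illustration, not CoroDet's actual results:

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall accuracy = trace / total for an n-class confusion matrix
    (rows: true class, cols: predicted class)."""
    cm = np.asarray(cm)
    return np.trace(cm) / cm.sum()

# Hypothetical 3-class matrix (COVID, Normal, non-COVID pneumonia)
cm3 = [[95, 3, 2],
       [4, 90, 6],
       [2, 5, 93]]
print(accuracy_from_confusion(cm3))  # 278/300, roughly 0.927
```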

General General

Algorithmic Prediction of Restraint and Seclusion in an Inpatient Child and Adolescent Psychiatric Population.

In Journal of the American Psychiatric Nurses Association

BACKGROUND : Restraint and seclusion in an inpatient child and adolescent psychiatric population adversely affects the overall value and safety of care. Due to adverse events, negative outcomes, and associated costs, inpatient psychiatric hospitals must strive to reduce and ultimately eliminate restraint and seclusion with innovative, data-driven approaches.

AIM : To identify patterns of client characteristics that are associated with restraint and seclusion in an inpatient child and adolescent psychiatric population.

METHOD : A machine learning application of fast-and-frugal tree modeling was used to analyze the sample.

RESULTS : The need for restraint and seclusion was correctly predicted for 73% of clients at risk (sensitivity), and 76% of clients were correctly identified as negative or low risk (specificity), based on the following characteristics: having a disruptive mood dysregulation disorder and/or attention-deficit hyperactivity disorder diagnosis, being 12 years old or younger, and not having a depressive and/or bipolar disorder diagnosis.

CONCLUSION : The client characteristics identified in the predictive algorithm should be reviewed on admission to recognize clients at risk for restraint and seclusion. For those at risk, interventions should be developed into an individualized client treatment plan to facilitate a proactive approach in preventing behavioral emergencies requiring restraint and seclusion.

Magnowski Stefani R, Kick Dalton, Cook Jessica, Kay Brian

2020-Nov-30

algorithm, child and adolescent, fast-and-frugal tree, predictive analytics, restraint and seclusion
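A fast-and-frugal tree is an ordered sequence of one-cue checks, each with an early exit. Using the three characteristics the abstract reports, a sketch looks like the following; the cue order and exit structure are illustrative, and the paper's fitted tree may differ:

```python
def fft_restraint_risk(has_dmdd_or_adhd, age, has_depressive_or_bipolar):
    """Fast-and-frugal tree sketch: each cue either exits with a
    classification or passes the case to the next cue."""
    if not has_dmdd_or_adhd:
        return "low risk"
    if age > 12:
        return "low risk"
    if has_depressive_or_bipolar:
        return "low risk"
    return "at risk"

print(fft_restraint_risk(True, 10, False))   # "at risk"
print(fft_restraint_risk(True, 15, False))   # "low risk"
```

Because every cue can end the decision, the tree is frugal (few cue look-ups per client) yet easy to audit on admission.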

General General

Systematic Identification of Molecular Targets and Pathways Related to Human Organ Level Toxicity.

In Chemical research in toxicology ; h5-index 45.0

The mechanisms leading to organ level toxicities are poorly understood. In this study, we applied an integrated approach to deduce the molecular targets and biological pathways involved in chemically induced toxicity for eight common human organ level toxicity end points (carcinogenicity, cardiotoxicity, developmental toxicity, hepatotoxicity, nephrotoxicity, neurotoxicity, reproductive toxicity, and skin toxicity). Integrated analysis of in vitro assay data, molecular targets and pathway annotations from the literature, and toxicity-molecular target associations derived from text mining, combined with machine learning techniques, were used to generate molecular targets for each of the organ level toxicity end points. A total of 1516 toxicity-related genes were identified and subsequently analyzed for biological pathway coverage, resulting in 206 significant pathways (p-value <0.05), ranging from 3 (e.g., developmental toxicity) to 101 (e.g., skin toxicity) for each toxicity end point. This study presents a systematic and comprehensive analysis of molecular targets and pathways related to various in vivo toxicity end points. These molecular targets and pathways could aid in understanding the biological mechanisms of toxicity and serve as a guide for the design of suitable in vitro assays for more efficient toxicity testing. In addition, these results are complementary to the existing adverse outcome pathway (AOP) framework and can be used to aid in the development of novel AOPs. Our results provide abundant testable hypotheses for further experimental validation.

Xu Tuan, Wu Leihong, Xia Menghang, Simeonov Anton, Huang Ruili

2020-Nov-29
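The abstract reports significant pathways at p < 0.05 but does not name the test; a standard choice for pathway over-representation (assumed here, not confirmed by the source) is the one-sided hypergeometric test, sketched with toy gene counts:

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """One-sided hypergeometric tail: probability of observing >= k
    toxicity-related genes in a pathway of size n, given K such genes
    among N genes total."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Toy numbers: 20 of 1000 genes are toxicity-related; a 10-gene pathway
# containing 4 of them is unlikely under random draws.
print(enrichment_pvalue(1000, 20, 10, 4))
```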

General General

Rapid tissue oxygenation mapping from snapshot structured-light images with adversarial deep learning.

In Journal of biomedical optics

SIGNIFICANCE : Spatial frequency-domain imaging (SFDI) is a powerful technique for mapping tissue oxygen saturation over a wide field of view. However, current SFDI methods either require a sequence of several images with different illumination patterns or, in the case of single-snapshot optical properties (SSOP), introduce artifacts and sacrifice accuracy.

AIM : We introduce OxyGAN, a data-driven, content-aware method to estimate tissue oxygenation directly from single structured-light images.

APPROACH : OxyGAN is an end-to-end approach that uses supervised generative adversarial networks. Conventional SFDI is used to obtain ground truth tissue oxygenation maps for ex vivo human esophagi, in vivo hands and feet, and an in vivo pig colon sample under 659- and 851-nm sinusoidal illumination. We benchmark OxyGAN by comparing it with SSOP and a two-step hybrid technique that uses a previously developed deep learning model to predict optical properties followed by a physical model to calculate tissue oxygenation.

RESULTS : When tested on human feet, cross-validated OxyGAN maps tissue oxygenation with an accuracy of 96.5%. When applied to sample types not included in the training set, such as human hands and pig colon, OxyGAN achieves a 93% accuracy, demonstrating robustness to various tissue types. On average, OxyGAN outperforms SSOP and a hybrid model in estimating tissue oxygenation by 24.9% and 24.7%, respectively. Finally, we optimize OxyGAN inference so that oxygenation maps are computed ∼10 times faster than previous work, enabling video-rate, 25-Hz imaging.

CONCLUSIONS : Due to its rapid acquisition and processing speed, OxyGAN has the potential to enable real-time, high-fidelity tissue oxygenation mapping that may be useful for many clinical applications.

Chen Mason T, Durr Nicholas J

2020-Nov

machine learning, optical property, spatial frequency-domain imaging
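The "physical model" step that converts optical properties to oxygenation is typically a two-wavelength chromophore fit: solve a 2x2 linear system for oxy- and deoxyhemoglobin concentrations, then take their ratio. A sketch of that step; the extinction coefficients below are placeholders, not literature values:

```python
import numpy as np

def oxygenation_from_mua(mua_659, mua_851, eps):
    """Solve mu_a(lambda) = eps_HbO2(lambda)*C_HbO2 + eps_Hb(lambda)*C_Hb
    at two wavelengths, then StO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    E = np.array([[eps["HbO2_659"], eps["Hb_659"]],
                  [eps["HbO2_851"], eps["Hb_851"]]])
    c_hbo2, c_hb = np.linalg.solve(E, np.array([mua_659, mua_851]))
    return c_hbo2 / (c_hbo2 + c_hb)

# Placeholder extinction coefficients (arbitrary units)
eps = {"HbO2_659": 0.1, "Hb_659": 0.9, "HbO2_851": 0.3, "Hb_851": 0.2}
# Forward-simulate a tissue with 80% saturation, then recover it
c = np.array([0.8, 0.2])  # [HbO2, Hb] concentrations
mua = np.array([[0.1, 0.9], [0.3, 0.2]]) @ c
print(oxygenation_from_mua(mua[0], mua[1], eps))  # recovers ~0.8
```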

Radiology Radiology

A 3D Convolutional Encapsulated Long Short-Term Memory (3DConv-LSTM) Model for Denoising fMRI Data.

In Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Functional magnetic resonance imaging (fMRI) data are typically contaminated by noise introduced by head motion, physiological processes, and thermal effects. To mitigate noise artifacts in fMRI data, a variety of denoising methods have been developed; these remove noise factors derived from the whole fMRI time series and are therefore not applicable to real-time fMRI data analysis. In the present study, we develop a generally applicable, deep learning based fMRI denoising method to generate noise-free, realistic individual fMRI volumes (time points). In particular, we develop a fully data-driven 3D convolutional encapsulated Long Short-Term Memory (3DConv-LSTM) approach to generate noise-free fMRI volumes, regularized by an adversarial network that makes the generated volumes more realistic by fooling a critic network. The 3DConv-LSTM model also integrates a gate-controlled self-attention model to memorize short-term dependencies and historical information within a memory pool. We evaluated our method on both task and resting-state fMRI data. Both qualitative and quantitative results demonstrate that the proposed method outperforms state-of-the-art alternative deep learning methods.

Zhao Chongyue, Li Hongming, Jiao Zhicheng, Du Tianming, Fan Yong

2020-Oct

3D convolutional LSTM, Adversarial regularizer, Gate-controlled self-attention, fMRI denoising

General General

Deep Learning with Gaussian Differential Privacy.

In Harvard data science review

Deep learning models are often trained on datasets that contain sensitive information such as individuals' shopping transactions, personal contacts, and medical records. An increasingly important line of work therefore has sought to train neural networks subject to privacy constraints that are specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or complicated privacy analyses of training neural networks. In this paper, we consider a recently proposed privacy definition termed f-differential privacy [18] for a refined privacy analysis of training neural networks. Leveraging the appealing properties of f-differential privacy in handling composition and subsampling, this paper derives analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam used in training deep neural networks, without the need to develop sophisticated techniques as in [3]. Our results demonstrate that the f-differential privacy framework allows for a new privacy analysis that improves on the prior analysis [3], which in turn suggests tuning certain parameters of neural networks for a better prediction accuracy without violating the privacy budget. These theoretically derived improvements are confirmed by our experiments in a range of tasks in image classification, text classification, and recommender systems. Python code to calculate the privacy cost for these experiments is publicly available in the TensorFlow Privacy library.

Bu Zhiqi, Dong Jinshuo, Long Qi, Su Weijie J

2020
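The training procedure whose privacy cost is analysed here, DP-SGD, clips each per-example gradient and adds Gaussian noise before the update; f-DP tracks the cumulative privacy loss of repeating this step. A schematic single step with illustrative hyperparameters:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier,
                lr, rng):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    <= clip_norm, average, then add Gaussian noise with standard
    deviation noise_multiplier * clip_norm / batch_size to the mean."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

rng = np.random.default_rng(0)
params = np.zeros(3)
grads = [np.array([10.0, 0.0, 0.0]),  # large gradient: gets clipped
         np.array([0.0, 0.5, 0.0])]   # small gradient: passes through
print(dp_sgd_step(params, grads, clip_norm=1.0, noise_multiplier=1.0,
                  lr=0.1, rng=rng))
```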

Radiology Radiology

BSR 2020 Annual Meeting: Program.

In Journal of the Belgian Society of Radiology

Different times call for different measures. The COVID-19 pandemic has forced us to search for alternative ways to provide an annual meeting that is equally interesting and of equal quality. For the Belgian Society of Radiology (BSR) 2020 Annual Meeting, the sections on Abdominal Imaging and Thoracic Imaging and the Young Radiologist Section (YRS) joined forces to organise a meeting quite different from those we have organised in the past. We have chosen to create a compact (approximately 5-hour) and entirely virtual meeting with the possibility of live interaction with the speakers during the question and answer sessions. The meeting kicks off with a message from the BSR president about radiology in 2020, followed by three abdominal talks. The second session combines an abdominal talk with COVID-related talks. We have chosen to include not only thoracic findings in COVID-19, but to take it further and discuss neurological patterns, long-term clinical findings and the progress in artificial intelligence in COVID-19. Lastly, the annual meeting closes with a short movie about the (re)discovery of Röntgen's X-rays, presented to us by the Belgian Museum for Radiology, Military Hospital, Brussels.

Vanhoenacker Anne-Sophie, Grandjean Flavien, Lieven Van Hoe, Snoeckx Annemie, Vanhoenacker Piet, Oyen Raymond

2020-Nov-13

2020, Annual Symposium, BSR

General General

Evaluation of Speech-Based Digital Biomarkers: Review and Recommendations.

In Digital biomarkers

Speech represents a promising novel biomarker by providing a window into brain health, as shown by its disruption in various neurological and psychiatric diseases. As with many novel digital biomarkers, however, rigorous evaluation is currently lacking and is required for these measures to be used effectively and safely. This paper outlines and provides examples from the literature of evaluation steps for speech-based digital biomarkers, based on the recent V3 framework (Goldsack et al., 2020). The V3 framework describes 3 components of evaluation for digital biomarkers: verification, analytical validation, and clinical validation. Verification includes assessing the quality of speech recordings and comparing the effects of hardware and recording conditions on the integrity of the recordings. Analytical validation includes checking the accuracy and reliability of data processing and computed measures, including understanding test-retest reliability, demographic variability, and comparing measures to reference standards. Clinical validity involves verifying the correspondence of a measure to clinical outcomes which can include diagnosis, disease progression, or response to treatment. For each of these sections, we provide recommendations for the types of evaluation necessary for speech-based biomarkers and review published examples. The examples in this paper focus on speech-based biomarkers, but they can be used as a template for digital biomarker development more generally.

Robin Jessica, Harrison John E, Kaufman Liam D, Rudzicz Frank, Simpson William, Yancheva Maria

Dementia, Digital biomarkers, Digital health, Language, Speech, Validation

General General

Computational Approaches to Identify Molecules Binding to Mycobacterium tuberculosis KasA.

In ACS omega

Tuberculosis is caused by Mycobacterium tuberculosis (Mtb) and is a deadly disease, resulting in the deaths of approximately 1.5 million people, with 10 million infections reported in 2018. Recently, a key condensation step in the synthesis of mycolic acids was shown to require β-ketoacyl-ACP synthase (KasA). A crystal structure of KasA with the small molecule DG167 was recently described, which provided a starting point for using computational structure-based approaches to identify additional molecules binding to this protein. We now describe structure-based pharmacophore, docking and machine learning studies with Assay Central as a computational tool for the identification of small molecules targeting KasA. We then tested these compounds using nanoscale differential scanning fluorimetry and microscale thermophoresis. Of note, we identified several molecules, including the Food and Drug Administration (FDA)-approved drugs sildenafil and flubendazole, with Kd values between 30 and 40 μM. These may provide additional starting points for further optimization.

Puhl Ana C, Lane Thomas R, Vignaux Patricia A, Zorn Kimberley M, Capodagli Glenn C, Neiditch Matthew B, Freundlich Joel S, Ekins Sean

2020-Nov-24
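The reported Kd values of 30-40 μM translate into target occupancy through the single-site binding isotherm, fraction bound = [L] / (Kd + [L]), which is what a thermophoresis binding curve fits. A minimal sketch:

```python
def fraction_bound(ligand_conc, kd):
    """Single-site binding isotherm: fraction of target bound at a given
    ligand concentration. At [L] = Kd exactly half the target is bound."""
    return ligand_conc / (kd + ligand_conc)

kd_um = 35.0  # midpoint of the reported 30-40 uM range
for conc_um in (3.5, 35.0, 350.0):
    print(conc_um, fraction_bound(conc_um, kd_um))
```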

General General

Toward Developing Intuitive Rules for Protein Variant Effect Prediction Using Deep Mutational Scanning Data.

In ACS omega

Protein structure and function can be severely altered by even a single amino acid mutation. Predictions of mutational effects using extensive artificial intelligence (AI)-based models, although accurate, remain as enigmatic as the experimental observations in terms of improving intuitions about the contributions of various factors. Inspired by Lipinski's rules for drug-likeness, we devise simple thresholding criteria on five different descriptors such as conservation, which have so far been limited to qualitative interpretations such as high conservation implies high mutational effect. We analyze systematic deep mutational scanning data of all possible single amino acid substitutions on seven proteins (25153 mutations) to first define these thresholds and then to evaluate the scope and limits of the predictions. At this stage, the approach allows us to comment easily and with a low error rate on the subset of mutations classified as neutral or deleterious by all of the descriptors. We hope that complementary to the accurate AI predictions, these thresholding rules or their subsequent modifications will serve the purpose of codifying the knowledge about the effects of mutations.

Sruthi Cheloor Kovilakam, Balaram Hemalatha, Prakash Meher K

2020-Nov-24
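The Lipinski-style scheme described above amounts to combining per-descriptor thresholds and committing to a call only when the descriptors agree. A sketch of that rule logic; the descriptor names and cutoffs are illustrative placeholders, not the paper's fitted values:

```python
def classify_mutation(descriptors, thresholds):
    """Thresholding rules: call a mutation 'deleterious' only when every
    descriptor crosses its cutoff, 'neutral' only when none does, and
    leave it 'uncalled' otherwise (the low-error-rate subset idea)."""
    flags = [descriptors[name] >= cut for name, cut in thresholds.items()]
    if all(flags):
        return "deleterious"
    if not any(flags):
        return "neutral"
    return "uncalled"

# Hypothetical descriptors and cutoffs
thresholds = {"conservation": 0.8, "hydrophobicity_change": 1.5}
print(classify_mutation({"conservation": 0.9, "hydrophobicity_change": 2.0},
                        thresholds))  # deleterious
print(classify_mutation({"conservation": 0.1, "hydrophobicity_change": 0.2},
                        thresholds))  # neutral
```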

General General

Understanding satisfaction essentials of E-learning in higher education: A multi-generational cohort perspective.

In Heliyon

Despite the increasingly critical role of e-learning in higher education, there is limited understanding of the satisfaction essentials of multi-generational student cohorts undertaking online courses. In this study, we examine the perceived value of the educational experiences of multi-generational student cohorts studying via an online learning management system (Moodle). The study analysed survey responses from multi-generational students (N = 611) on a core subject in an undergraduate business school programme. The results show that Generation X, Y and Z students report different satisfaction levels for distinct components of the online programme, namely course design, course delivery, course delivery environment and preferred mode of delivery. Generational cohorts account for notable effects on students' total satisfaction with the online learning programme. The results suggest that contextualising online teaching based on the multi-generational cohort composition of students could be one strategy to enhance student learning experience and satisfaction.

Yawson David Eshun, Yamoah Fred Amofa

2020-Nov

Education, Gender studies, Human-computer interface, Machine learning, Online learning, Pedagogical issues, Teaching/learning strategies

Radiology Radiology

Validation of cervical vertebral maturation stages: Artificial intelligence vs human observer visual analysis.

In American journal of orthodontics and dentofacial orthopedics : official publication of the American Association of Orthodontists, its constituent societies, and the American Board of Orthodontics

INTRODUCTION : This study aimed to develop an artificial neural network (ANN) model for cervical vertebral maturation (CVM) analysis and validate the model's output with the results of human observers.

METHODS : A total of 647 lateral cephalograms were selected from patients with 10-30 years of chronological age (mean ± standard deviation, 15.36 ± 4.13 years). New software with a decision support system was developed for manual labeling of the dataset. A total of 26 points were marked on each radiograph. The CVM stages were saved on the basis of the final decision of the observer. Fifty-four image features were saved in text format. A new subset of 72 radiographs was created according to the classification result, and these 72 radiographs were visually evaluated by 4 observers. Weighted kappa (wκ) and Cohen's kappa (cκ) coefficients and percentage agreement were calculated to evaluate the compatibility of the results.

RESULTS : Intraobserver agreement ranges were as follows: wκ = 0.92-0.98, cκ = 0.65-0.85, and 70.8%-87.5%. Interobserver agreement ranges were as follows: wκ = 0.76-0.92, cκ = 0.4-0.65, and 50%-72.2%. Agreement between the ANN model and observers 1, 2, 3, and 4 were as follows: wκ = 0.85 (cκ = 0.52, 59.7%), wκ = 0.8 (cκ = 0.4, 50%), wκ = 0.87 (cκ = 0.55, 62.5%), and wκ = 0.91 (cκ = 0.53, 61.1%), respectively (P <0.001). An average of 58.3% agreement was observed between the ANN model and the human observers.

CONCLUSIONS : This study demonstrated that the developed ANN model performed close to, if not better than, human observers in CVM analysis. By generating new algorithms, automatic classification of CVM with artificial intelligence may replace conventional evaluation methods used in the future.

Amasya Hakan, Cesur Emre, Yıldırım Derya, Orhan Kaan

2020-Dec
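The agreement statistics above follow standard definitions; unweighted Cohen's kappa (cκ), for instance, is observed agreement corrected for the agreement expected by chance from the two raters' marginal stage frequencies:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa between two raters:
    (p_observed - p_expected) / (1 - p_expected)."""
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_exp = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Two observers staging 8 cephalograms into CVM stages (toy labels)
obs1 = [1, 2, 2, 3, 4, 5, 5, 6]
obs2 = [1, 2, 3, 3, 4, 5, 6, 6]
print(cohens_kappa(obs1, obs2))
```

The weighted variant (wκ) additionally credits near-misses between adjacent stages, which is why the wκ values above exceed the cκ values for the same raters.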

General General

CPAS: the UK's national machine learning-based hospital capacity planning system for COVID-19.

In Machine learning

The coronavirus disease 2019 (COVID-19) global pandemic poses the threat of overwhelming healthcare systems with unprecedented demands for intensive care resources. Managing these demands cannot be effectively conducted without a nationwide collective effort that relies on data to forecast hospital demands on the national, regional, hospital and individual levels. To this end, we developed the COVID-19 Capacity Planning and Analysis System (CPAS)-a machine learning-based system for hospital resource planning that we have successfully deployed at individual hospitals and across regions in the UK in coordination with NHS Digital. In this paper, we discuss the main challenges of deploying a machine learning-based decision support system at national scale, and explain how CPAS addresses these challenges by (1) defining the appropriate learning problem, (2) combining bottom-up and top-down analytical approaches, (3) using state-of-the-art machine learning algorithms, (4) integrating heterogeneous data sources, and (5) presenting the result with an interactive and transparent interface. CPAS is one of the first machine learning-based systems to be deployed in hospitals on a national scale to address the COVID-19 pandemic-we conclude the paper with a summary of the lessons learned from this experience.

Qian Zhaozhi, Alaa Ahmed M, van der Schaar Mihaela

2020-Nov-24

Automated machine learning, COVID-19, Compartmental models, Gaussian processes, Healthcare, Resource planning

General General

Detecting functional field units from satellite images in smallholder farming systems using a deep learning based computer vision approach: A case study from Bangladesh.

In Remote sensing applications : society and environment

Improving the agricultural productivity of smallholder farms (which are typically less than 2 ha) is key to food security for millions of people in developing nations. Knowledge of the size and location of crop fields forms the basis for crop statistics, yield forecasting, resource allocation, economic planning, and for monitoring the effectiveness of development interventions and investments. We evaluated three fully convolutional neural network (F-CNN) models (U-Net, SegNet, and DenseNet) with deep neural architectures to detect functional field boundaries in very high resolution (VHR) WorldView-3 satellite imagery from Southern Bangladesh. The precision of the three F-CNN models was up to 0.8; among them, the highest precision, recall, and F1 score were obtained with the DenseNet model. This architecture also provided the highest area under the receiver operating characteristic (ROC) curve (AUC) when tested on independent images. We also found that 4-channel images (blue, green, red, and near-infrared) provided small gains in performance compared to 3-channel images (blue, green, and red). Our results indicate the potential of CNN-based computer vision techniques to detect the field boundaries of small, irregularly shaped agricultural fields.

Yang Ruoyu, Ahmed Zia U, Schulthess Urs C, Kamal Mustafa, Rai Rahul

2020-Nov

CNN, Deep learning, Field boundaries, Smallholder farming
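The precision, recall and F1 scores used to compare the three F-CNN models reduce to pixel-wise counts on the predicted boundary masks. A minimal sketch on a toy 3x3 mask:

```python
import numpy as np

def boundary_scores(pred, truth):
    """Pixel-wise precision, recall and F1 for binary field-boundary
    masks (1 = boundary pixel, 0 = background)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

truth = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
pred = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0]])
print(boundary_scores(pred, truth))
```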

General General

Alzheimer's Disease Classification With a Cascade Neural Network.

In Frontiers in public health

Classification of Alzheimer's Disease (AD) has been becoming a hot issue along with the rapidly increasing number of patients. This task remains tremendously challenging due to the limited data and the difficulties in detecting mild cognitive impairment (MCI). Existing methods use gait [or EEG (electroencephalogram)] data only to tackle this task. Although the gait data acquisition procedure is cheap and simple, the methods relying on gait data often fail to detect the slight difference between MCI and AD. The methods that use EEG data can detect the difference more precisely, but collecting EEG data from both HC (health controls) and patients is very time-consuming. More critically, these methods often convert EEG records into the frequency domain and thus inevitably lose the spatial and temporal information, which is essential to capture the connectivity and synchronization among different brain regions. This paper proposes a cascade neural network with two steps to achieve a faster and more accurate AD classification by exploiting gait and EEG data simultaneously. In the first step, we propose attention-based spatial temporal graph convolutional networks to extract the features from the skeleton sequences (i.e., gait) captured by Kinect (a commonly used sensor) to distinguish between HC and patients. In the second step, we propose spatial temporal convolutional networks to fully exploit the spatial and temporal information of EEG data and classify the patients into MCI or AD eventually. We collect gait and EEG data from 35 cognitively health controls, 35 MCI, and 17 AD patients to evaluate our proposed method. Experimental results show that our method significantly outperforms other AD diagnosis methods (91.07 vs. 68.18%) in the three-way AD classification task (HC, MCI, and AD). Moreover, we empirically found that the lower body and right upper limb are more important for the early diagnosis of AD than other body parts. 
We believe this interesting finding can be helpful for clinical research.

You Zeng, Zeng Runhao, Lan Xiaoyong, Ren Huixia, You Zhiyang, Shi Xue, Zhao Shipeng, Guo Yi, Jiang Xin, Hu Xiping

2020

Alzheimer's disease, EEG, automatic diagnosis, deep learning, gait

Radiology Radiology

Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction.

In IEEE access : practical innovations, open solutions

Breast CT provides image volumes with isotropic resolution in high contrast, enabling detection of small calcifications (down to a few hundred microns in size) and subtle density differences. Since the breast is sensitive to x-ray radiation, dose reduction in breast CT is an important topic, and few-view scanning is a main approach for this purpose. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network include high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of an image to be reconstructed, representing orders-of-magnitude improvements over state-of-the-art deep-learning-based reconstruction methods that map raw data to tomographic images directly. Validated on a cone-beam breast CT dataset prepared by Koning Corporation on a commercial scanner, our method demonstrates competitive performance relative to state-of-the-art reconstruction networks in terms of image quality. The source code of this paper is available at: https://github.com/HuidongXie/DEER.

Xie Huidong, Shan Hongming, Cong Wenxiang, Liu Chi, Zhang Xiaohua, Liu Shaohua, Ning Ruola, Wang G E

2020

Breast CT, Deep learning, Few-view CT, Low-dose CT, X-ray CT

Radiology Radiology

Semantic Segmentation of Smartphone Wound Images: Comparative Analysis of AHRF and CNN-Based Approaches.

In IEEE access : practical innovations, open solutions

Smartphone wound image analysis has recently emerged as a viable way to assess healing progress and provide actionable feedback to patients and caregivers between hospital appointments. Segmentation is a key image analysis step, after which attributes of the wound segment (e.g., wound area and tissue composition) can be analyzed. The Associated Hierarchical Random Field (AHRF) formulates image segmentation as a graph optimization problem: handcrafted features are extracted and then classified using machine learning classifiers. More recently, deep learning approaches have emerged and demonstrated superior performance for a wide range of image analysis tasks. FCN, U-Net, and DeepLabV3 are convolutional neural networks used for semantic segmentation. While each of these methods has shown promising results in separate experiments, no prior work has comprehensively and systematically compared the approaches on the same large wound image dataset, or more generally compared deep learning vs. non-deep learning wound image segmentation approaches. In this paper, we compare the segmentation performance of AHRF and CNN approaches (FCN, U-Net, DeepLabV3) using various metrics, including segmentation accuracy (Dice score), inference time, amount of training data required, and performance on diverse wound sizes and tissue types. Improvements possible using various image pre- and post-processing techniques are also explored. As access to adequate medical images/data is a common constraint, we explore the sensitivity of the approaches to the size of the wound dataset. We found that for small datasets (< 300 images), AHRF is more accurate than U-Net but not as accurate as FCN and DeepLabV3. AHRF is also over 1000x slower. For larger datasets (> 300 images), AHRF saturates quickly, and all CNN approaches (FCN, U-Net, and DeepLabV3) are significantly more accurate than AHRF.
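As a concrete reference for one of the metrics compared above, the Dice score used to measure segmentation accuracy can be sketched as follows; the masks and function name are illustrative, not taken from the paper's code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 masks that overlap on half their foreground pixels
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # 8 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # 8 pixels, 4 shared with a
print(round(dice_score(a, b), 2))  # → 0.5
```

The small `eps` keeps the score defined when both masks are empty, a common edge case when a wound image contains no wound tissue.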

Wagh Ameya, Jain Shubham, Mukherjee Apratim, Agu Emmanuel, Pedersen Peder, Strong Diane, Tulu Bengisu, Lindsay Clifford, Liu Ziyang

2020

Associative Hierarchical Random Fields, Contrast Limited Adaptive Histogram Equalization, Convolutional Neural Network, DeepLabV3, FCN, U-Net, Wound image analysis, chronic wounds, semantic segmentation

Pathology Pathology

Highly accurate colorectal cancer prediction model based on Raman spectroscopy using patient serum.

In World journal of gastrointestinal oncology

BACKGROUND : Colorectal cancer (CRC) is an important disease worldwide, accounting for the second highest number of cancer-related deaths and the third highest number of new cancer cases. The blood test is a simple and minimally invasive diagnostic test. However, there is currently no blood test that can accurately diagnose CRC.

AIM : To develop a comprehensive, spontaneous, minimally invasive, label-free, blood-based CRC screening technique based on Raman spectroscopy.

METHODS : We used Raman spectra recorded using 184 serum samples obtained from patients undergoing colonoscopies. Patients with malignant tumor histories as well as those with cancers in organs other than the large intestine were excluded. Consequently, the specific diseases of 184 patients were CRC (12), rectal neuroendocrine tumor (2), colorectal adenoma (68), colorectal hyperplastic polyp (18), and others (84). We used the 1064-nm wavelength laser for excitation. The power of the laser was set to 200 mW.

RESULTS : Using the recorded Raman spectra as training data, we constructed a boosted-tree CRC prediction model based on machine learning. The generalized R² values for CRC, adenomas, hyperplastic polyps, and neuroendocrine tumors were 0.9982, 0.9630, 0.9962, and 0.9986, respectively.

CONCLUSION : For machine learning using Raman spectral data, a highly accurate CRC prediction model with a high R2 value was constructed. We are currently planning studies to demonstrate the accuracy of this model with a large amount of additional data.
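To illustrate the modeling step described above, a boosted-tree classifier trained on spectral data can be sketched with scikit-learn; the synthetic spectra, labels, and the injected "disease band" below are entirely illustrative stand-ins for the serum Raman measurements:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 184, 200            # sizes loosely echo the study; data are synthetic
X = rng.normal(size=(n_samples, n_wavenumbers))
y = rng.integers(0, 2, size=n_samples)         # 1 = CRC-like, 0 = other (synthetic labels)
X[y == 1, 80:90] += 1.5                        # fake spectral signature for the positive class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))                 # held-out accuracy on the synthetic task
```

Boosted trees fit a sequence of shallow trees to the residuals of their predecessors, which suits spectra where a handful of wavenumber bands carry most of the signal.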

Ito Hiroaki, Uragami Naoyuki, Miyazaki Tomokazu, Yang William, Issha Kenji, Matsuo Kai, Kimura Satoshi, Arai Yuji, Tokunaga Hiromasa, Okada Saiko, Kawamura Machiko, Yokoyama Noboru, Kushima Miki, Inoue Haruhiro, Fukagai Takashi, Kamijo Yumi

2020-Nov-15

Blood, Colorectal cancer, Diagnosis, Machine learning, Raman spectroscopy, Serum

Radiology Radiology

DeepSEED: 3D Squeeze-and-Excitation Encoder-Decoder Convolutional Neural Networks for Pulmonary Nodule Detection.

In Proceedings. IEEE International Symposium on Biomedical Imaging

Pulmonary nodule detection plays an important role in lung cancer screening with low-dose computed tomography (CT) scans. It remains challenging to build nodule detection deep learning models with good generalization performance due to unbalanced positive and negative samples. To overcome this problem and further improve state-of-the-art nodule detection methods, we develop a novel deep 3D convolutional neural network with an encoder-decoder structure in conjunction with a region proposal network. In particular, we utilize a dynamically scaled cross entropy loss to reduce the false positive rate and combat the sample imbalance problem associated with nodule detection. We adopt the squeeze-and-excitation structure to learn effective image features and utilize the inter-dependency information of different feature maps. We have validated our method on publicly available CT scans with manually labelled ground truth obtained from the LIDC/IDRI dataset and its subset LUNA16 with thinner slices. Ablation studies and experimental results demonstrate that our method outperforms state-of-the-art nodule detection methods by a large margin.
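The "dynamically scaled cross entropy" described above is commonly known as the focal loss; a minimal NumPy sketch (logit values chosen for illustration, not the paper's implementation) shows how it down-weights easy examples:

```python
import numpy as np

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Dynamically scaled cross entropy (focal loss): the (1 - p_t)**gamma
    factor shrinks the loss of easy, well-classified samples so training
    focuses on hard cases such as rare nodule voxels."""
    logits = np.asarray(logits, dtype=float)
    targets = np.asarray(targets, dtype=float)
    p = 1.0 / (1.0 + np.exp(-logits))                # sigmoid probability
    p_t = p * targets + (1 - p) * (1 - targets)      # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(np.maximum(p_t, 1e-12)))

easy = focal_loss([4.0], [1.0])    # confident, correct positive
hard = focal_loss([-0.5], [1.0])   # misclassified positive
print(easy < hard)                 # the hard example dominates the loss
```

With gamma = 0 this reduces to ordinary weighted cross entropy; increasing gamma sharpens the focus on misclassified samples.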

Li Yuemeng, Fan Yong

2020-Apr

Deep convolutional networks, encoder-decoder, lung nodule detection, squeeze-and-excitation

Radiology Radiology

Improving Diagnosis of Autism Spectrum Disorder and Disentangling its Heterogeneous Functional Connectivity Patterns Using Capsule Networks.

In Proceedings. IEEE International Symposium on Biomedical Imaging

Functional connectivity (FC) analysis is an appealing tool to aid diagnosis and elucidate the neurophysiological underpinnings of autism spectrum disorder (ASD). Many machine learning methods have been developed to distinguish ASD patients from healthy controls based on FC measures and to identify abnormal FC patterns of ASD. In particular, several studies have demonstrated that deep learning models can achieve better performance for ASD diagnosis than conventional machine learning methods. Although promising classification performance has been achieved by existing machine learning methods, they do not explicitly model the heterogeneity of ASD and are incapable of disentangling its heterogeneous FC patterns. To achieve an improved diagnosis and a better understanding of ASD, we adopt capsule networks (CapsNets) to build classifiers for distinguishing ASD patients from healthy controls based on FC measures and to stratify ASD patients into groups with distinct FC patterns. Evaluation results based on a large multi-site dataset demonstrate that our method not only obtained better classification performance than state-of-the-art alternative machine learning methods, but also identified clinically meaningful subgroups of ASD patients based on the vectorized classification outputs of the CapsNets classification model.
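The capsule networks mentioned above replace scalar activations with vectors whose length encodes detection probability; the characteristic "squash" non-linearity can be sketched as follows (a generic CapsNet component, not code from this study):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule 'squash' non-linearity: keeps a vector's direction while
    mapping its length into (0, 1) so it can act as a probability."""
    sq_norm = np.sum(np.square(s), axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)          # long vectors -> near 1, short -> near 0
    return scale * s / np.sqrt(sq_norm + eps)  # rescale to the squashed length

v = squash(np.array([[3.0, 4.0]]))   # input norm 5 -> output norm 25/26
print(np.linalg.norm(v))
```

It is this vector output, rather than a single probability, that the authors use to stratify patients into FC subgroups.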

Jiao Zhicheng, Li Hongming, Fan Yong

2020-Apr

Autism spectrum disorder, Capsule network, Functional connectivity, Heterogeneity

General General

Artificial intelligence method for predicting the maximum stress of an off-center casing under non-uniform ground stress with support vector machine.

In Science China. Technological sciences

The situation of an off-center casing under non-uniform ground stress can occur when drilling a salt-gypsum formation, and the related casing stress calculation has not yet been solved analytically. In addition, experimental equipment often cannot reproduce the actual conditions, and experiments are very costly. Together, these factors mean that existing casing designs do not match actual conditions, causing casing deformation and affecting drilling operations in the Tarim oil field. The finite element method is currently the only effective way to solve this problem, but the re-modelling process is time-consuming whenever parameters such as the cement properties, casing centrality, and casing size change. In this article, an artificial intelligence method based on a support vector machine (SVM) is proposed to predict the maximum stress of an off-center casing under non-uniform ground stress. After a program based on a radial basis function (RBF) support vector regression (ε-SVR) model was established and validated, we constructed a data sample with a capacity of 120 using the finite element method, which met the demand of the nine-factor ε-SVR model for predicting the maximum stress of the casing. The results showed that the proposed artificial intelligence prediction method had satisfactory accuracy and could be effectively used to predict the maximum stress of an off-center casing under complex downhole conditions.
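A nine-factor RBF ε-SVR model of the kind described above can be sketched with scikit-learn; the synthetic inputs, coefficients, and hyperparameters below are illustrative placeholders for the finite-element-generated data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# 120 synthetic samples with nine input factors (stand-ins for cement
# properties, casing eccentricity, ground-stress components, etc.)
X = rng.uniform(size=(120, 9))
y = 300 + 200 * X[:, 0] + 150 * X[:, 1] ** 2 + 10 * rng.normal(size=120)  # "maximum stress"

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1000.0, epsilon=1.0))
model.fit(X[:100], y[:100])                      # train on 100 samples
pred = model.predict(X[100:])                    # predict the held-out 20
print(np.mean(np.abs(pred - y[100:])))           # mean absolute prediction error
```

Standardizing the inputs before the RBF kernel matters here because the nine physical factors typically live on very different scales.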

Electronic Supplementary Material : Supplementary material is available for this article at 10.1007/s11431-019-1694-4 and is accessible for authorized users.

Di QinFeng, Wu ZhiHao, Chen Tao, Chen Feng, Wang WenChang, Qin GuangXu, Chen Wei

2020-Nov-16

maximum stress, non-uniform ground stress, off-center casing, oil and gas wells, support vector machine

Surgery Surgery

Deep Convolutional Neural Network Based Interictal-Preictal Electroencephalography Prediction: Application to Focal Cortical Dysplasia Type-II.

In Frontiers in neurology

We aimed to differentiate between the interictal and preictal states in epilepsy patients with focal cortical dysplasia (FCD) type-II using deep learning-based classifiers based on intracranial electroencephalography (EEG). We also investigated the practical conditions for high interictal-preictal discriminability in terms of spatiotemporal EEG characteristics and data size efficiency. Intracranial EEG recordings of nine epilepsy patients with FCD type-II (four female, five male; mean age: 10.7 years) were analyzed. Seizure onset and channel ranking were annotated by two epileptologists. We performed three consecutive interictal-preictal classification steps by varying the preictal length, number of electrodes, and sampling frequency with convolutional neural networks (CNN) using 30 s time-frequency data matrices. Classification performances were evaluated based on accuracy, F1 score, precision, and recall with respect to the above-mentioned three parameters. We found that (1) a 5 min preictal length provided the best classification performance, showing a remarkable enhancement of >13% on average compared to that with the 120 min preictal length; (2) four electrodes provided considerably high classification performance with a decrease of only approximately 1% on average compared to that with all channels; and (3) there was minimal performance change when quadrupling the sampling frequency from 128 Hz. Patient-specific performance variations were noticeable with respect to the preictal length, and three patients showed above-average performance enhancements of >28%. However, performance enhancements were low with respect to both the number of electrodes and sampling frequencies, and some patients showed at most 1-2% performance change. 
CNN-based classifiers from intracranial EEG recordings using a small number of electrodes and efficient sampling frequency are feasible for predicting the interictal-preictal state transition preceding seizures in epilepsy patients with FCD type-II. Preictal lengths affect the predictability in a patient-specific manner; therefore, pre-examinations for optimal preictal length will be helpful in seizure prediction.
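The 30 s time-frequency matrices fed to the CNN classifiers above can be produced with a short-time Fourier transform; this sketch uses a synthetic single-channel signal at the study's 128 Hz sampling frequency (the signal itself is illustrative):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 128                               # sampling frequency in Hz, as in the study
t = np.arange(0, 30, 1 / fs)           # one 30 s EEG segment (3840 samples)
# Synthetic single channel: a 10 Hz oscillation plus noise (illustrative only)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

f, times, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
print(Sxx.shape)                       # frequency bins x time bins: one CNN input "image"
print(f[np.argmax(Sxx.mean(axis=1))])  # dominant frequency recovered, ~10 Hz
```

Each channel yields one such matrix per 30 s window, and stacking channels gives the multi-channel "image" a 2-D CNN can consume.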

Chung Yoon Gi, Jeon Yonghoon, Choi Sun Ah, Cho Anna, Kim Hunmin, Hwang Hee, Kim Ki Joong

2020

convolutional neural networks, deep learning, epilepsy surgery, focal cortical dysplasia, seizure prediction

General General

Technology-Enabled Care: Integrating Multidisciplinary Care in Parkinson's Disease Through Digital Technology.

In Frontiers in neurology

Parkinson's disease (PD) management requires the involvement of movement disorders experts, other medical specialists, and allied health professionals. Traditionally, multispecialty care has been implemented in the form of a multidisciplinary center, with an inconsistent clinical benefit and health economic impact. With the current capabilities of digital technologies, multispecialty care can be reshaped to reach a broader community of people with PD in their home and community. Digital technologies have the potential to connect patients with the care team beyond the traditional sparse clinical visit, fostering care continuity and accessibility. For example, video conferencing systems can enable the remote delivery of multispecialty care. With big data analyses, wearable and non-wearable technologies using artificial intelligence can enable the remote assessment of patients' conditions in their natural home environment, promoting a more comprehensive clinical evaluation and empowering patients to monitor their disease. These advances have been defined as technology-enabled care (TEC). We present examples of TEC under development and describe the potential challenges to achieve a full integration of technology to address complex care needs in PD.

Luis-Martínez Raquel, Monje Mariana H G, Antonini Angelo, Sánchez-Ferro Álvaro, Mestre Tiago A

2020

Parkinson's disease, home care (HC), multidisciplinary care model, rehabilitation, technology

General General

Artificial intelligence and synthetic biology approaches for human gut microbiome.

In Critical reviews in food science and nutrition ; h5-index 70.0

The gut microbiome comprises a variety of microorganisms whose genes encode proteins that carry out crucial metabolic functions responsible for the majority of health-related issues in human beings. The technological revolution in artificial intelligence (AI)-assisted synthetic biology (SB) approaches will play a vital role in modulating the therapeutic and nutritive potential of probiotics. This can turn the human gut into a reservoir of beneficial bacterial colonies with an immense role in immunity, digestion, brain function, and other health benefits. Hence, in the present review, we discuss the role of several gene-editing tools and approaches in synthetic biology that have equipped us with novel tools, such as Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR-Cas) systems, to precisely engineer probiotics for diagnostic, therapeutic, and nutritive value. A brief discussion of AI techniques for understanding metagenomic data from healthy and diseased gut microbiomes is also presented. Further, the role of AI in potentially impacting the pace of developments in SB, and its current challenges, is discussed. The review also describes the health benefits conferred by engineered microbes through the production of biochemicals, nutraceuticals, drugs, and biotherapeutic molecules. Finally, the review concludes with the challenges and regulatory concerns in adopting synthetic biology-engineered microbes for clinical applications. Thus, the review presents a synergistic approach of AI and SB toward the human gut microbiome for better health, which will provide interesting clues to researchers working in the rapidly evolving area of food and nutrition science.

Kumar Prasoon, Sinha Rajeshwari, Shukla Pratyoosh

2020-Nov-30

Artificial intelligence, CRISPR-Cas, gut microbiome, nutraceutical, probiotics, synthetic biology

General General

The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future.

In Artificial intelligence in medicine ; h5-index 34.0

Medicine is at a disciplinary crossroads. With the rapid integration of Artificial Intelligence (AI) into the healthcare field, the future care of our patients will depend on the decisions we make now. Demographic healthcare inequalities continue to persist worldwide, and the impact of medical biases on different patient groups is still being uncovered by the research community. At a time when clinical AI systems are scaled up in response to the COVID-19 pandemic, the role of AI in exacerbating health disparities must be critically reviewed. For AI to account for the past and build a better future, we must first unpack the present and create a new baseline on which to develop these tools. The means by which we move forward will determine whether we project existing inequity into the future, or whether we reflect on what we hold to be true and challenge ourselves to be better. AI is an opportunity and a mirror for all disciplines to improve their impact on society, and for medicine the stakes could not be higher.

Straw Isabel

2020-Nov

Artificial intelligence, Bias, Data science, Digital health, Disparities, Health, Healthcare, Inequality, Medicine

Pathology Pathology

Development of an AI-Based Web Diagnostic System for Phenotyping Psychiatric Disorders.

In Frontiers in psychiatry

Background: Artificial intelligence (AI)-based medical diagnostic applications are on the rise. Our recent study has suggested an explainable deep neural network (EDNN) framework for identifying key structural deficits related to the pathology of schizophrenia. Here, we presented an AI-based web diagnostic system for schizophrenia under the EDNN framework with three-dimensional (3D) visualization of subjects' neuroimaging dataset. Methods: This AI-based web diagnostic system consisted of a web server and a neuroimaging diagnostic database. The web server deployed the EDNN algorithm under the Node.js environment. Feature selection and network model building were performed on the dataset obtained from two hundred schizophrenic patients and healthy controls in the Taiwan Aging and Mental Illness (TAMI) cohort. We included an independent cohort with 88 schizophrenic patients and 44 healthy controls recruited at Tri-Service General Hospital Beitou Branch for validation purposes. Results: Our AI-based web diagnostic system achieved 84.00% accuracy (89.47% sensitivity, 80.62% specificity) for gray matter (GM) and 90.22% accuracy (89.21% sensitivity, 91.23% specificity) for white matter (WM) on the TAMI cohort. For the Beitou cohort as an unseen test set, the model achieved 77.27 and 70.45% accuracy for GM and WM. Furthermore, it achieved 85.50 and 88.20% accuracy after model retraining to mitigate the effects of drift on the predictive capability. Moreover, our system visualized the identified voxels in brain atrophy in a 3D manner with patients' structural image, optimizing the evaluation process of the diagnostic results. Discussion: Together, our approach under the EDNN framework demonstrated the potential future direction of making a schizophrenia diagnosis based on structural brain imaging data. 
Our deep learning model is explainable, arguing for the accuracy of the key information related to the pathology of schizophrenia when using the AI-based web assessment platform. The rationale of this approach is in accordance with the Research Domain Criteria suggested by the National Institute of Mental Health.

Chang Yu-Wei, Tsai Shih-Jen, Wu Yung-Fu, Yang Albert C

2020

classification, explainable deep neural network, neuroimaging, schizophrenia, structural MRI

Surgery Surgery

A Pilot Study on Data-Driven Adaptive Deep Brain Stimulation in Chronically Implanted Essential Tremor Patients.

In Frontiers in human neuroscience ; h5-index 79.0

Deep brain stimulation (DBS) is an established therapy for Parkinson's disease (PD) and essential tremor (ET). In adaptive DBS (aDBS) systems, online tuning of stimulation parameters as a function of neural signals may improve treatment efficacy and reduce side effects. State-of-the-art aDBS systems use symptom surrogates derived from neural signals, so-called neural markers (NMs), defined at the patient-group level, and control strategies that assume stationarity of symptoms and NMs. We aim to improve these aDBS systems with (1) a data-driven approach for identifying patient- and session-specific NMs and (2) a control strategy coping with short-term non-stationary dynamics. The two building blocks are implemented as follows: (1) the data-driven NMs are based on a machine learning model estimating tremor intensity from electrocorticographic signals; (2) the control strategy accounts for local variability of tremor statistics. Our study with three chronically implanted ET patients amounted to five online sessions. Tremor quantified from accelerometer data shows that symptom suppression is at least equivalent to that of a continuous DBS strategy in 3 out of 4 online tests, while considerably reducing net stimulation (by at least 24%). In the remaining online test, symptom suppression was not significantly different from either the continuous strategy or the no-treatment condition. We introduce a novel aDBS system for ET. It is the first aDBS system based on (1) a machine learning model to identify session-specific NMs and (2) a control strategy coping with short-term non-stationary dynamics. We show the suitability of our aDBS approach for ET, which opens the door to its further study in a larger patient population.

Castaño-Candamil Sebastián, Ferleger Benjamin I, Haddock Andrew, Cooper Sarah S, Herron Jeffrey, Ko Andrew, Chizeck Howard J, Tangermann Michael

2020

adaptive deep brain stimulation, closed-loop deep brain stimulation, deep brain stimulation, essential tremor, machine learning, neural decoding

General General

Engrams of Fast Learning.

In Frontiers in cellular neuroscience ; h5-index 74.0

Fast learning designates the behavioral and neuronal mechanisms underlying the acquisition of a long-term memory trace after a unique and brief experience. As such, it is opposed to incremental, slower reinforcement or procedural learning requiring repetitive training. This learning process, found in most animal species, exists in a large spectrum of natural behaviors, such as one-shot associative, spatial, or perceptual learning, and is a core principle of human episodic memory. We review here the neuronal and synaptic long-term changes associated with fast learning in mammals and discuss some hypotheses related to their underlying mechanisms. We first describe the variety of behavioral paradigms used to test fast-learning memories: these preferentially involve a single, brief (from a few hundred milliseconds to a few minutes) exposure to salient stimuli, sufficient to trigger a long-lasting memory trace and new adaptive responses. We then focus on neuronal activity patterns observed during fast learning and the emergence of long-term selective responses, before documenting the physiological correlates of fast learning. In the search for the engrams of fast learning, a growing body of evidence highlights long-term changes in gene expression and in structural, intrinsic, and synaptic plasticity. Finally, we discuss the potential role of the sparse and bursting nature of neuronal activity observed during fast learning, especially in the induction of plasticity mechanisms leading to the rapid establishment of long-term synaptic modifications. We conclude with more theoretical perspectives on network dynamics that could enable fast learning, with an overview of some theoretical approaches in cognitive neuroscience and artificial intelligence.

Piette Charlotte, Touboul Jonathan, Venance Laurent

2020

artificial intelligence, fast learning, memory engram, neurocomputational models, neuromodulation, one-shot learning (OSL), synaptic plasticity (LTP/LTD)

General General

COVID-CheXNet: hybrid deep learning framework for identifying COVID-19 virus in chest X-rays images.

In Soft computing

The outbreak of the coronavirus (COVID-19) epidemic has increased the pressure on healthcare and medical systems worldwide. The timely diagnosis of infected patients is a critical step in limiting the spread of the COVID-19 epidemic. Chest radiography has been shown to be an effective screening technique for diagnosing COVID-19. To reduce the pressure on radiologists and help control the epidemic, a fast and accurate hybrid deep learning framework for diagnosing COVID-19 in chest X-ray images is developed, termed the COVID-CheXNet system. First, the contrast of the X-ray image is enhanced and the noise level reduced using contrast-limited adaptive histogram equalization and a Butterworth bandpass filter, respectively. This is followed by fusing the results obtained from two different pre-trained deep learning models, a ResNet34 and a high-resolution network model, trained using a large-scale dataset. Herein, a parallel architecture is considered, which provides radiologists with a high degree of confidence to discriminate between healthy and COVID-19 infected people. The proposed COVID-CheXNet system correctly and accurately diagnosed COVID-19 patients with a detection accuracy rate of 99.99%, sensitivity of 99.98%, specificity of 100%, precision of 100%, F1-score of 99.99%, MSE of 0.011%, and RMSE of 0.012% using the weighted sum rule at the score level. The efficiency and usefulness of the proposed COVID-CheXNet system are established, along with the possibility of using it in real clinical centers for fast diagnosis and treatment support, with less than 2 s per image to get the prediction result.
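The Butterworth bandpass step of the pre-processing pipeline above can be sketched as a frequency-domain filter; the cutoff radii and input image below are illustrative, and the CLAHE step is omitted (it is typically done with a library routine such as OpenCV's CLAHE):

```python
import numpy as np

def butterworth_bandpass(img, low=0.05, high=0.45, order=2):
    """Frequency-domain Butterworth band-pass filter for a 2-D image.
    low/high are normalized radial cutoffs (cycles per pixel, 0..0.5)."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u ** 2 + v ** 2)                       # radial spatial frequency
    lowpass = 1.0 / (1.0 + (d / high) ** (2 * order))  # attenuate high-frequency noise
    highpass = 1.0 - 1.0 / (1.0 + (d / max(low, 1e-9)) ** (2 * order))  # drop slow shading
    h = lowpass * highpass                             # band-pass transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

img = np.random.default_rng(0).random((64, 64))        # stand-in for an X-ray image
out = butterworth_bandpass(img)
print(out.shape, abs(out.mean()) < 1e-8)               # DC (mean) component is removed
```

The smooth Butterworth roll-off avoids the ringing artifacts that an ideal (hard-cutoff) band-pass filter would introduce into the radiograph.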

Al-Waisy Alaa S, Al-Fahdawi Shumoos, Mohammed Mazin Abed, Abdulkareem Karrar Hameed, Mostafa Salama A, Maashi Mashael S, Arif Muhammad, Garcia-Zapirain Begonya

2020-Nov-21

Chest X-ray images, Chest radiography imaging, Coronavirus COVID-19 epidemic, Deep learning, ResNet34 model, Transfer learning

Radiology Radiology

CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images.

In Chaos, solitons, and fractals

Background and Objective : Coronavirus 2019, or COVID-19 for short, is a viral disease that causes serious pneumonia and affects different parts of the body, with severity ranging from mild to severe depending on the patient's immune system. This infection was first reported in the city of Wuhan, China, in December 2019, and it subsequently became a global pandemic, spreading rapidly around the world. As the virus spreads through human-to-human contact, it has affected our lives in a devastating way, including placing vigorous pressure on the public health system, the world economy, the education sector, workplaces, and shopping malls. Preventing viral spread requires early detection of positive cases and treating infected patients as quickly as possible. The need for COVID-19 testing kits has increased, and many developing countries are facing a shortage of testing kits as new cases increase day by day. In this situation, recent research using radiology imaging techniques (such as X-ray and CT scans) can prove helpful for detecting COVID-19, as X-ray and CT images provide important information about the disease caused by the COVID-19 virus. The latest data mining and machine learning techniques, such as the Convolutional Neural Network (CNN), can be applied along with X-ray and CT scan images of the lungs for accurate and rapid detection of the disease, helping to mitigate the shortage of testing kits.

Methods : Hence, a novel CNN model called CoroDet for automatic detection of COVID-19 using raw chest X-ray and CT scan images has been proposed in this study. CoroDet is developed to serve as an accurate diagnostic tool for 2-class classification (COVID and Normal), 3-class classification (COVID, Normal, and non-COVID pneumonia), and 4-class classification (COVID, Normal, non-COVID viral pneumonia, and non-COVID bacterial pneumonia).

Results : The performance of our proposed model was compared with ten existing techniques for COVID detection in terms of accuracy. Our proposed model produced a classification accuracy of 99.1% for 2-class classification, 94.2% for 3-class classification, and 91.2% for 4-class classification, which, to the best of our knowledge, is better than the state-of-the-art methods used for COVID-19 detection. Moreover, as far as we know, the X-ray dataset we prepared for the evaluation of our method is the largest dataset for COVID detection.

Conclusion : The experimental results of our proposed method, CoroDet, indicate its superiority over the existing state-of-the-art methods. CoroDet may assist clinicians in making appropriate decisions for COVID-19 detection and may also mitigate the shortage of testing kits.

Hussain Emtiaz, Hasan Mahmudul, Rahman Md Anisur, Lee Ickjai, Tamanna Tasmi, Parvez Mohammad Zavid

2020-Nov-23

Accuracy, COVID-19, Confusion matrix, Convolutional neural network, Deep learning, Pneumonia-bacterial, Pneumonia-viral, X-ray

General General

The Nooscope manifested: AI as instrument of knowledge extractivism.

In AI & society

Some enlightenment regarding the project to mechanise reason. The assembly line of machine learning: data, algorithm, model. The training dataset: the social origins of machine intelligence. The history of AI as the automation of perception. The learning algorithm: compressing the world into a statistical model. All models are wrong, but some are useful. World to vector: the society of classification and prediction bots. Faults of a statistical instrument: the undetection of the new. Adversarial intelligence vs. statistical intelligence: labour in the age of AI.

Pasquinelli Matteo, Joler Vladan

2020-Nov-21

Ethical machine learning, Information compression, Mechanised knowledge, Nooscope, Political economy

General

CPAS: the UK's national machine learning-based hospital capacity planning system for COVID-19.

In Machine learning

The coronavirus disease 2019 (COVID-19) global pandemic poses the threat of overwhelming healthcare systems with unprecedented demands for intensive care resources. Managing these demands cannot be effectively conducted without a nationwide collective effort that relies on data to forecast hospital demands on the national, regional, hospital and individual levels. To this end, we developed the COVID-19 Capacity Planning and Analysis System (CPAS)-a machine learning-based system for hospital resource planning that we have successfully deployed at individual hospitals and across regions in the UK in coordination with NHS Digital. In this paper, we discuss the main challenges of deploying a machine learning-based decision support system at national scale, and explain how CPAS addresses these challenges by (1) defining the appropriate learning problem, (2) combining bottom-up and top-down analytical approaches, (3) using state-of-the-art machine learning algorithms, (4) integrating heterogeneous data sources, and (5) presenting the result with an interactive and transparent interface. CPAS is one of the first machine learning-based systems to be deployed in hospitals on a national scale to address the COVID-19 pandemic-we conclude the paper with a summary of the lessons learned from this experience.

Qian Zhaozhi, Alaa Ahmed M, van der Schaar Mihaela

2020-Nov-24

Automated machine learning, COVID-19, Compartmental models, Gaussian processes, Healthcare, Resource planning

General

Applying advanced technologies to improve clinical trials: a systematic mapping study.

In Scientometrics

The increasing demand for new therapies and other clinical interventions has made researchers conduct many clinical trials. The high level of evidence generated by clinical trials makes them the main approach to evaluating new clinical interventions. The increasing amount of data to be considered in the planning and conduct of clinical trials has led to higher costs and longer timelines, with low productivity. Advanced technologies including artificial intelligence, machine learning, deep learning, and the internet of things offer an opportunity to improve the efficiency and productivity of clinical trials at various stages. Although researchers have done some tangible work on the application of advanced technologies in clinical trials, the studies are yet to be mapped to give a general picture of the current state of research. This systematic mapping study was conducted to identify and analyze studies published on the role of advanced technologies in clinical trials. A search restricted to the period between 2010 and 2020 yielded a total of 443 articles. The analysis revealed a trend of increasing research interest in the area over the years. Recruitment and eligibility aspects were the main focus of the studies. The main research types were validation and evaluation studies. Most studies contributed methods and theories, leaving a gap for architecture, process, and metric contributions. In the future, more empirical studies are expected, given the increasing interest in implementing AI, ML, DL, and IoT in clinical trials.

Ngayua Esther Nanzayi, He Jianjia, Agyei-Boahene Kwabena

2020-Nov-21

Artificial intelligence, Clinical trials, Deep learning, Internet of things, Machine learning

General

Automatic detection and segmentation of lumbar vertebrae from X-ray images for compression fracture evaluation.

In Computer methods and programs in biomedicine

For compression fracture detection and evaluation, an automatic X-ray image segmentation technique that combines deep-learning and level-set methods is proposed. Automatic segmentation is much more difficult for X-ray images than for CT or MRI images because they contain overlapping shadows of thoracoabdominal structures including lungs, bowel gases, and other bony structures such as ribs. Additional difficulties include unclear object boundaries, the complex shape of the vertebra, inter-patient variability, and variations in image contrast. Accordingly, a structured hierarchical segmentation method is presented that combines the advantages of two deep-learning methods. Pose-driven learning is used to selectively identify the five lumbar vertebrae in an accurate and robust manner. With knowledge of the vertebral positions, M-net is employed to segment the individual vertebrae. Finally, fine-tuning segmentation is applied by combining the level-set method with the previously obtained segmentation results. The performance of the proposed method was validated on 160 lumbar X-ray images, resulting in a mean Dice similarity metric of 91.60±2.22%. The results show that the proposed method achieves accurate and robust identification of each lumbar vertebra and fine segmentation of individual vertebrae.
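The mean Dice similarity metric reported above is a standard overlap measure between a predicted mask and a ground-truth mask; a minimal NumPy implementation of the Dice coefficient (not tied to the authors' code) is:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 0],
                  [0, 1, 1]])
score = dice(pred, truth)  # 2 * 2 / (3 + 3) = 0.666...
```

A score of 1 means perfect overlap; the paper's 91.60% corresponds to a mean Dice of about 0.916 across the 160 test images.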

Kim Kang Cheol, Cho Hyun Cheol, Jang Tae Jun, Choi Jong Mun, Seo Jin Keun

2020-Nov-11

Deep learning, Level-set, Lumbar X-ray, Vertebra detection, Vertebra segmentation

General

A Machine Learning decision-making tool for extubation in Intensive Care Unit patients.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE : To increase the success rate of invasive mechanical ventilation weaning in critically ill patients using Machine Learning models capable of accurately predicting the outcome of programmed extubations.

METHODS : The study population was adult patients admitted to the Intensive Care Unit. Target events were programmed extubations, both successful and failed. The working dataset is assembled by combining heterogeneous data including time series from Clinical Information Systems, patient demographics, medical records and respiratory event logs. Three classification learners have been compared: Logistic Discriminant Analysis, Gradient Boosting Method and Support Vector Machines. Standard methodologies have been used for preprocessing, hyperparameter tuning and resampling.
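As a hedged illustration of this kind of classifier comparison (the study itself used Logistic Discriminant Analysis, Gradient Boosting and Support Vector Machines on clinical data; the data and both learners below are toy stand-ins), a split-sample comparison of two simple classifiers in NumPy might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic two-class data standing in for the extubation dataset:
# class 1 has its feature means shifted by 1.0 in each of 5 dimensions.
n = 400
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, 5)),
               rng.normal(1.0, 1.0, (n // 2, 5))])
y = np.repeat([0, 1], n // 2)
idx = rng.permutation(n)
train, test = idx[:300], idx[300:]

def nearest_centroid(Xtr, ytr, Xte):
    """Predict the class whose training centroid is closest."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    return (((Xte - c1) ** 2).sum(axis=1) < ((Xte - c0) ** 2).sum(axis=1)).astype(int)

def logistic_regression(Xtr, ytr, Xte, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression, thresholded at 0.5."""
    w, b = np.zeros(Xtr.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
        w -= lr * Xtr.T @ (p - ytr) / len(ytr)
        b -= lr * (p - ytr).mean()
    return (Xte @ w + b > 0.0).astype(int)

results = {}
for name, learner in [("nearest centroid", nearest_centroid),
                      ("logistic regression", logistic_regression)]:
    pred = learner(X[train], y[train], X[test])
    results[name] = (pred == y[test]).mean()
```

In practice each learner would be tuned and resampled (as the authors did) rather than compared on a single split; the split-sample skeleton is what this sketch shows.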

RESULTS : The Support Vector Machine classifier is found to correctly predict the outcome of an extubation with a 94.6% accuracy. Contrary to current decision-making criteria for extubation based on Spontaneous Breathing Trials, the classifier predictors only require monitor data, medical entry records and patient demographics.

CONCLUSIONS : Machine Learning-based tools have been found to accurately predict the extubation outcome in critical patients with invasive mechanical ventilation. The use of this important predictive capability to assess the extubation decision could potentially reduce the rate of extubation failure, currently at 9%. With about 40% of critically ill patients eventually receiving invasive mechanical ventilation during their stay, and given the serious potential complications associated with reintubation, the excellent predictive ability of the model presented here suggests that Machine Learning techniques could significantly improve the clinical outcomes of critical patients.

Fabregat Alexandre, Magret Mónica, Ferré Josep Anton, Vernet Anton, Guasch Neus, Rodríguez Alejandro, Gómez Josep, Bodí María

2020-Nov-24

Clinical decision support tool, Extubation, Gradient Boosting, Invasive mechanical ventilation, Machine Learning, Reintubation, Support Vector Machine

Public Health

Identifying environmental exposure profiles associated with timing of menarche: A two-step machine learning approach to examine multiple environmental exposures.

In Environmental research ; h5-index 67.0

BACKGROUND : Variation in the timing of menarche has been linked with adverse health outcomes in later life. There is evidence that exposure to hormonally active agents (or endocrine disrupting chemicals; EDCs) during childhood may play a role in accelerating or delaying menarche. The goal of this study was to generate hypotheses on the relationship between exposure to multiple EDCs and timing of menarche by applying a two-stage machine learning approach.

METHODS : We used data from the National Health and Nutrition Examination Survey (NHANES) for years 2005-2008. Data were analyzed for 229 female participants 12-16 years of age who had blood and urine biomarker measures of 41 environmental exposures, all with >70% above limit of detection, in seven classes of chemicals. We modeled risk for earlier menarche (<12 years of age vs older) with exposure biomarkers. We applied a two-stage approach consisting of a random forest (RF) to identify important exposure combinations associated with timing of menarche followed by multivariable modified Poisson regression to quantify associations between exposure profiles ("combinations") and timing of menarche.
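The two-stage logic (a tree-based partition to find subgroups, followed by a prevalence-ratio estimate) can be sketched in miniature; everything below is synthetic and hypothetical, with a single decision-tree split standing in for the random forest:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical biomarker concentrations and a binary early-menarche outcome;
# the outcome is made more likely below a threshold, mimicking the MEHP finding.
n = 1000
mehp = rng.lognormal(mean=1.0, sigma=0.5, size=n)
p_early = np.where(mehp <= 2.36, 0.45, 0.30)
early = rng.random(n) < p_early

def best_split(x, y, candidates):
    """Stage 1 (stand-in for the random forest): pick the threshold that
    most reduces Gini impurity of the outcome."""
    def gini(v):
        p = v.mean()
        return 2 * p * (1 - p)
    parent = gini(y)
    best_t, best_gain = None, -1.0
    for t in candidates:
        lo, hi = y[x <= t], y[x > t]
        if len(lo) == 0 or len(hi) == 0:
            continue
        child = (len(lo) * gini(lo) + len(hi) * gini(hi)) / len(y)
        if parent - child > best_gain:
            best_t, best_gain = t, parent - child
    return best_t

t = best_split(mehp, early, np.quantile(mehp, np.linspace(0.1, 0.9, 17)))

# Stage 2: crude prevalence ratio for the identified subgroup (the study used
# multivariable modified Poisson regression to adjust for covariates).
pr = early[mehp <= t].mean() / early[mehp > t].mean()
```

With the planted effect, the learned threshold isolates a subgroup whose outcome prevalence exceeds that of the complement, so the prevalence ratio comes out above 1, qualitatively matching the MEHP result.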

RESULTS : RF identified urinary concentrations of monoethylhexyl phthalate (MEHP) as the most important feature in partitioning girls into homogenous subgroups followed by bisphenol A (BPA) and 2,4-dichlorophenol (2,4-DCP). In this first stage, we identified 11 distinct exposure biomarker profiles, containing five different classes of EDCs associated with earlier menarche. MEHP appeared in all 11 exposure biomarker profiles and phenols appeared in five. Using these profiles in the second-stage of analysis, we found a relationship between lower MEHP and earlier menarche (MEHP ≤ 2.36 ng/mL vs >2.36 ng/mL: adjusted PR= 1.36, 95% CI: 1.02, 1.80). Combinations of lower MEHP with benzophenone-3, 2,4-DCP, and BPA had similar associations with earlier menarche, though slightly weaker in those smaller subgroups. For girls not having lower MEHP, exposure profiles included other biomarkers (BPA, enterodiol, monobenzyl phthalate, triclosan, and 1-hydroxypyrene); these showed largely null associations in the second-stage analysis. Adjustment for covariates did not materially change the estimates or CIs of these models. We observed weak or null effect estimates for some exposure biomarker profiles and relevant profiles consisted of no more than two EDCs, possibly due to small sample sizes in subgroups.

CONCLUSION : A two-stage approach incorporating machine learning was able to identify interpretable combinations of biomarkers in relation to timing of menarche; these should be further explored in prospective studies. Machine learning methods can serve as a valuable tool to identify patterns within data and generate hypotheses that can be investigated within future, targeted analyses.

Oskar Sabine, Wolff Mary S, Teitelbaum Susan L, Stingone Jeanette A

2020-Nov-26

Environmental exposures, Machine learning, Menarche, Mixtures, Multiple exposures

Cardiology

CT Angiographic and Plaque Predictors of Functionally Significant Coronary Disease and Outcome Using Machine Learning.

In JACC. Cardiovascular imaging

OBJECTIVES : The goal of this study was to investigate the association of stenosis and plaque features with myocardial ischemia and their prognostic implications.

BACKGROUND : Various anatomic, functional, and morphological attributes of coronary artery disease (CAD) have been independently explored to define ischemia and prognosis.

METHODS : A total of 1,013 vessels with fractional flow reserve (FFR) measurement and available coronary computed tomography angiography were analyzed. Stenosis and plaque features of the target lesion and vessel were evaluated by an independent core laboratory. Relevant features associated with low FFR (≤0.80) were identified by using machine learning, and their predictability of 5-year risk of vessel-oriented composite outcome, including cardiac death, target vessel myocardial infarction, or target vessel revascularization, were evaluated.

RESULTS : The mean percent diameter stenosis and invasive FFR were 48.5 ± 17.4% and 0.81 ± 0.14, respectively. Machine learning interrogation identified 6 clusters for low FFR, and the most relevant feature from each cluster was minimum lumen area, percent atheroma volume, fibrofatty and necrotic core volume, plaque volume, proximal left anterior descending coronary artery lesion, and remodeling index (in order of importance). These 6 features showed predictability for low FFR (area under the receiver-operating characteristic curve: 0.797). The risk of 5-year vessel-oriented composite outcome increased with every increment of the number of 6 relevant features, and it had incremental prognostic value over percent diameter stenosis and FFR (area under the receiver-operating characteristic curve: 0.706 vs. 0.611; p = 0.031).

CONCLUSIONS : Six functionally relevant features, including minimum lumen area, percent atheroma volume, fibrofatty and necrotic core volume, plaque volume, proximal left anterior descending coronary artery lesion, and remodeling index, help define the presence of myocardial ischemia and provide better prognostication in patients with CAD. (CCTA-FFR Registry for Risk Prediction; NCT04037163).

Yang Seokhun, Koo Bon-Kwon, Hoshino Masahiro, Lee Joo Myung, Murai Tadashi, Park Jiesuck, Zhang Jinlong, Hwang Doyeon, Shin Eun-Seok, Doh Joon-Hyung, Nam Chang-Wook, Wang Jianan, Chen Shaoliang, Tanaka Nobuhiro, Matsuo Hitoshi, Akasaka Takashi, Choi Gilwoo, Petersen Kersten, Chang Hyuk-Jae, Kakuta Tsunekazu, Narula Jagat

2020-Nov-19

atherosclerosis, coronary artery disease, coronary computed tomography angiography, coronary plaque, fractional flow reserve, ischemia

General

Deep learning for species identification of bolete mushrooms with two-dimensional correlation spectral (2DCOS) images.

In Spectrochimica acta. Part A, Molecular and biomolecular spectroscopy

Bolete is a well-known and widely consumed mushroom worldwide. However, its medicinal and nutritional properties differ completely from one species to another. Therefore, consumers need a fast and effective detection method to discriminate among species. A new method for species discrimination with deep learning, using digital images of two-dimensional correlation spectroscopy (2DCOS) directly, is proposed in this paper. In our study, a total of 2054 fruiting bodies of 21 wild-grown bolete species were collected in 52 regions from 2011 to 2014. Firstly, we extracted the 1750-400 cm-1 fingerprint region of each species from its mid-infrared (MIR) spectrum and converted it to 2DCOS spectra with MATLAB 2017b. At the same time, we developed a specific method for the calculation of the 2DCOS spectra. Secondly, we established a deep residual convolutional neural network (ResNet) with 1848 (90%) 2DCOS spectral images. To our knowledge, discrimination of bolete species directly from 2DCOS spectral images, instead of from the data matrices underlying the spectra, is reported here for the first time. The results showed that the identification accuracy was 100% on the training set and 99.76% on the test set. Then, 203 of the 206 (10%) samples in the external validation set were accurately discriminated. Thirdly, we employed the t-SNE method to visualize and evaluate the spectral dataset. The result indicated that most samples can be clustered according to species. Finally, a smartphone application (APP) was developed based on the established 2DCOS spectral-image strategy, which makes the discrimination of bolete mushrooms easier in practice. In conclusion, deep learning using 2DCOS spectral images directly is considered an innovative and feasible approach to the species discrimination of bolete mushrooms. Moreover, this method may be generalized to other edible mushrooms, food, herbs and agricultural products in further research.

Dong Jian-E, Zhang Ji, Zuo Zhi-Tian, Wang Yuan-Zhong

2020-Nov-14

Application (APP), Bolete, Deep learning, Residual convolutional neural network (Resnet), Species discrimination, Two-dimensional correlation spectroscopy (2DCOS)

General

Deep learning for the prediction of treatment response in depression.

In Journal of affective disorders ; h5-index 79.0

BACKGROUND : Mood disorders are characterized by heterogeneity in severity, symptoms and treatment response. The possibility of selecting the correct therapy on the basis of patient-specific biomarkers may be a considerable step towards personalized psychiatry. Machine learning methods are gaining increasing popularity in the medical field. Once trained, the possibility to consider single patients in the analyses instead of whole groups makes them particularly appealing for investigating treatment response. Deep learning, a branch of machine learning, has lately gained attention due to its effectiveness in dealing with large neuroimaging data and its ability to integrate them with clinical, molecular or -omics biomarkers.

METHODS : In this mini-review, we summarize studies that use deep learning methods to predict response to treatment in depression. We performed a bibliographic search on PUBMED, Google Scholar and Web of Science using the terms "psychiatry", "mood disorder", "depression", "treatment", "deep learning", "neural networks". Only studies considering patients' datasets are considered.

RESULTS : Eight studies met the inclusion criteria. Accuracies in the prediction of response to therapy were considerably high in all studies, but the results may not be easy to interpret.

LIMITATIONS : The major limitation for the current studies is the small sample size, which constitutes an issue for machine learning methods.

CONCLUSIONS : Deep learning shows promising results in terms of prediction of treatment response, often outperforming regression methods and reaching accuracies of around 80%. This could be of great help towards personalized medicine. However, more efforts are needed in terms of increasing datasets size and improved interpretability of results.

Squarcina Letizia, Villa Filippo Maria, Nobile Maria, Grisan Enrico, Brambilla Paolo

2020-Nov-17

Surgery

Prospectively Assigned AAST Grade versus Modified Hinchey Class and Acute Diverticulitis Outcomes.

In The Journal of surgical research

BACKGROUND : The American Association for the Surgery of Trauma (AAST) recently developed a classification system to standardize outcomes analyses for several emergency general surgery conditions. To highlight this system's full potential, we conducted a study integrating prospective AAST grade assignment within the electronic medical record.

METHODS : Our institution integrated AAST grade assignment into our clinical workflow in July 2018. Patients with acute diverticulitis were prospectively assigned AAST grades and modified Hinchey classes at the time of surgical consultation. Support vector machine-a machine learning algorithm attuned for small sample sizes-was used to compare the associations between the two classification systems and decision to operate and incidence of complications.

RESULTS : 67 patients (median age 62 y, 40% male) were included for analysis. The decision for operative management, hospital length of stay, intensive care unit admission, and intensive care unit length of stay were associated with both increasing AAST grade and increasing modified Hinchey class (all P < 0.001). AAST grade additionally showed a correlation with complication severity (P = 0.02). Compared with modified Hinchey class, AAST grade better predicted decision to operate (88.2% versus 82.4%).

CONCLUSIONS : This study showed the feasibility of electronic medical record integration to support the full potential of AAST classification system's utility as a clinical decision-making tool. Prospectively assigned AAST grade may be an accurate and pragmatic method to find associations with outcomes, yet validation requires further study.

Choi Jeff, Bessoff Kovi, Bromley-Dulfano Rebecca, Li Zelin, Gupta Anshal, Taylor Kathryn, Wadhwa Harsh, Seltzer Ryan, Spain David A, Knowlton Lisa M

2020-Nov-25

AAST grade, Acute diverticulitis, Electronic medical record, Grading system, Hinchey class

General

Sensitivity analysis based on the random forest machine learning algorithm identifies candidate genes for regulation of innate and adaptive immune response of chicken.

In Poultry science

Two categories of immune responses, innate and adaptive immunity, have both polygenic backgrounds and a significant environmental component. The goal of the reported study was to define candidate genes and mutations for the immune traits of interest in chickens using machine learning-based sensitivity analysis for single-nucleotide polymorphisms (SNPs) located in candidate genes defined in quantitative trait loci regions. Here the adaptive immunity is represented by the specific antibody response toward keyhole limpet hemocyanin (KLH), whereas the innate immunity is represented by natural antibodies toward lipopolysaccharide (LPS) and lipoteichoic acid (LTA). The analysis consisted of 3 basic steps: an identification of candidate SNPs via feature selection, an optimisation of the feature set using recursive feature elimination, and finally a gene-level sensitivity analysis for final selection of models. The predictive model based on 5 genes (MAPK8IP3, CRLF3, UNC13D, ILR9, and PRCKB) explains 14.9% of variance for the KLH adaptive response. The models obtained for LTA and LPS use more genes and have lower predictive power, explaining 7.8% and 4.5% of total variance, respectively. In comparison, the linear models built on genes identified by a standard statistical analysis explain 1.5%, 0.5%, and 0.3% of variance for the KLH, LTA, and LPS responses, respectively. The present study shows that machine learning methods applied to systems with a complex interaction network can discover phenotype-genotype associations with much higher sensitivity than traditional statistical models. It adds to evidence suggesting a role of MAPK8IP3 in the adaptive immune response, and also indicates that CRLF3 is involved in this process. Both findings need additional verification.
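The recursive feature elimination step can be illustrated with a small self-contained sketch (a ridge model on synthetic data stands in for the study's actual learners; the feature indices and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: 20 candidate SNP-like features, only the first 3 informative.
n, p = 300, 20
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 1.2 * X[:, 2] + 0.5 * rng.standard_normal(n)

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def recursive_feature_elimination(X, y, keep=3):
    """Refit, drop the feature with the smallest |weight|, repeat
    until only `keep` features remain."""
    active = list(range(X.shape[1]))
    while len(active) > keep:
        w = fit_ridge(X[:, active], y)
        active.pop(int(np.argmin(np.abs(w))))
    return sorted(active)

selected = recursive_feature_elimination(X, y)  # should recover the informative trio
```

Because the informative coefficients dwarf the noise-driven weights of the other 17 features, elimination discards the uninformative columns first, which is the behaviour RFE relies on.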

Polewko-Klim Aneta, Lesiński Wojciech, Golińska Agnieszka Kitlas, Mnich Krzysztof, Siwek Maria, Rudnicki Witold R

2020-Dec

chicken, immune response, machine learning, marker gene

Cardiology

Machine Learning Improves the Identification of Individuals With Higher Morbidity and Avoidable Health Costs After Acute Coronary Syndromes.

In Value in health : the journal of the International Society for Pharmacoeconomics and Outcomes Research

OBJECTIVES : Traditional risk scores improved the definition of the initial therapeutic strategy in acute coronary syndrome (ACS), but they were not designed for predicting long-term individual risks and costs. In parallel, attempts to directly predict costs from clinical variables in ACS had limited success. Thus, novel approaches to predict cardiovascular risk and health expenditure are urgently needed. Our objectives were to predict the risk of major/minor adverse cardiovascular events (MACE) and estimate assistance-related costs.

METHODS : We used a 2-step approach that: (1) predicted outcomes with a common pathophysiological substrate (MACE) by using machine learning (ML) or logistic regression (LR) and compared with existing risk scores; (2) derived costs associated with noncardiovascular deaths, dialysis, ambulatory-care-sensitive-hospitalizations (ACSH), strokes, and MACE. With consecutive ACS individuals (n = 1089) from 2 cohorts, we trained in 80% of the population and tested in 20% using a 4-fold cross-validation framework. The 29-variable model included socioeconomic, clinical/lab, and coronarography variables. Individual costs were estimated based on cause-specific hospitalization from the Brazilian Health Ministry perspective.

RESULTS : After up to 12 years follow-up (mean = 3.3 ± 3.1; MACE = 169), the gradient-boosting machine model was superior to LR and reached an area under the curve (AUROC) of 0.891 [95% CI 0.846-0.921] (test set), outperforming the Syntax Score II (AUROC = 0.635 [95% CI 0.569-0.699]). Individuals classified as high risk (>90th percentile) presented increased HbA1c and LDL-C both at <24 hours post-ACS and 1-year follow-up. High-risk individuals required 33.5% of total costs and showed 4.96-fold (95% CI 3.71-5.48, P < .00001) greater per capita costs compared with low-risk individuals, mostly owing to avoidable costs (ACSH). This 2-step approach was more successful for finding individuals incurring high costs than predicting costs directly from clinical variables.

CONCLUSION : ML methods predicted long-term risks and avoidable costs after ACS.

de Carvalho Luiz Sérgio Fernandes, Gioppato Silvio, Fernandez Marta Duran, Trindade Bernardo Carvalho, Silva José Carlos Quinaglia E, Miranda Rebeca Gouget Sérgio, de Souza José Roberto Matos, Nadruz Wilson, Avila Sandra Eliza Fontes, Sposito Andrei Carvalho

2020-Dec

acute coronary syndromes, artificial intelligence, machine learning, modifiable risk factors, population health management

Radiology

MAVIDH Score: A Corona Severity Scoring using Interpretable Chest X-Ray Pathology Features

ArXiv Preprint

The application of computer vision to COVID-19 diagnosis is complex and challenging, given the risks associated with patient misclassification. Arguably, the primary value of medical imaging for COVID-19 lies rather in patient prognosis. Radiological images can guide physicians in assessing the severity of the disease, and a series of images from the same patient at different stages can help to gauge disease progression. Based on these premises, a simple method based on lung-pathology features for scoring disease severity from chest X-rays is proposed here. As the primary contribution, this method is shown to correlate comparatively well with patient severity at different stages of disease progression when contrasted with other existing methods. An original approach to data selection is also proposed, allowing the simple model to learn severity-related features. It is hypothesized that the resulting competitive performance is related to the method being feature-based rather than reliant on lung involvement or compromise, as others in the literature are. The fact that it is simpler and more interpretable than other end-to-end, more complex models also sets this work apart. As the data set is small, bias-inducing artifacts that could lead to overfitting are minimized through an image normalization and lung segmentation step at the learning phase. A second contribution comes from the validation of the results, conceptualized as the scoring of patient groups from different stages of the disease. Besides performing such validation on an independent data set, the results were also compared with other scoring methods proposed in the literature. The expressive results show that although imaging alone is not sufficient for assessing severity as a whole, the proposed scoring system, termed the MAVIDH score, correlates strongly with patient outcome.

Douglas P. S. Gomes, Michael J. Horry, Anwaar Ulhaq, Manoranjan Paul, Subrata Chakraborty, Manash Saha, Tanmoy Debnath, D. M. Motiur Rahaman

2020-11-30

Public Health

Prioritizing Additional Data Collection to Reduce Decision Uncertainty in the HIV/AIDS Response in 6 US Cities: A Value of Information Analysis.

In Value in health : the journal of the International Society for Pharmacoeconomics and Outcomes Research

OBJECTIVES : The ambitious goals of the US Ending the HIV Epidemic initiative will require a targeted, context-specific public health response. Model-based economic evaluation provides useful guidance for decision making while characterizing decision uncertainty. We aim to quantify the value of eliminating uncertainty about different parameters in selecting combination implementation strategies to reduce the public health burden of HIV/AIDS in 6 US cities and identify future data collection priorities.

METHODS : We used a dynamic compartmental HIV transmission model developed for 6 US cities to evaluate the cost-effectiveness of a range of combination implementation strategies. Using a metamodeling approach with nonparametric and deep learning methods, we calculated the expected value of perfect information, representing the maximum value of further research to eliminate decision uncertainty, and the expected value of partial perfect information for key groups of parameters that would be collected together in practice.
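The expected value of perfect information itself has a simple Monte Carlo form: the mean of the per-draw best net benefit minus the best of the mean net benefits. A toy two-strategy sketch (the strategies and distributions below are hypothetical, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical net-benefit draws for two strategies under parameter uncertainty.
n = 100_000
nb_status_quo = rng.normal(100.0, 10.0, n)
nb_new_policy = rng.normal(102.0, 25.0, n)  # higher mean, but more uncertain
nb = np.column_stack([nb_status_quo, nb_new_policy])

# EVPI = E[max over strategies] - max over strategies of E[net benefit]:
# the expected gain if every decision could be made knowing the true parameters.
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
```

A positive EVPI (here driven by the large overlap between the two distributions) bounds how much further research to resolve the uncertainty could ever be worth; the study's EVPPI variants apply the same idea to subsets of parameters.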

RESULTS : The population expected value of perfect information ranged from $59 683 (Miami) to $54 108 679 (Los Angeles). The rank ordering of expected value of partial perfect information on key groups of parameters were largely consistent across cities and highest for parameters pertaining to HIV risk behaviors, probability of HIV transmission, health service engagement, HIV-related mortality, health utility weights, and healthcare costs. Los Angeles was an exception, where parameters on retention in pre-exposure prophylaxis ranked highest in contributing to decision uncertainty.

CONCLUSIONS : Funding additional data collection on HIV/AIDS may be warranted in Baltimore, Los Angeles, and New York City. Value of information analysis should be embedded into decision-making processes on funding future research and public health intervention.

Zang Xiao, Jalal Hawre, Krebs Emanuel, Pandya Ankur, Zhou Haoxuan, Enns Benjamin, Nosyk Bohdan

2020-Dec

HIV combination implementation strategies, decision uncertainty, expected value of partial perfect information, expected value of perfect information, metamodel

General

Depression Status Estimation by Deep Learning based Hybrid Multi-Modal Fusion Model

ArXiv Preprint

Preliminary detection of mild depression could immensely help in the effective treatment of this common mental health disorder. Due to the lack of proper awareness and the ample mix of stigmas and misconceptions present within society, mental health status estimation has become a truly difficult task. Because character-level traits vary immensely from person to person, traditional deep learning methods fail to generalize in a real-world setting. In our study we aim to create a human-allied AI workflow which could efficiently adapt to specific users and perform effectively in real-world scenarios. We propose a hybrid deep learning approach that combines the essence of one-shot learning, classical supervised deep learning methods and human-allied interactions for adaptation. In order to capture maximum information and make an efficient diagnosis, video, audio, and text modalities are utilized. Our hybrid fusion model achieved a high accuracy of 96.3% on the dataset and attained an AUC of 0.9682, which demonstrates its robustness in discriminating classes in complex real-world scenarios, ensuring that no cases of mild depression are missed during diagnosis. The proposed method is deployed in a cloud-based smartphone application for robust testing. With user-specific adaptations and state-of-the-art methodologies, we present a state-of-the-art model with a user-friendly experience.

Hrithwik Shalu, Harikrishnan P, Hari Sankar CN, Akash Das, Saptarshi Majumder, Arnhav Datar, Subin Mathew MS, Anugyan Das, Juned Kadiwala

2020-11-30

General

Statistical stopping criteria for automated screening in systematic reviews.

In Systematic reviews

Active learning for systematic review screening promises to reduce the human effort required to identify relevant documents for a systematic review. Machines and humans work together, with humans providing training data, and the machine optimising the documents that the humans screen. This enables the identification of all relevant documents after viewing only a fraction of the total documents. However, current approaches lack robust stopping criteria, so that reviewers do not know when they have seen all or a certain proportion of relevant documents. This means that such systems are hard to implement in live reviews. This paper introduces a workflow with flexible statistical stopping criteria, which offer real work reductions on the basis of rejecting a hypothesis of having missed a given recall target with a given level of confidence. The stopping criteria are shown on test datasets to achieve a reliable level of recall, while still providing work reductions of on average 17%. Other methods proposed previously are shown to provide inconsistent recall and work reductions across datasets.
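A hedged sketch of this kind of criterion (a simplified illustration of the idea, not the paper's exact procedure) can be built on the hypergeometric distribution: hypothesize the smallest number of still-missed relevant documents that would put recall below the target, and compute how unlikely the observed random sample would then be:

```python
from math import comb, floor

def hypergeom_cdf(k, population, successes, draws):
    """P(X <= k) when drawing `draws` items without replacement from a
    population containing `successes` relevant items."""
    return sum(comb(successes, i) * comb(population - successes, draws - i)
               for i in range(k + 1)) / comb(population, draws)

def stopping_p_value(n_pool, sample_size, sample_hits, found_total, recall_target):
    """H0: enough relevant documents remain in the `n_pool` documents from
    which the random sample was drawn that overall recall is still below
    `recall_target`. Returns P(<= sample_hits relevant in the sample | H0);
    a small value justifies rejecting H0 and stopping screening."""
    # Fewest relevant documents that must remain for recall to miss the target.
    missing = floor(found_total / recall_target) + 1 - found_total
    if missing <= 0:
        return 0.0
    return hypergeom_cdf(sample_hits, n_pool, missing, sample_size)

# Example: 950 relevant documents found so far; a random sample of 400 of the
# 2000 still-unscreened documents contained none. Targeting 95% recall:
p = stopping_p_value(n_pool=2000, sample_size=400, sample_hits=0,
                     found_total=950, recall_target=0.95)
stop = p < 0.05
```

In this hypothetical, missing even the 95% recall target would require at least 51 unseen relevant documents, and finding none of them in a 400-document random sample is so improbable that the reviewer can stop with confidence.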

Callaghan Max W, Müller-Hansen Finn

2020-Nov-28

Active learning, Machine learning, Stopping criteria, Systematic review

Cardiology Cardiology

Derivation with Internal Validation of a Multivariable Predictive Model to Predict COVID-19 Test Results in Emergency Department Patients.

In Academic emergency medicine : official journal of the Society for Academic Emergency Medicine

OBJECTIVES : The COVID-19 pandemic has placed acute care providers in demanding situations in predicting disease given the clinical variability, desire to cohort patients, and high variance in testing availability. An approach to stratify patients by likelihood of disease based on rapidly available emergency department (ED) clinical data would offer significant operational and clinical value. The purpose of this study was to develop and internally validate a predictive model to aid in the discrimination of patients undergoing investigation for COVID-19.

METHODS : All patients greater than 18 years presenting to a single academic ED who were tested for COVID-19 during this index ED evaluation were included. Outcome was defined as the result of COVID-19 PCR testing during the index visit or any positive result within the following 7 days. Variables included chest radiograph interpretation, disease specific screening questions, and laboratory data. Three models were developed with a split-sample approach to predict outcome of the PCR test utilizing logistic regression, random forest, and gradient boosted decision-tree methods. Model discrimination was evaluated comparing AUC and point statistics at a predefined threshold.

RESULTS : 1026 patients, collected between March and April 2020, were included in the study. Overall, there was a disease prevalence of 9.6% in the population under study during this time frame. The logistic regression model was found to have an AUC of 0.89 (95% CI 0.84-0.94) when including four features: exposure history, temperature, WBC, and chest radiograph result. The random forest method resulted in an AUC of 0.86 (95% CI 0.79-0.92) and gradient boosting had an AUC of 0.85 (95% CI 0.79-0.91). With a consistently held negative predictive value, the logistic regression model had a positive predictive value of 0.29 (0.2-0.39), compared with 0.2 (0.14-0.28) for random forest and 0.22 (0.15-0.3) for the gradient boosted method.

CONCLUSION : The derived predictive models offer good discriminating capacity for COVID-19 disease and provide interpretable and usable methods for the providers caring for these patients at the important crossroads of the community and the health system. We found that the logistic regression model using exposure history, temperature, WBC, and chest x-ray result had the greatest discriminatory capacity while remaining the most interpretable model. Integrating a predictive-model-based approach into COVID-19 testing decisions and patient care pathways and locations could add efficiency and accuracy and decrease uncertainty.
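
The split-sample workflow described above can be sketched with scikit-learn. The data below are synthetic stand-ins for the paper's four features (exposure history, temperature, WBC, chest radiograph result); the generating coefficients and prevalence are invented for illustration, not taken from the study.

```python
# Minimal sketch: fit a logistic regression on four hypothetical features
# and report AUC on a held-out split, mirroring the paper's workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),          # exposure history (binary)
    rng.normal(37.0, 0.8, n),       # temperature, deg C
    rng.normal(8.0, 3.0, n),        # WBC, 10^9/L
    rng.integers(0, 2, n),          # chest radiograph: abnormal?
])
# synthetic outcome loosely tied to the features (low prevalence)
logit = -4.0 + 1.2 * X[:, 0] + 0.8 * (X[:, 1] - 37.0) + 1.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Here `roc_auc_score` on the held-out split plays the role of the reported AUC; point statistics such as positive predictive value would then be read off at a chosen probability threshold.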

McDonald Samuel A, Medford Richard J, Basit Mujeeb A, Diercks Deborah B, Courtney D Mark

2020-Nov-28

COVID-19, Clinical Prediction Models, Informatics, Machine Learning

General General

A Tiny CNN Architecture for Medical Face Mask Detection for Resource-Constrained Endpoints

ArXiv Preprint

The world is going through one of the most dangerous pandemics of all time with the rapid spread of the novel coronavirus (COVID-19). According to the World Health Organisation, the most effective way to thwart the transmission of coronavirus is to wear medical face masks. Monitoring the use of face masks in public places has been a challenge because manual monitoring could be unsafe. This paper proposes an architecture for detecting medical face masks for deployment on resource-constrained endpoints with extremely low memory footprints. A small development board with an ARM Cortex-M7 microcontroller clocked at 480 MHz and just 496 KB of framebuffer RAM has been used for deployment of the model. Using the TensorFlow Lite framework, the model is quantized to further reduce its size. The proposed model is 138 KB post quantization and runs at an inference speed of 30 FPS.

Puranjay Mohan, Aditya Jyoti Paul, Abhay Chirania

2020-11-30

General General

Machine learning models predict coagulopathy in spontaneous intracerebral hemorrhage patients in ER.

In CNS neuroscience & therapeutics

AIMS : Coagulation abnormality is one of the primary concerns for patients with spontaneous intracerebral hemorrhage admitted to the ER. Conventional laboratory indicators require hours for coagulopathy diagnosis, which makes appropriate intervention within the optimal window difficult. This study evaluates the possibility of building efficient coagulopathy prediction models using data mining and machine learning algorithms.

METHODS : A retrospective cohort enrolled 1668 cases with acute spontaneous intracerebral hemorrhage from three medical centers, excluding those under antithrombotic therapies. Coagulopathy-related clinical parameters were initially screened by univariate analysis. Two machine learning algorithms, the random forest and the support vector machine, were deployed via an approach of four-fold cross-validation to screen out the most important parameters contributing to the occurrence of coagulopathy. Model discrimination was assessed using metrics, including accuracy, precision, recall, and F1 score.

RESULTS : Albumin/globulin ratio, neutrophil count, lymphocyte percentage, aspartate transaminase, alanine transaminase, hemoglobin, platelet count, white blood cell count, neutrophil percentage, and systolic and diastolic pressure were identified as major predictors of the occurrence of acute coagulopathy. Compared with the support vector machine, the model based on the random forest algorithm showed better accuracy (93.1%, 95% confidence interval [CI]: 0.913-0.950), precision (92.4%, 95% CI: 0.897-0.951), F1 score (91.5%, 95% CI: 0.889-0.964), and recall (93.6%, 95% CI: 0.909-0.964), and yielded a higher area under the receiver operating characteristic curve (AU-ROC) (0.962, 95% CI: 0.942-0.982).

CONCLUSION : The constructed models exhibit good prediction accuracy and efficiency. It might be used in clinical practice to facilitate target intervention for acute coagulopathy in patients with spontaneous intracerebral hemorrhage.
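
The four-fold cross-validation and the accuracy/precision/recall/F1 metrics described in the methods can be sketched with scikit-learn. The data below are synthetic stand-ins for the eleven clinical predictors, not the study's cohort; the label-generating rule is an invented assumption for illustration.

```python
# Sketch of four-fold cross-validation of a random forest classifier,
# scored with the same metrics the paper reports.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(42)
n = 600
X = rng.normal(size=(n, 11))            # 11 candidate clinical predictors
# synthetic label driven by a few "important" columns plus noise
y = (X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=4,
                        scoring=("accuracy", "precision", "recall", "f1"))
acc = scores["test_accuracy"].mean()    # mean accuracy over the 4 folds
```

Averaging each `test_*` array over the four folds yields summary metrics comparable to those quoted in the results, and a confidence interval can be attached via the fold-to-fold spread or bootstrapping.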

Zhu Fengping, Pan Zhiguang, Tang Ying, Fu Pengfei, Cheng Sijie, Hou Wenzhong, Zhang Qi, Huang Hong, Sun Yirui

2020-Nov-28

coagulopathy, intracranial hemorrhage, machine learning, random forest, support vector machine

General General

Predicting amphibian intraspecific diversity with machine learning: Challenges and prospects for integrating traits, geography, and genetic data.

In Molecular ecology resources

The growing availability of genetic datasets, in combination with machine learning frameworks, offers great potential to answer long-standing questions in ecology and evolution. One such question has intrigued population geneticists, biogeographers, and conservation biologists: What factors determine intraspecific genetic diversity? This question is challenging to answer because many factors may influence genetic variation, including life history traits, historical influences, and geography, and the relative importance of these factors varies across taxonomic and geographic scales. Furthermore, interpreting the influence of numerous, potentially correlated variables is difficult with traditional statistical approaches. To address these challenges, we analyzed repurposed data using machine learning and investigated predictors of genetic diversity, focusing on Nearctic amphibians as a case study. We aggregated species traits, range characteristics, and >42,000 genetic sequences for 299 species using open-access scripts and various databases. After identifying important predictors of nucleotide diversity with random forest regression, we conducted follow-up analyses to examine the roles of phylogenetic history, geography, and demographic processes on intraspecific diversity. Although life history traits were not important predictors for this dataset, we found significant phylogenetic signal in genetic diversity within amphibians. We also found that salamander species at northern latitudes contain lower genetic diversity. Data repurposing and machine learning provide valuable tools for detecting patterns with relevance for conservation, but concerted efforts are needed to compile meaningful datasets with greater utility for understanding global biodiversity.

Barrow Lisa N, da Fonseca Emanuel Masiero, Thompson Coleen E P, Carstens Bryan C

2020-Nov-29

Caudata, data repurposing, latitude, nucleotide diversity, phylogenetic signal, random forests

Surgery Surgery

Potential efficacy of dendritic cell immunomodulation in the treatment of osteoarthritis.

In Rheumatology (Oxford, England)

Dendritic cells (DCs) are a cluster of heterogeneous antigen-presenting cells that play a pivotal role in both innate and adaptive immune responses. Few reports have discussed their role in OA immunopathogenesis. Recently, DCs derived from the synovial fluid of OA mice were shown to have increased expression of toll-like receptors. Moreover, in vitro studies have concluded that DCs derived from OA patients secrete high levels of inflammatory cytokines. Likewise, a significant increase in CD123+BDCA-2 plasmacytoid DCs has been observed in the synovial fluid of OA patients. Furthermore, DCs have a peripheral tolerance potential and can become regulatory under specific circumstances. This could be exploited as a promising tool to eliminate immunoinflammatory manifestations in OA. In this review, the potential roles DCs could play in OA pathogenesis are described. In addition, suggestions are made for the development of new immunotherapeutic strategies involving intra-articular injection of tolerogenic plasmacytoid DCs to treat OA inflammation.

Alahdal Murad, Zhang Hui, Huang Rongxiang, Sun Wei, Deng Zhiqin, Duan Li, Ouyang Hongwei, Wang Daping

2020-Nov-29

OA, cartilage repair, dendritic cells, immunomodulation

General General

A deep community based approach for large scale content based X-ray image retrieval.

In Medical image analysis

A computer assisted system for automatic retrieval of medical images with similar image contents can serve as an efficient management tool for handling and mining large scale data, and can also be used as a tool in clinical decision support systems. In this paper, we propose a deep community based automated medical image retrieval framework for extracting similar images from a large scale X-ray database. The framework integrates a deep learning-based image feature generation approach and a network community detection technique to extract similar images. When compared with the state-of-the-art medical image retrieval techniques, the proposed approach demonstrated improved performance. We evaluated the performance of the proposed method on two large scale chest X-ray datasets, where given a query image, the proposed approach was able to extract images with similar disease labels with a precision of 85%. To the best of our knowledge, this is the first deep community based image retrieval application on large scale chest X-ray database.

Haq Nandinee Fariah, Moradi Mehdi, Wang Z Jane

2020-Oct-17

Community detection, Content based image retrieval (CBIR), Deep learning, Graph

General General

Breast cancer, screening and diagnostic tools: All you need to know.

In Critical reviews in oncology/hematology

Breast cancer is one of the most frequent malignancies among women worldwide. Methods for screening and diagnosis allow health care professionals to provide personalized treatments that improve the outcome and survival. Scientists and physicians are working side-by-side to develop evidence-based guidelines and equipment to detect cancer earlier. However, the lack of comprehensive interdisciplinary information and understanding between biomedical, medical, and technology professionals makes innovation of new screening and diagnosis tools difficult. This critical review gathers, for the first time, information concerning normal breast and cancer biology, established and emerging methods for screening and diagnosis, staging and grading, molecular and genetic biomarkers. Our purpose is to address key interdisciplinary information about these methods for physicians and scientists. Only the multidisciplinary interaction and communication between scientists, health care professionals, technical experts and patients will lead to the development of better detection tools and methods for an improved screening and early diagnosis.

Barba Diego, León-Sosa Ariana, Lugo Paulina, Suquillo Daniela, Torres Fernando, Surre Frederic, Trojman Lionel, Caicedo Andrés

2020-Nov-11

Artificial intelligence, Biopsy, Breast cancer, Diagnosis, Genetic profiling, Mammography, Screening, Tools

General General

Evaluation of machine learning algorithms to predict the hydrodynamic radii and transition temperatures of chemo-biologically synthesized copolymers.

In Computers in biology and medicine

Elastin-like polypeptides (ELP) belong to a family of recombinant polymers that shows great promise as biocompatible drug delivery and tissue engineering materials. ELPs aggregate above a characteristic transition temperature (Tt). We have previously shown that the Tt and size of the resulting aggregates can be controlled by changing the ELP's solution environment (polymer concentration, salt concentration, and pH). When coupled to a synthetic polyelectrolyte, polyethyleneimine (PEI), ELP retains its Tt behavior and gains the ability to be crosslinked into defined particle sizes. This paper explores several machine learning models to predict the Tt and hydrodynamic radius (Rh) of ELP and two ELP-PEI polymers in varying solution conditions. An exhaustive design of experiments matrix consisting of 81 conditions of interest with varying salt concentration (0, 0.2, 1 M NaCl), pH (3, 7, 10), polymer concentration (0.1, 0.17, 0.3 mg/mL), and polymer type (ELP, ELP-PEI800, ELP-PEI10K) was investigated. The five models used in this study were multiple linear regression, elastic-net, support vector regression, multi-layer perceptron, and random forest. A multi-layer perceptron model was found to have the highest accuracy, with an R2 score of 0.97 for both Rh and Tt. This was followed closely by the random forest model, with an R2 of 0.94 for Rh and 0.95 for Tt. Feature importance was determined using the random forest and linear regression models. Both models showed that salt concentration and polymer type were the two most influential factors that determined Rh, while salt concentration was the dominant factor for Tt.
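
The feature-importance step can be sketched with scikit-learn's random forest regressor. The data below are synthetic stand-ins for the paper's design matrix (salt concentration, pH, polymer concentration, polymer type), and the response-generating rule is an assumption constructed so that salt concentration dominates, echoing but not reproducing the paper's finding.

```python
# Sketch: fit a random forest on a synthetic design matrix and read off
# per-feature importances, as used for the paper's feature ranking.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
salt = rng.choice([0.0, 0.2, 1.0], n)     # NaCl, M
ph = rng.choice([3.0, 7.0, 10.0], n)
conc = rng.choice([0.1, 0.17, 0.3], n)    # polymer, mg/mL
ptype = rng.integers(0, 3, n)             # ELP, ELP-PEI800, ELP-PEI10K
X = np.column_stack([salt, ph, conc, ptype])
# synthetic Tt-like response dominated by salt (invented coefficients)
y = 40 - 15 * salt - 0.5 * ph + rng.normal(scale=1.0, size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = rf.feature_importances_     # one weight per column, sums to 1
```

Because the synthetic response is driven mainly by the salt column, the first importance weight comes out largest, which is how a ranking like the paper's "salt concentration was the dominant factor for Tt" is obtained from a fitted forest.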

Cobb Jared S, Seale Maria A, Janorkar Amol V

2020-Nov-21

Elastin-like polypeptide, Feature selection, Hydrodynamic radius, Machine learning, Model, Transition temperature

General General

Using machine learning-based analytics of daily activities to identify modifiable risk factors for falling in Parkinson's disease.

In Parkinsonism & related disorders ; h5-index 58.0

BACKGROUND : Although risk factors that lead to falling in Parkinson's disease (PD) have been previously studied, the established predictors are mostly non-modifiable. A novel method for fall risk assessment may provide more insight into preventable high-risk activities to reduce future falls.

OBJECTIVES : To explore the prediction of falling in PD patients using a machine learning-based approach.

METHOD : 305 PD patients, with or without a history of falls within the past month, were recruited. Data including clinical demographics, medications, and balance confidence, scaled by the 16-item Activities-Specific Balance Confidence Scale (ABC-16), were entered into the supervised machine learning models using XGBoost to explore the prediction of fallers/recurrent fallers in two separate models.

RESULTS : 99 (32%) patients were fallers and 58 (19%) were recurrent fallers. The accuracy of the model in predicting falls was 72% (p = 0.001). The most important factors were item 7 (sweeping the floor), item 5 (reaching on tiptoes), and item 12 (walking in a crowded mall) of the ABC-16 scale, followed by disease stage and duration. When recurrent falls were analysed, the model had higher accuracy (81%, p = 0.02). The strongest predictors of recurrent falls were items 12, 5, and 10 (walking across a parking lot), followed by disease stage and current age.

CONCLUSION : Our machine learning-based study demonstrated that predictors of falling combined demographics of PD with environmental factors, including high-risk activities that require cognitive attention and changes in vertical and lateral orientations. This enables physicians to focus on modifiable factors and appropriately implement fall prevention strategies for individual patients.

Panyakaew Pattamon, Pornputtapong Natapol, Bhidayasiri Roongroj

2020-Nov-19

Activities-specific balance confidence scale, Fall prediction, Fear of falling, Machine learning, Parkinson's disease

General General

Significant Symptoms and Non-Symptom-Related Factors for Malaria Diagnosis in Endemic Regions of Indonesia.

In International journal of infectious diseases : IJID : official publication of the International Society for Infectious Diseases

OBJECTIVES : This study aims to identify significant symptoms and non-symptom-related factors for malaria diagnosis in endemic regions of Indonesia.

METHODS : Medical records are collected from patients suffering from malaria and other febrile diseases from public hospitals in endemic regions of Indonesia. Interviews with eight Indonesian medical doctors are conducted. Feature selection and machine learning techniques are used to develop malaria classifiers for identifying significant symptoms and non-symptom-related factors.

RESULTS : Seven significant symptoms (duration of fever, headache, nausea and vomiting, heartburn, severe symptom, dizziness and joint pain) and patients' history of malaria as a non-symptom-related factor contribute most to malaria diagnosis. As a symptom, fever duration is more significant than temperature or fever for distinguishing malaria from other febrile diseases. Shivering, fever and sweating (known to indicate malaria presence in Indonesia) are shown to be less significant than other symptoms in endemic regions.

CONCLUSIONS : The three most suitable malaria classifiers were developed to identify significant features that can be used to distinguish malaria from other febrile diseases. Based on extensive experiments with the classifiers, the significant features identified can help medical doctors in the clinical diagnosis of malaria and raise public awareness of significant malaria symptoms at early stages.

Bria Yulianti Paula, Yeh Chung-Hsing, Bedingfield Susan

2020-Nov-26

Malaria classifier, Malaria diagnosis, Malaria symptom, Non-symptom-related factor

General General

Recognized trophoblast-like cells conversion from human embryonic stem cells by BMP4 based on convolutional neural network.

In Reproductive toxicology (Elmsford, N.Y.)

The use of models of stem cell differentiation into trophoblastic cells provides an effective perspective for understanding the early molecular events in the establishment and maintenance of human pregnancy. In combination with newly developed deep learning technology, automated identification of this process can greatly accelerate the contribution to relevant knowledge. Based on the transfer learning technique, we used a convolutional neural network to distinguish microscopic images of embryonic stem cells (ESCs) from differentiated trophoblast-like cells (TBL). To tackle the problem of insufficient training data, data augmentation strategies were used. The results showed that the convolutional neural network could successfully recognize trophoblast cells and stem cells automatically, but could not distinguish TBL from the immortalized trophoblast cell lines in vitro (JEG-3 and HTR8-SVneo). We compared the recognition performance of commonly used convolutional neural networks, including DenseNet, VGG16, VGG19, InceptionV3, and Xception. This study extends the deep learning technique to trophoblast cell phenotype classification and paves the way for automatic bright-field microscopic image analysis of trophoblast cells in the future.

Liu Yajun, Yi Zhang, Cui Jinquan

2020-Nov-26

convolutional neural network, microscopy image processing, trophoblast cell

General General

Conditional Generative Adversarial Networks (cGANs) aided motion correction of dynamic 18F-FDG PET brain studies.

In Journal of nuclear medicine : official publication, Society of Nuclear Medicine

This work set out to develop a motion correction approach aided by conditional generative adversarial network (cGAN) methodology that allows reliable, data-driven determination of involuntary subject motion during dynamic 18F-FDG brain studies. Methods: Ten healthy volunteers (5M/5F, 27 ± 7 years, 70 ± 10 kg) underwent a test-retest 18F-FDG PET/MRI examination of the brain (N = 20). The imaging protocol consisted of a 60-min PET list-mode acquisition contemporaneously acquired with MRI, including MR navigators and a 3D time-of-flight MR-angiography sequence. Arterial blood samples were collected as a reference standard representing the arterial input function (AIF). Training of the cGAN was performed using 70% of the total data sets (N = 16, randomly chosen), which was corrected for motion using MR navigators. The resulting cGAN mappings (between individual frames and the reference frame (55-60min p.i.)) were then applied to the test data set (remaining 30%, N = 6), producing artificially generated low-noise images from early high-noise PET frames. These low-noise images were then co-registered to the reference frame, yielding 3D motion vectors. Performance of cGAN-aided motion correction was assessed by comparing the image-derived input function (IDIF) extracted from a cGAN-aided motion corrected dynamic sequence against the AIF based on the areas-under-the-curves (AUCs). Moreover, clinical relevance was assessed through direct comparison of the average cerebral metabolic rates of glucose (CMRGlc) values in grey matter (GM) calculated using the AIF and the IDIF. Results: The absolute percentage-difference between AUCs derived using the motion-corrected IDIF and the AIF was (1.2 ± 0.9) %. The GM CMRGlc values determined using these two input functions differed by less than 5% ((2.4 ± 1.7) %). Conclusion: A fully-automated data-driven motion compensation approach was established and tested for 18F-FDG PET brain imaging. cGAN-aided motion correction enables the translation of non-invasive clinical absolute quantification from PET/MR to PET/CT by allowing the accurate determination of motion vectors from the PET data itself.

Shiyam Sundar Lalith Kumar, Iommi David, Muzik Otto, Chalampalakis Zacharias, Klebermass Eva-Maria, Hienert Marius, Rischka Lucas, Lanzenberger Rupert, Hahn Andreas, Pataraia Ekaterina, Traub-Weidinger Tatjana, Beyer Thomas

2020-Nov-27

Deep learning, Head-motion correction, Image Processing, Neurology, PET, Patlak analysis, Research Methods, [18F]FDG brain, absolute quantification

General General

Robust inference of positive selection on regulatory sequences in the human brain.

In Science advances

A longstanding hypothesis is that divergence between humans and chimpanzees might have been driven more by regulatory-level adaptations than by protein-sequence adaptations. This has especially been suggested for regulatory adaptations in the evolution of the human brain. We present a new method to detect positive selection on transcription factor binding sites, based on measuring predicted affinity change with a machine learning model of binding. Unlike other methods, this approach requires neither defining a priori neutral sites nor detecting accelerated evolution, thus removing major sources of bias. We scanned for signals of positive selection at CTCF binding sites in 29 human and 11 mouse tissues or cell types. We found that human brain-related cell types have the highest proportion of positive selection. This result is consistent with the view that adaptive evolution of gene regulation has played an important role in the evolution of the human brain.

Liu Jialin, Robinson-Rechavi Marc

2020-Nov

General General

Assessment and statistical modelling of airborne microorganisms in Madrid.

In Environmental pollution (Barking, Essex : 1987)

The limited evidence available suggests that the interaction between chemical pollutants and biological particles may intensify respiratory diseases caused by air pollution in urban areas. Unlike air pollutants, which are routinely measured, records of the biotic component are scarce. While pollen concentrations are surveyed daily in most cities, data on airborne bacteria or fungi are not usually available. This work presents the first effort to understand atmospheric pollution by integrating both biotic and abiotic agents, seeking to identify relationships between the Proteobacteria, Actinobacteria and Ascomycota phyla and palynological, meteorological and air quality variables using all biological historical records available in the Madrid Greater Region. The tools employed involve statistical hypothesis tests such as Kruskal-Wallis and machine learning algorithms. A cluster analysis was performed to determine which abiotic variables were able to separate the biotic variables into groups. Significant relationships were found for temperature and relative humidity. In addition, the relative abundance of the biological phyla studied was affected by ambient PM10 and O3 concentrations. Preliminary Generalized Additive Models (GAMs) to predict the biotic relative abundances from these atmospheric variables were developed. The results (r = 0.70) were acceptable given the scarcity of the available data. These models can be used as an indication of the biotic composition when no measurements are available. They are also a good starting point for developing more accurate models and investigating causal relationships.

Cordero José María, Núñez Andrés, García Ana M, Borge Rafael

2020-Nov-21

Bacteria, Biotic and abiotic air pollutants interactions, Fungi, GAMs, Pollen, Statistical modelling

General General

Smart solutions for smart cities: Urban wetland mapping using very-high resolution satellite imagery and airborne LiDAR data in the City of St. John's, NL, Canada.

In Journal of environmental management

With increasing urban development, it has become important for municipalities to understand how ecological processes function. In particular, urban wetlands are vital habitats for the people and animals living amongst them, as wetlands provide great services, including water filtration, flood and drought mitigation, and recreational spaces. As such, urban development plans need to monitor these invaluable ecosystems using time- and cost-efficient approaches. Accordingly, this study is designed to provide an initial response to the need for wetland mapping in the City of St. John's, Newfoundland and Labrador (NL), Canada. Specifically, we produce the first high-resolution wetland map of the City of St. John's using advanced machine learning algorithms, very high-resolution satellite imagery, and airborne LiDAR. An object-based random forest algorithm is applied to features extracted from WorldView-4, GeoEye-1, and LiDAR data to characterize five wetland classes, namely bog, fen, marsh, swamp, and open water, within an urban area. An overall accuracy of 91.12% is obtained for discriminating different wetland types, and a map of wetland surface-water flow connectivity is also produced using LiDAR data. The resulting wetland classification map and the surface-water flow map can help elucidate how wetlands are connected to the city's landscape and ultimately aid in improving wetland-related conservation and management decisions within the City of St. John's.

Mahdianpari Masoud, Granger Jean Elizabeth, Mohammadimanesh Fariba, Warren Sherry, Puestow Thomas, Salehi Bahram, Brisco Brian

2020-Nov-24

City, Image classification, LiDAR, Object-based, Random forest, Remote sensing, VHR imagery, Wetland

General General

Identifying the vegetation type in Google Earth images using a convolutional neural network: a case study for Japanese bamboo forests.

In BMC ecology

BACKGROUND : Classifying and mapping vegetation are crucial tasks in environmental science and natural resource management. However, these tasks are difficult because conventional methods such as field surveys are highly labor-intensive. Identification of target objects from visual data using computer techniques is one of the most promising ways to reduce the costs and labor of vegetation mapping. Although deep learning with convolutional neural networks (CNNs) has recently become a standard solution for image recognition and classification, detecting ambiguous objects such as vegetation remains difficult. In this study, we investigated the effectiveness of adopting the chopped picture method, a recently described protocol for CNNs, and evaluated the efficiency of CNNs for plant community detection from Google Earth images.
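The chopped picture method tiles a large scene into small patches that are classified individually. A minimal sketch of the tiling step is below; the tile size and the non-overlapping stride are assumptions for illustration, not necessarily the protocol's exact settings:

```python
import numpy as np

def chop_picture(image: np.ndarray, tile: int = 56) -> np.ndarray:
    """Split an H x W x C image into non-overlapping tile x tile patches,
    discarding partial tiles at the edges (one common variant of the
    chopped picture protocol; stride and overlap are assumptions here)."""
    h, w = image.shape[:2]
    patches = [
        image[r:r + tile, c:c + tile]
        for r in range(0, h - tile + 1, tile)
        for c in range(0, w - tile + 1, tile)
    ]
    return np.stack(patches)

scene = np.zeros((224, 300, 3), dtype=np.uint8)  # stand-in for a Google Earth tile
patches = chop_picture(scene)  # each patch would then be fed to the CNN
```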

RESULTS : We selected bamboo forests as the target and obtained Google Earth images from three regions in Japan. The best trained CNN model correctly detected over 90% of the targets. Our results showed that the identification accuracy of the CNN was higher than that of conventional machine learning methods.

CONCLUSIONS : Our results demonstrated that CNN and the chopped picture method are potentially powerful tools for high-accuracy automated detection and mapping of vegetation.

Watanabe Shuntaro, Sumi Kazuaki, Ise Takeshi

2020-Nov-27

Convolutional neural network, Google earth imagery, Vegetation mapping

Radiology Radiology

Can computed tomography-based radiomics potentially discriminate between anterior mediastinal cysts and type B1 and B2 thymomas?

In Biomedical engineering online

BACKGROUND : Anterior mediastinal cysts (AMC) are often misdiagnosed as thymomas and undergo surgical resection, causing unnecessary treatment and wasting medical resources. The purpose of this study is to explore the potential of computed tomography (CT)-based radiomics for distinguishing AMC from type B1 and B2 thymomas.

METHODS : A group of 188 patients with pathologically confirmed AMC (106 cases, misdiagnosed as thymomas on CT) or thymomas (82 cases) who underwent routine chest CT from January 2010 to December 2018 was retrospectively analyzed. The lesions were manually delineated using ITK-SNAP software, and radiomics features were extracted using the artificial intelligence kit (AK) software. A total of 180 tumour texture features were extracted from enhanced CT and unenhanced CT, respectively. The general test, correlation analysis, and LASSO were used for feature selection, and the radiomics signature (radscore) was then obtained. A combined model including the radscore and independent clinical factors was developed. Model performance was evaluated in terms of discrimination and calibration curves.
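The LASSO-to-radscore step common to radiomics pipelines like this one can be sketched with an L1-penalised logistic model, which drives most texture-feature coefficients to exactly zero and leaves a sparse weighted sum as the signature. Everything below (features, labels, penalty strength) is an invented stand-in:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# stand-in: 188 patients x 180 texture features, binary label (AMC vs thymoma)
X = rng.normal(size=(188, 180))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, 188) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)
# L1 penalty performs LASSO-style feature selection; C controls sparsity
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_[0])            # surviving features
radscore = Xs @ lasso.coef_[0] + lasso.intercept_[0] # per-patient signature
```

In the paper only four (unenhanced) and three (enhanced) features survived selection; the radscore is then combined with clinical factors in a second model.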

RESULTS : Two radscore models were constructed from the unenhanced and enhanced phases based on the selected four and three features, respectively. The AUC, sensitivity, and specificity of the enhanced radscore model were 0.928, 89.3%, and 83.8% in the training dataset and 0.899, 84.6%, and 87.5% in the test dataset (higher than the unenhanced radscore model). The combined model of enhanced CT including radiomics features and independent clinical factors yielded an AUC, sensitivity and specificity of 0.941, 82.1%, and 94.6% in the training dataset and 0.938, 92.3%, and 87.5% in the test dataset (higher than the unenhanced combined model and enhanced radscore model).

CONCLUSIONS : The study suggests that the combined model on enhanced CT provides a potential tool to facilitate the differential diagnosis of AMC and type B1 and B2 thymomas.

Liu Lulu, Lu Fangxiao, Pang Peipei, Shao Guoliang

2020-Nov-27

Anterior mediastinal cysts, Enhanced CT, Radiomics, Thymomas

General General

Machine learning algorithm for early detection of end-stage renal disease.

In BMC nephrology ; h5-index 39.0

BACKGROUND : End stage renal disease (ESRD) describes the most severe stage of chronic kidney disease (CKD), when patients need dialysis or renal transplant. There is often a delay in recognizing, diagnosing, and treating the various etiologies of CKD. The objective of the present study was to employ machine learning algorithms to develop a prediction model for progression to ESRD based on a large-scale multidimensional database.

METHODS : This study analyzed 10,000,000 medical insurance claims from 550,000 patient records using a commercial health insurance database. Inclusion criteria were patients over the age of 18 diagnosed with CKD Stages 1-4. We compiled 240 predictor candidates, divided into six feature groups: demographics, chronic conditions, diagnosis and procedure features, medication features, medical costs, and episode counts. We used a feature embedding method based on implementation of the Word2Vec algorithm to further capture temporal information for the three main components of the data: diagnosis, procedures, and medications. For the analysis, we used the gradient boosting tree algorithm (XGBoost implementation).
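The prediction step can be sketched with scikit-learn's gradient boosting in place of XGBoost, and random feature vectors in place of the paper's learned Word2Vec code embeddings; the 0.1 decision threshold is the one the authors report, everything else below is an invented stand-in:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# stand-ins for patient feature vectors (in the paper: demographics, costs,
# episode counts, plus Word2Vec embeddings of diagnosis/procedure/medication codes)
X = rng.normal(size=(2000, 40))
y = (X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 1.0, 2000) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = gbt.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, proba)    # the paper's C-statistic
pred = (proba >= 0.1).astype(int)   # paper applies a 0.1 probability threshold
```

A low threshold like 0.1 trades precision for sensitivity, which matches the screening use case: flag possible ESRD progression for a nephrology referral.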

RESULTS : The C-statistic for the model was 0.93 (95% confidence interval 0.916-0.943), with a sensitivity of 0.715 and specificity of 0.958. Positive Predictive Value (PPV) was 0.517, and Negative Predictive Value (NPV) was 0.981. For the top 1 percentile of patients identified by our model, the PPV was 1.0. In addition, for the top 5 percentile of patients identified by our model, the PPV was 0.71. All the results above were obtained on the test data only, and the threshold used to obtain these results was 0.1. Notable features contributing to the model were chronic heart and ischemic heart disease as comorbidities, patient age, and number of hypertensive crisis events.

CONCLUSIONS : When a patient is approaching the threshold of ESRD risk, a warning message can be sent electronically to the physician, who will initiate a referral for a nephrology consultation to ensure an investigation to hasten the establishment of a diagnosis and initiate management and therapy when appropriate.

Segal Zvi, Kalifa Dan, Radinsky Kira, Ehrenberg Bar, Elad Guy, Maor Gal, Lewis Maor, Tibi Muhammad, Korn Liat, Koren Gideon

2020-Nov-27

Algorithm, End stage renal disease, Machine learning, Prediction model

Surgery Surgery

Risk factors and socio-economic burden in pancreatic ductal adenocarcinoma operation: a machine learning based analysis.

In BMC cancer

BACKGROUND : Surgical resection is the main curative option for pancreatic ductal adenocarcinoma (PDAC). However, this operation is complex, and the peri-operative risk is high, making patients more likely to be admitted to the intensive care unit (ICU). Therefore, establishing a risk model that predicts ICU admission is valuable for preventing post-operative deterioration and potentially reducing the socio-economic burden.

METHODS : We retrospectively collected 120 clinical features from 1242 PDAC patients, including demographic data, pre-operative and intra-operative blood tests, in-hospital duration, and ICU status. Machine learning pipelines, including Support Vector Machine (SVM), Logistic Regression, and Lasso Regression, were employed to choose an optimal model for predicting ICU admission. Ordinary least-squares regression (OLS) and Lasso Regression were adopted in the correlation analysis of post-operative bleeding, total in-hospital duration, and discharge costs.
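One way the risk/protective split the results describe can be read off a model is via the signed coefficients of a linear-kernel SVM; this sketch is not the authors' pipeline, and the four feature names and the generative rule are invented for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
features = ["age", "op_duration", "monocyte_count", "rbc_count"]
X = rng.normal(size=(600, 4))
# hypothetical rule: first three raise ICU risk, RBC count lowers it
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2] - 0.9 * X[:, 3]
y = (logit + rng.normal(0, 1, 600) > 0).astype(int)

svm = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X, y)
coefs = svm.named_steps["svc"].coef_[0]
risk = [f for f, w in zip(features, coefs) if w > 0]        # positive weight
protective = [f for f, w in zip(features, coefs) if w < 0]  # negative weight
```

With a non-linear kernel (often needed for the best AUROC) coefficients are unavailable, and permutation importance or SHAP values would be used instead.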

RESULTS : The SVM model achieved higher performance than the other two models, with an AUROC of 0.80. Features such as age, duration of operation, monocyte count, and intra-operative partial arterial pressure of oxygen (PaO2) were risk factors for ICU admission. Protective factors included RBC count, analgesic-pump dexmedetomidine (DEX), and intra-operative maintenance of DEX. Basophil percentage, duration of the operation, and total infusion volume were risk variables for length of ICU stay. Bilirubin, CA125, and pre-operative albumin were associated with post-operative bleeding volume. Operation duration was the most important factor for discharge costs, while pre-operative lymphocyte percentage and absolute count were associated with lower costs.

CONCLUSIONS : We observed that several new indicators, such as DEX, monocyte count, basophil percentage, and intra-operative PaO2, showed good predictive value for the likelihood of ICU admission and the duration of ICU stay. This work provides a useful reference for pre-operative risk assessment in PDAC surgery.

Zhang Yijue, Zhu Sibo, Yuan Zhiqing, Li Qiwei, Ding Ruifeng, Bao Xunxia, Zhen Timing, Fu Zhiliang, Fu Hailong, Xing Kaichen, Yuan Hongbin, Chen Tao

2020-Nov-27

Intensive care unit, Machine learning, Pancreatic adenocarcinoma, Peri-operative, Risk prediction, Socio-economic burden

Surgery Surgery

Implementing biological markers as a tool to guide clinical care of patients with pancreatic cancer.

In Translational oncology

A major obstacle for the effective treatment of pancreatic ductal adenocarcinoma (PDAC) is its molecular heterogeneity, reflected by the diverse clinical outcomes and responses to therapies that occur. The tumors of patients with PDAC must therefore be closely examined and classified before treatment initiation in order to predict the natural evolution of the disease and the response to therapy. To stratify patients, it is absolutely necessary to identify biological markers that are highly specific and reproducible, and easily measurable by inexpensive sensitive techniques. Several promising strategies to find biomarkers are already available or under development, such as the use of liquid biopsies to detect circulating tumor cells, circulating free DNA, methylated DNA, circulating RNA, and exosomes and extracellular vesicles, as well as immunological markers and molecular markers. Such biomarkers are capable of classifying patients with PDAC and predicting their therapeutic sensitivity. Interestingly, developing chemograms using primary cell lines or organoids and analyzing the resulting high-throughput data via artificial intelligence would be highly beneficial to patients. How can exploiting these biomarkers benefit patients with resectable, borderline resectable, locally advanced, and metastatic PDAC? In fact, the utility of these biomarkers depends on the patient's clinical situation. At the early stages of the disease, the clinician's priority lies in rapid diagnosis, so that the patient receives surgery without delay; at advanced disease stages, where therapeutic possibilities are severely limited, the priority is to determine the PDAC tumor subtype so as to estimate the clinical outcome and select a suitable effective treatment.

Iovanna Juan

2020-Nov-25

Biomarkers, Immunotherapy, Pancreatic cancer, Patients stratification, Personalized medicine

General General

Machine learning glass transition temperature of styrenic random copolymers.

In Journal of molecular graphics & modelling

For styrenic random copolymers, the glass transition temperature, Tg, is an important thermophysical parameter that is sometimes difficult to measure and determine experimentally. Data-driven modeling approaches provide alternative methods to predict Tg in a fast and robust way. A Gaussian process regression (GPR) model is investigated to capture the statistical relationship between important quantum chemical descriptors and the glass transition temperature of styrenic random copolymers. Forty-eight samples with experimentally measured Tg values, ranging from 246 K to 426 K, are explored. The modeling approach demonstrates high accuracy and stability, and provides a novel and promising tool for efficient and low-cost estimation of copolymer Tg values.
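A GPR model of the kind described also yields per-sample uncertainty, which is valuable at this small data size (48 samples). The sketch below uses synthetic descriptors and a synthetic Tg response, not the paper's quantum chemical data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
# hypothetical quantum-chemical descriptors for 48 copolymer samples
X = rng.uniform(-1, 1, size=(48, 3))
# invented smooth Tg response (K), roughly within the paper's 246-426 K range
tg = 336 + 60 * X[:, 0] - 25 * X[:, 1] ** 2 + rng.normal(0, 3, 48)

# RBF kernel for the smooth trend, WhiteKernel to absorb measurement noise
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(), normalize_y=True).fit(X, tg)
pred, std = gpr.predict(X[:5], return_std=True)  # mean prediction + uncertainty
r = np.corrcoef(gpr.predict(X), tg)[0, 1]
```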

Zhang Yun, Xu Xiaojie

2020-Nov-10

Copolymer, Glass transition temperature, Machine learning, Styrene

General General

Development of an ensemble of machine learning algorithms to model aerobic granular sludge reactors.

In Water research

Machine learning models provide an adaptive tool to predict the performance of treatment reactors under varying operational and influent conditions. Aerobic granular sludge (AGS) is still an emerging technology and does not have a long history of full-scale application. There is, therefore, a scarcity of long-term data in this field, which has hampered the development of data-driven models. In this study, a machine learning model was developed for simulating the AGS process using 475 days of data collected from three lab-based reactors. Inputs were selected based on RReliefF ranking after multicollinearity reduction. A five-stage model structure was adopted in which each parameter was predicted using separate models with the preceding parameters as inputs. An ensemble of artificial neural networks, support vector regression, and adaptive neuro-fuzzy inference systems was used to improve the models' performance. The developed model was able to predict the MLSS, MLVSS, SVI5, SVI30, granule size, and effluent COD, NH4-N, and PO43- with average R2, nRMSE, and sMAPE of 95.7%, 0.032, and 3.7%, respectively.
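The ensembling idea (averaging predictions from different model families) can be sketched with scikit-learn's VotingRegressor over an ANN and an SVR; the paper's third member, ANFIS, has no scikit-learn counterpart and is omitted, and the data below are synthetic stand-ins for one reactor output:

```python
import numpy as np
from sklearn.ensemble import VotingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(11)
# stand-ins for operational/influent variables driving one output (e.g. MLSS)
X = rng.uniform(0, 1, size=(475, 5))
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.05, 475)

# average an ANN and an SVR, mimicking the multi-family ensemble in the paper
ens = VotingRegressor([
    ("ann", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ("svr", SVR(C=10.0)),
]).fit(X, y)
r2 = ens.score(X, y)  # R2, one of the paper's reported metrics
```

In the paper's five-stage structure, each predicted parameter is then appended to the inputs of the next stage's ensemble.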

Zaghloul Mohamed Sherif, Iorhemen Oliver Terna, Hamza Rania Ahmed, Tay Joo Hwa, Achari Gopal

2020-Nov-19

Adaptive Neuro-Fuzzy Inference Systems, Aerobic granular sludge, Artificial neural networks, Machine Learning, Sequencing Batch Reactors, Support Vector Regression

Radiology Radiology

Automated size-specific dose estimates using deep learning image processing.

In Medical image analysis

An automated, vendor-independent system for dose monitoring in computed tomography (CT) examinations involving ionizing radiation is presented in this paper. The system provides precise size-specific dose estimates (SSDE) following the American Association of Physicists in Medicine (AAPM) regulations. Our dose management system can operate on incomplete DICOM header metadata by retrieving the necessary information from the dose report image using optical character recognition. For the determination of the patient's effective diameter and water-equivalent diameter, a convolutional neural network is employed for semantic segmentation of the body area in axial CT slices. Validation experiments for the assessment of SSDE determination and subsequent stages of our methodology involved a total of 335 CT series (60 352 images) from both public databases and our clinical data. We obtained a mean body-area segmentation accuracy of 0.9955 and a Jaccard index of 0.9752, yielding slice-wise mean absolute errors below 2 mm for effective diameter and 1 mm for water-equivalent diameter, both below 1%. Three modes of the SSDE determination approach were investigated and compared with the results provided by the commercial system GE DoseWatch in three body region categories: head, chest, and abdomen. Statistical analysis highlighted some significant differences, especially in the head category.
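Once the body area is segmented, the SSDE itself is a small calculation: the water-equivalent diameter follows AAPM Report 220, and the size-dependent conversion factor multiplying CTDIvol uses the exponential fit tabulated in AAPM Report 204. The sketch below uses the 32 cm body-phantom coefficients; the example input values are arbitrary:

```python
import math

def water_equivalent_diameter(mean_hu: float, area_cm2: float) -> float:
    """Water-equivalent diameter Dw (cm) from the mean CT number (HU) and
    area (cm^2) of the segmented body region (AAPM Report 220)."""
    area_w = (mean_hu / 1000.0 + 1.0) * area_cm2
    return 2.0 * math.sqrt(area_w / math.pi)

def ssde(ctdi_vol: float, dw_cm: float) -> float:
    """SSDE (mGy) for the 32 cm body phantom; exponential conversion-factor
    coefficients as tabulated in AAPM Report 204."""
    f = 3.704369 * math.exp(-0.03671937 * dw_cm)
    return f * ctdi_vol

dw = water_equivalent_diameter(mean_hu=-80.0, area_cm2=600.0)  # example slice
dose = ssde(ctdi_vol=10.0, dw_cm=dw)
```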

Juszczyk Jan, Badura Pawel, Czajkowska Joanna, Wijata Agata, Andrzejewski Jacek, Bozek Pawel, Smolinski Michal, Biesok Marta, Sage Agata, Rudzki Marcin, Wieclawek Wojciech

2020-Nov-12

Artificial intelligence, Computed tomography, Deep learning, Dose management, Medical information systems, Radiation imaging

Surgery Surgery

Attitudes of the Surgical Team Toward Artificial Intelligence in Neurosurgery: an International Two-Stage Cross-sectional Survey.

In World neurosurgery ; h5-index 47.0

BACKGROUND : Artificial Intelligence (AI) has the potential to disrupt how we diagnose and treat patients. Previous work by our group has demonstrated that the majority of patients and their relatives feel comfortable with the application of AI to augment surgical care. The aim of this study was to similarly evaluate the attitudes of surgeons and the wider surgical team towards the role of AI in neurosurgery.

METHODS : In a two-stage cross-sectional survey, an initial open-question qualitative survey was created to determine the perspectives of the surgical team on AI in neurosurgery, including surgeons, anaesthetists, nurses, and theatre practitioners. Thematic analysis was performed to develop a second-stage quantitative survey that was distributed via social media. We assessed the extent to which respondents agreed and were comfortable with real-world AI implementation using a 5-point Likert scale.

RESULTS : In the first-stage survey, 33 participants responded. Six main themes were identified: imaging interpretation and pre-operative diagnosis; co-ordination of the surgical team; operative planning; real-time alerting of hazards and complications; autonomous surgery; and post-operative management and follow-up. In the second stage, 100 participants responded. Respondents somewhat or strongly agreed with AI being utilised for imaging interpretation (62%), operative planning (82%), co-ordination of the surgical team (70%), real-time alerting of hazards and complications (85%), and autonomous surgery (66%). The role of AI within post-operative management and follow-up received less agreement (49%).

CONCLUSION : This survey highlights that the majority of surgeons and the wider surgical team both agree and are comfortable with the application of AI within neurosurgery.

Horsfall Hugo Layard, Palmisciano Paolo, Khan Danyal Z, Muirhead William, Koh Chan Hee, Stoyanov Danail, Marcus Hani J

2020-Nov-25

Artificial Intelligence, machine learning, neurosurgery, operative planning, survey

General General

Identifying early gastric cancer under magnifying narrow-band images via deep learning: a multicenter study.

In Gastrointestinal endoscopy ; h5-index 72.0

BACKGROUND AND AIMS : Narrow-band imaging with magnifying endoscopy (ME-NBI) has shown advantages in the diagnosis of early gastric cancer (EGC). However, proficiency in diagnostic algorithms requires substantial expertise and experience. In this study, we aimed to develop a computer-aided diagnostic model, EGCM, to analyze and assist in the diagnosis of EGC under ME-NBI.

METHODS : A total of 1777 ME-NBI images from 295 cases were collected from 3 centers. These cases were randomly divided into a training cohort (TC, n=170), an internal test cohort (ITC, n=73), and an external test cohort (ETC, n=52). EGCM based on VGG-19 with a single fully connected 2-classification layer was developed via fine-tuning and validated on all of the cohorts. Furthermore, we compared the model with 8 endoscopists with varying experience. Primary comparison measures included accuracy (ACC), the area under the receiver operating characteristic curve (AUC), sensitivity (Sn), specificity (Sp), positive predictive value (PPV), and negative predictive value (NPV).

RESULTS : EGCM acquired AUCs of 0.808 in the ITC and 0.813 in the ETC. Moreover, EGCM achieved similar predictive performance to the senior endoscopists (ACC: 0.770 vs 0.755, p=0.355; Sn: 0.792 vs 0.767, p=0.183; Sp: 0.745 vs 0.742, p=0.931), but better than the junior endoscopists (ACC: 0.770 vs 0.728, p<0.05). After referring to the results of EGCM, the average diagnostic ability of the endoscopists was significantly improved in terms of accuracy, sensitivity, PPV, and NPV (p<0.05).

CONCLUSION : EGCM exhibited comparable performance to senior endoscopists in the diagnosis of EGC and showed the potential value in aiding and improving the diagnosis of EGC by endoscopists.

Hu Hao, Gong Lixin, Dong Di, Zhu Liang, Wang Min, He Jie, Shu Lei, Cai Yiling, Cai Shilun, Su Wei, Zhong Yunshi, Li Cong, Zhu Yongbei, Fang Mengjie, Zhong Lianzhen, Yang Xin, Zhou Pinghong, Tian Jie

2020-Nov-25

Deep learning, Early gastric cancer, Magnifying endoscopy, Narrow-band imaging

General General

A Unified Framework for Dopamine Signals across Timescales.

In Cell ; h5-index 250.0

Rapid phasic activity of midbrain dopamine neurons is thought to signal reward prediction errors (RPEs), resembling temporal difference errors used in machine learning. However, recent studies describing slowly increasing dopamine signals have instead proposed that they represent state values and arise independent from somatic spiking activity. Here we developed experimental paradigms using virtual reality that disambiguate RPEs from values. We examined dopamine circuit activity at various stages, including somatic spiking, calcium signals at somata and axons, and striatal dopamine concentrations. Our results demonstrate that ramping dopamine signals are consistent with RPEs rather than value, and this ramping is observed at all stages examined. Ramping dopamine signals can be driven by a dynamic stimulus that indicates a gradual approach to a reward. We provide a unified computational understanding of rapid phasic and slowly ramping dopamine signals: dopamine neurons perform a derivative-like computation over values on a moment-by-moment basis.
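The reward prediction error the abstract refers to is the standard temporal-difference error from reinforcement learning. A minimal sketch shows why a ramping value function under a gradual approach to reward yields sustained positive TD errors, matching the "derivative-like computation over values" conclusion; the value trajectory below is an invented example:

```python
import numpy as np

def td_errors(values: np.ndarray, rewards: np.ndarray, gamma: float = 0.99) -> np.ndarray:
    """Temporal-difference errors delta_t = r_t + gamma * V(s_{t+1}) - V(s_t),
    the RPE signal the abstract likens phasic dopamine to.
    The value after the final state is taken as 0."""
    v_next = np.append(values[1:], 0.0)
    return rewards + gamma * v_next - values

# a value function ramping up as a reward at the final step is approached
values = np.linspace(0.0, 1.0, 11)        # V(s_t) rises toward the reward
rewards = np.zeros(11)
rewards[-1] = 1.0                          # reward delivered at the last step
deltas = td_errors(values, rewards)        # positive ramp of TD errors
```

Because each delta is essentially the discounted increment of V, a smoothly increasing value produces a slow ramp of positive RPEs rather than a value signal per se.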

Kim HyungGoo R, Malik Athar N, Mikhael John G, Bech Pol, Tsutsui-Kimura Iku, Sun Fangmiao, Zhang Yajun, Li Yulong, Watabe-Uchida Mitsuko, Gershman Samuel J, Uchida Naoshige

2020-Nov-25

General General

Magnetic resonance imaging for chronic pain: diagnosis, manipulation, and biomarkers.

In Science China. Life sciences

Pain is a multidimensional subjective experience with biological, psychological, and social factors. Whereas acute pain can be a warning signal for the body to avoid excessive injury, long-term and ongoing pain may develop into chronic pain. More than 100 million people in China live with chronic pain, which imposes a huge socioeconomic burden. Studying the mechanisms of pain and developing effective analgesia approaches are important for basic and clinical research. Recently, with the development of brain imaging and data analytical approaches, the neural mechanisms of chronic pain have been widely studied. In the first part of this review, we briefly introduce magnetic resonance imaging and conventional analytical approaches for brain imaging data. Then, we review brain alterations caused by several chronic pain disorders, including localized and widespread primary pain, primary headaches and orofacial pain, musculoskeletal pain, and neuropathic pain, and present meta-analytical results to show brain regions associated with the pathophysiology of chronic pain. Next, we review brain changes induced by pain interventions, such as pharmacotherapy, neuromodulation, and acupuncture. Lastly, we review emerging studies that combined advanced machine learning and neuroimaging techniques to identify diagnostic, prognostic, and predictive biomarkers in chronic pain patients.

Tu Yiheng, Cao Jin, Bi Yanzhi, Hu Li

2020-Nov-23

biomarkers, chronic pain, machine learning, magnetic resonance imaging

General General

Repeated centrifuging and washing concentrates bacterial samples in peritoneal dialysis for optimal culture: an original article.

In BMC microbiology

BACKGROUND : Bacterial cultures allow the identification of infectious disease pathogens. However, obtaining the results of conventional culture methods is time-consuming, taking at least two days. A more efficient alternative is the use of concentrated bacterial samples to accelerate culture growth. Our study focuses on the development of a high-yield sample concentrating technique.

RESULTS : A total of 71 paired samples were obtained from patients on peritoneal dialysis (PD). The peritoneal dialysates were repeatedly centrifuged and then washed with saline, namely the centrifuging and washing (C&W) method. The concentrated samples were Gram-stained and inoculated into culture plates. The equivalent unprocessed dialysates were cultured as the reference method. The times to culture results for the two methods were compared. The reference method yielded no positive Gram stain results, whereas the C&W method immediately gave positive Gram stain results for 28 samples (p < 0.001). The culture-negative rate was lower with the C&W method (5/71) than with the reference method (13/71) (p = 0.044). The average time to bacterial identification was shorter with the C&W method (22.0 h) than with the reference method (72.5 h) (p < 0.001).

CONCLUSIONS : The C&W method successfully concentrated bacterial samples and superseded blood culture bottles for developing adequate bacterial cultures. The C&W method may decrease the culture report time, thus improving the treatment of infectious diseases.

Tien Ni, You Bang-Jau, Lin Hsuan-Jen, Chang Chieh-Ying, Chou Che-Yi, Lin Hsiu-Shen, Chang Chiz-Tzung, Wang Charles C N, Chen Hung-Chih

2020-Nov-27

Bacterial culture, Peritoneal dialysis, Peritonitis, Repeat centrifuging and washing

Oncology Oncology

Prediction of Cranial Radiotherapy Treatment in Pediatric Acute Lymphoblastic Leukemia Patients Using Machine Learning: A Case Study at MAHAK Hospital.

In Asian Pacific journal of cancer prevention : APJCP

BACKGROUND : Acute Lymphoblastic Leukemia (ALL) is the most common blood disease in children and is responsible for the most deaths among them. Owing to major improvements in treatment protocols over the past 50 years, survival has risen dramatically and now stands at about 90 percent. Many investigations into the efficacy of cranial radiotherapy have found that, without it, patient outcomes did not change and in some cases even improved.

METHODS : The main aim of this study is to predict cranial radiotherapy treatment in pediatric acute lymphoblastic leukemia patients using machine learning. Specifically, the paper predicts the necessity of cranial radiotherapy (CRT), a treatment modality that has been used for many years in this group of patients. For this purpose, a case study is considered at MAHAK charity hospital. We focus on ALL patients aged 0 to 17 treated at MAHAK, one of the best centers for the treatment of childhood malignancies in Iran. The dataset analyzed in this study was gathered by the research team from patients' paper-based files and, after data cleaning, consists of 241 observations on patients with 31 attributes. Our machine learning model for predicting cranial radiotherapy treatment is a stacked ensemble classifier of independently strong models with a meta-learner to tune the weights and parameters of the base classifiers.
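A stacked ensemble of the kind described (strong base classifiers plus a meta-learner) can be sketched with scikit-learn's StackingClassifier; the synthetic dataset below only mirrors the cohort's shape (241 patients, 31 attributes) and is not the MAHAK data, and the base learners are illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in matching the 241-patient, 31-attribute cohort shape
X, y = make_classification(n_samples=241, n_features=31, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# base learners' out-of-fold predictions feed a logistic meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
```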

RESULTS : The stacked ensemble classifier shows strong performance, with an AUC of 87.52%. Moreover, the attributes are ranked by predictive power, and the most important variable for predicting CRT necessity is disease relapse.

CONCLUSION : In conclusion, and consistent with previous studies regarding CRT, eliminating CRT from the treatment of childhood ALL is not only cost-effective but also healthier for patients. Furthermore, it would be valuable to expand clinical databases by creating more synthetic health databases, not only for research purposes but also to help physicians keep track of their patients' status.

Kashef Amir Arash, Khatibi Toktam, Mehrvar Azim

2020-Nov-01

Acute lymphoblastic leukemia (ALL), Cranial Radiotherapy, Stacked ensemble, childhood blood cancer, prediction

General General

Dynamic-range compression and contrast enhancement in swept-source optical coherence tomography systems with a frequency gain compensation amplifier.

In Journal of biomedical optics

SIGNIFICANCE : Optical coherence tomography (OCT) has been widely used in clinical studies. However, the image quality of OCT decreases with increasing imaging depth since the light is rapidly attenuated in biological tissues.

AIM : We present a compensation approach to preserve weak high-frequency signals from deep structures and compress the dynamic range of the detected signal for superior analog-to-digital conversion and image display capability.

APPROACH : A homemade frequency gain compensation amplifier is designed and fabricated to amplify the electrical signal from a balanced photodetector and compensate for the signal attenuation in swept-source OCT (SSOCT).

RESULTS : Imaging of various objects demonstrates that this cost-efficient technique effectively enhances the contrast of deep-tissue images.

CONCLUSIONS : A frequency gain compensation amplifier is designed and used to compress the dynamic range of the electrical signal detected by the photodetector of an SSOCT system, which enables weak signals from deep structures to be acquired by the ADC and displayed with enhanced local contrast.

Liang Shanshan, Li Xinyu, Qin Yao, Zhang Jun

2020-Nov

Swept-source OCT, attenuation compensation, contrast enhancement, dynamic-range compression, frequency gain compensation amplifier

Dermatology Dermatology

Deep learning-level melanoma detection by interpretable machine learning and imaging biomarker cues.

In Journal of biomedical optics

SIGNIFICANCE : Melanoma is a deadly cancer that physicians struggle to diagnose early because they lack the knowledge to differentiate benign from malignant lesions. Deep machine learning approaches to image analysis offer promise but lack the transparency to be widely adopted as stand-alone diagnostics.

AIM : We aimed to create a transparent machine learning technology (i.e., not deep learning) to discriminate melanomas from nevi in dermoscopy images and an interface for sensory cue integration.

APPROACH : Imaging biomarker cues (IBCs) were used to train an ensemble machine learning classifier (Eclass), while raw images were used to train a deep learning classifier. We compared the areas under the diagnostic receiver operating characteristic curves.

RESULTS : Our interpretable machine learning algorithm outperformed the leading deep-learning approach 75% of the time. The user interface displayed only the diagnostic imaging biomarkers as IBCs.

CONCLUSIONS : From a translational perspective, Eclass is better suited than convolutional machine learning for diagnosis in that physicians can embrace it faster than black-box outputs. Imaging biomarker cues may be used during sensory cue integration in clinical screening. Our method may be applied to other image-based diagnostic analyses, including pathology and radiology.

Gareau Daniel S, Browning James, Correa Da Rosa Joel, Suarez-Farinas Mayte, Lish Samantha, Zong Amanda M, Firester Benjamin, Vrattos Charles, Renert-Yuval Yael, Gamboa Mauricio, Vallone María G, Barragán-Estudillo Zamira F, Tamez-Peña Alejandra L, Montoya Javier, Jesús-Silva Miriam A, Carrera Cristina, Malvehy Josep, Puig Susana, Marghoob Ashfaq, Carucci John A, Krueger James G

2020-Nov

diagnostic application, imaging biomarkers, machine learning, sensory cue integration, skin cancer classification

General General

Artificial intelligence applied for the rapid identification of new antimalarial candidates with dual-stage activity.

In ChemMedChem ; h5-index 40.0

Increasing reports of multi-drug resistant malaria parasites urge the discovery of new effective drugs with different chemical scaffolds. Protein kinases play a key role in many cellular processes, such as signal transduction and cell division, making them interesting targets in many diseases. Protein kinase 7 (PK7) is an orphan kinase of the Plasmodium genus, essential for the sporogonic cycle of these parasites. Here, we applied a robust and integrative artificial intelligence-assisted virtual screening (VS) approach using shape-based and machine learning models to identify new potential PK7 inhibitors with in vitro antiplasmodial activity. Eight virtual hits were experimentally evaluated, and compound LabMol-167 inhibited ookinete conversion of P. berghei and blood stages of P. falciparum at nanomolar concentrations with low cytotoxicity in mammalian cells. Since PK7 does not have an essential role in Plasmodium blood stages and our virtual screening strategy aimed for both PK7 and blood-stage inhibition, we conducted an in silico target fishing approach and proposed that this compound might also inhibit P. falciparum PK5, acting as a possible dual-target inhibitor. Finally, docking studies of LabMol-167 with P. falciparum PK7 and PK5 proteins highlighted key interactions for further hit-to-lead optimization.

Lima Marilia N Nascimento, Borba Joyce V B, Cassiano Gustavo C, Mottin Melina, Mendonça Sabrina Silva, Silva Arthur C, Tomaz Kaira C P, Calit Juliana, Bargieri Daniel Y, Costa Fabio T M, Andrade Carolina Horta

2020-Nov-27

Machine learning, PK7, Virtual screening, malaria, shape-based

Radiology Radiology

Comparison of automated and manual DWI-ASPECTS in acute ischemic stroke: total and region-specific assessment.

In European radiology ; h5-index 62.0

OBJECTIVE : To compare the DWI-Alberta Stroke Program Early Computed Tomography Score calculated by a deep learning-based automatic software tool (eDWI-ASPECTS) with neuroradiologists' evaluation in acute stroke, with emphasis on its performance in the 10 individual ASPECTS regions, and to determine the reasons for inconsistencies between eDWI-ASPECTS and the neuroradiologists' evaluation.

METHODS : This retrospective study included patients with middle cerebral artery stroke who underwent MRI from 2010 to 2019. All scans were evaluated by eDWI-ASPECTS and by two independent neuroradiologists (with 15 and 5 years of experience in stroke imaging). Inter-rater agreement and agreement between the manual and automated methods, for the total score and for each region, were evaluated by calculating Kendall's tau-b, the intraclass correlation coefficient (ICC), and the kappa coefficient.

RESULTS : In total, 309 patients met our study criteria. For total ASPECTS, eDWI-ASPECTS and the manual raters had a strong positive correlation (Kendall's tau-b = 0.827 for junior raters vs. eDWI-ASPECTS; 0.870 for inter-raters; 0.848 for senior raters vs. eDWI-ASPECTS) and excellent agreement (ICC = 0.923 for junior raters and automated scores; 0.954 for inter-raters; 0.939 for senior raters and automated scores). Agreement differed across individual ASPECTS regions: all regions showed good to excellent concordance except the M5 region (κ = 0.216 for junior raters and automated scores), the internal capsule (κ = 0.525 for junior raters and automated scores), and the caudate (κ = 0.586 for senior raters and automated scores).
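
The abstract's per-region agreement figures rest on standard statistics; as an illustration of how a κ value like those above is computed, here is a minimal pure-Python sketch of Cohen's kappa on invented rater labels (the data below are not from the study).

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected chance agreement from the raters' marginal label frequencies
    expected = sum(freq_a[l] * freq_b[l] for l in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical binary region scores (1 = normal, 0 = infarcted) from two raters
manual    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
automated = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohen_kappa(manual, automated), 3))  # ≈ 0.524
```

Note that kappa discounts the agreement expected by chance, which is why a raw 80% agreement here yields only a moderate κ.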

CONCLUSION : The eDWI-ASPECTS performed equally well as senior neuroradiologists' evaluation, although interference by uncertain scoring rules and midline shift resulted in poor to moderate consistency in the M5, internal capsule, and caudate nucleus regions.

KEY POINTS : • The eDWI-ASPECTS based on deep learning perform equally well as senior neuroradiologists' evaluations. • Among the individual ASPECTS regions, the M5, internal capsule, and caudate regions mainly affected the overall consistency. • Uncertain scoring rules and midline shift are the main reasons for regional inconsistency.

Cheng XiaoQing, Su XiaoQin, Shi JiaQian, Liu QuanHui, Zhou ChangSheng, Dong Zheng, Xing Wei, Lu HaiTao, Pan ChengWei, Li XiuLi, Yu YiZhou, Zhang LongJiang, Lu GuangMing

2020-Nov-27

Brain ischemia, Magnetic resonance imaging, Middle cerebral artery, Stroke

General General

Reliable segmentation of 2D cardiac magnetic resonance perfusion image sequences using time as the 3rd dimension.

In European radiology ; h5-index 62.0

OBJECTIVES : Cardiac magnetic resonance (CMR) first-pass perfusion is an established noninvasive diagnostic imaging modality for detecting myocardial ischemia. A CMR perfusion sequence provides a time series of 2D images for dynamic contrast enhancement of the heart. Accurate myocardial segmentation of the perfusion images is essential for quantitative analysis and it can facilitate automated pixel-wise myocardial perfusion quantification.

METHODS : In this study, we compared different deep learning methodologies for CMR perfusion image segmentation. We evaluated the performance of several image segmentation methods using convolutional neural networks, such as the U-Net in 2D and 3D (2D plus time) implementations, with and without an additional motion-correction preprocessing step. We also present a modified U-Net architecture with a novel type of temporal pooling layer that results in improved performance.

RESULTS : The best Dice scores were 0.86 and 0.90 for the LV myocardium and LV cavity, and the best Hausdorff distances were 2.3 and 2.1 pixels, respectively, using 5-fold cross-validation. The methods were corroborated in a second independent test set of 20 patients with similar performance (best Dice score 0.84 for the LV myocardium).
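
As a quick illustration of the two evaluation metrics reported above, here is a minimal sketch of the Dice coefficient and the symmetric Hausdorff distance on toy binary masks (the masks are invented for illustration).

```python
import math

def dice(mask_a, mask_b):
    """Dice coefficient between binary masks given as sets of pixel coords."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets, in pixels."""
    def directed(src, dst):
        # Largest distance from any point in src to its nearest point in dst
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 2)}
print(dice(a, b))       # overlap of 3 pixels -> 2*3 / (4+4) = 0.75
print(hausdorff(a, b))  # dominated by the outlier pixel (2, 2)
```

Dice rewards overlap regardless of boundary shape, while the Hausdorff distance is driven entirely by the worst-case boundary error, which is why the two are usually reported together.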

CONCLUSIONS : Our results showed that the LV myocardial segmentation of CMR perfusion images is best performed using a combination of motion correction and 3D convolutional networks which significantly outperformed all tested 2D approaches. Reliable frame-by-frame segmentation will facilitate new and improved quantification methods for CMR perfusion imaging.

KEY POINTS : • Reliable segmentation of the myocardium offers the potential to perform pixel level perfusion assessment. • A deep learning approach in combination with motion correction, 3D (2D + time) methods, and a deep temporal connection module produced reliable segmentation results.

Sandfort Veit, Jacobs Matthew, Arai Andrew E, Hsu Li-Yueh

2020-Nov-27

Cardiac magnetic resonance imaging, Deep learning, Image segmentation, Myocardial perfusion

General General

Abnormal microscale neuronal connectivity triggered by a proprioceptive stimulus in dystonia.

In Scientific reports ; h5-index 158.0

We investigated modulation of functional neuronal connectivity by a proprioceptive stimulus in sixteen young people with dystonia and eight controls. A robotic wrist interface delivered controlled passive wrist extension movements, the onset of which was synchronised with scalp EEG recordings. Data were segmented into epochs around the stimulus and up to 160 epochs per subject were averaged to produce a Stretch Evoked Potential (StretchEP). Event-related network dynamics were estimated using a methodology that features Wavelet Transform Coherency (WTC). Global Microscale Nodal Strength (GMNS) was introduced to estimate overall engagement of areas into short-lived networks related to the StretchEP, and Global Connectedness (GC) estimated the spatial extent of the StretchEP networks. Dynamic Connectivity Maps showed a striking difference between dystonia and controls, with particularly strong theta band event-related connectivity in dystonia. GC also showed a trend towards higher values in dystonia than controls. In summary, we demonstrate the feasibility of this method to investigate event-related neuronal connectivity in relation to a proprioceptive stimulus in a paediatric patient population. Young people with dystonia show an exaggerated network response to a proprioceptive stimulus, displaying both excessive theta-band synchronisation across the sensorimotor network and widespread engagement of cortical regions in the activated network.

Sakellariou Dimitris F, Dall’Orso Sofia, Burdet Etienne, Lin Jean-Pierre, Richardson Mark P, McClelland Verity M

2020-Nov-27

General General

Amoeba-inspired analog electronic computing system integrating resistance crossbar for solving the travelling salesman problem.

In Scientific reports ; h5-index 158.0

Combinatorial optimization, the search for the best solution among a vast number of legal candidates, requires the development of domain-specific computing architectures that can exploit the computational power of physical processes, as conventional general-purpose computers are not powerful enough. Recently, Ising machines that execute quantum annealing or related mechanisms for rapid search have attracted attention. It is hard, however, to map application problems onto these machines' architectures, and they often converge on illegal candidates. Here, we demonstrate an analogue electronic computing system for solving the travelling salesman problem that mimics the efficient foraging behaviour of an amoeboid organism through the spontaneous dynamics of an electric current in its core, and that achieves high problem-mapping flexibility and resilience using a resistance crossbar circuit. The system has high application potential, as it can find a high-quality legal solution in a time that grows proportionally to the problem size, without suffering from the weaknesses of Ising machines.
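
For context on what the analogue system is searching for: the conventional digital baseline is exhaustive search over tours, whose factorial cost is precisely why heuristic and physical solvers are attractive. A minimal sketch on a made-up 4-city instance:

```python
from itertools import permutations

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    """Exhaustive search over all tours starting at city 0.
    O(n!) time, feasible only for tiny n."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p, dist))
    best_tour = (0,) + best
    return best_tour, tour_length(best_tour, dist)

# Hypothetical symmetric 4-city distance matrix (illustrative only)
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
tour, length = brute_force_tsp(dist)
print(tour, length)  # optimal closed tour of length 23
```

A solver whose runtime grows only linearly with problem size, as claimed for the amoeba-inspired system, is attractive exactly because this exhaustive baseline blows up factorially.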

Saito Kenta, Aono Masashi, Kasai Seiya

2020-Nov-27

General General

Robustness and lethality in multilayer biological molecular networks.

In Nature communications ; h5-index 260.0

Robustness is a prominent feature of most biological systems. Most previous studies have focused on homogeneous molecular networks. Here we propose a comprehensive framework for understanding how the interactions between genes, proteins and metabolites contribute to the determinants of robustness in a heterogeneous biological network. We integrate heterogeneous sources of data to construct a multilayer interaction network composed of a gene regulatory layer, a protein-protein interaction layer, and a metabolic layer. We design a simulated perturbation process to characterize the contribution of each gene to the overall system's robustness, and find that influential genes are enriched in essential and cancer genes. We show that the proposed mechanism predicts a higher vulnerability of the metabolic layer to perturbations applied to genes associated with metabolic diseases. Furthermore, we find that the real network is comparably or more robust than expected in multiple random realizations. Finally, we analytically derive the expected robustness of multilayer biological networks starting from the degree distributions within and between layers. These results provide insights into the non-trivial dynamics occurring in the cell after a genetic perturbation is applied, confirming the importance of including the coupling between different layers of interaction in models of complex biological systems.
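
The paper's simulated perturbation process operates on a multilayer network; as a simplified single-layer analogue (an assumption for illustration, not the authors' exact procedure), one can remove a node and measure the fractional drop in the largest connected component:

```python
from collections import deque

def largest_component(adj, removed=frozenset()):
    """Size of the largest connected component, ignoring 'removed' nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:  # breadth-first search from this component's seed
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

def node_impact(adj, node):
    """Fractional drop in the giant component when 'node' is perturbed."""
    before = largest_component(adj)
    after = largest_component(adj, removed={node})
    return (before - after) / before

# Toy graph: node 'b' bridges the chain a-b-c-d
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(node_impact(adj, "b"))  # removing the bridge halves the component
```

Ranking nodes by this kind of impact score is the basic idea behind identifying "influential" genes, though the paper's version couples perturbations across layers.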

Liu Xueming, Maiorino Enrico, Halu Arda, Glass Kimberly, Prasad Rashmi B, Loscalzo Joseph, Gao Jianxi, Sharma Amitabh

2020-Nov-27

Pathology Pathology

A completely annotated whole slide image dataset of canine breast cancer to aid human breast cancer research.

In Scientific data

Canine mammary carcinoma (CMC) has been used as a model to investigate the pathogenesis of human breast cancer and the same grading scheme is commonly used to assess tumor malignancy in both. One key component of this grading scheme is the density of mitotic figures (MF). Current publicly available datasets on human breast cancer only provide annotations for small subsets of whole slide images (WSIs). We present a novel dataset of 21 WSIs of CMC completely annotated for MF. For this, a pathologist screened all WSIs for potential MF and structures with a similar appearance. A second expert blindly assigned labels, and for non-matching labels, a third expert assigned the final labels. Additionally, we used machine learning to identify previously undetected MF. Finally, we performed representation learning and two-dimensional projection to further increase the consistency of the annotations. Our dataset consists of 13,907 MF and 36,379 hard negatives. We achieved a mean F1-score of 0.791 on the test set and of up to 0.696 on a human breast cancer dataset.

Aubreville Marc, Bertram Christof A, Donovan Taryn A, Marzahl Christian, Maier Andreas, Klopfleisch Robert

2020-Nov-27

Oncology Oncology

[Segmentation of organs at risk in nasopharyngeal cancer for radiotherapy using a self-adaptive Unet network].

In Nan fang yi ke da xue xue bao = Journal of Southern Medical University

OBJECTIVE : To investigate the accuracy of automatic segmentation of organs at risk (OARs) in radiotherapy for nasopharyngeal carcinoma (NPC).

METHODS : The CT image data of 147 NPC patients with manual segmentation of the OARs were randomized into a training set (115 cases), a validation set (12 cases), and a test set (20 cases). An improved network based on the three-dimensional (3D) Unet was established (named AUnet) and trained end-to-end. Organ size was introduced as prior knowledge to guide the design of convolution kernel sizes, enabling the network to better extract features of organs of different sizes. An adaptive histogram equalization algorithm was used to preprocess the input CT images to facilitate contour recognition. The similarity evaluation indices, the Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD), were calculated to verify the validity of the segmentation.

RESULTS : DSC and HD of the test dataset were 0.86±0.02 and 4.0±2.0 mm, respectively. No significant difference was found between the results of AUnet and manual segmentation of the OARs (P > 0.05) except for the optic nerves and the optic chiasm.

CONCLUSIONS : AUnet, an improved deep learning neural network, is capable of automatic segmentation of the OARs in radiotherapy for NPC based on CT images, and for most organs, the results are comparable to those of manual segmentation.

Yang Xin, Li Xueyan, Zhang Xiaoting, Song Fan, Huang Sijuan, Xia Yunfei

2020-Nov-30

AUnet, CT images, auto segmentation, deep learning, improved Unet architecture

Radiology Radiology

Improving the Diagnostic Accuracy of Breast BI-RADS 4 Microcalcification-Only Lesions Using Contrast-Enhanced Mammography.

In Clinical breast cancer

BACKGROUND : Contrast-enhanced mammography (CEM) is a novel breast imaging technique that can provide additional information on breast tissue blood supply. This study aimed to test whether CEM can improve the diagnostic accuracy of Breast Imaging Reporting and Data System (BI-RADS) 4 calcification-only lesions, taking morphology and distribution into consideration.

PATIENTS AND METHODS : Data of patients with suspicious malignant calcification-only lesions (BI-RADS 4) on low-energy CEM and proven pathologic diagnoses were retrospectively collected. Two junior radiologists independently reviewed the two sets of CEM images: the low-energy images (LE), to describe the calcifications by morphology and distribution type, and the recombined images (CE), to record the presence of enhancement. Lesions were divided into low-risk and high-risk groups by calcification morphology, by distribution, and by both. Positive predictive values and misdiagnosis rates (MDR) were compared between LE-only reading and CE reading. Diagnostic performance was also tested using a machine learning method.

RESULTS : The study included 74 lesions (26 malignant and 48 benign). Positive predictive values were significantly higher and MDRs significantly lower using CE images than using LE alone for both the low-risk morphology type and the low-risk distribution type (P < .05). In the low-risk group, MDRs fell from 76.36%-80.00% with LE images alone to 18.18%-24.00% with CE images (P < .05). Using a machine learning method, significant improvements in the area under the receiver operating characteristic curve were observed in both the low-risk and high-risk groups.
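
As a sketch of the metrics compared above: positive predictive value is the fraction of lesions called suspicious that prove malignant on pathology. Assuming, purely for illustration, that the misdiagnosis rate is its complement (the paper defines its own terms), the calculation looks like this with invented counts:

```python
def ppv(tp, fp):
    """Positive predictive value: malignant fraction of positive calls."""
    return tp / (tp + fp)

# Illustrative counts only (not the study's data):
tp_le, fp_le = 11, 39   # low-energy reading alone
tp_ce, fp_ce = 19, 6    # with contrast-enhanced images
for label, tp, fp in [("LE", tp_le, fp_le), ("CE", tp_ce, fp_ce)]:
    # Under the stated assumption, MDR = 1 - PPV (hypothetical definition)
    print(label, round(ppv(tp, fp), 2), round(1 - ppv(tp, fp), 2))
```

The point of the study's comparison is visible even in toy numbers: removing benign lesions from the "suspicious" pile (fewer false positives) raises PPV and lowers the misdiagnosis rate simultaneously.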

CONCLUSION : CEM has the potential to aid in the diagnosis of BI-RADS 4 calcification-only lesions; in particular, those presented as low risk in morphology and/or distribution may benefit more.

Long Rong, Cao Kun, Cao Min, Li Xiao-Ting, Gao Fei, Zhang Fan-Dong, Yu Yi-Zhou, Sun Ying-Shi

2020-Nov-02

Breast cancer, Calcification, Contrast media, Dual energy, Positive predictive value

General General

Reliable or not? An automated classification of webpages about early childhood vaccination using supervised machine learning.

In Patient education and counseling

OBJECTIVE : To investigate the applicability of supervised machine learning (SML) to classify health-related webpages as 'reliable' or 'unreliable' in an automated way.

METHODS : We collected the textual content of 468 different Dutch webpages about early childhood vaccination. Webpages were manually coded as 'reliable' or 'unreliable' based on their alignment with evidence-based vaccination guidelines. Four SML models were trained on part of the data, whereas the remaining data was used for model testing.

RESULTS : All models appeared to be successful in the automated identification of unreliable (F1 scores: 0.54-0.86) and reliable information (F1 scores: 0.82-0.91). Typical words for unreliable information are 'dr', 'immune system', and 'vaccine damage', whereas 'measles', 'child', and 'immunization rate', were frequent in reliable information. Our best performing model was also successful in terms of out-of-sample prediction, tested on a dataset about HPV vaccination.
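
The F1 scores above combine precision and recall into a single number; a minimal sketch with hypothetical counts:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative: of 50 truly unreliable pages, a classifier flags 45 pages
# as unreliable, 40 of them correctly (numbers are hypothetical).
tp, fp, fn = 40, 5, 10
print(round(f1_score(tp, fp, fn), 3))
```

Because it is a harmonic mean, F1 is dragged down by whichever of precision or recall is worse, which is why the abstract's low end (0.54 for unreliable content) signals a class the models handled unevenly.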

CONCLUSION : Automated classification of online content in terms of reliability, using basic classifiers, performs well and is particularly useful to identify reliable information.

PRACTICE IMPLICATIONS : The classifiers can be used as a starting point to develop more complex classifiers, but also warning tools which can help people evaluate the content they encounter online.

Meppelink Corine S, Hendriks Hanneke, Trilling Damian, van Weert Julia C M, Shao Anqi, Smit Eline S

2020-Nov-12

Consumer health information, Misinformation, Reliability, Supervised machine learning, Vaccination

General General

Predicting carbonaceous aerosols and identifying their source contribution with advanced approaches.

In Chemosphere

Organic carbon (OC) and elemental carbon (EC) play important roles in various atmospheric processes and health effects. Predicting carbonaceous aerosols and identifying source contributions are important steps for further epidemiological study and for formulating effective emission control policies. However, we are not aware of any study that has examined predictions of OC and EC, and this work is also the first to use machine learning and hyperparameter optimization methods to predict concentrations of specific aerosol contaminants. This paper describes an investigation of the characteristics and sources of OC and EC in fine particulate matter (PM2.5) from 2005 to 2010 in the City of Taipei. Respective hourly average concentrations of OC and EC were 5.2 μg/m3 and 1.6 μg/m3. We observed obvious seasonal variation in OC but not in EC. Hourly and daily OC and EC concentrations were predicted using a generalized additive model and a grey-wolf-optimized multilayer perceptron model, which could explain up to about 80% of the total variation. Subsequent clustering suggests that traffic emission was the major contributor to OC, accounting for about 80% in the spring, 65% in the summer, and 90% in the fall and winter. In the Taipei area, local emissions were the dominant sources of OC and EC in all seasons, and long-range transport contributed significantly to OC in PM2.5 in spring.

Zhu Jun-Jie, Chen Yu-Cheng, Shie Ruei-Hao, Liu Zhen-Shu, Hsu Chin-Yu

2020-Nov-13

Clustering, Elemental carbon, Hyperparameter optimization method, Machine learning, Organic carbon, Source apportionment

Radiology Radiology

Machine Learning and Improved Quality Metrics in Acute Intracranial Hemorrhage by Noncontrast Computed Tomography.

In Current problems in diagnostic radiology

OBJECTIVE : The timely reporting of critical results in radiology is paramount to improved patient outcomes. Artificial intelligence can improve quality by optimizing clinical radiology workflows. We sought to determine the impact of a United States Food and Drug Administration-approved machine learning (ML) algorithm, which flags computed tomography (CT) head examinations pending interpretation as having a higher probability of intracranial hemorrhage (ICH), on metrics across our healthcare system. We hypothesized that ML is associated with a reduction in report turnaround time (RTAT) and length of stay (LOS) in emergency department (ED) and inpatient populations.

MATERIALS AND METHODS : An ML algorithm was incorporated across CT scanners at imaging sites in January 2018. RTAT and LOS were derived for reports and patients between July 2017 and December 2017 prior to implementation of ML and compared to those between January 2018 and June 2018 after implementation of ML. A total of 25,658 and 24,996 ED and inpatient cases were evaluated across the entire healthcare system before and after ML, respectively.

RESULTS : RTAT decreased from 75 to 69 minutes (P <0.001) at all facilities in the healthcare system. At the level 1 trauma center specifically, RTAT decreased from 67 to 59 minutes (P <0.001). ED LOS decreased from 471 to 425 minutes (P <0.001) for patients without ICH, and from 527 to 491 minutes for those with ICH (P = 0.456). Inpatient LOS decreased from 18.4 to 15.8 days for those without ICH (P = 0.001) and 18.1 to 15.8 days for those with ICH (P = 0.02).

CONCLUSION : We demonstrated that utilization of ML was associated with a statistically significant decrease in RTAT. There was also a significant decrease in LOS for ED patients without ICH, but not for ED patients with ICH. Further evaluation of the impact of such tools on patient care and outcomes is needed.

Davis Melissa A, Rao Balaji, Cedeno Paul A, Saha Atin, Zohrabian Vahe M

2020-Nov-15

General General

Decoding semi-automated title-abstract screening: findings from a convenience sample of reviews.

In Systematic reviews

BACKGROUND : We evaluated the benefits and risks of using the Abstrackr machine learning (ML) tool to semi-automate title-abstract screening and explored whether Abstrackr's predictions varied by review or study-level characteristics.

METHODS : For a convenience sample of 16 reviews for which adequate data were available to address our objectives (11 systematic reviews and 5 rapid reviews), we screened a 200-record training set in Abstrackr and downloaded the relevance (relevant or irrelevant) of the remaining records, as predicted by the tool. We retrospectively simulated the liberal-accelerated screening approach. We estimated the time savings and proportion missed compared with dual independent screening. For reviews with pairwise meta-analyses, we evaluated changes to the pooled effects after removing the missed studies. We explored whether the tool's predictions varied by review and study-level characteristics.

RESULTS : Using the ML-assisted liberal-accelerated approach, we wrongly excluded 0 to 3 (0 to 14%) records that were included in the final reports, but saved a median (IQR) 26 (9, 42) h of screening time. One missed study was included in eight pairwise meta-analyses in one systematic review. The pooled effect for just one of those meta-analyses changed considerably (from MD (95% CI) -1.53 (-2.92, -0.15) to -1.17 (-2.70, 0.36)). Of 802 records in the final reports, 87% were correctly predicted as relevant. The correctness of the predictions did not differ by review (systematic or rapid, P = 0.37) or intervention type (simple or complex, P = 0.47). The predictions were more often correct in reviews with multiple (89%) vs. single (83%) research questions (P = 0.01), or that included only trials (95%) vs. multiple designs (86%) (P = 0.003). At the study level, trials (91%), mixed methods (100%), and qualitative (93%) studies were more often correctly predicted as relevant compared with observational studies (79%) or reviews (83%) (P = 0.0006). Studies at high or unclear (88%) vs. low risk of bias (80%) (P = 0.039), and those published more recently (mean (SD) 2008 (7) vs. 2006 (10), P = 0.02) were more often correctly predicted as relevant.
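
The "proportion missed" reported above can be computed directly from the tool's predictions and the final inclusion decisions; a minimal sketch with hypothetical record IDs:

```python
def screening_outcome(predicted_irrelevant, final_includes):
    """Records the tool would exclude that nevertheless made the final report."""
    missed = predicted_irrelevant & final_includes
    proportion_missed = len(missed) / len(final_includes)
    return missed, proportion_missed

# Hypothetical record IDs (illustrative only)
predicted_irrelevant = {3, 7, 12, 19, 25}
final_includes = {1, 7, 9, 14, 19, 22, 30, 31, 40, 44}
missed, prop = screening_outcome(predicted_irrelevant, final_includes)
print(sorted(missed), prop)
```

In a liberal-accelerated workflow, records in this `missed` set are the cost of the time savings, which is why the study then checks whether their absence materially changes the pooled meta-analytic effects.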

CONCLUSION : Our screening approach saved time and may be suitable in conditions where the limited risk of missing relevant records is acceptable. Several of our findings are paradoxical and require further study to fully understand the tasks to which ML-assisted screening is best suited. The findings should be interpreted in light of the fact that the protocol was prepared for the funder, but not published a priori. Because we used a convenience sample, the findings may be prone to selection bias. The results may not be generalizable to other samples of reviews, ML tools, or screening approaches. The small number of missed studies across reviews with pairwise meta-analyses hindered strong conclusions about the effect of missed studies on the results and conclusions of systematic reviews.

Gates Allison, Gates Michelle, DaRosa Daniel, Elliott Sarah A, Pillay Jennifer, Rahman Sholeh, Vandermeer Ben, Hartling Lisa

2020-Nov-27

Artificial intelligence, Efficiency, Machine learning, Methods, Systematic reviews, Text mining

Radiology Radiology

Artificial intelligence in image reconstruction: The change is here.

In Physica medica : PM : an international journal devoted to the applications of physics to medicine and biology : official journal of the Italian Association of Biomedical Physics (AIFB)

CT has seen impressive innovation among imaging and medical technologies, in both hardware and software. The range and speed of CT scanning improved with the introduction of multidetector-row CT scanners with wide-array detectors and faster gantry rotation speeds. To address concerns over rising radiation doses from increasing use and to improve image quality, CT reconstruction techniques evolved from filtered back projection to commercially released iterative reconstruction techniques and, recently, to deep learning (DL)-based image reconstruction. These newer reconstruction techniques enable improved or retained image quality versus filtered back projection at lower radiation doses. DL can aid image reconstruction by learning from training data without total reliance on a physical model of the imaging process; the artifacts unique to photon-counting detector CT (PCD-CT), caused by charge sharing, K-escape, fluorescence x-ray emission, and pulse pileup, can thus be handled in a data-driven fashion. Given sufficient well-reconstructed images, a well-designed network can be trained to raise image quality above a practical/clinical threshold or to enable new applications. In addition, the much smaller detector pixels of PCD-CT can lead to huge computational costs with traditional model-based iterative reconstruction methods, whereas deep networks, once trained and validated, can be much faster. In this review, we present the techniques, applications, uses, and limitations of deep learning-based image reconstruction methods in CT.

Singh Ramandeep, Wu Weiwen, Wang Ge, Kalra Mannudeep K

2020-Nov-24

Artificial Intelligence, Computed tomography, Deep learning, Image reconstruction

Radiology Radiology

Machine learning for lung CT texture analysis: Improvement of inter-observer agreement for radiological finding classification in patients with pulmonary diseases.

In European journal of radiology ; h5-index 47.0

PURPOSE : To evaluate the capability of ML-based CT texture analysis to improve interobserver agreement and the accuracy of radiological finding assessment in patients with COPD, interstitial lung diseases or infectious diseases.

MATERIALS AND METHODS : Training cases (n = 28), validation cases (n = 17) and test cases (n = 89) who underwent thin-section CT on a 320-detector-row scanner with wide-volume scanning or on one of two 64-detector-row scanners with helical scanning were enrolled in this study. From the 89 test CT datasets, a total of 350 computationally selected ROIs, including normal lung, emphysema, nodular lesion, ground-glass opacity, reticulation and honeycomb, were evaluated by three radiologists as well as by the software. Agreement between consensus readings with and without the software, or the software alone, and the standard references determined by consensus of pulmonologists and chest radiologists was assessed using κ statistics. Overall distinguishing accuracies were compared among all methods by McNemar's test.

RESULTS : Agreement of the consensus readings with and without the software, and of the software alone, with the standard references was significant and substantial to excellent (with the software: κ = 0.91, p < 0.0001; without the software: κ = 0.81, p < 0.0001; the software alone: κ = 0.79, p < 0.0001). The overall differentiation accuracy of consensus reading using the software (94.9% [332/350]) was significantly higher than that of consensus reading without the software (84.3% [295/350], p < 0.0001) and of the software alone (82.3% [288/350], p < 0.0001).
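
The accuracy comparison above uses McNemar's test on paired classifications; a pure-Python sketch with continuity correction (p-value via the one-degree-of-freedom χ² survival function), on invented discordant counts:

```python
import math

def mcnemar(b, c):
    """McNemar's test with continuity correction.
    b = cases method A got right and method B got wrong; c = the reverse."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # For 1 degree of freedom, the chi-square survival function is
    # P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Illustrative discordant counts (not the study's): reading with the software
# correct / without it wrong on 40 ROIs, and the reverse on only 8
chi2, p = mcnemar(40, 8)
print(round(chi2, 2), p < 0.0001)
```

McNemar's test looks only at the discordant pairs, which makes it the right choice here: the many ROIs both readings classify identically carry no information about which method is more accurate.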

CONCLUSION : ML-based CT texture analysis software has potential for improving interobserver agreement and accuracy for radiological finding assessments in patients with COPD, interstitial lung diseases or infectious diseases.

Ohno Yoshiharu, Aoyagi Kota, Takenaka Daisuke, Yoshikawa Takeshi, Ikezaki Aina, Fujisawa Yasuko, Murayama Kazuhiro, Hattori Hidekazu, Toyama Hiroshi

2020-Nov-12

COPD, CT, Connective tissue disease, Interstitial lung disease, Lung

Dermatology Dermatology

Artificial intelligence-based image classification methods for diagnosis of skin cancer: Challenges and opportunities.

In Computers in biology and medicine

Recently, there has been great interest in developing Artificial Intelligence (AI) enabled computer-aided diagnostics solutions for the diagnosis of skin cancer. With the increasing incidence of skin cancers, low awareness among a growing population, and a lack of adequate clinical expertise and services, there is an immediate need for AI systems to assist clinicians in this domain. A large number of skin lesion datasets are available publicly, and researchers have developed AI solutions, particularly deep learning algorithms, to distinguish malignant skin lesions from benign lesions in different image modalities such as dermoscopic, clinical, and histopathology images. Despite the various claims of AI systems achieving higher accuracy than dermatologists in the classification of different skin lesions, these AI systems are still in the very early stages of clinical application in terms of being ready to aid clinicians in the diagnosis of skin cancers. In this review, we discuss advancements in the digital image-based AI solutions for the diagnosis of skin cancer, along with some challenges and future opportunities to improve these AI systems to support dermatologists and enhance their ability to diagnose skin cancer.

Goyal Manu, Knackstedt Thomas, Yan Shaofeng, Hassanpour Saeed

2020-Oct-27

Artificial intelligence, Computer-aided diagnostics, Deep learning, Dermatologists, Digital dermatology, Skin cancer

General General

A deep learning framework for quality assessment and restoration in video endoscopy.

In Medical image analysis

Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects and pixel saturation impede the visual interpretation and the automated analysis of endoscopy videos. Given the widespread use of endoscopy in different clinical applications, robust and reliable identification of such artifacts and the automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods only deal with the detection and restoration of selected artifacts. However, typically endoscopy videos contain numerous artifacts which motivates to establish a comprehensive solution. In this paper, a fully automatic framework is proposed that can: 1) detect and classify six different artifacts, 2) segment artifact instances that have indefinable shapes, 3) provide a quality score for each frame, and 4) restore partially corrupted frames. To detect and classify different artifacts, the proposed framework exploits fast, multi-scale and single stage convolution neural network detector. In addition, we use an encoder-decoder model for pixel-wise segmentation of irregular shaped artifacts. A quality score is introduced to assess video frame quality and to predict image restoration success. Generative adversarial networks with carefully chosen regularization and training strategies for discriminator-generator networks are finally used to restore corrupted frames. The detector yields the highest mean average precision (mAP) of 45.7 and 34.7, respectively for 25% and 50% IoU thresholds, and the lowest computational time of 88 ms allowing for near real-time processing. The restoration models for blind deblurring, saturation correction and inpainting demonstrate significant improvements over previous methods. 
On a set of 10 test videos, an average of 68.7% of video frames successfully passed the quality score (≥0.9) after applying the proposed restoration framework thereby retaining 25% more frames compared to the raw videos. The importance of artifacts detection and their restoration on improved robustness of image analysis methods is also demonstrated in this work.
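
The mAP figures above are computed at 25% and 50% intersection-over-union (IoU) thresholds. As a minimal illustration of the underlying overlap metric (not the authors' evaluation code), IoU for two axis-aligned boxes can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Overlap is zero when the boxes do not intersect.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0
```

A detection counts as a true positive at the 50% threshold when `iou(pred, truth) >= 0.5`.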

Ali Sharib, Zhou Felix, Bailey Adam, Braden Barbara, East James E, Lu Xin, Rittscher Jens

2020-Nov-13

Convolution neural networks, Frame restoration, Multi-class artifact detection, Multi-class artifact segmentation, Video endoscopy

Radiology Radiology

A deep learning framework for pancreas segmentation with multi-atlas registration and 3D level-set.

In Medical image analysis

In this paper, we propose and validate a deep learning framework that incorporates both multi-atlas registration and level-set for segmenting the pancreas from CT volume images. The proposed segmentation pipeline consists of three stages, namely coarse, fine, and refine stages. Firstly, a coarse segmentation is obtained through multi-atlas based 3D diffeomorphic registration and fusion. After that, to learn the connection feature, a 3D patch-based convolutional neural network (CNN) and three 2D slice-based CNNs are jointly used to predict a fine segmentation based on a bounding box determined from the coarse segmentation. Finally, a 3D level-set method is used, with the fine segmentation being one of its constraints, to integrate information from the original image and the CNN-derived probability map to achieve a refined segmentation. In other words, we jointly utilize global 3D location information (registration), contextual information (patch-based 3D CNN), shape information (slice-based 2.5D CNN) and edge information (3D level-set) in the proposed framework. These components form our cascaded coarse-fine-refine segmentation framework. We test the proposed framework on three different datasets with varying intensity ranges obtained from different sources, respectively containing 36, 82 and 281 CT volume images. On each dataset, we achieve an average Dice score over 82%, which is superior or comparable to existing state-of-the-art pancreas segmentation algorithms.
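
The Dice score reported above measures overlap between a predicted and a reference segmentation. A minimal pure-Python sketch (treating segmentations as sets of voxel indices; not the authors' implementation):

```python
def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).

    `pred` and `truth` are iterables of voxel identifiers (e.g. (x, y, z) tuples).
    """
    pred, truth = set(pred), set(truth)
    denom = len(pred) + len(truth)
    # Two empty segmentations agree perfectly by convention.
    return 2 * len(pred & truth) / denom if denom else 1.0
```

A Dice score over 0.82, as in the paper, means the prediction and reference share most of their voxels.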

Zhang Yue, Wu Jiong, Liu Yilong, Chen Yifan, Chen Wei, Wu Ed X, Li Chunming, Tang Xiaoying

2020-Oct-28

Deep learning, Level-set, Multi-atlas registration, Pancreas segmentation

General General

Effect of deep transfer and multi-task learning on sperm abnormality detection.

In Computers in biology and medicine

Analyzing abnormalities in the morphological characteristics of human sperm has long been studied, mainly because of its implications for male infertility, which accounts for approximately half of infertility problems worldwide. Yet, detecting such abnormalities by embryologists has several downsides: analyzing sperm through visual inspection by an expert embryologist is a highly subjective and biased process, and it takes a specialist considerable time to make a diagnosis. Hence, in this paper, we proposed two deep learning algorithms that are able to automate this process. The first algorithm uses a network-based deep transfer learning approach, while the second technique, named Deep Multi-task Transfer Learning (DMTL), employs a novel combination of network-based deep transfer learning and multi-task learning to classify the sperm's head, vacuole, and acrosome as either normal or abnormal. This DMTL technique is capable of classifying all the aforementioned parts of the sperm in a single prediction. Moreover, this is the first time that the concept of multi-task learning has been introduced to the field of Sperm Morphology Analysis (SMA). To benchmark our algorithms, we employed a freely available SMA dataset named MHSMA. During our experiments, our algorithms reached state-of-the-art results on accuracy, precision, and F0.5, as well as other important metrics, such as the Matthews Correlation Coefficient, on one, two, or all three labels. Notably, our algorithms increased the accuracy on the head, acrosome, and vacuole labels by 6.66%, 3.00%, and 1.33%, reaching accuracies of 84.00%, 80.66%, and 94.00%, respectively. Consequently, our algorithms can be used in health institutions, such as fertility clinics, and we offer recommendations for further improving their performance in practice.
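
The F0.5 metric cited above is the F-beta score with beta = 0.5, which weights precision more heavily than recall. A generic sketch of the formula (not tied to the authors' code):

```python
def f_beta(precision, recall, beta=0.5):
    """F-beta score; beta < 1 emphasizes precision, beta > 1 emphasizes recall."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With beta = 1 this reduces to the familiar F1 score, the harmonic mean of precision and recall.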

Abbasi Amir, Miahi Erfan, Mirroshandel Seyed Abolghasem

2020-Nov-21

Deep learning, Human sperm morphometry, Infertility, Multi-task learning, Transfer learning

General General

TP53-Associated Ion Channel Genes Serve as Prognostic Predictor and Therapeutic Targets in Head and Neck Squamous Cell Carcinoma.

In Technology in cancer research & treatment

TP53 mutations are the most frequently occurring mutations in HNSCC and might affect ion channel genes. We aimed to investigate ion channel gene alterations under TP53 mutation and their prognostic implications. The overall mutation status of HNSCC was explored. By screening TP53-associated ion channel genes (TICGs), an ion channel prognostic signature (ICPS) was established through a series of machine learning algorithms; the ICPS was then evaluated and its clinical significance explored. 82 TICGs differentially expressed between TP53WT and TP53MUT samples were screened. Using univariate, LASSO, and multivariate regression analyses, an ICPS containing 7 ion channel genes was established. A series of evaluations confirmed the predictive ability of the ICPS. Functional analysis of the ICPS revealed that cancer-related pathways were enriched in the high-risk group. Next, for clinical application, a nomogram was constructed based on the ICPS and other independent clinicopathological factors. TP53 mutation status strongly affects the expression of ion channel genes. The ICPS we have identified is a strong indicator of HNSCC prognosis and could help with patient stratification as well as identification of novel drug targets.
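
Prognostic signatures of this kind are typically applied as a weighted sum of expression values, with weights taken from the regression fit. A hypothetical sketch (the gene names and coefficients below are invented placeholders, not the paper's 7-gene signature):

```python
def risk_score(expression, coefficients):
    """Linear prognostic risk score: sum of coefficient * expression per gene.

    Genes missing from a sample's expression profile contribute zero.
    """
    return sum(coefficients[g] * expression.get(g, 0.0) for g in coefficients)


# Placeholder signature for illustration only.
COEFS = {"GENE_A": 0.5, "GENE_B": -0.2}
```

Patients are then stratified into high- and low-risk groups by thresholding the score, commonly at the cohort median.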

Sun Jing, Yu Xijiao, Xue Lande, Li Shu, Li Jianxia, Tong Dongdong, Du Yi

BIOMARKER, PROGNOSIS, TP53, head and neck squamous cell carcinoma, ion channel, therapeutic targets

General General

A Novel Instrumented Shoulder Functional Test Using Wearable Sensors in Patients with Brachial Plexus Injury.

In Journal of shoulder and elbow surgery ; h5-index 54.0

BACKGROUND : Since nerve injury of muscles around the shoulder can be easily disguised by "trick movements" of the trunk, shoulder dysfunction following brachial plexus injury is difficult to quantify with conventional clinical tools. Thus, to evaluate brachial plexus injury and quantify its biomechanical consequences, we used inertial measurement units, which offer the sensitivity required to measure the trunk's subtle movements.

METHODS : We calculated six kinematic scores using inertial measurement units placed on the upper arms and the trunk during nine functional tasks. We used both statistical and machine learning techniques to compare the bilateral asymmetry of the kinematic scores of fifteen affected and fifteen able-bodied individuals (controls).
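
Bilateral asymmetry of a kinematic score is commonly quantified as the absolute difference normalized by the bilateral mean; the abstract does not give the paper's exact formula, so the sketch below is one conventional definition:

```python
def asymmetry_index(affected, unaffected):
    """Percent asymmetry between two sides relative to their mean.

    One common definition of a symmetry index; the study's formula may differ.
    """
    mean = (affected + unaffected) / 2
    return abs(affected - unaffected) / mean * 100 if mean else 0.0
```

An index near zero indicates symmetric movement, as expected in able-bodied controls.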

RESULTS : Asymmetry indexes from several kinematic scores of the upper arm and trunk showed a significant difference (p<0.05) between the affected and control groups. A bagged ensemble of decision trees trained with trunk and upper arm kinematic scores correctly classified all controls. All but two patients were also correctly classified. Upper arm scores showed correlation coefficients ranging from 0.55 to 0.76 with conventional clinical scores.

CONCLUSIONS : The proposed wearable technology is a sensitive and reliable tool for objective outcome evaluation of brachial plexus injury and its biomechanical consequences. It may be useful in clinical research and practice, especially in large cohorts with multiple follow-ups.

Nazarahari Milad, Chan K Ming, Rouhani Hossein

2020-Nov-24

Brachial plexus injury, Inertial measurement unit, Objective outcome evaluation, Shoulder kinematics, Trunk trick movement

Pathology Pathology

Challenges in the Development, Deployment & Regulation of Artificial Intelligence (AI) in Anatomical Pathology.

In The American journal of pathology ; h5-index 54.0

Artificial intelligence (AI), deep learning, and other machine learning approaches have made significant advances in recent years, finding applications in almost every industry, including healthcare. AI has proven to be capable of a spectrum of mundane to complex medically oriented tasks previously performed only by board-certified physicians, most recently assisting detection of difficult-to-find cancer on histopathology slides. Although computers will not replace pathologists anytime soon, properly designed AI-based tools hold great potential to increase workflow efficiency and diagnostic accuracy in the practice of pathology. Recent trends, such as data augmentation, crowd-sourcing to generate annotated datasets, and unsupervised learning with molecular and/or clinical outcomes versus human diagnoses as a source of ground truth, are eliminating the direct role for pathologists in algorithm development. Proper integration of AI-based systems into anatomical pathology practice will necessarily require fully digital imaging platforms, an overhaul of legacy information technology infrastructures, modification of laboratory/pathologist workflows, appropriate reimbursement/cost-offsetting models, and ultimately, active participation of pathologists to encourage buy-in and oversight. Regulations tailored to the nature and limitations of AI are currently in development and, when instituted, should promote safe and effective use. This review addresses the challenges in AI development, deployment and regulation to be overcome prior to its widespread adoption in anatomical pathology.

Cheng Jerome Y, Abel Jacob T, Balis Ulysses G J, McClintock David S, Pantanowitz Liron

2020-Nov-24

General General

New expectations for diastolic function assessment in transthoracic echocardiography based on a semi-automated computing of strain-volume loops.

In European heart journal cardiovascular Imaging

AIMS : Early diagnosis of heart failure with preserved ejection fraction (HFpEF) by determination of diastolic dysfunction is challenging. Strain-volume loop (SVL) is a new tool to analyse left ventricular function. We propose a new semi-automated method to calculate SVL area and explore the added value of this index for diastolic function assessment.

METHOD AND RESULTS : Fifty patients (25 amyloidosis, 25 HFpEF) were included in the study and compared with 25 healthy control subjects. Left ventricular ejection fraction was preserved and similar between groups. Classical indices of diastolic function were pathological in the HFpEF and amyloidosis groups, with greater left atrial volume index, greater mitral average E/e' ratio, and higher tricuspid regurgitation velocity (P < 0.0001 compared with controls). SVL analysis demonstrated a significant difference in global area between groups, with the smallest area in the amyloidosis group, the largest in controls, and a mid-range value in the HFpEF group (37 vs. 120 vs. 72 mL.%, respectively, P < 0.0001). Applying a linear discriminant analysis (LDA) classifier, results show a mean area under the curve of 0.91 for the comparison between the HFpEF and amyloidosis groups.
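
The loop area (in mL.%) can be understood as the area enclosed by the strain-volume trajectory over a cardiac cycle. One standard way to compute such an area from sampled points is the shoelace formula; the sketch below is illustrative, not the authors' semi-automated method:

```python
def loop_area(strain, volume):
    """Area enclosed by a closed strain-volume loop via the shoelace formula.

    `strain` and `volume` are equal-length sequences sampled over one cycle.
    """
    n = len(strain)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the loop
        s += volume[i] * strain[j] - volume[j] * strain[i]
    return abs(s) / 2
```

For a unit square traced by the two coordinates, the formula returns an area of 1, which is a quick sanity check.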

CONCLUSION : SVLs area is efficient to identify patients with a diastolic dysfunction. This new semi-automated tool is very promising for future development of automated diagnosis with machine-learning algorithms.

Hubert Arnaud, Le Rolle Virginie, Galli Elena, Bidaud Auriane, Hernandez Alfredo, Donal Erwan

2020-Dec-01

diastolic function, echocardiography, strain, strain–volume loop

General General

Identification of novel prognostic markers of survival time in high-risk neuroblastoma using gene expression profiles.

In Oncotarget ; h5-index 104.0

Neuroblastoma is the most common extracranial solid tumor in childhood. Patients in the high-risk group often have poor outcomes with low survival rates despite several treatment options. This study aimed to identify a genetic signature from gene expression profiles that can serve as a prognostic indicator of survival time in patients with high-risk neuroblastoma and that could provide potential therapeutic targets. RNA-seq count data were downloaded from the UCSC Xena browser and samples were grouped into Short Survival (SS) and Long Survival (LS) groups. Differential gene expression (DGE) analysis, enrichment analyses, regulatory network analysis and machine learning (ML) prediction of survival group were performed. Forty differentially expressed genes (DEGs) were identified, including genes involved in molecular function activities essential for tumor proliferation. DEGs used as features for prediction of survival groups included EVX2, NHLH2, PRSS12, POU6F2, HOXD10, MAPK15, RTL1, LGR5, CYP17A1, OR10AB1P, MYH14, LRRTM3, GRIN3A, HS3ST5, CRYAB and NXPH3. An accuracy score of 82% was obtained by the ML classification models. SMIM28 was revealed to possibly have a role in tumor proliferation and aggressiveness. Our results indicate that these DEGs can serve as prognostic indicators of survival in high-risk neuroblastoma patients and will assist clinicians in making better therapeutic and patient management decisions.

Giwa Abdulazeez, Fatai Azeez, Gamieldien Junaid, Christoffels Alan, Bendou Hocine

2020-Nov-17

differential gene expression, gene regulatory networks, machine learning, neuroblastoma, prognostic markers

General General

NonClasGP-Pred: robust and efficient prediction of non-classically secreted proteins by integrating subset-specific optimal models of imbalanced data.

In Microbial genomics

Non-classically secreted proteins (NCSPs) are proteins located in the extracellular environment despite lacking known signal peptides or secretion motifs. They usually perform different biological functions in intracellular and extracellular environments, and several of their biological functions are linked to bacterial virulence and cell defence. Accurate protein localization is essential for all living organisms; however, the performance of existing methods developed for NCSP identification has been unsatisfactory, and in particular they suffer from data deficiency and possible overfitting. Further improvement is desirable, especially to address the lack of informative features and to mine subset-specific features in imbalanced datasets. In the present study, a new computational predictor was developed for NCSP prediction in Gram-positive bacteria. First, to address possible prediction bias caused by the data imbalance problem, ten balanced subdatasets were generated for ensemble model construction. Then, the F-score algorithm combined with sequential forward search was used to strengthen the feature representation ability for each of the training subdatasets. Third, the subset-specific optimal feature combination process was adopted to characterize the original data from different aspects, and all subdataset-based models were integrated into a unified model, NonClasGP-Pred, which achieved excellent performance with an accuracy of 93.23 %, a sensitivity of 100 %, a specificity of 89.01 %, a Matthews correlation coefficient of 87.68 % and an area under the curve value of 0.9975 under ten-fold cross-validation. Based on assessment on the independent test dataset, the proposed model outperformed state-of-the-art available toolkits. For availability and implementation, see: http://lab.malab.cn/~wangchao/softwares/NonClasGP/.
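
The balanced-subdataset strategy described above can be sketched as undersampling: the majority class is split into minority-sized chunks, each paired with the full minority class. The code below is a generic illustration of the idea, not the authors' exact procedure:

```python
import random


def balanced_subsets(majority, minority, n_subsets, seed=0):
    """Build balanced training subsets from an imbalanced dataset.

    Shuffles the majority class and pairs each minority-sized chunk of it
    with the full minority class, yielding `n_subsets` balanced subsets
    suitable for training an ensemble of per-subset models.
    """
    rng = random.Random(seed)
    majority = list(majority)
    rng.shuffle(majority)
    size = len(minority)
    return [majority[i * size:(i + 1) * size] + list(minority)
            for i in range(n_subsets)]
```

Each subset then trains one member of the ensemble, and the members' outputs are combined into a unified prediction.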

Wang Chao, Wu Jin, Xu Lei, Zou Quan

2020-Nov-27

feature selection, imbalanced dataset, machine learning, model ensemble, non-classically secreted proteins

General General

Mitigation of ocular artifacts for EEG signal using improved earth worm optimization-based neural network and lifting wavelet transform.

In Computer methods in biomechanics and biomedical engineering

An electroencephalogram (EEG) is often contaminated by various categories of artifacts, and numerous efforts have been made to improve its quality by eliminating them. EEG artifacts include biological artifacts (ocular, ECG and EMG artifacts) and technical artifacts (noise from the electric power source, amplitude artifacts, etc.). Among the physiological artifacts, ocular activity is one of the best-known noise sources. Because ocular activities are involuntary, preventing them is practically very difficult, even impossible. To reduce the effect of ocular artifacts overlapping the EEG signal and overcome the resulting flaws, intelligent approaches must be developed. This paper proposes a novel method for detecting and removing ocular artifacts from the EEG signal. The developed model involves two main phases: (a) detection of ocular artifacts and (b) removal of ocular artifacts. For detection, the EEG is first decomposed using a 5-level Discrete Wavelet Transform (DWT) and Empirical Mean Curve Decomposition (EMCD). After decomposition, features such as kurtosis, variance, Shannon's entropy, and several first-order statistical features are extracted; these features support detection at the classification stage. To detect ocular artifacts from the decomposed signal, the extracted features are fed to a machine learning algorithm, a Neural Network (NN). As an improvement over the conventional NN, the training algorithm is enhanced by an improved Earth Worm optimization Algorithm (EWA), termed the Dual Positioned Elitism-based EWA (DPE-EWA), which updates the weights of the NN to improve performance.
In the removal phase, an optimized Lifting Wavelet Transform (LWT) is deployed, in which the filter coefficients are optimized using the proposed DPE-EWA. Thus, the integration of the optimized NN and optimized LWT offers a promising means of detecting and removing ocular artifacts from EEG signals.
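
As background on the DWT decomposition step, one level of a discrete wavelet transform splits a signal into approximation and detail coefficients; applying it recursively to the approximation gives the 5-level decomposition mentioned above. The sketch below uses the Haar wavelet for simplicity; the paper does not specify its wavelet basis:

```python
import math


def haar_dwt(signal):
    """One level of a Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; each pair of
    samples contributes one scaled sum and one scaled difference.
    """
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        approx.append((signal[i] + signal[i + 1]) / math.sqrt(2))
        detail.append((signal[i] - signal[i + 1]) / math.sqrt(2))
    return approx, detail
```

For a locally constant signal the detail coefficients are zero, which is why abrupt ocular deflections stand out in the detail bands.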

Prasad Devulapalli Shyam, Chanamallu Srinivasa Rao, Prasad Kodati Satya

2020-Nov-27

Electroencephalogram, detection and removal, dual positioned elitism-based earth worm optimization algorithm, lifting wavelet, neural network, ocular artifacts

General General

The 2019 n2c2/OHNLP Track on Clinical Semantic Textual Similarity: Overview.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Semantic textual similarity is a common task in the general English domain to assess the degree to which the underlying semantics of 2 text segments are equivalent to each other. Clinical Semantic Textual Similarity (ClinicalSTS) is the semantic textual similarity task in the clinical domain that attempts to measure the degree of semantic equivalence between 2 snippets of clinical text. Due to the frequent use of templates in the Electronic Health Record system, a large amount of redundant text exists in clinical notes, making ClinicalSTS crucial for the secondary use of clinical text in downstream clinical natural language processing applications, such as clinical text summarization, clinical semantics extraction, and clinical information retrieval.

OBJECTIVE : Our objective was to release ClinicalSTS data sets and to motivate natural language processing and biomedical informatics communities to tackle semantic text similarity tasks in the clinical domain.

METHODS : We organized the first BioCreative/OHNLP ClinicalSTS shared task in 2018 by making available a real-world ClinicalSTS data set. We continued the shared task in 2019 in collaboration with National NLP Clinical Challenges (n2c2) and the Open Health Natural Language Processing (OHNLP) consortium and organized the 2019 n2c2/OHNLP ClinicalSTS track. We released a larger ClinicalSTS data set comprising 2054 clinical sentence pairs, including 1068 pairs from the 2018 shared task and 986 new pairs from 2 electronic health record systems, GE and Epic. We released 80% (1642/2054) of the data to participating teams to develop and fine-tune the semantic textual similarity systems and used the remaining 20% (412/2054) as blind testing to evaluate their systems. The workshop was held in conjunction with the American Medical Informatics Association 2019 Annual Symposium.

RESULTS : Of the 78 international teams that signed on to the n2c2/OHNLP ClinicalSTS shared task, 33 produced a total of 87 valid system submissions. The top 3 systems were generated by IBM Research, the National Center for Biotechnology Information, and the University of Florida, with Pearson correlations of r=.9010, r=.8967, and r=.8864, respectively. Most top-performing systems used state-of-the-art neural language models, such as BERT and XLNet, and state-of-the-art training schemas in deep learning, such as pretraining and fine-tuning schema, and multitask learning. Overall, the participating systems performed better on the Epic sentence pairs than on the GE sentence pairs, despite a much larger portion of the training data being GE sentence pairs.
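
Systems in the track were ranked by the Pearson correlation between predicted and gold similarity scores. For reference, a minimal pure-Python implementation of Pearson's r:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

Values near 1, such as the winning r=.9010, indicate that system scores track human similarity judgments almost linearly.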

CONCLUSIONS : The 2019 n2c2/OHNLP ClinicalSTS shared task focused on computing semantic similarity for clinical text sentences generated from clinical notes in the real world. It attracted a large number of international teams. The ClinicalSTS shared task could continue to serve as a venue for researchers in natural language processing and medical informatics communities to develop and improve semantic textual similarity techniques for clinical text.

Wang Yanshan, Fu Sunyang, Shen Feichen, Henry Sam, Uzuner Ozlem, Liu Hongfang

2020-Nov-27

ClinicalSTS, challenge, clinical natural language processing, electronic health records, medical natural language processing, n2c2, natural language processing, semantic textual similarity, shared task

General General

Detection of Suicidality Among Opioid Users on Reddit: Machine Learning-Based Approach.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : In recent years, both suicide and overdose rates have been increasing. Many individuals who struggle with opioid use disorder are prone to suicidal ideation; this may often result in overdose. However, these fatal overdoses are difficult to classify as intentional or unintentional. Intentional overdose is difficult to detect, partially due to the lack of predictors and social stigmas that push individuals away from seeking help. These individuals may instead use web-based means to articulate their concerns.

OBJECTIVE : This study aimed to extract posts of suicidality among opioid users on Reddit using machine learning methods. The performance of the models is derivative of the data purity, and the results will help us to better understand the rationale of these users, providing new insights into individuals who are part of the opioid epidemic.

METHODS : Reddit posts between June 2017 and June 2018 were collected from r/suicidewatch, r/depression, a set of opioid-related subreddits, and a control subreddit set. We first classified suicidal versus nonsuicidal languages and then classified users with opioid usage versus those without opioid usage. Several traditional baselines and neural network (NN) text classifiers were trained using subreddit names as the labels and combinations of semantic inputs. We then attempted to extract out-of-sample data belonging to the intersection of suicide ideation and opioid abuse. Amazon Mechanical Turk was used to provide labels for the out-of-sample data.

RESULTS : Classification results were at least 90% across all models for at least one combination of input; the best classifier was convolutional neural network, which obtained an F1 score of 96.6%. When predicting out-of-sample data for posts containing both suicidal ideation and signs of opioid addiction, NN classifiers produced more false positives and traditional methods produced more false negatives, which is less desirable for predicting suicidal sentiments.

CONCLUSIONS : Opioid abuse is linked to the risk of unintentional overdose and suicide risk. Social media platforms such as Reddit contain metadata that can aid machine learning and provide information at a personal level that cannot be obtained elsewhere. We demonstrate that it is possible to use NNs as a tool to predict an out-of-sample target with a model built from data sets labeled by characteristics we wish to distinguish in the out-of-sample target.

Yao Hannah, Rashidian Sina, Dong Xinyu, Duanmu Hongyi, Rosenthal Richard N, Wang Fusheng

2020-Nov-27

deep learning, machine learning, natural language processing, opioid epidemic, opioid-related disorders, social media, suicide

General General

Identification of Semantically Similar Sentences in Clinical Notes: Iterative Intermediate Training Using Multi-Task Learning.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Although electronic health records (EHRs) have been widely adopted in health care, effective use of EHR data is often limited because of redundant information in clinical notes introduced by the use of templates and copy-paste during note generation. Thus, it is imperative to develop solutions that can condense information while retaining its value. A step in this direction is measuring the semantic similarity between clinical text snippets. To address this problem, we participated in the 2019 National NLP Clinical Challenges (n2c2)/Open Health Natural Language Processing Consortium (OHNLP) clinical semantic textual similarity (ClinicalSTS) shared task.

OBJECTIVE : This study aims to improve the performance and robustness of semantic textual similarity in the clinical domain by leveraging manually labeled data from related tasks and contextualized embeddings from pretrained transformer-based language models.

METHODS : The ClinicalSTS data set consists of 1642 pairs of deidentified clinical text snippets annotated on a continuous scale of 0-5, indicating degrees of semantic similarity. We developed an iterative intermediate training approach using multi-task learning (IIT-MTL), a multi-task training approach that employs iterative data set selection. We applied this process to bidirectional encoder representations from transformers on clinical text mining (ClinicalBERT), a pretrained domain-specific transformer-based language model, and fine-tuned the resulting model on the target ClinicalSTS task. We incrementally ensembled the output from applying IIT-MTL on ClinicalBERT with the output of other language models (bidirectional encoder representations from transformers for biomedical text mining [BioBERT], multi-task deep neural networks [MT-DNN], and robustly optimized BERT approach [RoBERTa]) and handcrafted features using regression-based learning algorithms. On the basis of these experiments, we adopted the top-performing configurations as our official submissions.

RESULTS : Our system ranked first out of 87 submitted systems in the 2019 n2c2/OHNLP ClinicalSTS challenge, achieving state-of-the-art results with a Pearson correlation coefficient of 0.9010. This winning system was an ensembled model leveraging the output of IIT-MTL on ClinicalBERT with BioBERT, MT-DNN, and handcrafted medication features.

CONCLUSIONS : This study demonstrates that IIT-MTL is an effective way to leverage annotated data from related tasks to improve performance on a target task with a limited data set. This contribution opens new avenues of exploration for optimized data set selection to generate more robust and universal contextual representations of text in the clinical domain.

Mahajan Diwakar, Poddar Ananya, Liang Jennifer J, Lin Yen-Ting, Prager John M, Suryanarayanan Parthasarathy, Raghavan Preethi, Tsou Ching-Huei

2020-Nov-27

deep learning, electronic health records, multi-task learning, natural language processing, semantic textual similarity, transfer learning

Surgery Surgery

Predicting Unplanned Readmissions Following a Hip or Knee Arthroplasty: Retrospective Observational Study.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Total joint replacements are high-volume and high-cost procedures that should be monitored for cost and quality control. Models that can identify patients at high risk of readmission might help reduce costs by suggesting who should be enrolled in preventive care programs. Previous models for risk prediction have relied on structured data of patients rather than clinical notes in electronic health records (EHRs). The former approach requires manual feature extraction by domain experts, which may limit the applicability of these models.

OBJECTIVE : This study aims to develop and evaluate a machine learning model for predicting the risk of 30-day readmission following knee and hip arthroplasty procedures. The input data for these models come from raw EHRs. We empirically demonstrate that unstructured free-text notes contain a reasonably predictive signal for this task.

METHODS : We performed a retrospective analysis of data from 7174 patients at Partners Healthcare collected between 2006 and 2016. These data were split into train, validation, and test sets. These data sets were used to build, validate, and test models to predict unplanned readmission within 30 days of hospital discharge. The proposed models made predictions on the basis of clinical notes, obviating the need for performing manual feature extraction by domain and machine learning experts. The notes that served as model inputs were written by physicians, nurses, pathologists, and others who diagnose and treat patients and may have their own predictions, even if these are not recorded.

RESULTS : The proposed models output readmission risk scores (propensities) for each patient. The best models (as selected on a development set) yielded an area under the receiver operating characteristic curve of 0.846 (95% CI 0.8275-0.8711) for hip and 0.822 (95% CI 0.8094-0.8622) for knee surgery, indicating reasonable discriminative ability.
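
The area under the receiver operating characteristic curve reported above equals the probability that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one (the Mann-Whitney formulation). A minimal sketch for small score lists, not the study's evaluation code:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the fraction of positive/negative pairs correctly ranked.

    Ties count as half; equivalent to the Mann-Whitney U statistic
    normalized by the number of pairs.
    """
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUROC of 0.846 thus means that in about 85% of such pairs the readmitted patient is ranked as higher risk.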

CONCLUSIONS : Machine learning models can predict which patients are at a high risk of readmission within 30 days following hip and knee arthroplasty procedures on the basis of notes in EHRs with reasonable discriminative power. Following further validation and empirical demonstration that the models realize predictive performance above that which clinical judgment may provide, such models may be used to build an automated decision support tool to help caretakers identify at-risk patients.

Mohammadi Ramin, Jain Sarthak, Namin Amir T, Scholem Heller Melissa, Palacholla Ramya, Kamarthi Sagar, Wallace Byron

2020-Nov-27

30-days readmission, auto ML, deep learning, electronic health records, hip arthroplasty, knee arthroplasty, natural language processing

Radiology Radiology

Artificial intelligence in musculoskeletal ultrasound imaging.

In Ultrasonography (Seoul, Korea)

Ultrasonography (US) is noninvasive and offers real-time, low-cost, and portable imaging that facilitates the rapid and dynamic assessment of musculoskeletal components. Significant technological improvements have contributed to the increasing adoption of US for musculoskeletal assessments, as artificial intelligence (AI)-based computer-aided detection and computer-aided diagnosis are being utilized to improve the quality, efficiency, and cost of US imaging. This review provides an overview of classical machine learning techniques and modern deep learning approaches for musculoskeletal US, with a focus on the key categories of detection and diagnosis of musculoskeletal disorders, predictive analysis with classification and regression, and automated image segmentation. Moreover, we outline challenges and a range of opportunities for AI in musculoskeletal US practice.

Shin YiRang, Yang Jaemoon, Lee Young Han, Kim Sungjun

2020-Sep-06

Artificial intelligence, Deep learning, Machine learning, Musculoskeletal system, Ultrasonography

Surgery

A Human-Algorithm Integration System for Hip Fracture Detection on Plain Radiography: System Development and Validation Study.

In JMIR medical informatics ; h5-index 23.0

BACKGROUND : Hip fracture is the most common type of fracture in elderly individuals. Numerous deep learning (DL) algorithms for plain pelvic radiographs (PXRs) have been applied to improve the accuracy of hip fracture diagnosis. However, their efficacy is still undetermined.

OBJECTIVE : The objective of this study is to develop and validate a human-algorithm integration (HAI) system to improve the accuracy of hip fracture diagnosis in a real clinical environment.

METHODS : The HAI system with hip fracture detection ability was developed using a deep learning algorithm trained on trauma registry data and 3605 PXRs from August 2008 to December 2016. We recruited 34 physicians and compared their diagnostic performance before and after HAI system assistance on an independent testing dataset. We analyzed the physicians' accuracy, sensitivity, specificity, and agreement with the algorithm; we also performed subgroup analyses according to physician specialty and experience. Furthermore, we applied the HAI system in the emergency departments of different hospitals to validate its value in the real world.

RESULTS : With the support of the algorithm, which achieved 91% accuracy, the diagnostic performance of physicians was significantly improved in the independent testing dataset, as revealed by the sensitivity (physician alone, median 95%; HAI, median 99%; P<.001), specificity (physician alone, median 90%; HAI, median 95%; P<.001), accuracy (physician alone, median 90%; HAI, median 96%; P<.001), and human-algorithm agreement (physician alone κ, median 0.69 [IQR 0.63-0.74]; HAI κ, median 0.80 [IQR 0.76-0.82]; P<.001). With the help of the HAI system, the primary physicians showed significant improvement in their diagnostic performance, reaching levels comparable to those of consulting physicians, and both the experienced and less-experienced physicians benefited from the HAI system. After the HAI system had been applied in 3 departments for 5 months, 587 images were examined. The sensitivity, specificity, and accuracy of the HAI system for detecting hip fractures were 97%, 95.7%, and 96.08%, respectively.

CONCLUSIONS : HAI currently impacts health care, and integrating this technology into emergency departments is feasible. The developed HAI system can enhance physicians' hip fracture diagnostic performance.

Cheng Chi-Tung, Chen Chih-Chi, Cheng Fu-Jen, Chen Huan-Wu, Su Yi-Siang, Yeh Chun-Nan, Chung I-Fang, Liao Chien-Hung

2020-Nov-27

algorithms, artificial intelligence, computer, deep learning, diagnosis, hip fracture, human augmentation, neural network
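The sensitivity, specificity, accuracy, and human-algorithm agreement (Cohen's κ) reported above all derive from a 2×2 confusion table. A sketch of those computations, using hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, and Cohen's kappa from
    true/false positive/negative counts."""
    total = tp + fp + tn + fn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / total
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = acc
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    p_e = p_yes + p_no
    kappa = (p_o - p_e) / (1 - p_e)
    return sens, spec, acc, kappa

# illustrative counts only
print(diagnostic_metrics(tp=45, fp=5, tn=45, fn=5))
```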

General

A Span-graph Neural Model for Overlapping Entity Relation Extraction in Biomedical Texts.

In Bioinformatics (Oxford, England)

MOTIVATION : Entity relation extraction is one of the fundamental tasks in biomedical text mining and is usually solved by models from natural language processing (NLP). Compared with traditional pipeline methods, joint methods avoid error propagation from entity extraction to relation extraction, giving better performance. However, existing joint models are built on a sequential scheme and fail to detect overlapping entities and relations, which are ubiquitous in biomedical texts. The main reason is that sequential models are relatively weak at capturing long-range dependencies, which results in lower performance when encoding longer sentences. In this paper, we propose a novel span-graph neural model for jointly extracting overlapping entity relations in biomedical texts. Our model treats the task as relation triplet prediction and builds the entity graph by enumerating candidate entity spans. The proposed model scores candidate spans and the relations between them via a span scorer and a relation scorer, respectively, and finally outputs all valid relational triplets.

RESULTS : Experimental results on two biomedical entity relation extraction tasks, including drug-drug interaction detection and protein-protein interaction detection, show that the proposed method outperforms previous models by a substantial margin, demonstrating the effectiveness of span-graph based method for overlapping relation extraction in biomedical texts. Further in-depth analysis proves that our model is more effective in capturing the long-range dependencies for relation extraction compared with the sequential models.

Fei Hao, Zhang Yue, Ren Yafeng, Ji Donghong

2020-Nov-27

General

Morphing Projections: a new visual technique for fast and interactive large-scale analysis of biomedical datasets.

In Bioinformatics (Oxford, England)

MOTIVATION : Biomedical research entails analyzing high-dimensional records of biomedical features with hundreds or thousands of samples each. This often also involves complementary clinical metadata, as well as broad user domain knowledge. Common data analytics software makes use of machine learning algorithms or data visualization tools. However, these are frequently one-way analyses, providing little room for the user to reconfigure the steps in light of the observed results. In other cases, reconfiguration involves large latencies, requiring the retraining of algorithms or a long pipeline of actions. The complex and multiway nature of the problem nonetheless suggests that user interaction feedback is a key element in boosting the cognitive process of analysis, and that it must be both broad and fluid.

RESULTS : In this paper we present a technique for biomedical data analytics based on blending meaningful views in an efficient manner, providing a natural, smooth way to transition among different but complementary representations of data and knowledge. Our hypothesis is that the confluence of diverse complementary information from different domains in a highly interactive interface allows the user to discover relevant relationships or generate new hypotheses to be investigated by other means. We illustrate the potential of this approach with two case studies involving gene expression data and clinical metadata, as representative examples of high-dimensional, multidomain biomedical data.

AVAILABILITY AND IMPLEMENTATION : Code and a demo app to reproduce the results are available at https://gitlab.com/idiazblanco/morphing-projections-demo-and-dataset-preparation.

Díaz Ignacio, Enguita José M, González Ana, García Diego, Cuadrado Abel A, Chiara María D, Valdés Nuria

2020-Nov-27

Oncology

Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network.

In Physics in medicine and biology

Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although it has no severe impact on visual diagnosis, INU can severely degrade the performance of automatic quantitative analyses such as segmentation, registration, feature extraction and radiomics. In this study, we present an advanced deep learning-based INU correction algorithm called residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In cycle-GAN, an inverse transformation was implemented between the INU-uncorrected and corrected magnetic resonance imaging (MRI) images to constrain the model by forcing the calculation of both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of cycle-GAN to enhance end-to-end raw MRI to INU-corrected MRI transformation. A cohort of 55 abdominal patients with T1-weighted MR INU images and their corrections with a clinically established and commonly used method, namely N4ITK, were used as pairs to evaluate the proposed res-cycle GAN based INU correction algorithm. Quantitative comparisons of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indices, and spatial non-uniformity (SNU) were made between the proposed method and other approaches. Our res-cycle GAN based method achieved an NMAE of 0.011 ± 0.002, a PSNR of 28.0 ± 1.9 dB, an NCC of 0.970 ± 0.017, and an SNU of 0.298 ± 0.085. Our proposed method showed significant improvements (p < 0.05) in NMAE, PSNR, NCC and SNU over other algorithms, including conventional GAN and U-net.
Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual setting of parameters.

Dai Xianjin, Lei Yang, Liu Yingzi, Wang Tonghe, Ren Lei, Curran Walter J, Patel Pretesh, Liu Tian, Yang Xiaofeng

2020-Nov-27
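The evaluation metrics above (NMAE, PSNR, NCC) can be sketched as follows for flattened 1-D intensity arrays. The exact normalization conventions (the intensity range used for NMAE and the peak value used for PSNR) are assumptions for illustration, since the abstract does not spell them out:

```python
import math

def image_metrics(pred, ref):
    """NMAE, PSNR (dB), and NCC between a predicted and a reference
    intensity array. Assumes: NMAE normalized by reference intensity
    range; PSNR peak taken as max reference intensity."""
    n = len(pred)
    rng = max(ref) - min(ref)
    nmae = sum(abs(p - r) for p, r in zip(pred, ref)) / (n * rng)
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / n
    psnr = 10 * math.log10(max(ref) ** 2 / mse)
    mp, mr = sum(pred) / n, sum(ref) / n
    num = sum((p - mp) * (r - mr) for p, r in zip(pred, ref))
    den = math.sqrt(sum((p - mp) ** 2 for p in pred)
                    * sum((r - mr) ** 2 for r in ref))
    return nmae, psnr, num / den

print(image_metrics([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 5.0]))
```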

General

Multi-task autoencoder based classification-regression model for patient-specific VMAT QA.

In Physics in medicine and biology

Patient-specific quality assurance (PSQA) of volumetric modulated arc therapy (VMAT) to assure accurate treatment delivery is resource-intensive and time-consuming. Recently, machine learning has been increasingly investigated for PSQA results prediction. However, the classification performance of models at different criteria needs further improvement and clinical validation (CV), especially for predicting plans with low gamma passing rates (GPRs). In this study, we developed and validated a novel multi-task model called autoencoder based classification-regression (ACLR) for VMAT PSQA. The classification and regression were integrated into one model, and both parts were trained alternately while minimizing a defined loss function. The classification was used as an intermediate result to improve the regression accuracy. Different tasks of GPR prediction and classification based on different criteria were trained simultaneously. Balanced sampling techniques were used to improve the prediction accuracy and classification sensitivity for the unbalanced VMAT plans. Fifty-four metrics were selected as inputs to describe the plan modulation-complexity and delivery-characteristics, while the outputs were PSQA GPRs. A total of 426 clinically delivered VMAT plans were used for technical validation (TV), and another 150 VMAT plans were used for CV to evaluate the generalization performance of the model. The ACLR performance was compared with that of the Poisson Lasso (PL) model, and a significant improvement in prediction accuracy was found. In TV, the absolute prediction error (APE) of ACLR was 1.76%, 2.60%, and 4.66% at 3%/3 mm, 3%/2 mm, and 2%/2 mm, respectively, whereas the APE of PL was 2.10%, 3.04%, and 5.29% at the same criteria. No significant difference was found between CV and TV in prediction accuracy. The ACLR model at 3%/3 mm achieved 100% sensitivity and 83% specificity.
The ACLR model could classify the unbalanced VMAT QA results accurately, and it can be readily applied in clinical practice for virtual VMAT QA.

Wang Le, Li Jiaqi, Zhang Shuming, Zhang Xile, Zhang Qilin, Chan Maria F, Yang Ruijie, Sui Jing

2020-Nov-27
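The absolute prediction error (APE) figures above compare predicted against measured gamma passing rates, and the sensitivity/specificity figures come from flagging plans below an action limit. A minimal sketch; the 95% action limit below is an assumed illustrative threshold, not one taken from the paper:

```python
def mean_absolute_prediction_error(predicted_gpr, measured_gpr):
    """Mean |predicted - measured| gamma passing rate, in percentage points."""
    errs = [abs(p - m) for p, m in zip(predicted_gpr, measured_gpr)]
    return sum(errs) / len(errs)

def classify_plans(measured_gpr, action_limit=95.0):
    """Flag plans whose measured GPR falls below the action limit
    (95% is a hypothetical threshold for illustration)."""
    return [g < action_limit for g in measured_gpr]

print(mean_absolute_prediction_error([98.0, 94.0], [96.0, 95.0]))  # → 1.5
print(classify_plans([98.0, 93.0]))  # → [False, True]
```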

General

Development and evaluation of deep learning for screening dental caries from oral photos.

In Oral diseases

OBJECTIVES : To develop and evaluate the performance of a deep learning system based on convolutional neural network (ConvNet) to detect dental caries from oral photos.

METHODS : A total of 3932 oral photos obtained from 625 volunteers with consumer cameras were included for the development and evaluation of the model. A deep ConvNet was developed by adapting the Single Shot MultiBox Detector. The hard negative mining algorithm was applied to automatically train the model. The model was evaluated for: (i) classification accuracy in determining whether a photo contains dental caries, and (ii) localization accuracy of the predicted caries locations.

RESULTS : The system exhibited a classification Area under the Curve (AUC) of 85.65% (95% Confidence Interval: 82.48% to 88.71%). The model also achieved an image-wise sensitivity of 81.90%, and a box-wise sensitivity of 64.60% at a high-sensitivity operating point. The hard negative mining algorithm significantly boosted both classification (p < 0.001) and localization (p < 0.001) performance of the model by reducing false positive predictions.

CONCLUSIONS : The deep learning model shows promise for detecting dental caries in oral photos captured with consumer cameras. It can be useful for enabling preliminary and cost-effective screening of dental caries among large populations.

Zhang Xuan, Liang Yuan, Li Wen, Liu Chao, Gu Deao, Sun Weibin, Miao Leiying

2020-Nov-26

artificial intelligence, deep learning, dental caries
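Box-wise sensitivity, as reported above, counts an annotated caries region as detected when some predicted box overlaps it sufficiently. A sketch using intersection-over-union (IoU) matching; the 0.5 IoU threshold is an assumption for illustration, not the paper's stated criterion:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def box_sensitivity(gt_boxes, pred_boxes, thr=0.5):
    """Fraction of ground-truth boxes matched by some prediction at IoU >= thr."""
    hits = sum(1 for g in gt_boxes if any(iou(g, p) >= thr for p in pred_boxes))
    return hits / len(gt_boxes)

print(box_sensitivity([(0, 0, 2, 2), (5, 5, 7, 7)], [(0, 0, 2, 2)]))  # → 0.5
```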

General

Novel SARS-CoV-2 encoded small RNAs in the passage to humans.

In Bioinformatics (Oxford, England)

MOTIVATION : The Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has recently emerged as the agent responsible for the pandemic outbreak of coronavirus disease (COVID-19). This virus is closely related to coronaviruses infecting bats and Malayan pangolins, species suspected to be intermediate hosts in the passage to humans. Several genomic mutations affecting viral proteins have been identified, contributing to the understanding of the recent animal-to-human transmission. However, the capacity of SARS-CoV-2 to encode functional putative microRNAs (miRNAs) remains largely unexplored.

RESULTS : We have used deep learning to discover 12 candidate stem-loop structures hidden in the viral protein-coding genome. Among the precursors, the expression of eight mature miRNA-like sequences was confirmed in small RNA-seq data from SARS-CoV-2-infected human cells. The predicted miRNAs are likely to target a subset of human genes, of which 109 are transcriptionally deregulated upon infection. Remarkably, 28 of the genes potentially targeted by SARS-CoV-2 miRNAs are down-regulated in infected human cells. Interestingly, most of them have been related to respiratory diseases and viral infection, including several afflictions previously associated with SARS-CoV-1 and SARS-CoV-2. The comparison of SARS-CoV-2 pre-miRNA sequences with those from bat and pangolin coronaviruses suggests that single nucleotide mutations could have helped its progenitors jump inter-species boundaries, allowing the gain of novel mature miRNAs targeting human mRNAs. Our results suggest that the recent acquisition of novel miRNA-like sequences in the SARS-CoV-2 genome may have contributed to modulating the transcriptional reprogramming of the new host upon infection.

Merino Gabriela A, Raad Jonathan, Bugnon Leandro A, Yones Cristian, Kamenetzky Laura, Claus Juan, Ariel Federico, Milone Diego H, Stegmayer Georgina

2020-Nov-27
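A stem-loop (hairpin) precursor arises when the 5' end of an RNA segment can base-pair with its 3' end around an unpaired loop. A toy check for such a perfect inverted repeat, far simpler than the deep learning discovery pipeline described above (perfect Watson-Crick pairing and the minimum loop length are illustrative assumptions):

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def is_stem_loop(seq, stem_len, loop_min=3):
    """True if the first stem_len bases pair perfectly with the last
    stem_len bases, leaving at least loop_min unpaired bases between."""
    if len(seq) < 2 * stem_len + loop_min:
        return False
    return seq[:stem_len] == reverse_complement(seq[-stem_len:])

# 'GCGC' pairs with 'GCGC' (its own reverse complement) around loop 'AAAA'
print(is_stem_loop("GCGCAAAAGCGC", stem_len=4))  # → True
```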

General

Author Correction: Forecasting risk gene discovery in autism with machine learning and genome-scale data.

In Scientific reports ; h5-index 158.0

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Brueggeman Leo, Koomar Tanner, Michaelson Jacob J

2020-Nov-26

General

3D printable biomimetic rod with superior buckling resistance designed by machine learning.

In Scientific reports ; h5-index 158.0

Nature has long provided human beings with numerous resources to draw inspiration from in building a finer life. In structural design in particular, many ideas are borrowed from nature to enhance the structural capacity as well as the appearance of structures. Here, plant stems, roots and various other natural structures that exhibit good buckling resistance are mimicked and modeled by finite element analysis to create a training database. The finite element analysis is validated by uniaxial compression to buckling of 3D-printed biomimetic rods made from a polymeric ink. After feature identification, forward design and data filtering are conducted by machine learning to optimize the biomimetic rods. The results show that the machine-learning-designed rods have 150% better buckling resistance than all the rods in the training database, i.e., better than nature's counterparts. We expect this study to open up a new opportunity to design engineering rods or columns with superior buckling resistance, such as in bridges, buildings, and truss structures.

Challapalli Adithya, Li Guoqiang

2020-Nov-26
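A natural reference point for the buckling-resistance targets in such a training database is Euler's classical critical load for a slender column. A sketch of that closed form (the paper's targets come from finite element analysis, not from this formula):

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler's critical buckling load P_cr = pi^2 * E * I / (K * L)^2
    for a slender column. E: Young's modulus, I: second moment of area,
    L: column length, K: effective-length factor (1.0 for pinned-pinned ends)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# doubling the length quarters the critical load
print(euler_critical_load(E=200e9, I=1e-8, L=2.0))
```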

General

Machine learning issues and opportunities in ultrafast particle classification for label-free microflow cytometry.

In Scientific reports ; h5-index 158.0

Machine learning offers promising solutions for high-throughput single-particle analysis in label-free imaging microflow cytometry. However, the throughput of online operations such as cell sorting is often limited by the large computational cost of the image analysis, while offline operations may require the storage of an exceedingly large amount of data. Moreover, the training of machine learning systems can easily be biased by slight drifts of the measurement conditions, giving rise to a significant but difficult-to-detect degradation of the learned operations. We propose a simple and versatile machine learning approach to perform microparticle classification at an extremely low computational cost, showing good generalization over large variations in particle position. We present proof-of-principle classification of interference patterns projected by flowing transparent PMMA microbeads with diameters of [Formula: see text] and [Formula: see text]. To this end, a simple, cheap and compact label-free microflow cytometer is employed. We also discuss in detail the detection and prevention of machine learning bias in training and testing due to slight drifts of the measurement conditions. Moreover, we investigate the implications of modifying the projected particle pattern by means of a diffraction grating, in the context of optical extreme learning machine implementations.

Lugnan Alessio, Gooskens Emmanuel, Vatin Jeremy, Dambre Joni, Bienstman Peter

2020-Nov-26

Radiology

Current status of deep learning applications in abdominal ultrasonography.

In Ultrasonography (Seoul, Korea)

Deep learning is one of the most popular artificial intelligence techniques used in the medical field. Although it is at an early stage compared to deep learning analyses of computed tomography or magnetic resonance imaging, studies applying deep learning to ultrasound imaging have been actively conducted. This review analyzes recent studies that applied deep learning to ultrasound imaging of various abdominal organs and explains the challenges encountered in these applications.

Song Kyoung Doo

2020-Sep-02

Abdominal, Deep learning, Ultrasound

General

Facial expressions contribute more than body movements to conversational outcomes in avatar-mediated virtual environments.

In Scientific reports ; h5-index 158.0

This study focuses on the individual and joint contributions of two nonverbal channels (i.e., face and upper body) in avatar-mediated virtual environments. A total of 140 dyads were randomly assigned to communicate with each other via platforms that differentially activated or deactivated facial and bodily nonverbal cues. The availability of facial expressions had a positive effect on interpersonal outcomes. More specifically, dyads that were able to see their partner's facial movements mapped onto their avatars liked each other more, formed more accurate impressions about their partners, and described their interaction experiences more positively compared to those unable to see facial movements. However, the latter was only true when their partner's bodily gestures were also available, not when only facial movements were available. Dyads showed greater nonverbal synchrony when they could see their partner's bodily and facial movements. This study also employed machine learning to explore whether nonverbal cues could predict interpersonal attraction. These classifiers predicted high and low interpersonal attraction at an accuracy rate of 65%. These findings highlight the relative significance of facial cues compared to bodily cues for interpersonal outcomes in virtual environments and lend insight into the potential of automatically tracked nonverbal cues to predict interpersonal attitudes.

Oh Kruzic Catherine, Kruzic David, Herrera Fernanda, Bailenson Jeremy

2020-Nov-26

General

Towards better heartbeat segmentation with deep learning classification.

In Scientific reports ; h5-index 158.0

The trustworthiness of medical equipment is intimately related to false alarms: the more false events occur, the less trustworthy the equipment. In this sense, reducing (or suppressing) false positive alarms is highly desirable. In this work, we propose a feasible, real-time approach that works as a validation method for a third-party heartbeat segmentation algorithm. The approach is based on convolutional neural networks (CNNs), which may be embedded in dedicated hardware. Our proposal aims to detect the pattern of a single heartbeat and classify it into one of two classes: a heartbeat or not a heartbeat. For this, a seven-layer convolutional network is employed for both data representation and classification. We evaluate our approach on the raw heartbeat signal of two well-established databases in the literature. The first is a conventional on-the-person database called MIT-BIH, and the second is a less controlled off-the-person database known as CYBHi. To evaluate the feasibility and performance of the proposed approach, we use as a baseline the Pan-Tompkins algorithm, a well-known method in the literature that is still used in industry. We compare the baseline against the proposed approach: a CNN model validating the heartbeats detected by a third-party algorithm. In this work, the third-party algorithm is the same as the baseline for comparison purposes. The results support the feasibility of our approach, showing that our method can enhance the positive predictivity of the Pan-Tompkins algorithm from [Formula: see text]/[Formula: see text] to [Formula: see text]/[Formula: see text] while slightly decreasing the sensitivity from [Formula: see text]/[Formula: see text] to [Formula: see text]/[Formula: see text] on the MIT-BIH/CYBHi databases.

Silva Pedro, Luz Eduardo, Silva Guilherme, Moreira Gladston, Wanner Elizabeth, Vidal Flavio, Menotti David

2020-Nov-26
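Validating a third-party detector's output reduces to matching detected beat positions against annotated ones within a tolerance, then computing sensitivity and positive predictivity. A sketch of that matching; the 36-sample tolerance (roughly 100 ms at the 360 Hz MIT-BIH sampling rate) is an illustrative choice, not the paper's stated criterion:

```python
def match_beats(true_pos, detected, tolerance=36):
    """Greedily match detections to annotated beat positions (in samples).
    Returns (sensitivity, positive predictivity)."""
    used = set()
    tp = 0
    for t in true_pos:
        for i, d in enumerate(detected):
            if i not in used and abs(d - t) <= tolerance:
                used.add(i)
                tp += 1
                break
    return tp / len(true_pos), tp / len(detected)

# two of three annotated beats matched; one detection is a false positive
print(match_beats([100, 500, 900], [110, 505, 1300]))
```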

General

An efficient machine learning-based approach for screening individuals at risk of hereditary haemochromatosis.

In Scientific reports ; h5-index 158.0

Hereditary haemochromatosis (HH) is an autosomal recessive disease in which HFE C282Y homozygosity accounts for 80-85% of clinical cases among the Caucasian population. HH is characterised by the accumulation of iron, which, if untreated, can lead to the development of liver cirrhosis and liver cancer. Since iron overload is preventable and treatable if diagnosed early, high-risk individuals can be identified through effective screening employing artificial intelligence-based approaches. However, such tools expose novel challenges associated with the handling and integration of large heterogeneous datasets. We have developed an efficient computational model to screen individuals for HH using the family study data of the Hemochromatosis and Iron Overload Screening (HEIRS) cohort. This dataset, consisting of 254 cases and 701 controls, contains variables extracted from questionnaires and laboratory blood tests. The final model was trained on an extreme gradient boosting classifier using the most relevant risk factors: HFE C282Y homozygosity, age, mean corpuscular volume, iron level, serum ferritin level, transferrin saturation, and unsaturated iron-binding capacity. Hyperparameter optimisation was carried out with multiple runs, resulting in an area under the receiver operating characteristic curve (AUCROC) of 0.94 ± 0.02 for tenfold stratified cross-validation, outperforming the iron overload screening (IRON) tool.

Martins Conde Patricia, Sauter Thomas, Nguyen Thanh-Phuong

2020-Nov-26
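Tenfold stratified cross-validation, used above, splits the samples so that each fold preserves the case/control ratio. A minimal index-splitting sketch using round-robin assignment (real implementations such as scikit-learn's StratifiedKFold also shuffle within classes):

```python
from collections import defaultdict

def stratified_kfold_indices(labels, k=10):
    """Split sample indices into k folds, preserving class proportions
    by distributing each class round-robin across folds."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for members in by_class.values():
        for j, idx in enumerate(members):
            folds[j % k].append(idx)
    return folds

# 4 cases and 8 controls split into 4 folds of 1 case + 2 controls each
print(stratified_kfold_indices([1] * 4 + [0] * 8, k=4))
```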

Surgery

Rethinking glottal midline detection.

In Scientific reports ; h5-index 158.0

A healthy voice is crucial for verbal communication, and hence in daily as well as professional life. The basis for a healthy voice are the sound-producing vocal folds in the larynx. A hallmark of healthy vocal fold oscillation is the symmetric motion of the left and right vocal fold. Clinically, videoendoscopy is applied to assess the symmetry of the oscillation and is evaluated subjectively. High-speed videoendoscopy, an emerging method that allows quantification of the vocal fold oscillation, is more commonly employed in research due to the amount of data and the complex, semi-automatic analysis. In this study, we provide a comprehensive evaluation of methods that fully automatically detect the glottal midline. We used a biophysical model to simulate different vocal fold oscillations, extended the openly available BAGLS dataset with manual annotations, utilized both simulations and annotated endoscopic images to train deep neural networks at different stages of the analysis workflow, and compared these to established computer vision algorithms. We found that classical computer vision methods perform well at detecting the glottal midline in glottis segmentation data, but are outperformed by deep neural networks on this task. We further propose GlottisNet, a multi-task neural architecture featuring the simultaneous prediction of both the opening between the vocal folds and the symmetry axis. By fully automating segmentation and midline detection, this is a major step toward the clinical applicability of quantitative, deep learning-assisted laryngeal endoscopy.

Kist Andreas M, Zilker Julian, Gómez Pablo, Schützenberger Anne, Döllinger Michael

2020-Nov-26

Oncology

A deep learning diagnostic platform for diffuse large B-cell lymphoma with high accuracy across multiple hospitals.

In Nature communications ; h5-index 260.0

Diagnostic histopathology is the gold standard for diagnosing hematopoietic malignancies. Pathologic diagnosis requires labor-intensive reading of a large number of tissue slides with diagnostic accuracy equal to or close to 100 percent to guide treatment options, but this requirement is difficult to meet. Although artificial intelligence (AI) helps to reduce the labor of reading pathologic slides, diagnostic accuracy has not reached a clinically usable level. Establishing an AI model often demands big datasets and an ability to handle large variations in sample preparation and image collection. Here, we establish a highly accurate deep learning platform, consisting of multiple convolutional neural networks, to classify pathologic images using smaller datasets. We analyze human diffuse large B-cell lymphoma (DLBCL) and non-DLBCL pathologic images from three hospitals separately using AI models, and obtain a diagnostic rate of close to 100 percent (100% for hospital A, 99.71% for hospital B and 100% for hospital C). The technical variability introduced by slide preparation and image collection reduces AI model performance in cross-hospital tests, but the 100% diagnostic accuracy is maintained after its elimination. It is now clinically practical to utilize deep learning models for the diagnosis of DLBCL and, ultimately, other human hematopoietic malignancies.

Li Dongguang, Bledsoe Jacob R, Zeng Yu, Liu Wei, Hu Yiguo, Bi Ke, Liang Aibin, Li Shaoguang

2020-Nov-26

General

Selecting the most important self-assessed features for predicting conversion to mild cognitive impairment with random forest and permutation-based methods.

In Scientific reports ; h5-index 158.0

Alzheimer's disease is a complex, multifactorial, and comorbid condition. Its asymptomatic behavior in the early stages makes identifying the disease onset particularly challenging. Mild cognitive impairment (MCI) is an intermediary stage between the expected decline of normal aging and the pathological decline associated with dementia. The identification of risk factors for MCI is thus sorely needed. Self-reported personal information such as age, education, income level, sleep, diet, and physical exercise is expected to play a key role not only in the early identification of MCI but also in the design of personalized interventions and the promotion of patient empowerment. In this study, we leverage a large longitudinal study on healthy aging in Spain to identify the most important self-reported features for future conversion to MCI. Using machine learning (random forest) and permutation-based methods, we select the set of most important self-reported variables for MCI conversion, which includes, among others, subjective cognitive decline, educational level, working experience, social life, and diet. Subjective cognitive decline stands out as the most important feature for future conversion to MCI across the different feature selection techniques.

Gómez-Ramírez Jaime, Ávila-Villanueva Marina, Fernández-Blázquez Miguel Ángel

2020-Nov-26
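Permutation-based importance, as used above, scores a feature by how much a fitted model's accuracy drops when that feature's values are shuffled across samples, breaking its association with the outcome. A model-agnostic sketch (the toy model in the usage below is a hypothetical stand-in for a trained random forest):

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=10, seed=0):
    """Mean drop in accuracy when one feature column is shuffled:
    a model-agnostic estimate of that feature's importance."""
    rng = random.Random(seed)
    baseline = sum(model(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        acc = sum(model(row) == label for row, label in zip(X_perm, y)) / len(y)
        drops.append(baseline - acc)
    return sum(drops) / n_repeats
```

A toy model that only looks at feature 0 gets zero importance for feature 1, since shuffling an unused feature cannot change its predictions.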

General

Training confounder-free deep learning models for medical applications.

In Nature communications ; h5-index 260.0

The presence of confounding effects (or biases) is one of the most critical challenges in using deep learning to advance discovery in medical imaging studies. Confounders affect the relationship between input data (e.g., brain MRIs) and output variables (e.g., diagnosis). Improper modeling of those relationships often results in spurious and biased associations. Traditional machine learning and statistical models minimize the impact of confounders by, for example, matching data sets, stratifying data, or residualizing imaging measurements. Alternative strategies are needed for state-of-the-art deep learning models that use end-to-end training to automatically extract informative features from large sets of images. In this article, we introduce an end-to-end approach for deriving features invariant to confounding factors while accounting for intrinsic correlations between the confounder(s) and prediction outcome. The method does so by exploiting concepts from traditional statistical methods and recent fair machine learning schemes. We evaluate the method on predicting the diagnosis of HIV solely from magnetic resonance images (MRIs), identifying morphological sex differences in adolescence using data from the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), and determining bone age from X-ray images of children. The results show that our method can accurately predict outcomes while reducing biases associated with confounders. The code is available at https://github.com/qingyuzhao/br-net.

Zhao Qingyu, Adeli Ehsan, Pohl Kilian M

2020-Nov-26

General

Machine learning for suicide risk prediction in children and adolescents with electronic health records.

In Translational psychiatry ; h5-index 60.0

Accurate prediction of suicide risk among children and adolescents within an actionable time frame is an important but challenging task. Very few studies have comprehensively considered the clinical risk factors available to produce quantifiable risk scores for estimation of short- and long-term suicide risk for the pediatric population. In this paper, we built machine learning models for predicting suicidal behavior among children and adolescents based on their longitudinal clinical records and for determining short- and long-term risk factors. This retrospective study used deidentified structured electronic health records (EHR) from the Connecticut Children's Medical Center covering the period from 1 October 2011 to 30 September 2016. Clinical records of 41,721 young patients (10-18 years old) were included for analysis. Candidate predictors included demographics, diagnosis, laboratory tests, and medications. Different prediction windows ranging from 0 to 365 days were adopted. For each prediction window, candidate predictors were first screened by univariate statistical tests, and then a predictive model was built via a sequential forward feature selection procedure. We grouped the selected predictors and estimated their contributions to risk prediction at different prediction window lengths. The developed predictive models predicted suicidal behavior across all prediction windows with AUCs varying from 0.81 to 0.86. For all prediction windows, the models detected 53-62% of suicide-positive subjects with 90% specificity. The models performed better with shorter prediction windows, and predictor importance varied across prediction windows, illustrating short- and long-term risks. Our findings demonstrated that routinely collected EHRs can be used to create accurate predictive models for suicide risk among children and adolescents.
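The sequential forward selection procedure described above can be sketched as a greedy loop that adds, at each step, the candidate predictor giving the best cross-validated AUC; the toy data and the choice of logistic regression are illustrative, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 4))                    # 4 candidate predictors
# Only predictors 0 and 1 carry signal in this toy outcome.
y = (X[:, 0] + 0.6 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

selected, remaining = [], list(range(4))
for _ in range(2):                             # greedily add two predictors
    scores = {}
    for j in remaining:
        cols = selected + [j]
        scores[j] = cross_val_score(LogisticRegression(), X[:, cols], y,
                                    cv=5, scoring="roc_auc").mean()
    best = max(scores, key=scores.get)         # best CV AUC this round
    selected.append(best)
    remaining.remove(best)
```

The loop recovers the two informative predictors in order of strength, the same behavior the study exploits to rank short- versus long-term risk factors.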

Su Chang, Aseltine Robert, Doshi Riddhi, Chen Kun, Rogers Steven C, Wang Fei

2020-Nov-26

Radiology

Artificial Intelligence and Acute Stroke Imaging.

In AJNR. American journal of neuroradiology

Artificial intelligence technology is a rapidly expanding field with many applications in acute stroke imaging, including ischemic and hemorrhage subtypes. Early identification of acute stroke is critical for initiating prompt intervention to reduce morbidity and mortality. Artificial intelligence can help with various aspects of the stroke treatment paradigm, including infarct or hemorrhage detection, segmentation, classification, large vessel occlusion detection, Alberta Stroke Program Early CT Score grading, and prognostication. In particular, emerging artificial intelligence techniques such as convolutional neural networks show promise in performing these imaging-based tasks efficiently and accurately. The purpose of this review is twofold: first, to describe AI methods and available public and commercial platforms in stroke imaging, and second, to summarize the literature on current artificial intelligence-driven applications for acute stroke triage, surveillance, and prediction.

Soun J E, Chow D S, Nagamine M, Takhtawala R S, Filippi C G, Yu W, Chang P D

2020-Nov-26

Radiology

Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs.

In The European respiratory journal

We aimed to develop a deep-learning algorithm detecting 10 common abnormalities (DLAD-10) on chest radiographs and to evaluate its impact on diagnostic accuracy, timeliness of reporting, and workflow efficacy. DLAD-10 was trained with 146 717 radiographs from 108 053 patients using a ResNet34-based neural network with lesion-specific channels for 10 common radiologic abnormalities (pneumothorax, mediastinal widening, pneumoperitoneum, nodule/mass, consolidation, pleural effusion, linear atelectasis, fibrosis, calcification, and cardiomegaly). For external validation, the performance of DLAD-10 on a same-day CT-confirmed dataset (normal:abnormal, 53:147) and an open-source dataset (PadChest; normal:abnormal, 339:334) was compared to that of three radiologists. Separate simulated reading tests were conducted on another dataset adjusted to real-world disease prevalence in the emergency department, consisting of four critical, 52 urgent, and 146 non-urgent cases. Six radiologists participated in the simulated reading sessions with and without DLAD-10. DLAD-10 exhibited areas under the receiver-operating characteristic curves (AUROCs) of 0.895-1.00 in the CT-confirmed dataset and 0.913-0.997 in the PadChest dataset. DLAD-10 correctly classified significantly more critical abnormalities (95.0% [57/60]) than pooled radiologists (84.4% [152/180]; p=0.01). In simulated reading tests for emergency department patients, pooled readers detected significantly more critical (70.8% [17/24] versus 29.2% [7/24]; p=0.006) and urgent (82.7% [258/312] versus 78.2% [244/312]; p=0.04) abnormalities when aided by DLAD-10. DLAD-10 assistance shortened the mean time-to-report for critical and urgent radiographs (640.5±466.3 versus 3371.0±1352.5 s and 1840.3±1141.1 versus 2127.1±1468.2 s, respectively; p-values <0.01) and reduced the mean interpretation time (20.5±22.8 versus 23.5±23.7 s; p<0.001). DLAD-10 showed excellent performance, improving radiologists' performance and shortening the reporting time for critical and urgent cases.

Nam Ju Gang, Kim Minchul, Park Jongchan, Hwang Eui Jin, Lee Jong Hyuk, Hong Jung Hee, Goo Jin Mo, Park Chang Min

2020-Nov-26

Surgery

Noncoding RNAs in subchondral bone osteoclast function and their therapeutic potential for osteoarthritis.

In Arthritis research & therapy ; h5-index 60.0

Osteoclasts are the only cells that perform bone resorption. Noncoding RNAs (ncRNAs) are crucial epigenetic regulators of osteoclast biological behaviors ranging from osteoclast differentiation to bone resorption. The main ncRNAs, including miRNAs, circRNAs, and lncRNAs, compose an intricate network that influences gene transcription processes related to osteoclast biological activity. Accumulating evidence suggests that abnormal osteoclast activity leads to the disturbance of subchondral bone remodeling, thus initiating osteoarthritis (OA), a prevalent joint disease characterized mainly by cartilage degradation and subchondral bone remodeling imbalance. In this review, we delineate three types of ncRNAs and discuss their related complex molecular signaling pathways associated with osteoclast function during bone resorption. We specifically focus on the involvement of noncoding RNAs in subchondral bone remodeling, which participate in the degradation of the osteochondral unit during OA progression. We also discuss exosomes as ncRNA carriers during the bone remodeling process. A better understanding of the roles of ncRNAs in osteoclast biological behaviors will contribute to the treatment of bone resorption-related skeletal diseases such as OA.

Duan Li, Liang Yujie, Xu Xiao, Wang Jifeng, Li Xingfu, Sun Deshun, Deng Zhiqin, Li Wencui, Wang Daping

2020-Nov-25

Noncoding RNAs, Osteoarthritis, Osteoclasts, Subchondral bone remodeling

Radiology

Combined the SMAC mimetic and BCL2 inhibitor sensitizes neoadjuvant chemotherapy by targeting necrosome complexes in tyrosine aminoacyl-tRNA synthase-positive breast cancer.

In Breast cancer research : BCR

BACKGROUND : Chemotherapy is the standard treatment for breast cancer; however, the response to chemotherapy is disappointingly low. Here, we investigated the alternative therapeutic efficacy of novel combination treatment with necroptosis-inducing small molecules to overcome chemotherapeutic resistance in tyrosine aminoacyl-tRNA synthetase (YARS)-positive breast cancer.

METHODS : Pre-chemotherapeutic needle biopsies of 143 invasive ductal carcinomas undergoing the same chemotherapeutic regimen were subjected to proteomic analysis. Four different machine learning algorithms were employed to determine signature protein combinations. Immunoreactive markers were selected using three common candidate proteins from the machine-learning algorithms and verified by immunohistochemistry using 123 cases of independent needle biopsy FFPE samples. The regulation of chemotherapeutic response and necroptotic cell death was assessed using lentiviral YARS overexpression and depletion, 3D spheroid formation assay, viability assays, LDH release assay, flow cytometry analysis, and transmission electron microscopy. The ROS-induced metabolic dysregulation and phosphorylation of the necrosome complex by YARS were assessed using oxygen consumption rate analysis, flow cytometry analysis, and 3D cell viability assay. The therapeutic roles of SMAC mimetics (LCL161) and a pan-BCL2 inhibitor (ABT-263) were determined by 3D cell viability assay and flow cytometry analysis. Additional biologic process and protein-protein interaction pathway analyses were performed using Gene Ontology annotation and Cytoscape databases.

RESULTS : YARS was selected as a potential biomarker by proteomics-based machine-learning algorithms and was exclusively associated with good response to chemotherapy by subsequent immunohistochemical validation. In 3D spheroid models of breast cancer cell lines, YARS overexpression significantly improved chemotherapy response via phosphorylation of the necrosome complex. YARS-induced necroptosis sequentially mediated mitochondrial dysfunction through the overproduction of ROS in breast cancer cell lines. Combination treatment with necroptosis-inducing small molecules, including a SMAC mimetic (LCL161) and a pan-BCL2 inhibitor (ABT-263), showed therapeutic efficacy in YARS-overexpressing breast cancer cells.

CONCLUSIONS : Our results indicate that, before chemotherapy, an initial screening of YARS protein expression should be performed, and YARS-positive breast cancer patients might consider the combined treatment with LCL161 and ABT-263; this could be a novel stepwise clinical approach to apply new targeted therapy in breast cancer patients in the future.

Lee Kyung-Min, Lee Hyebin, Han Dohyun, Moon Woo Kyung, Kim Kwangsoo, Oh Hyeon Jeong, Choi Jinwoo, Hwang Eun Hye, Kang Seong Eun, Im Seock-Ah, Lee Kyung-Hun, Ryu Han Suk

2020-Nov-25

BCL2 inhibitor, Breast cancer, Necroptosis, SMAC mimetic, Tyrosine aminoacyl-tRNA synthetase (YARS)

Radiology

Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection.

In Biomedical engineering online

BACKGROUND : The large volume and suboptimal image quality of portable chest X-rays (CXRs) as a result of the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Deep-learning artificial intelligence (AI) methods have the potential to help improve diagnostic efficiency and accuracy for reading portable CXRs.

PURPOSE : The study aimed at developing an AI imaging analysis tool to classify COVID-19 lung infection based on portable CXRs.

MATERIALS AND METHODS : Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted. Five supervised machine-learning AI algorithms were used to classify COVID-19 from other conditions. Two-class and multi-class classification were performed. Statistical analysis was done using unpaired two-tailed t tests with unequal variance between groups. Performance of the classification models was evaluated using receiver-operating characteristic (ROC) curve analysis.

RESULTS : For the two-class classification, the accuracy, sensitivity and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35% and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44% and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For the multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively.

CONCLUSION : AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in patients in multi-class datasets. Deep-learning methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs.
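A generic version of the two-class evaluation reported above (accuracy, sensitivity, specificity, and ROC analysis on extracted features) might look like the following sketch; the Gaussian "texture features" are synthetic and the classifier choice is arbitrary, not the study's five algorithms.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(4)
# Synthetic 5-dimensional "texture feature" vectors for two classes.
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),   # e.g. "normal"
               rng.normal(2.0, 1.0, (100, 5))])  # e.g. "COVID-19"
y = np.r_[np.zeros(100), np.ones(100)]

clf = SVC(probability=True, random_state=0).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

For brevity this scores the training data; a real evaluation, as in the study, would report these metrics on a held-out test set.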

Hussain Lal, Nguyen Tony, Li Haifang, Abbasi Adeel A, Lone Kashif J, Zhao Zirun, Zaib Mahnoor, Chen Anne, Duong Tim Q

2020-Nov-25

COVID-19, Classification, Feature extraction, Machine learning, Morphological, Texture

Radiology

The International Radiomics Platform - An Initiative of the German and Austrian Radiological Societies.

In RoFo : Fortschritte auf dem Gebiete der Rontgenstrahlen und der Nuklearmedizin

PURPOSE :  The DRG-ÖRG IRP (Deutsche Röntgengesellschaft-Österreichische Röntgengesellschaft international radiomics platform) represents a web-/cloud-based radiomics platform based on a public-private partnership. It offers the possibility of data sharing, annotation, validation and certification in the field of artificial intelligence, radiomics analysis, and integrated diagnostics. In a first proof-of-concept study, automated myocardial segmentation and automated myocardial late gadolinium enhancement (LGE) detection using radiomic image features will be evaluated for myocarditis data sets.

MATERIALS AND METHODS :  The DRG-ÖRG IRP can be used to create quality-assured, structured image data in combination with clinical data and subsequent integrated data analysis and is characterized by the following performance criteria: possibility of using multicentric networked data, automatically calculated quality parameters, processing of annotation tasks, contour recognition using conventional and artificial intelligence methods, and the possibility of targeted integration of algorithms. In a first study, a neural network pre-trained using cardiac CINE data sets was evaluated for segmentation of PSIR data sets. In a second step, radiomic features were applied for segmental detection of LGE of the same data sets, which were provided by multiple centers via the IRP.

RESULTS :  First results show the advantages (data transparency, reliability, broad involvement of all members, continuous evolution as well as validation and certification) of this platform-based approach. In the proof-of-concept study, the neural network demonstrated a Dice coefficient of 0.813 compared to the expert's segmentation of the myocardium. In the segment-based myocardial LGE detection, the AUC was 0.73, and 0.79 after exclusion of segments with uncertain annotation. The evaluation and provision of the data takes place at the IRP, taking into account the FAT (fairness, accountability, transparency) and FAIR (findable, accessible, interoperable, reusable) criteria.
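The Dice coefficient quoted above is a standard overlap measure between two binary segmentation masks, 2|A∩B|/(|A|+|B|); a minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.array([[1, 1], [0, 0]])   # toy predicted myocardium mask
truth = np.array([[1, 0], [0, 0]])  # toy expert mask
```

Here `dice(pred, truth)` is 2/3: one overlapping pixel, three foreground pixels in total. A value of 0.813, as reported, indicates substantial but imperfect agreement with the expert contour.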

CONCLUSION :  It could be shown that the DRG-ÖRP IRP can be used as a crystallization point for the generation of further individual and joint projects. The execution of quantitative analyses with artificial intelligence methods is greatly facilitated by the platform approach of the DRG-ÖRP IRP, since pre-trained neural networks can be integrated and scientific groups can be networked. In a first proof-of-concept study on automated segmentation of the myocardium and automated myocardial LGE detection, these advantages were successfully applied. Our study shows that with the DRG-ÖRP IRP, strategic goals can be implemented in an interdisciplinary way, that concrete proof-of-concept examples can be demonstrated, and that a large number of individual and joint projects can be realized in a participatory way involving all groups.

KEY POINTS :  · The DRG-ÖRG IRP is a web/cloud-based radiomics platform based on a public-private partnership. · The DRG-ÖRG IRP can be used for the creation of quality-assured, structured image data in combination with clinical data and subsequent integrated data analysis. · First results show the applicability of left ventricular myocardial segmentation using a neural network and segment-based LGE detection using radiomic image features. · The DRG-ÖRG IRP offers the possibility of integrating pre-trained neural networks and networking of scientific groups.

CITATION FORMAT : · Overhoff D, Kohlmann P, Frydrychowicz A et al. The International Radiomics Platform - An Initiative of the German and Austrian Radiological Societies. Fortschr Röntgenstr 2020; DOI: 10.1055/a-1244-2775.

Overhoff Daniel, Kohlmann Peter, Frydrychowicz Alex, Gatidis Sergios, Loewe Christian, Moltz Jan, Kuhnigk Jan-Martin, Gutberlet Matthias, Winter H, Völker Martin, Hahn Horst, Schoenberg Stefan O

2020-Nov-26

General

Ontology-based Precision Vaccinology for Deep Mechanism Understanding and Precision Vaccine Development.

In Current pharmaceutical design ; h5-index 57.0

Vaccination is one of the most important innovations in human history. It has also become a hot research area in a new application: the development of vaccines against non-infectious diseases such as cancers. However, effective and safe vaccines still do not exist for many diseases, and where vaccines do exist, their protective immune mechanisms are often unclear. Although licensed vaccines are generally safe, adverse events, sometimes severe, still occur in a small portion of the population. Precision medicine tailors medical intervention to the personal characteristics of individual patients or to sub-populations of individuals with similar immunity-related characteristics. Precision vaccinology is a new strategy that applies precision medicine to the development, administration, and post-administration analysis of vaccines. Several conditions make this the right time to embark on the development of precision vaccinology. First, the increased level of research in vaccinology has generated voluminous "big data" repositories of vaccinology data. Second, new technologies such as multi-omics and immunoinformatics bring new methods for investigating vaccines and immunology. Finally, the advent of AI and machine learning software now makes possible the marriage of big data to the development of new vaccines in ways not possible before. However, something is missing in this marriage: a common language that facilitates the correlation, analysis, and reporting nomenclature for the field of vaccinology. Solving this bioinformatics problem is the domain of applied biomedical ontology. An ontology, in the informatics field, is a human- and machine-interpretable representation of entities and the relations among entities in a specific domain. The Vaccine Ontology (VO) and the Ontology of Vaccine Adverse Events (OVAE) have been developed to support the standard representation of vaccines, vaccine components, vaccinations, host responses, and vaccine adverse events. Many other biomedical ontologies have also been developed and can be applied in vaccine research. Here, we review the current status of precision vaccinology, discuss how ontological development will enhance this field, and propose an ontology-based precision vaccinology strategy to support precision vaccine research and development.

Xie Jiangan, Zi Wenrui, Li Zhangyong, He Yongqun

2020-Nov-24

Adverse event, Ontology, Ontology of adverse events, Precision vaccine, Precision vaccinology, Vaccine, Vaccine ontology

General

Impact of referencing scheme on decoding performance of LFP-based brain-machine interface.

In Journal of neural engineering ; h5-index 52.0

OBJECTIVE : There has recently been an increasing interest in local field potential (LFP) for brain-machine interface (BMI) applications due to its desirable properties (signal stability and low bandwidth). LFP is typically recorded with respect to a single unipolar reference which is susceptible to common noise. Several referencing schemes have been proposed to eliminate the common noise, such as bipolar reference, current source density (CSD), and common average reference (CAR). However, to date, there have not been any studies to investigate the impact of these referencing schemes on decoding performance of LFP-based BMIs.

APPROACH : To address this issue, we comprehensively examined the impact of different referencing schemes and LFP features on the performance of hand kinematic decoding using a deep learning method. We used LFPs chronically recorded from the motor cortex area of a monkey while performing reaching tasks.

MAIN RESULTS : Experimental results revealed that local motor potential (LMP) emerged as the most informative feature regardless of the referencing scheme. Using LMP as the feature, CAR was found to yield consistently better decoding performance than the other referencing schemes over long-term recording sessions.

SIGNIFICANCE : Overall, our results suggest the potential use of LMP coupled with CAR for enhancing the decoding performance of LFP-based BMIs.
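Of the referencing schemes compared in this study, common average referencing is the simplest to state: subtract the across-channel mean from every channel at each time point, so that noise common to all electrodes cancels. A sketch with synthetic multichannel data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_samples = 8, 1000
common_noise = rng.normal(size=n_samples)          # noise shared by all channels
signals = rng.normal(size=(n_channels, n_samples))
lfp = signals + common_noise                       # unipolar recordings

# CAR: remove the instantaneous mean across channels.
car = lfp - lfp.mean(axis=0, keepdims=True)
```

Because the shared component is identical on every channel, it is absorbed into the channel mean and removed, at the cost of also subtracting the average of the channel-specific signals.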

Ahmadi Nur, Constandinou Timothy, Bouganis Christos-Savvas

2020-Nov-26

brain-machine interface, common average reference, deep learning, local field potential, local motor potential, neural decoding, referencing scheme

General

Accuracy of machine learning-based prediction of medication adherence in clinical research.

In Psychiatry research ; h5-index 64.0

Medication non-adherence represents a significant barrier to treatment efficacy. Remote, real-time measurement of medication dosing can facilitate dynamic prediction of risk for medication non-adherence, which in turn allows for proactive clinical intervention to optimize health outcomes. We examine the accuracy of dynamic prediction of non-adherence using data from remote real-time measurements of medication dosing. Participants across a large set of clinical trials (n = 4,182) were observed via a smartphone application that video records patients taking their prescribed medication. The patients' primary diagnosis, demographics, and prior indication of observed adherence/non-adherence were utilized to predict (1) adherence rates ≥ 80% across the clinical trial, (2) adherence ≥ 80% for the subsequent week, and (3) adherence the subsequent day using machine learning-based classification models. Empirically observed adherence was demonstrated to be the strongest predictor of future adherence/non-adherence. Collectively, the classification models accurately predicted adherence across the trial (AUC = 0.83), the subsequent week (AUC = 0.87) and the subsequent day (AUC = 0.87). Real-time measurement of dosing can be utilized to dynamically predict medication adherence with high accuracy.

Koesmahargyo Vidya, Abbas Anzar, Zhang Li, Guan Lei, Feng Shaolei, Yadav Vijay, Galatzer-Levy Isaac R

2020-Nov-04

Machine learning, Medication adherence, Predictive model, Psychiatric disorders

Public Health

Whether the weather will help us weather the COVID-19 pandemic: Using machine learning to measure Twitter users' perceptions.

In International journal of medical informatics ; h5-index 49.0

OBJECTIVE : The potential ability for weather to affect SARS-CoV-2 transmission has been an area of controversial discussion during the COVID-19 pandemic. Individuals' perceptions of the impact of weather can inform their adherence to public health guidelines; however, there is no measure of their perceptions. We quantified Twitter users' perceptions of the effect of weather and analyzed how they evolved with respect to real-world events and time.

MATERIALS AND METHODS : We collected 166,005 English tweets posted between January 23 and June 22, 2020 and employed machine learning/natural language processing techniques to filter for relevant tweets, classify them by the type of effect they claimed, and identify topics of discussion.

RESULTS : We identified 28,555 relevant tweets and estimate that 40.4 % indicate uncertainty about weather's impact, 33.5 % indicate no effect, and 26.1 % indicate some effect. We tracked changes in these proportions over time. Topic modeling revealed major latent areas of discussion.

DISCUSSION : There is no consensus among the public for weather's potential impact. Earlier months were characterized by tweets that were uncertain of weather's effect or claimed no effect; later, the portion of tweets claiming some effect of weather increased. Tweets claiming no effect of weather comprised the largest class by June. Major topics of discussion included comparisons to influenza's seasonality, President Trump's comments on weather's effect, and social distancing.

CONCLUSION : We exhibit a research approach that is effective in measuring population perceptions and identifying misconceptions, which can inform public health communications.
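The topic-modeling step in the pipeline above can be sketched with a bag-of-words latent Dirichlet allocation; the four-tweet corpus here is invented and far smaller than the study's 28,555 relevant tweets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["hot weather slows virus spread",
        "cold winter weather and virus spread",
        "social distancing stay home",
        "stay home wear masks social distancing"]
counts = CountVectorizer().fit_transform(docs)   # term-count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts)               # per-tweet topic mixture
```

Each row of `doc_topics` is a probability distribution over the latent topics; inspecting the highest-weight words per topic is how themes like seasonality comparisons or social distancing surface from the corpus.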

Gupta Marichi, Bansal Aditya, Jain Bhav, Rochelle Jillian, Oak Atharv, Jalali Mohammad S

2020-Nov-10

Individuals’ perceptions, Machine learning, Opinion mining, SARS-CoV-2 transmission, Topic modeling

Cardiology

Artificial Intelligence in Cardiology.

In Trends in cardiovascular medicine

This review examines the current state and application of artificial intelligence (AI) and machine learning (ML) in cardiovascular medicine. AI is changing the clinical practice of medicine in other specialties. With progress continuing in this emerging technology, its impact on cardiovascular medicine is highlighted to provide insight for the practicing clinician and to identify potential patient benefits.

Itchhaporia Dipti

2020-Nov-23

Artificial Intelligence, Cardiology, Cardiovascular Medicine, Clinical Decision Making, Machine Learning

General

Rodent and fly models in behavioral neuroscience: an evaluation of methodological advances, comparative research, and future perspectives.

In Neuroscience and biobehavioral reviews

The assessment of behavioral outcomes is a central component of neuroscientific research, which has required continuous technological innovations to produce more detailed and reliable findings. In this article, we provide an in-depth review on the progress and future implications for three model organisms (mouse, rat, and Drosophila) essential to our current understanding of behavior. By compiling a comprehensive catalog of popular assays, we are able to compare the diversity of tasks and usage of these animal models in behavioral research. This compilation also allows for the evaluation of existing state-of-the-art methods and experimental applications, including optogenetics, machine learning, and high-throughput behavioral assays. We go on to discuss novel apparatuses and inter-species analyses for centrophobism, feeding behavior, aggression and mating paradigms, with the goal of providing a unique view on comparative behavioral research. The challenges and recent advances are evaluated in terms of their translational value, ethical procedures, and trustworthiness for behavioral research.

Moulin Thiago C, Covill Laura E, Itskov Pavel M, Williams Michael J, Schiöth Helgi B

2020-Nov-23

3Rs, Behavioral tests, aggression, animal ethics, animal models, anxiety, artificial intelligence, centrophobism, closed-loop feedback optogenetics, feeding, mating, reproducibility, translational research

General

Biomarkers of drug-induced liver injury: a mechanistic perspective through acetaminophen hepatotoxicity.

In Expert review of gastroenterology & hepatology

Introduction: Liver injury induced by drugs is a serious clinical problem. Many circulating biomarkers for identifying and predicting drug-induced liver injury (DILI) have been proposed. Areas covered: Biomarkers are mainly predicated on the mechanistic understanding of the underlying DILI, often in the context of acetaminophen overdose. New panels of biomarkers have emerged that are related to recovery/regeneration rather than injury following DILI. We explore the clinical relevance and limitations of these new biomarkers, including recent controversies. Extracellular vesicles (EVs) have also emerged as a promising vector of biomarkers, although the biological role of EVs may limit their clinical usefulness. New technological approaches for biomarker discovery are also explored. Expert opinion: Recent clinical studies have validated the efficacy of some of these new biomarkers, such as cytokeratin-18, macrophage colony stimulating factor receptor, and osteopontin, for DILI prognosis. The low prevalence of DILI is an inherent limitation to DILI biomarker development. Furthering the mechanistic understanding of DILI and leveraging technological advances (e.g. machine learning and omics approaches) is necessary to improve upon the newest generation of biomarkers. The integration of omics approaches with machine learning has led to novel insights in cancer research, and DILI research is poised to leverage these technologies for biomarker discovery and development.

Umbaugh David S, Jaeschke Hartmut

2020-Nov-26

Biomarker, acetaminophen, cytokeratin-18, drug-induced liver injury, extracellular vesicles, machine learning, macrophage colony stimulating factor receptor, osteopontin

General

iPhosS(Deep)-PseAAC: Identify Phosphoserine Sites in Proteins using Deep Learning on General Pseudo Amino Acid Compositions via Modified 5-Steps Rule.

In IEEE/ACM transactions on computational biology and bioinformatics

Among all post-translational modifications (PTMs), protein phosphorylation is pivotal for various pathological and physiological processes. About 30% of eukaryotic proteins undergo phosphorylation, leading to changes in conformation, function, stability, localization, and so forth. In eukaryotic proteins, phosphorylation occurs on serine (S), threonine (T), and tyrosine (Y) residues. Among these, serine phosphorylation has particular importance, as it is associated with various important biological processes, including energy metabolism, signal transduction pathways, cell cycling, and apoptosis. Its identification is therefore important; however, in vitro, ex vivo, and in vivo identification can be laborious, time-consuming, and costly. There is a dire need for an efficient and accurate computational model to help researchers and biologists identify these sites easily. Herein, we propose a novel predictor for identification of phosphoserine sites (PhosS) in proteins, integrating Chou's Pseudo Amino Acid Composition (PseAAC) with deep features. We used well-known deep neural networks (DNNs) both for learning a feature representation of peptide sequences and for performing classification. Among the different DNNs, the best score was achieved by a convolutional neural network (CNN)-based model, rendering it the best model for phosphoserine prediction.
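The plain amino-acid-composition component of PseAAC (the full descriptor adds sequence-order correlation terms on top) reduces to residue frequencies; a toy sketch:

```python
# Order of the 20 standard residues used for the composition vector.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq):
    """Fraction of each of the 20 standard residues in a peptide."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

vec = aac("AAST")  # 20-dimensional composition vector
```

For the peptide "AAST", half the residues are alanine, so the first component is 0.5 and the whole vector sums to 1; such fixed-length vectors are what downstream networks consume regardless of peptide length.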

Naseer Sheraz, Hussain Waqar, Khan Yaser Daanial, Rasool Nouman

2020-Nov-26

General

Ratio-and-Scale-Aware YOLO for Pedestrian Detection.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

Current deep learning methods seldom consider the effects of small pedestrian ratios and considerable differences in the aspect ratio of input images, which results in low pedestrian detection performance. This study proposes the ratio-and-scale-aware YOLO (RSA-YOLO) method to solve the aforementioned problems. The following procedure is adopted in this method. First, ratio-aware mechanisms are introduced to dynamically adjust the input layer length and width hyperparameters of YOLOv3, thereby solving the problem of considerable differences in the aspect ratio. Second, intelligent splits are used to automatically and appropriately divide the original images into two local images. Ratio-aware YOLO (RA-YOLO) is iteratively performed on the two local images. Because the original and local images produce low- and high-resolution pedestrian detection information after RA-YOLO, respectively, this study proposes new scale-aware mechanisms in which multiresolution fusion is used to solve the problem of misdetection of remarkably small pedestrians in images. The experimental results indicate that the proposed method produces favorable results for images with extremely small objects and those with considerable differences in the aspect ratio. Compared with the original YOLOs (i.e., YOLOv2 and YOLOv3) and several state-of-the-art approaches, the proposed method demonstrated a superior performance for the VOC 2012 comp4, INRIA, and ETH databases in terms of the average precision, intersection over union, and lowest log-average miss rate.
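One way to realize the "ratio-aware" input adjustment described above is to scale the network's input height and width with the image's aspect ratio while keeping both multiples of the network stride (32 for YOLOv3). The function below is an illustrative guess at such a rule, not the paper's exact mechanism.

```python
def ratio_aware_dims(img_w, img_h, target=416, stride=32):
    """Pick input dims that track the image aspect ratio, snapped to the stride."""
    scale = target / max(img_w, img_h)  # fit the longer side to the target
    w = max(stride, round(img_w * scale / stride) * stride)
    h = max(stride, round(img_h * scale / stride) * stride)
    return w, h
```

For a 1920x1080 frame this yields a 416x224 input, far closer to the source aspect ratio than a fixed 416x416 square, which is the distortion the ratio-aware mechanism is meant to avoid.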

Hsu Wei-Yen, Lin Wen-Yen

2020-Nov-26

Radiology

A study of using a deep learning image reconstruction to improve the image quality of extremely low dose contrast-enhanced abdominal CT for patients with hepatic lesions.

In The British journal of radiology

OBJECTIVE : To investigate the feasibility of using deep learning image reconstruction (DLIR) to significantly reduce radiation dose and improve image quality in contrast-enhanced abdominal CT.

METHODS : This was a prospective study. 40 patients with hepatic lesions underwent abdominal CT using a routine dose (120 kV, noise index (NI) setting of 11 with automatic tube current modulation) in the arterial-phase (AP) and portal-phase (PP), and a low dose (NI = 24) in the delayed-phase (DP). All images were reconstructed at 1.25 mm thickness using ASIR-V at 50% strength. In addition, DP images were reconstructed using DLIR at the high setting (DLIR-H). The CT value and standard deviation (SD) of hepatic parenchyma, spleen, paraspinal muscle, and lesion were measured. Overall image quality, including subjective noise, sharpness, artifacts, and diagnostic confidence, was assessed blindly by two radiologists using a 5-point scale (1, unacceptable; 5, excellent). Dose was compared between AP and DP, and image quality was compared among the different reconstructions using SPSS 20.0.

RESULTS : Compared to AP, DP significantly reduced radiation dose by 76% (0.76 ± 0.09 mSv vs 3.18 ± 0.48 mSv). DLIR-H DP images had lower image noise (14.08 ± 2.89 HU vs 16.67 ± 3.74 HU, p < 0.001) but a similar overall image quality score to the ASIR-V50% AP images (3.88 ± 0.34 vs 4.05 ± 0.44, p > 0.05). For the DP images, DLIR-H significantly reduced image noise in hepatic parenchyma, spleen, muscle, and lesion to (14.77 ± 2.61 HU, 14.26 ± 2.67 HU, 14.08 ± 2.89 HU, and 16.25 ± 4.42 HU) from (24.95 ± 4.32 HU, 25.42 ± 4.99 HU, 23.99 ± 5.26 HU, and 27.01 ± 7.11 HU) with ASIR-V50%, respectively (all p < 0.001) and improved the image quality score (3.88 ± 0.34 vs 2.87 ± 0.53; p < 0.05).

CONCLUSION : DLIR-H significantly reduces image noise and generates images with clinically acceptable quality and diagnostic confidence with 76% dose reduction.

ADVANCES IN KNOWLEDGE : (1) DLIR-H yielded a significantly lower image noise, higher CNR and higher overall image quality score and diagnostic confidence than the ASIR-V50% under low signal conditions. (2) Our study demonstrated that at 76% lower radiation dose, the DLIR-H DP images had similar overall image quality to the routine-dose ASIR-V50% AP images.

Cao Le, Liu Xiang, Li Jianying, Qu Tingting, Chen Lihong, Cheng Yannan, Hu Jieliang, Sun Jingtao, Guo Jianxin

2020-Nov-26

Radiology

Density-based clustering of static and dynamic functional MRI connectivity features obtained from subjects with cognitive impairment.

In Brain informatics

Various machine-learning classification techniques have been employed previously to classify brain states in healthy and disease populations using functional magnetic resonance imaging (fMRI). These methods generally use supervised classifiers that are sensitive to outliers and require labeling of training data to generate a predictive model. Density-based clustering, which overcomes these issues, is a popular unsupervised learning approach whose utility for high-dimensional neuroimaging data has not been previously evaluated. Its advantages include insensitivity to outliers and the ability to work with unlabeled data. Unlike the popular k-means clustering, the number of clusters need not be specified. In this study, we compare the performance of two popular density-based clustering methods, DBSCAN and OPTICS, in accurately identifying individuals with three stages of cognitive impairment, including Alzheimer's disease. We used static and dynamic functional connectivity features for clustering, which capture the strength and temporal variation of brain connectivity, respectively. To assess the robustness of clustering to noise/outliers, we propose a novel method called recursive-clustering using additive-noise (R-CLAN). Results demonstrated that both clustering algorithms were effective, although OPTICS with dynamic connectivity features performed best in terms of cluster purity (95.46%) and robustness to noise/outliers. This study demonstrates that density-based clustering can accurately and robustly identify diagnostic classes in an unsupervised way using brain connectivity.
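A minimal sketch of the two algorithms compared above, run on synthetic stand-ins for connectivity features; the parameter values here are illustrative, not those used in the study:

```python
# Density-based clustering with scikit-learn's DBSCAN and OPTICS.
# Neither needs the number of clusters upfront; noise points get label -1.
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.datasets import make_blobs

# Synthetic 2-D stand-in for (much higher-dimensional) connectivity features.
X, _ = make_blobs(n_samples=120, centers=[[0, 0], [5, 5], [0, 5]],
                  cluster_std=0.5, random_state=0)

db = DBSCAN(eps=0.8, min_samples=5).fit(X)  # neighborhood radius + density
op = OPTICS(min_samples=5).fit(X)           # orders points by reachability

print("DBSCAN clusters:", len(set(db.labels_) - {-1}))
print("OPTICS clusters:", len(set(op.labels_) - {-1}))
```

On real fMRI features, `eps` and `min_samples` would have to be tuned to the data's density, which is one reason OPTICS (which sweeps a range of densities) can be more convenient than DBSCAN.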

Rangaprakash D, Odemuyiwa Toluwanimi, Narayana Dutt D, Deshpande Gopikrishna

2020-Nov-26

Brain networks and dynamic connectivity, Cognitive impairment and alzheimer’s disease, DBSCAN, Functional MRI, OPTICS, Unsupervised learning and clustering

General

Generative Adversarial Networks in Medical Image Processing.

In Current pharmaceutical design ; h5-index 57.0

BACKGROUND : The emergence of generative adversarial networks (GANs) has provided a new technology and framework for the application of medical images. Specifically, a GAN requires little to no labeled data and can generate high-quality data through competition between the generator and discriminator networks. Therefore, GANs are rapidly proving to be a state-of-the-art foundation, achieving enhanced performance in various medical applications.

METHODS : In this article, we introduce the principles of GANs and their various variants, deep convolutional GAN, conditional GAN, Wasserstein GAN, Info-GAN, boundary equilibrium GAN, and cycle-GAN.

RESULTS : All of these GAN variants have found success in medical imaging tasks, including medical image enhancement, segmentation, classification, reconstruction, and synthesis. Furthermore, we summarize the data processing methods and evaluation indicators. Finally, we note the limitations of existing methods and the existing challenges that need to be addressed in this field.

CONCLUSION : Although GANs are at an early stage of development in medical image processing, they hold great promise for the future.

Gong Meiqin, Chen Siyu, Chen Qingyuan, Zeng Yuanqi, Zhang Yongqing

2020-Nov-24

Deep learning, Generative adversarial networks, Medical image processing

General

Automated Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based Classification and Retrieval Framework With a Large Endoscopic Database: Model Development and Validation.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning-based computer-aided diagnosis tools have been used to make a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of gastrointestinal diseases, such as polyps, tumors, or cancers, in a specific part of the human gastrointestinal tract.

OBJECTIVE : This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases.

METHODS : Our proposed framework comprises a deep learning-based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current medical condition. Then, the retrieval part of the framework shows the relevant cases (endoscopic images) from the previous database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment.

RESULTS : All the experiments were performed using 2 endoscopic data sets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by our proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively. The overall performance of our proposed diagnostic framework substantially outperformed state-of-the-art methods.

CONCLUSIONS : This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. The results show the superiority of our proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. Our proposed network can be applicable to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.

Owais Muhammad, Arsalan Muhammad, Mahmood Tahir, Kang Jin Kyu, Park Kang Ryoung

2020-Nov-26

artificial intelligence, computer-aided diagnosis, content-based medical image retrieval, deep learning, endoscopic video retrieval, polyp detection

Surgery

Automated Magnetic Resonance Image Segmentation of the Anterior Cruciate Ligament.

In Journal of orthopaedic research : official publication of the Orthopaedic Research Society

The objective of this work was to develop an automated segmentation method for the anterior cruciate ligament that is capable of facilitating quantitative assessments of the ligament in clinical and research settings. A modified U-Net fully convolutional network model was trained, validated, and tested on 246 Constructive Interference in Steady State magnetic resonance images of intact anterior cruciate ligaments. Overall model performance was assessed on the image set relative to an experienced (>5 years) "ground truth" segmenter in two domains: anatomical similarity and the accuracy of quantitative measurements (i.e., signal intensity and volume) obtained from the automated segmentation. To establish model reliability relative to manual segmentation, a subset of the imaging data was re-segmented by the ground truth segmenter and two additional segmenters (A: 6 months, B: 2 years of experience), with their performance evaluated relative to the ground truth. The final model scored well on anatomical performance metrics (Dice coefficient = .84, precision = .82, sensitivity = .85). The median signal intensities and volumes of the automated segmentations were not significantly different from ground truth (0.3% difference, p = .9; 2.3% difference, p = .08, respectively). When the model results were compared to the independent segmenters, the model predictions demonstrated a greater median Dice coefficient (A = .73, p = .001; B = .77, p = NS) and sensitivity (A = .68, p = .001; B = .72, p = .003). The model performed as well as re-test segmentation by the ground truth segmenter on all measures. The quantitative measures extracted from the automated segmentation model did not differ from those of manual segmentation, enabling their use in quantitative MRI pipelines to evaluate the anterior cruciate ligament.
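The anatomical metrics above (Dice coefficient, precision, sensitivity) follow directly from the overlap of binary masks; a toy NumPy sketch with synthetic masks, not the study's data:

```python
# Segmentation overlap metrics from binary masks.
import numpy as np

def seg_metrics(pred: np.ndarray, truth: np.ndarray):
    """Dice, precision, and sensitivity of a predicted binary mask."""
    tp = np.logical_and(pred, truth).sum()          # true-positive pixels
    dice = 2 * tp / (pred.sum() + truth.sum())      # overlap measure
    precision = tp / pred.sum()                     # of predicted, how many real
    sensitivity = tp / truth.sum()                  # of real, how many found
    return dice, precision, sensitivity

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # toy "ligament" mask
pred = np.zeros((8, 8), bool);  pred[3:7, 2:6] = True   # prediction, shifted 1 row
print(seg_metrics(pred, truth))
```

With 16 pixels in each mask and 12 overlapping, all three metrics come out to 0.75 here.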

Flannery Sean W, Kiapour Ata M, Edgar David J, Murray Martha M, Fleming Braden C

2020-Nov-26

anterior cruciate ligament, automated segmentation, deep learning, knee, magnetic resonance imaging

Oncology

Machine Learning-Based Risk Assessment for Cancer Therapy-Related Cardiac Dysfunction in 4300 Longitudinal Oncology Patients.

In Journal of the American Heart Association ; h5-index 70.0

Background The growing awareness of cardiovascular toxicity from cancer therapies has led to the emerging field of cardio-oncology, which centers on preventing, detecting, and treating patients with cardiac dysfunction before, during, or after cancer treatment. Early detection and prevention of cancer therapy-related cardiac dysfunction (CTRCD) play important roles in precision cardio-oncology. Methods and Results This retrospective study included 4309 cancer patients between 1997 and 2018 whose laboratory tests and cardiovascular echocardiographic variables were collected from the Cleveland Clinic institutional electronic medical record database (Epic Systems). Among these patients, 1560 (36%) were diagnosed with at least 1 type of CTRCD, and 838 (19%) developed CTRCD after cancer therapy (de novo). We posited that machine learning algorithms can be implemented to predict CTRCDs in cancer patients according to clinically relevant variables. Classification models were trained and evaluated for 6 types of cardiovascular outcomes, including coronary artery disease (area under the receiver operating characteristic curve [AUROC], 0.821; 95% CI, 0.815-0.826), atrial fibrillation (AUROC, 0.787; 95% CI, 0.782-0.792), heart failure (AUROC, 0.882; 95% CI, 0.878-0.887), stroke (AUROC, 0.660; 95% CI, 0.650-0.670), myocardial infarction (AUROC, 0.807; 95% CI, 0.799-0.816), and de novo CTRCD (AUROC, 0.802; 95% CI, 0.797-0.807). Model generalizability was further confirmed using time-split data. Model inspection revealed several clinically relevant variables significantly associated with CTRCDs, including age, hypertension, glucose levels, left ventricular ejection fraction, creatinine, and aspartate aminotransferase levels. Conclusions This study suggests that machine learning approaches offer powerful tools for cardiac risk stratification in oncology patients by utilizing large-scale, longitudinal patient data from healthcare systems.
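The AUROC-with-95%-CI metrics reported above can be reproduced in miniature with a bootstrap; the labels and scores below are synthetic stand-ins, not the study's EMR data:

```python
# AUROC point estimate with a bootstrap percentile 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)                    # synthetic outcomes
scores = y * 0.6 + rng.normal(0, 0.5, size=500)     # informative but noisy scores

auc = roc_auc_score(y, scores)
# Resample (label, score) pairs with replacement and recompute the AUROC.
boot = [roc_auc_score(y[idx], scores[idx])
        for idx in (rng.integers(0, len(y), len(y)) for _ in range(200))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUROC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The study does not state how its CIs were computed; the bootstrap here is one common choice for this kind of interval.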

Zhou Yadi, Hou Yuan, Hussain Muzna, Brown Sherry-Ann, Budd Thomas, Tang W H Wilson, Abraham Jame, Xu Bo, Shah Chirag, Moudgil Rohit, Popovic Zoran, Cho Leslie, Kanj Mohamed, Watson Chris, Griffin Brian, Chung Mina K, Kapadia Samir, Svensson Lars, Collier Patrick, Cheng Feixiong

2020-Nov-26

anthracycline therapy, cancer therapy–related cardiac dysfunction, cardiotoxicity, cardio‐oncology, echocardiography, machine learning

Radiology

Convolutional neural network for discriminating nasopharyngeal carcinoma and benign hyperplasia on MRI.

In European radiology ; h5-index 62.0

OBJECTIVES : A convolutional neural network (CNN) was adapted to automatically detect early-stage nasopharyngeal carcinoma (NPC) and discriminate it from benign hyperplasia on a non-contrast-enhanced MRI sequence for potential use in NPC screening programs.

METHODS : We retrospectively analyzed 412 patients who underwent T2-weighted MRI, 203 of whom had biopsy-proven primary NPC confined to the nasopharynx (stage T1) and 209 of whom had benign hyperplasia without NPC. Thirteen patients were sampled randomly to monitor the training process. We applied the Residual Attention Network architecture, adapted for three-dimensional MR images, and incorporated a slice-attention mechanism to produce a CNN score of 0-1 for NPC probability. Threefold cross-validation was performed in 399 patients. CNN scores between the NPC and benign hyperplasia groups were compared using Student's t test. Receiver operating characteristic analysis with the area under the curve (AUC) was performed to identify the optimal CNN score threshold.

RESULTS : In each fold, significant differences were observed in the CNN scores between the NPC and benign hyperplasia groups (p < .01). The AUCs ranged from 0.95 to 0.97 with no significant differences between the folds (p = .35 to .92). The combined AUC from all three folds (n = 399) was 0.96, with an optimal CNN score threshold of > 0.71, producing a sensitivity, specificity, and accuracy of 92.4%, 90.6%, and 91.5%, respectively, for NPC detection.
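Picking an operating point such as the > 0.71 CNN score threshold above is commonly done by maximizing Youden's J (sensitivity + specificity − 1) along the ROC curve; a sketch on synthetic scores, which does not reproduce the study's threshold:

```python
# Choosing a score threshold by maximizing Youden's J on the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y = np.r_[np.ones(200), np.zeros(200)]              # 1 = disease, 0 = benign
score = np.clip(np.r_[rng.normal(0.7, 0.15, 200),   # synthetic 0-1 scores
                      rng.normal(0.3, 0.15, 200)], 0, 1)

fpr, tpr, thr = roc_curve(y, score)
best = int(np.argmax(tpr - fpr))        # tpr - fpr = Youden's J at each cut
sens, spec = tpr[best], 1 - fpr[best]
print(round(float(thr[best]), 2), round(float(sens), 2), round(float(spec), 2))
```

Other criteria (e.g. fixing sensitivity for a screening setting) are equally valid; Youden's J simply weights sensitivity and specificity equally.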

CONCLUSION : Our CNN method applied to T2-weighted MRI could discriminate between malignant and benign tissues in the nasopharynx, suggesting that it is a promising approach for the automated detection of early-stage NPC.

KEY POINTS : • The convolutional neural network (CNN)-based algorithm could automatically discriminate between malignant and benign diseases using T2-weighted fat-suppressed MR images. • The CNN-based algorithm had an accuracy of 91.5% with an area under the receiver operator characteristic curve of 0.96 for discriminating early-stage T1 nasopharyngeal carcinoma from benign hyperplasia. • The CNN-based algorithm had a sensitivity of 92.4% and specificity of 90.6% for detecting early-stage nasopharyngeal carcinoma.

Wong Lun M, King Ann D, Ai Qi Yong H, Lam W K Jacky, Poon Darren M C, Ma Brigette B Y, Chan K C Allen, Mo Frankie K F

2020-Nov-25

Computational neural network, Deep learning, Early detection of cancer, Hyperplasia, Nasopharyngeal carcinoma

Surgery

Machine learning-based diagnostic method of pre-therapeutic 18F-FDG PET/CT for evaluating mediastinal lymph nodes in non-small cell lung cancer.

In European radiology ; h5-index 62.0

OBJECTIVES : We aimed to find the best machine learning (ML) model using 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) for evaluating metastatic mediastinal lymph nodes (MedLNs) in non-small cell lung cancer, and compare the diagnostic results with those of nuclear medicine physicians.

METHODS : A total of 1329 MedLNs were reviewed. Boosted decision tree, logistic regression, support vector machine, neural network, and decision forest models were compared. The diagnostic performance of the best ML model was compared with that of physicians. The ML method was divided into ML with quantitative variables only (MLq) and adding clinical information (MLc). We performed an analysis based on the 18F-FDG-avidity of the MedLNs.

RESULTS : The boosted decision tree model obtained higher sensitivity and negative predictive values but lower specificity and positive predictive values than the physicians. There was no significant difference between the accuracy of the physicians and MLq (79.8% vs. 76.8%, p = 0.067). The accuracy of MLc was significantly higher than that of the physicians (81.0% vs. 76.8%, p = 0.009). In MedLNs with low 18F-FDG-avidity, ML had significantly higher accuracy than the physicians (70.0% vs. 63.3%, p = 0.018).

CONCLUSION : Although there was no significant difference in accuracy between the MLq and physicians, the diagnostic performance of MLc was better than that of MLq or of the physicians. The ML method appeared to be useful for evaluating low metabolic MedLNs. Therefore, adding clinical information to the quantitative variables from 18F-FDG PET/CT can improve the diagnostic results of ML.

KEY POINTS : • Machine learning using a two-class boosted decision tree model yielded the highest area under the curve, showing higher sensitivity and negative predictive values but lower specificity and positive predictive values than nuclear medicine physicians. • After adding clinical information to the quantitative variables, the machine learning method was significantly more accurate than the nuclear medicine physicians. • Machine learning could improve the diagnosis of metastatic mediastinal lymph nodes, especially mediastinal lymph nodes with low 18F-FDG-avidity.

Yoo Jang, Cheon Miju, Park Yong Jin, Hyun Seung Hyup, Zo Jae Ill, Um Sang-Won, Won Hong-Hee, Lee Kyung-Han, Kim Byung-Tae, Choi Joon Young

2020-Nov-25

18F-FDG PET/CT, Lymph nodes, Machine learning, Non-small cell lung cancer

Cardiology

Deep learning algorithm to improve hypertrophic cardiomyopathy mutation prediction using cardiac cine images.

In European radiology ; h5-index 62.0

OBJECTIVES : The high variability of hypertrophic cardiomyopathy (HCM) genetic phenotypes has prompted the establishment of risk-stratification systems that predict the risk of a positive genetic mutation based on clinical and echocardiographic profiles. This study aims to improve mutation-risk prediction by extracting cardiovascular magnetic resonance (CMR) morphological features using a deep learning algorithm.

METHODS : We recruited 198 HCM patients (48% men, aged 47 ± 13 years) and divided them into training (147 cases) and test (51 cases) sets based on different genetic testing institutions and CMR scan dates (2012, 2013, respectively). All patients underwent CMR examinations, HCM genetic testing, and an assessment of established genotype scores (Mayo Clinic score I, Mayo Clinic score II, and Toronto score). A deep learning (DL) model was developed to classify the HCM genotypes, based on a nonenhanced four-chamber view of cine images.

RESULTS : The areas under the curve (AUCs) for the test set were Mayo Clinic score I (AUC: 0.64, sensitivity: 64.29%, specificity: 47.83%), Mayo Clinic score II (AUC: 0.70, sensitivity: 64.29%, specificity: 65.22%), Toronto score (AUC: 0.74, sensitivity: 75.00%, specificity: 56.52%), and DL model (AUC: 0.80, sensitivity: 85.71%, specificity: 69.57%). The combination of the DL model and the Toronto score resulted in significantly higher predictive performance (AUC = 0.84, sensitivity: 83.33%, specificity: 78.26%), compared with Mayo I (p = 0.006), Mayo II (p = 0.022), and the Toronto score (p = 0.029).

CONCLUSIONS : The combination of the DL model based on nonenhanced cine CMR images and the Toronto score yielded significantly higher diagnostic performance in detecting HCM mutations.

KEY POINTS : • Deep learning method could enable the extraction of image features from cine images. • Deep learning method based on cine images performed better than established scores in identifying HCM patients with positive genotypes. • The combination of the deep learning method based on cine images and the Toronto score could further improve the performance of the identification of HCM patients with positive genotypes.

Zhou Hongyu, Li Lu, Liu Zhenyu, Zhao Kankan, Chen Xiuyu, Lu Minjie, Yin Gang, Song Lei, Zhao Shihua, Zheng Hairong, Tian Jie

2020-Nov-25

Cardiomyopathy, hypertrophic, Deep learning, Genotype, Magnetic resonance imaging

General

Estimation of nitrogen and phosphorus concentrations from water quality surrogates using machine learning in the Tri An Reservoir, Vietnam.

In Environmental monitoring and assessment

Surface water eutrophication due to excessive nutrients has become a major environmental problem around the world in the past few decades. Among these nutrients, nitrogen and phosphorus are two of the most important harmful cyanobacterial bloom (HCB) drivers. A reliable prediction of these parameters, therefore, is necessary for the management of rivers, lakes, and reservoirs. The aim of this study is to test the suitability of the powerful machine learning (ML) algorithm, random forest (RF), to provide information on water quality parameters for the Tri An Reservoir (TAR). Three species of nitrogen and phosphorus, including nitrite (N-NO2-), nitrate (N-NO3-), and phosphate (P-PO43-), were empirically estimated using the field observation dataset (2009-2014) of six surrogates: total suspended solids (TSS), total dissolved solids (TDS), turbidity, electrical conductivity (EC), chemical oxygen demand (COD), and biochemical oxygen demand (BOD5). Field measurements showed that water quality in the TAR was eutrophic, with an upward trend in N-NO3- and P-PO43- during the study period. The RF regression model was reliable for N-NO2-, N-NO3-, and P-PO43- prediction, with a high R2 of 0.812-0.844 for the training phase (2009-2012) and 0.888-0.903 for the validation phase (2013-2014). The results of land use and land cover change (LUCC) analysis revealed that deforestation and shifting agriculture in the upper region of the basin were the major factors increasing nutrient loading in the TAR. Among the meteorological parameters, rainfall pattern was found to be one of the most influential factors in eutrophication, followed by average sunshine hours. Our results are expected to provide an advanced assessment tool for predicting nutrient loading and for giving an early warning of HCB in the TAR.
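The surrogate-to-nutrient regression above can be sketched with scikit-learn's random forest; the data below are synthetic stand-ins for the TAR measurements, and the feature layout is only illustrative:

```python
# Random forest regression of a nutrient species from water-quality surrogates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Six standardized surrogate columns standing in for TSS, TDS, turbidity,
# EC, COD, BOD5 (synthetic, not field data).
X = rng.normal(size=(n, 6))
# Synthetic target, e.g. a nitrate-like quantity driven by two surrogates.
y = 0.8 * X[:, 2] + 0.5 * X[:, 4] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("validation R^2:", round(rf.score(X_te, y_te), 3))
```

The study trained on 2009-2012 and validated on 2013-2014; a temporal split like that (rather than the random split above) is the appropriate choice when the goal is forward prediction.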

Ha Nam-Thang, Nguyen Hao Quang, Truong Nguyen Cung Que, Le Thi Luom, Thai Van Nam, Pham Thanh Luu

2020-Nov-26

Harmful cyanobacterial blooms, Random forest, Tri An eutrophic reservoir, Water quality

General

COSIFER: a Python package for the consensus inference of molecular interaction networks.

In Bioinformatics (Oxford, England)

SUMMARY : The advent of high-throughput technologies has provided researchers with measurements of thousands of molecular entities and enabled the investigation of the internal regulatory apparatus of the cell. However, network inference from high-throughput data is far from being a solved problem. While a plethora of different inference methods have been proposed, they often lead to non-overlapping predictions, and many of them lack user-friendly implementations to enable their broad utilization. Here, we present the Consensus Interaction Network Inference Service (COSIFER), a package and a companion web-based platform to infer molecular networks from expression data using state-of-the-art consensus approaches. COSIFER includes a selection of state-of-the-art methodologies for network inference and different consensus strategies to integrate the predictions of individual methods and generate robust networks.

AVAILABILITY AND IMPLEMENTATION : COSIFER Python source code is available at https://github.com/PhosphorylatedRabbits/cosifer. The web service is accessible at https://ibm.biz/cosifer-aas.

SUPPLEMENTARY INFORMATION : Supplementary data are available at Bioinformatics online.

Manica Matteo, Bunne Charlotte, Mathis Roland, Cadow Joris, Ahsen Mehmet Eren, Stolovitzky Gustavo A, Martínez María Rodríguez

2020-Nov-02

General

Self-supervised feature extraction from image time series in plant phenotyping using triplet networks.

In Bioinformatics (Oxford, England)

MOTIVATION : Image-based profiling combines high-throughput screening with multiparametric feature analysis to capture the effect of perturbations on biological systems. This technology has attracted increasing interest in the field of plant phenotyping, promising to accelerate the discovery of novel herbicides. However, the extraction of meaningful features from unlabeled plant images remains a big challenge.

RESULTS : We describe a novel data-driven approach to find feature representations from plant time-series images in a self-supervised manner by using time as a proxy for image similarity. In the spirit of transfer learning, we first apply an ImageNet-pretrained architecture as a base feature extractor. Then, we extend this architecture with a triplet network to refine and reduce the dimensionality of extracted features by ranking relative similarities between consecutive and non-consecutive time points. Without using any labels, we produce compact, organized representations of plant phenotypes and demonstrate their superior applicability to clustering, image retrieval and classification tasks. Besides time, our approach could be applied using other surrogate measures of phenotype similarity, thus providing a versatile method of general interest to the phenotypic profiling community.
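The ranking idea above can be made concrete with the standard triplet loss, where embeddings of consecutive time points act as anchor/positive and distant time points as negatives; a minimal NumPy sketch (margin and embeddings are illustrative, not the paper's network):

```python
# Standard triplet (margin ranking) loss on embedding vectors.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Zero when the negative is at least `margin` further (in squared
    distance) from the anchor than the positive; positive otherwise."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])   # anchor: embedding at time t
p = np.array([0.1, 0.0])   # positive: consecutive time point, nearby
n = np.array([2.0, 0.0])   # negative: distant time point, far away
print(triplet_loss(a, p, n))  # 0.0: negative already margin-further away
```

Training the triplet network minimizes this loss over many such (anchor, positive, negative) triples, which is what organizes the refined feature space by phenotype progression.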

AVAILABILITY AND IMPLEMENTATION : Source code is provided in https://github.com/bayer-science-for-a-better-life/plant-triplet-net.

SUPPLEMENTARY INFORMATION : Supplementary data are available at Bioinformatics online.

Marin Zapata Paula A, Roth Sina, Schmutzler Dirk, Wolf Thomas, Manesso Erica, Clevert Djork-Arné

2020-Oct-20

General

Trajectories, bifurcations, and pseudo-time in large clinical datasets: applications to myocardial infarction and diabetes data.

In GigaScience

BACKGROUND : Large observational clinical datasets are becoming increasingly available for mining associations between various disease traits and administered therapy. These datasets can be considered as representations of the landscape of all possible disease conditions, in which a concrete disease state develops through stereotypical routes, characterized by "points of no return" and "final states" (such as lethal or recovery states). Extracting this information directly from the data remains challenging, especially in the case of synchronic (with a short-term follow-up) observations.

RESULTS : Here we suggest a semi-supervised methodology for the analysis of large clinical datasets, characterized by mixed data types and missing values, through modeling the geometrical data structure as a bouquet of bifurcating clinical trajectories. The methodology is based on application of elastic principal graphs, which can address simultaneously the tasks of dimensionality reduction, data visualization, clustering, feature selection, and quantifying the geodesic distances (pseudo-time) in partially ordered sequences of observations. The methodology allows a patient to be positioned on a particular clinical trajectory (pathological scenario) and the degree of progression along it to be characterized with a qualitative estimate of the uncertainty of the prognosis. We developed a tool ClinTrajan for clinical trajectory analysis implemented in the Python programming language. We test the methodology in 2 large publicly available datasets: myocardial infarction complications and readmission of diabetic patients data.

CONCLUSIONS : Our pseudo-time quantification-based approach makes it possible to apply the methods developed for dynamical disease phenotyping and illness trajectory analysis (diachronic data analysis) to synchronic observational data.

Golovenkin Sergey E, Bac Jonathan, Chervov Alexander, Mirkes Evgeny M, Orlova Yuliya V, Barillot Emmanuel, Gorban Alexander N, Zinovyev Andrei

2020-Nov-25

clinical data, clinical trajectory, data analysis, diabetes, dimensionality reduction, dynamical diseases phenotyping, myocardial infarction, patient disease pathway, principal trees, pseudo-time

General

Application of Machine Learning Techniques in Drug-Target Interactions Prediction.

In Current pharmaceutical design ; h5-index 57.0

BACKGROUND : Drug-Target interactions are vital for drug design and drug repositioning. However, traditional lab experiments are both expensive and time-consuming. Various computational methods which applied machine learning techniques performed efficiently and effectively in the field.

RESULTS : The machine learning methods can be divided into three categories basically: Supervised methods, SemiSupervised methods and Unsupervised methods. We reviewed recent representative methods applying machine learning techniques of each category in DTIs and summarized a brief list of databases frequently used in drug discovery. In addition, we compared the advantages and limitations of these methods in each category.

CONCLUSION : Every prediction model has both strengths and weaknesses and should be adopted appropriately. Three major problems in DTIs prediction should be seriously considered: the lack of data sets of nonreactive drug-target pairs, overoptimistic results due to biases, and the exploitation of regression models for DTIs prediction.

Zhang Shengli, Wang Jiesheng, Lin Zhenhui, Liang Yunyun

2020-Nov-24

computational methods, drug discovery, drug-target interactions prediction, machine learning, semi-supervised learning, supervised learning, unsupervised learning

General

Dataset of sodium chloride sterile liquid in bottles for intravenous administration and fill level monitoring.

In Data in brief

We propose a dataset for investigating the relationship between the fill level of bottles and tiny machine learning algorithms. Tiny machine learning refers to any artificial intelligence algorithm (spanning from conventional decision tree classifiers to artificial neural networks) that can be deployed on a resource-constrained microcontroller unit (MCU). The data presented here were originally collected for a joint research project by STMicroelectronics and Sesovera.ai. This article describes the recorded image data of bottles with 4 levels of filling. The bottles contain sodium chloride sterile liquid for intravenous administration. One subject of investigation using this dataset could be the classification of the liquid fill level, for example, to ease continuous human visual monitoring, which may represent an onerous, time-consuming task. Automating the task can help increase human work productivity and save time. Under normal circumstances, human visual monitoring of the saline level in the bottle is required from time to time. If the saline liquid in the bottle is fully consumed and the bottle is not replaced or the infusion stopped immediately, the difference between the patient's blood pressure and the pressure in the empty saline bottle could cause an outward rush of blood into the saline line.

Pau Danilo, Kumar Bipin P, Namekar Prashant, Dhande Gauri, Simonetta Luca

2020-Dec

Fill level of bottles, Saline solution, Sodium chloride liquid, Visual monitoring

Surgery Surgery

Machine Learning Outperforms Regression Analysis to Predict Next-Season Major League Baseball Player Injuries: Epidemiology and Validation of 13,982 Player-Years From Performance and Injury Profile Trends, 2000-2017.

In Orthopaedic journal of sports medicine

Background : Machine learning (ML) allows for the development of a predictive algorithm capable of ingesting historical data on a Major League Baseball (MLB) player to accurately project the player's future availability.

Purpose : To determine the validity of an ML model in predicting the next-season injury risk and anatomic injury location for both position players and pitchers in the MLB.

Study Design : Descriptive epidemiology study.

Methods : Using 4 online baseball databases, we compiled MLB player data, including age, performance metrics, and injury history. A total of 84 ML algorithms were developed. The output of each algorithm reported whether the player would sustain an injury the following season as well as the injury's anatomic site. The area under the receiver operating characteristic curve (AUC) primarily determined validation.

Results : Player data were generated from 1931 position players and 1245 pitchers, with a mean follow-up of 4.40 years (13,982 player-years) between the years of 2000 and 2017. Injured players spent a total of 108,656 days on the disabled list, with a mean of 34.21 total days per player. The mean AUC for predicting next-season injuries was 0.76 among position players and 0.65 among pitchers using the top 3 ensemble classification. Back injuries had the highest AUC among both position players and pitchers, at 0.73. Advanced ML models outperformed logistic regression in 13 of 14 cases.

Conclusion : Advanced ML models generally outperformed logistic regression and demonstrated fair capability in predicting publicly reportable next-season injuries, including the anatomic region for position players, although not for pitchers.
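The study's headline comparison, advanced ML models versus logistic regression judged by AUC, can be illustrated with a minimal sketch on synthetic data. The features, sample sizes, and the specific ensemble model below are placeholders, not the paper's actual 84 algorithms or player data.

```python
# Illustrative sketch: comparing an ensemble model against logistic
# regression by AUC, as in the study's validation. All data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced classes mimic the injured/uninjured split in such cohorts.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

aucs = {}
for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=0)):
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    aucs[type(model).__name__] = roc_auc_score(y_te, proba)
print({name: round(auc, 3) for name, auc in aucs.items()})
```

AUC is computed from predicted probabilities rather than hard labels, which is why `predict_proba` (not `predict`) feeds `roc_auc_score`.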

Karnuta Jaret M, Luu Bryan C, Haeberle Heather S, Saluan Paul M, Frangiamore Salvatore J, Stearns Kim L, Farrow Lutul D, Nwachukwu Benedict U, Verma Nikhil N, Makhni Eric C, Schickendantz Mark S, Ramkumar Prem N

2020-Nov

injury prediction, injury prevention, machine learning

General General

DeepLN: an artificial intelligence-based automated system for lung cancer screening.

In Annals of translational medicine

Background : Lung cancer causes more deaths worldwide than any other cancer. For early-stage patients, low-dose computed tomography (LDCT) of the chest is considered to be an effective screening measure for reducing the risk of mortality. The accuracy and efficiency of cancer screening would be enhanced by an intelligent and automated system that meets or surpasses the diagnostic capabilities of human experts.

Methods : Based on the artificial intelligence (AI) technique of deep neural networks (DNN), we designed a framework for lung cancer screening. First, a semi-automated annotation strategy was used to label the images for training. Then, DNN-based models for the detection of lung nodules (LNs) and for benign/malignant classification were proposed to identify lung cancer from LDCT images. Finally, the constructed DNN-based LN detection and identification system, named DeepLN, was validated using a large-scale dataset.

Results : A dataset of multi-resolution LDCT images was constructed, annotated by a multidisciplinary group, and used to train and evaluate the proposed models. The sensitivity of LN detection was 96.5% in a thin-section subset [free-response receiver operating characteristic (FROC) score 0.716] and 89.6% in a thick-section subset (FROC score 0.699). With an accuracy of 92.46%±0.20%, a specificity of 95.93%±0.47%, and a precision of 90.46%±0.93%, the ensemble result for benign/malignant identification demonstrated very good performance. Three retrospective clinical comparisons of the DeepLN system with human experts showed a high detection accuracy of 99.02%.

Conclusions : In this study, we presented an AI-based system with the potential to improve the performance and work efficiency of radiologists in lung cancer screening. The effectiveness of the proposed system was verified through retrospective clinical evaluation. Thus, the future application of this system is expected to help patients and society.

Guo Jixiang, Wang Chengdi, Xu Xiuyuan, Shao Jun, Yang Lan, Gan Yuncui, Yi Zhang, Li Weimin

2020-Sep

Deep neural networks (DNNs), lung cancer screening, lung nodule (LN) detection, malignancy identification

Radiology Radiology

Radiomics Signatures of Cardiovascular Risk Factors in Cardiac MRI: Results From the UK Biobank.

In Frontiers in cardiovascular medicine

Cardiovascular magnetic resonance (CMR) radiomics is a novel technique for advanced cardiac image phenotyping by analyzing multiple quantifiers of shape and tissue texture. In this paper, we assess, in the largest sample published to date, the performance of CMR radiomics models for identifying changes in cardiac structure and tissue texture due to cardiovascular risk factors. We evaluated five risk factor groups from the first 5,065 UK Biobank participants: hypertension (n = 1,394), diabetes (n = 243), high cholesterol (n = 779), current smoker (n = 320), and previous smoker (n = 1,394). Each group was randomly matched with an equal number of healthy comparators (without known cardiovascular disease or risk factors). Radiomics analysis was applied to short axis images of the left and right ventricles at end-diastole and end-systole, yielding a total of 684 features per study. Sequential forward feature selection in combination with machine learning (ML) algorithms (support vector machine, random forest, and logistic regression) was used to build radiomics signatures for each specific risk group. We evaluated the degree of separation achieved by the identified radiomics signatures using the area under the receiver operating characteristic (ROC) curve (AUC) and statistical testing. Logistic regression with L1-regularization was the optimal ML model. Compared to conventional imaging indices, radiomics signatures improved the discrimination of risk factor vs. healthy subgroups as assessed by AUC [diabetes: 0.80 vs. 0.70, hypertension: 0.72 vs. 0.69, high cholesterol: 0.71 vs. 0.65, current smoker: 0.68 vs. 0.65, previous smoker: 0.63 vs. 0.60]. Furthermore, we considered the clinical interpretation of risk-specific radiomics signatures. For hypertensive individuals and previous smokers, the surface area to volume ratio was smaller in the risk factor vs. healthy subjects, perhaps reflecting a pattern of global concentric hypertrophy in these conditions.
In the diabetes subgroup, the most discriminatory radiomics feature was the median intensity of the myocardium at end-systole, which suggests a global alteration at the myocardial tissue level. This study confirms the feasibility and potential of CMR radiomics for deeper image phenotyping of cardiovascular health and disease. We demonstrate such analysis may have utility beyond conventional CMR metrics for improved detection and understanding of the early effects of cardiovascular risk factors on cardiac structure and tissue.
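The pipeline described, sequential forward feature selection wrapped around an L1-regularized logistic regression, can be sketched as follows. The 50 synthetic columns stand in for the 684 radiomics features; the number of features to select is an arbitrary choice for the sketch.

```python
# Illustrative sketch: sequential forward feature selection around an
# L1-regularized logistic regression, as in the radiomics study.
# Features here are synthetic, not actual CMR radiomics quantifiers.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=50, n_informative=6,
                           random_state=0)

# liblinear supports the L1 penalty used by the paper's optimal model.
base = LogisticRegression(penalty="l1", solver="liblinear")

# Greedily add one feature at a time, scoring each candidate set by
# 5-fold cross-validation.
sfs = SequentialFeatureSelector(base, n_features_to_select=6,
                                direction="forward", cv=5).fit(X, y)
selected = sfs.get_support(indices=True)
print(len(selected), "features selected:", selected)
```

Forward selection evaluates each candidate feature by refitting the model, so the cost grows with both the feature count and the CV fold count; on 684 radiomics features this is the dominant computational step.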

Cetin Irem, Raisi-Estabragh Zahra, Petersen Steffen E, Napel Sandy, Piechnik Stefan K, Neubauer Stefan, Gonzalez Ballester Miguel A, Camara Oscar, Lekadir Karim

2020

UK biobank, cardiovascular magnetic resonance, cardiovascular risk factors, machine learning, radiomics

Oncology Oncology

PTMsnp: A Web Server for the Identification of Driver Mutations That Affect Protein Post-translational Modification.

In Frontiers in cell and developmental biology

High-throughput sequencing technologies have identified millions of genetic mutations in multiple human diseases. However, the interpretation of the pathogenesis of these mutations and the discovery of driver genes that dominate disease progression is still a major challenge. Combining functional features such as protein post-translational modification (PTM) with genetic mutations is an effective way to predict such alterations. Here, we present PTMsnp, a web server that implements a Bayesian hierarchical model to identify driver genetic mutations targeting PTM sites. PTMsnp accepts genetic mutations in a standard variant call format or tabular format as input and outputs several interactive charts of PTM-related mutations that potentially affect PTMs. Additional functional annotations are performed to evaluate the impact of PTM-related mutations on protein structure and function, as well as to classify variants relevant to Mendelian disease. A total of 411,574 modification sites from 33 different types of PTMs and 1,776,848 somatic mutations from TCGA across 33 different cancer types are integrated into the web server, enabling identification of candidate cancer driver genes based on PTM. Applications of PTMsnp to the cancer cohorts and a GWAS dataset of type 2 diabetes identified a set of potential drivers together with several known disease-related genes, indicating its reliability in distinguishing disease-related mutations and providing potential molecular targets for new therapeutic strategies. PTMsnp is freely available at: http://ptmsnp.renlab.org.

Peng Di, Li Huiqin, Hu Bosu, Zhang Hongwan, Chen Li, Lin Shaofeng, Zuo Zhixiang, Xue Yu, Ren Jian, Xie Yubin

2020

Bayesian hierarchical model, disease, driver genes, genetic mutations, protein post-translational modification

General General

Testing the Generalizability of an Automated Method for Explaining Machine Learning Predictions on Asthma Patients' Asthma Hospital Visits to an Academic Healthcare System.

In IEEE access : practical innovations, open solutions

Asthma puts a tremendous overhead on healthcare. To enable effective preventive care to improve outcomes in managing asthma, we recently created two machine learning models, one using University of Washington Medicine data and the other using Intermountain Healthcare data, to predict asthma hospital visits in the next 12 months in asthma patients. As is common in machine learning, neither model supplies explanations for its predictions. To tackle this interpretability issue of black-box models, we developed an automated method to produce rule-style explanations for any machine learning model's predictions made on imbalanced tabular data and to recommend customized interventions without lowering the prediction accuracy. Our method exhibited good performance in explaining our Intermountain Healthcare model's predictions. Yet, it remains unknown how well our method generalizes to academic healthcare systems, whose patient composition differs from that of Intermountain Healthcare. This study evaluates our automated explaining method's generalizability to the academic healthcare system University of Washington Medicine on predicting asthma hospital visits. We did a secondary analysis on 82,888 University of Washington Medicine data instances of asthmatic adults between 2011 and 2018, using our method to explain our University of Washington Medicine model's predictions and to recommend customized interventions. Our results showed that for predicting asthma hospital visits, our automated explaining method had satisfactory generalizability to University of Washington Medicine. In particular, our method explained the predictions for 87.6% of the asthma patients whom our University of Washington Medicine model accurately predicted to experience asthma hospital visits in the next 12 months.

Tong Yao, Messinger Amanda I, Luo Gang

2020

Asthma, automated explanation, extreme gradient boosting, machine learning, patient care management, predictive model

Radiology Radiology

Is Artificial Intelligence the New Friend for Radiologists? A Review Article.

In Cureus

Artificial intelligence (AI) is a path-breaking advancement for many industries, including the health care sector. The rapid development of information technology and data processing has led to the family of tools now known as artificial intelligence. Radiology has long been a portal for technological advances in medicine, and AI will likely be no different: it can impact every step of a radiologist's workflow, including ordering and scheduling, protocoling and acquisition, image interpretation, reporting, communication, and billing. AI has eminent potential to augment efficiency and accuracy throughout radiology, but it also has inherent drawbacks and biases. We collected studies published in the past five years using PubMed as our database, choosing studies relevant to artificial intelligence in radiology. We focused mainly on the overview of AI in radiology, the components involved in its functioning, AI assistance in the radiologist's workflow, ethical aspects of AI, the challenges and biases AI faces, and some clinical applications of AI. Of the 33 studies, 15 articles discussed the overview and components of AI, five discussed AI's effect on the radiologist's workflow, five related to challenges and biases in AI, two discussed ethical aspects of AI, and six addressed practical implications of AI. We found that the application of AI could allow time-dependent tasks to be performed effortlessly, giving radiologists more time and opportunity to engage in patient care through increased consultation time and improvements in imaging and in extracting useful data from images. AI can only aid radiologists; it will not replace them.
Radiologists who use AI to their benefit, rather than avoiding it out of fear, may supersede those who do not. Substantial research should be done on the practical implications of AI algorithms for residents' curricula and on the benefits of AI in radiology.

Gampala Sravani, Vankeshwaram Varun, Gadula Satya Siva P

2020-Oct-24

artificial intelligence in radiology, deep learning, machine learning

General General

Stem cell imaging through convolutional neural networks: current issues and future directions in artificial intelligence technology.

In PeerJ

Stem cells are primitive and precursor cells with the potential to reproduce into diverse mature and functional cell types in the body throughout the developmental stages of life. Their remarkable potential has led to numerous medical discoveries and breakthroughs in science. As a result, stem cell-based therapy has emerged as a new subspecialty in medicine. One promising stem cell being investigated is the induced pluripotent stem cell (iPSC), which is obtained by genetically reprogramming mature cells to convert them into embryonic-like stem cells. These iPSCs are used to study the onset of disease, drug development, and medical therapies. However, functional studies on iPSCs involve the analysis of iPSC-derived colonies through manual identification, which is time-consuming, error-prone, and training-dependent. Thus, an automated instrument for the analysis of iPSC colonies is needed. Recently, artificial intelligence (AI) has emerged as a novel technology to tackle this challenge. In particular, deep learning, a subfield of AI, offers an automated platform for analyzing iPSC colonies and other colony-forming stem cells. Deep learning rectifies data features using a convolutional neural network (CNN), a type of multi-layered neural network that can play an innovative role in image recognition. CNNs are able to distinguish cells with high accuracy based on morphologic and textural changes. Therefore, CNNs have the potential to create a future field of deep learning tasks aimed at solving various challenges in stem cell studies. This review discusses the progress and future of CNNs in stem cell imaging for therapy and research.

Ramakrishna Ramanaesh Rao, Abd Hamid Zariyantey, Wan Zaki Wan Mimi Diyana, Huddin Aqilah Baseri, Mathialagan Ramya

2020

Artificial intelligence, Biomedical imaging, Convolutional neural network, Deep learning, Hematopoietic stem cell, Induced pluripotent stem cell, Machine learning, Medical analysis, Morphology and pattern recognition, Stem cell

General General

An in vitro ovarian explant culture system to examine sex change in a hermaphroditic fish.

In PeerJ

Many teleost fishes undergo natural sex change, and elucidating the physiological and molecular controls of this process offers unique opportunities not only to develop methods of controlling sex in aquaculture settings, but to better understand vertebrate sexual development more broadly. Induction of sex change in some sequentially hermaphroditic or gonochoristic fish can be achieved in vivo through social manipulation, inhibition of aromatase activity, or steroid treatment. However, the induction of sex change in vitro has been largely unexplored. In this study, we established an in vitro culture system for ovarian explants in serum-free medium for a model sequential hermaphrodite, the New Zealand spotty wrasse (Notolabrus celidotus). This culture technique enabled evaluating the effect of various treatments with 17β-estradiol (E2), 11-ketotestosterone (11KT) or cortisol (CORT) on spotty wrasse ovarian architecture for 21 days. A quantitative approach to measuring the degree of ovarian atresia within histological images was also developed, using pixel-based machine learning software. Ovarian atresia likely due to culture was observed across all treatments including no-hormone controls, but was minimised with treatment of at least 10 ng/mL E2. Neither 11KT nor CORT administration induced proliferation of spermatogonia (i.e., sex change) in the cultured ovaries indicating culture beyond 21 days may be needed to induce sex change in vitro. The in vitro gonadal culture and analysis systems established here enable future studies investigating the paracrine role of sex steroids, glucocorticoids and a variety of other factors during gonadal sex change in fish.

Goikoetxea Alexander, Damsteegt Erin L, Todd Erica V, McNaughton Andrew, Gemmell Neil J, Lokman P Mark

2020

Cortisol, Organ culture, Previtellogenic oocyte, Sex change, Spotty wrasse

Surgery Surgery

Machine learning prediction of motor response after deep brain stimulation in Parkinson's disease-proof of principle in a retrospective cohort.

In PeerJ

Introduction : Despite careful patient selection for subthalamic nucleus deep brain stimulation (STN DBS), some Parkinson's disease patients show limited improvement of motor disability. Innovative predictive analysing methods hold potential to develop a tool for clinicians that reliably predicts individual postoperative motor response, by only regarding clinical preoperative variables. The main aim of preoperative prediction would be to improve preoperative patient counselling, expectation management, and postoperative patient satisfaction.

Methods : We developed a machine learning logistic regression prediction model which generates probabilities for experiencing weak motor response one year after surgery. The model analyses preoperative variables and is trained on 89 patients using a five-fold cross-validation. Imaging and neurophysiology data are left out intentionally to ensure usability in the preoperative clinical practice. Weak responders (n = 30) were defined as patients who fail to show clinically relevant improvement on Unified Parkinson Disease Rating Scale II, III or IV.

Results : The model predicts weak responders with an average area under the receiver operating characteristic curve of 0.79 (standard deviation: 0.08), a true positive rate of 0.80, a false positive rate of 0.24, and a diagnostic accuracy of 78%. The reported influences of individual preoperative variables are useful for the clinical interpretation of the model, but cannot be interpreted in isolation from the other variables in the model.

Conclusion : The model's diagnostic accuracy confirms the utility of machine learning based motor response prediction based on clinical preoperative variables. After reproduction and validation in a larger and prospective cohort, this prediction model holds potential to support clinicians during preoperative patient counseling.
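A minimal sketch of the kind of model described, a logistic regression producing per-patient probabilities evaluated with five-fold cross-validation, is shown below on synthetic data. The clinical variables are invented placeholders; only the cohort size and class proportions loosely mirror the paper.

```python
# Illustrative sketch: cross-validated per-patient probabilities of weak
# motor response from a logistic regression, as in the DBS study.
# ~89 patients, ~1/3 weak responders; all features are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=89, n_features=10, n_informative=4,
                           weights=[0.66, 0.34], random_state=0)

# cross_val_predict returns out-of-fold probabilities, so each patient's
# probability comes from a model that never saw that patient in training.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, proba), 2))
```

Out-of-fold probabilities are what make the reported AUC an honest estimate of performance on unseen patients, rather than a refit on the training data.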

Habets Jeroen G V, Janssen Marcus L F, Duits Annelien A, Sijben Laura C J, Mulders Anne E P, De Greef Bianca, Temel Yasin, Kuijf Mark L, Kubben Pieter L, Herff Christian

2020

Deep brain stimulation, Outcome, Parkinson’s disease, Prediction, Subthalamic nucleus

Surgery Surgery

[Establishment and test results of an artificial intelligence burn depth recognition model based on convolutional neural network].

In Zhonghua shao shang za zhi = Zhonghua shaoshang zazhi = Chinese journal of burns

Objective: To establish an artificial intelligence burn depth recognition model based on a convolutional neural network, and to test its effectiveness. Methods: In this evaluation study on a diagnostic test, 484 wound photos of 221 burn patients in Xiangya Hospital of Central South University (hereinafter referred to as the author's unit) from January 2010 to December 2019, taken within 48 hours after injury and meeting the inclusion criteria, were collected and numbered randomly. The target wounds were delineated with image viewing software, and the burn depth was judged by 3 attending doctors with more than 5 years of professional experience in the Department of Burns and Plastic Surgery of the author's unit. After marking superficial partial-thickness burn, deep partial-thickness burn, or full-thickness burn in different colors, the burn wounds were cropped into 224×224-pixel patches to obtain 5,637 complete wound images. An image data generator was used to expand the images of each burn depth to 10,000 images, after which the images of each burn depth were divided into training, verification, and test sets according to the ratio 7.0:1.5:1.5. Under Keras 2.2.4 and Python 2.8.0, the residual network ResNet-50 was used to establish the artificial intelligence burn depth recognition model. The training set was input for training, and the verification set was used to adjust and optimize the model. The accuracy of the established model in judging the various burn depths was tested on the test set, and precision, recall, and F1_score were calculated. The test results were visualized as a two-dimensional tSNE cloud chart through the dimensionality reduction tool tSNE, and the distribution of the various burn depths was observed.
According to the sensitivity and specificity of the model for the recognition of the 3 burn depths, the corresponding receiver operating characteristic (ROC) curves were drawn, and the areas under the ROC curves were calculated. Results: (1) On the test set, the precisions of the artificial intelligence burn depth recognition model for superficial partial-thickness burn, deep partial-thickness burn, and full-thickness burn were 84% (1,095/1,301), 81% (1,215/1,499), and 82% (1,395/1,700) respectively, the recalls were 73% (1,095/1,500), 81% (1,215/1,500), and 93% (1,395/1,500) respectively, and the F1_scores were 0.78, 0.81, and 0.87 respectively. (2) The tSNE cloud chart showed small overlaps among the different burn depths in the model's test-set results; the overlaps between superficial and deep partial-thickness burns and between deep partial-thickness and full-thickness burns were relatively larger, while the overlap between superficial partial-thickness and full-thickness burns was relatively smaller. (3) The area under the ROC curve for each of the 3 burn depths recognized by the model was ≥0.94. Conclusions: The artificial intelligence burn depth recognition model established with the ResNet-50 network can identify burn depth in early wound photos of burn patients rather accurately, especially superficial partial-thickness and full-thickness burns. It is expected to be used clinically to assist the diagnosis of burn depth and improve diagnostic accuracy.
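The reported F1_scores follow arithmetically from the stated precision and recall counts (F1 = 2PR/(P + R)); the short check below recomputes them from the numbers given in the abstract.

```python
# Recompute the burn-depth F1 scores from the reported counts:
# (true positives, predicted positives, actual positives) per class.
counts = {
    "superficial partial-thickness": (1095, 1301, 1500),
    "deep partial-thickness":        (1215, 1499, 1500),
    "full-thickness":                (1395, 1700, 1500),
}
for depth, (tp, predicted, actual) in counts.items():
    precision = tp / predicted
    recall = tp / actual
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{depth}: precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

Rounded to two decimals, these reproduce the paper's 0.78, 0.81, and 0.87.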

He Z Y, Wang Y, Zhang P H, Zuo K, Liang P F, Zeng J Z, Zhou S T, Guo L, Huang M T, Cui X

2020-Nov-20

Artificial intelligence, Burn depth recognition, Burns, Convolutional neural networks, Early diagnosis, Residual network

Public Health Public Health

Whether the weather will help us weather the COVID-19 pandemic: Using machine learning to measure twitter users' perceptions.

In International journal of medical informatics ; h5-index 49.0

OBJECTIVE : The potential ability for weather to affect SARS-CoV-2 transmission has been an area of controversial discussion during the COVID-19 pandemic. Individuals' perceptions of the impact of weather can inform their adherence to public health guidelines; however, there is no measure of their perceptions. We quantified Twitter users' perceptions of the effect of weather and analyzed how they evolved with respect to real-world events and time.

MATERIALS AND METHODS : We collected 166,005 English tweets posted between January 23 and June 22, 2020 and employed machine learning/natural language processing techniques to filter for relevant tweets, classify them by the type of effect they claimed, and identify topics of discussion.

RESULTS : We identified 28,555 relevant tweets and estimate that 40.4% indicate uncertainty about weather's impact, 33.5% indicate no effect, and 26.1% indicate some effect. We tracked changes in these proportions over time. Topic modeling revealed major latent areas of discussion.

DISCUSSION : There is no consensus among the public for weather's potential impact. Earlier months were characterized by tweets that were uncertain of weather's effect or claimed no effect; later, the portion of tweets claiming some effect of weather increased. Tweets claiming no effect of weather comprised the largest class by June. Major topics of discussion included comparisons to influenza's seasonality, President Trump's comments on weather's effect, and social distancing.

CONCLUSION : We exhibit a research approach that is effective in measuring population perceptions and identifying misconceptions, which can inform public health communications.
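A minimal sketch of topic modeling on short texts, the kind of technique used above to surface latent discussion areas. The abstract does not name its topic model, so latent Dirichlet allocation (LDA) is an assumption here, and the tweets below are invented examples.

```python
# Illustrative sketch: LDA topic modeling on a handful of invented tweets
# about weather and COVID-19, standing in for the study's corpus.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "warm weather will slow the virus like the flu season",
    "flu is seasonal so covid might fade in the summer heat",
    "no evidence that heat or humidity stops transmission",
    "stay home and keep social distancing whatever the weather",
    "social distancing matters more than temperature",
    "humidity and temperature have no proven effect on covid",
]

# Bag-of-words counts feed LDA, which learns per-document topic mixtures.
X = CountVectorizer(stop_words="english").fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).shape)  # one topic-proportion row per tweet
```

Each row of `lda.transform(X)` is a probability distribution over topics, which is what lets a study track how discussion themes shift over time.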

Gupta Marichi, Bansal Aditya, Jain Bhav, Rochelle Jillian, Oak Atharv, Jalali Mohammad S

2020-Nov-10

Individuals’ perceptions, Machine learning, Opinion mining, SARS-CoV-2 transmission, Topic modeling

Cardiology Cardiology

Big Data and Artificial Intelligence: Opportunities and Threats in Electrophysiology.

In Arrhythmia & electrophysiology review

The combination of big data and artificial intelligence (AI) is having an increasing impact on the field of electrophysiology. Algorithms are being created to improve the automated diagnosis of clinical ECGs or ambulatory rhythm devices. Furthermore, the use of AI during invasive electrophysiological studies, and the combination of several diagnostic modalities into AI algorithms to aid diagnostics, are being investigated. However, the clinical performance and applicability of the created algorithms remain unknown. In this narrative review, opportunities and threats of AI in the field of electrophysiology are described, focusing mainly on ECGs. Current opportunities are discussed with their potential clinical benefits as well as the challenges. Challenges in data acquisition, model performance, (external) validity, clinical implementation, and algorithm interpretation, as well as the ethical aspects of AI research, are discussed. This article aims to guide clinicians in the evaluation of new AI applications for electrophysiology before their clinical implementation.

van de Leur Rutger R, Boonstra Machteld J, Bagheri Ayoub, Roudijk Rob W, Sammani Arjan, Taha Karim, Doevendans Pieter Afm, van der Harst Pim, van Dam Peter M, Hassink Rutger J, van Es René, Asselbergs Folkert W

2020-Nov

Artificial intelligence, ECG, big data, cardiology, deep learning, electrophysiology, neural networks

General General

The era of big data: Genome-scale modelling meets machine learning.

In Computational and structural biotechnology journal

With omics data being generated at an unprecedented rate, genome-scale modelling has become pivotal in its organisation and analysis. However, machine learning methods have been gaining ground in cases where knowledge is insufficient to represent the mechanisms underlying such data or as a means for data curation prior to attempting mechanistic modelling. We discuss the latest advances in genome-scale modelling and the development of optimisation algorithms for network and error reduction, intracellular constraining and applications to strain design. We further review applications of supervised and unsupervised machine learning methods to omics datasets from microbial and mammalian cell systems and present efforts to harness the potential of both modelling approaches through hybrid modelling.

Antonakoudis Athanasios, Barbosa Rodrigo, Kotidis Pavlos, Kontoravdi Cleo

2020

Cell metabolism, Chinese hamster ovary cells, Flux balance analysis, Hybrid modelling, Principal component analysis, Recombinant protein production, Strain optimisation

General General

Current Trends in Experimental and Computational Approaches to Combat Antimicrobial Resistance.

In Frontiers in genetics ; h5-index 62.0

A multitude of factors, such as drug misuse, lack of strong regulatory measures, improper sewage disposal, and low-quality medicines and medications, have been attributed to the emergence of drug-resistant microbes. Emergence and outbreaks of multidrug resistance to last-line antibiotics have become quite common. This is further fueled by the slow rate of drug development and the lack of effective resistome surveillance systems. In this review, we provide insights into the recent advances made in computational approaches for the surveillance of antibiotic resistomes, as well as experimental formulation of combinatorial drugs. We explore the multiple roles of antibiotics in nature and the current status of combinatorial and adjuvant-based antibiotic treatments with nanoparticles, phytochemicals, and other non-antibiotics based on synergetic effects. Furthermore, advancements in machine learning algorithms could also be applied to combat the spread of antibiotic resistance. Development of resistance to new antibiotics is quite rapid. Hence, we review the recent literature on discoveries of novel antibiotic resistance genes through shotgun and expression-based metagenomics. To decelerate the spread of antibiotic resistance genes, surveillance of the resistome is of utmost importance. Therefore, we discuss integrative applications of whole-genome sequencing and metagenomics together with machine learning models as a means for state-of-the-art surveillance of the antibiotic resistome. We further explore the interactions between antibiotics and microbiomes, and the negative effects that can follow drug administration.

Imchen Madangchanok, Moopantakath Jamseel, Kumavath Ranjith, Barh Debmalya, Tiwari Sandeep, Ghosh Preetam, Azevedo Vasco

2020

antibiotic resistance, metagenomics, multidrug resistance, nanoparticles, next generation sequencing, whole genome sequence

General General

Investigation on Data Fusion of Multisource Spectral Data for Rice Leaf Diseases Identification Using Machine Learning Methods.

In Frontiers in plant science

Rice diseases are major threats to rice yield and quality. Rapid and accurate detection of rice diseases is of great importance for precise disease prevention and treatment. Various spectroscopic techniques have been used to detect plant diseases. To rapidly and accurately detect three different rice diseases [leaf blight (Xanthomonas oryzae pv. oryzae), rice blast (Pyricularia oryzae), and rice sheath blight (Rhizoctonia solani)], three spectroscopic techniques were applied: visible/near-infrared hyperspectral imaging (HSI), mid-infrared spectroscopy (MIR), and laser-induced breakdown spectroscopy (LIBS). Three different levels of data fusion (raw data fusion, feature fusion, and decision fusion) combining the three types of spectral features were adopted to categorize the diseases of rice. Principal component analysis (PCA) and autoencoders (AE) were used to extract features. Identification models based on each technique and on the different fusion levels were built using support vector machine (SVM), logistic regression (LR), and convolutional neural network (CNN) models. Models based on HSI performed better than those based on MIR and LIBS, with accuracy over 93% for the test set based on PCA features of HSI spectra. The performance of rice disease identification varied with the level of fusion. The results showed that feature fusion and decision fusion could enhance identification performance. Overall, the results illustrated that the three techniques can be used to identify rice diseases, and data fusion strategies have great potential for rice disease detection.
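Feature-level fusion, one of the three fusion levels described, can be sketched as follows: PCA features from each modality are concatenated before a single classifier. The two random matrices stand in for HSI and MIR spectra, and for brevity PCA is fit on the full set here rather than within each cross-validation fold, as a rigorous evaluation would require.

```python
# Illustrative sketch of feature-level fusion: PCA features from two
# simulated "sensor" matrices are concatenated and fed to one SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 3, size=n)                      # three disease classes
hsi = rng.normal(size=(n, 100)) + y[:, None]        # synthetic HSI spectra
mir = rng.normal(size=(n, 80)) + 0.5 * y[:, None]   # synthetic MIR spectra

# Feature fusion: reduce each modality separately, then concatenate.
fused = np.hstack([PCA(n_components=10).fit_transform(hsi),
                   PCA(n_components=10).fit_transform(mir)])

acc = cross_val_score(SVC(), fused, y, cv=5).mean()
print(round(acc, 2))
```

Decision fusion would instead train one classifier per modality and combine their predictions (e.g., by voting), while raw data fusion concatenates the spectra before any feature extraction.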

Feng Lei, Wu Baohua, Zhu Susu, Wang Junmin, Su Zhenzhu, Liu Fei, He Yong, Zhang Chu

2020

data fusion, hyperspectral imaging, laser-induced breakdown spectroscopy, mid-infrared spectroscopy, rice disease
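
The feature-level fusion strategy described above can be sketched as follows: PCA features are extracted per modality (synthetic stand-ins for HSI, MIR, and LIBS here), concatenated, and classified with an SVM. All data, dimensionalities, and effect sizes below are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of feature-level fusion: PCA per spectral modality,
# concatenation, then SVM classification. Everything here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)          # three hypothetical disease classes
# Each modality: class-dependent mean + noise, different dimensionalities
modalities = {"HSI": 200, "MIR": 120, "LIBS": 80}   # assumed channel counts
blocks = []
for dim in modalities.values():
    means = rng.normal(size=(3, dim))
    X = means[y] + 0.5 * rng.normal(size=(300, dim))
    blocks.append(PCA(n_components=10, random_state=0).fit_transform(X))

X_fused = np.hstack(blocks)               # feature-level fusion: 3 x 10 PCs
Xtr, Xte, ytr, yte = train_test_split(X_fused, y, test_size=0.3, random_state=0)
acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
```

Raw-data fusion would instead concatenate the spectra before feature extraction, and decision fusion would combine per-modality classifier outputs.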

General General

Wheat Kernel Variety Identification Based on a Large Near-Infrared Spectral Dataset and a Novel Deep Learning-Based Feature Selection Method.

In Frontiers in plant science

Near-infrared (NIR) hyperspectroscopy is an emerging nondestructive sensing technology for the inspection of crop seeds. A large spectral dataset of more than 140,000 wheat kernels from 30 varieties was prepared for classification. Feature selection is a critical step in large spectral data analysis. A novel convolutional neural network-based feature selector (CNN-FS) was proposed to screen out target-related spectral channels. A convolutional neural network with attention (CNN-ATT) framework was designed for one-dimensional data classification. Popular machine learning models, including support vector machine (SVM) and partial least squares discriminant analysis, were used as benchmark classifiers. Features selected by conventional feature selection algorithms were considered for comparison. Results showed that the designed CNN-ATT outperformed the compared classifiers. The proposed CNN-FS found a subset of features that represented the raw dataset better than conventional selectors did. The CNN-ATT achieved an accuracy of 93.01% using the full spectra and kept high precision (90.20%) when trained on the 60-channel features obtained via the CNN-FS method. The proposed methods have great potential for analysis tasks on other large spectral datasets, and the proposed feature selection structure can be extended to design other new model-based selectors. The combination of NIR hyperspectroscopic technology and the proposed models has great potential for automatic nondestructive classification of single wheat kernels.

Zhou Lei, Zhang Chu, Taha Mohamed Farag, Wei Xinhua, He Yong, Qiu Zhengjun, Liu Yufei

2020

NIR hyperspectroscopy, attention mechanism, convolutional neural network, feature selection, wheat kernel classification
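
The channel-selection idea can be illustrated without the paper's CNN-FS: below, a conventional univariate selector (ANOVA F-test, one of the baseline "conventional feature selection algorithms" such work compares against) picks 60 of 200 synthetic spectral channels, and an SVM is scored on full versus selected spectra. Data, channel counts, and signal structure are invented for illustration.

```python
# Sketch only: a conventional selector stands in for CNN-FS.
# Synthetic 1-D "spectra" with a small subset of informative channels.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_channels, k = 400, 200, 60
y = rng.integers(0, 4, size=n_samples)            # four hypothetical varieties
X = rng.normal(size=(n_samples, n_channels))
informative = rng.choice(n_channels, size=20, replace=False)
X[:, informative] += y[:, None] * 0.8             # class signal on a channel subset

selector = SelectKBest(f_classif, k=k).fit(X, y)
X_sel = selector.transform(X)                     # 60 retained channels
acc_full = cross_val_score(SVC(), X, y, cv=5).mean()
acc_sel = cross_val_score(SVC(), X_sel, y, cv=5).mean()
```

When the informative channels are captured, accuracy on the reduced representation stays close to the full-spectrum accuracy, mirroring the 93.01% vs. 90.20% trade-off reported above.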

General General

Circulating Neutrophil Extracellular Traps Signature for Identifying Organ Involvement and Response to Glucocorticoid in Adult-Onset Still's Disease: A Machine Learning Study.

In Frontiers in immunology ; h5-index 100.0

Adult-onset Still's disease (AOSD) is an autoinflammatory disease with multisystem involvement. Early identification of patients with severe complications and of those refractory to glucocorticoid is crucial to improve therapeutic strategy in AOSD. Exaggerated neutrophil activation and enhanced formation of neutrophil extracellular traps (NETs) in patients with AOSD were found to be closely associated with etiopathogenesis. In this study, we aimed to investigate, to our knowledge for the first time, the clinical value of circulating NETs analyzed by machine learning to distinguish AOSD patients with organ involvement and those refractory to glucocorticoid. Plasma samples were used to measure cell-free DNA, NE-DNA, MPO-DNA, and citH3-DNA complexes from training and validation sets. The training set included 40 AOSD patients and 24 healthy controls (HCs), and the validation set included 26 AOSD patients and 16 HCs. Support vector machines (SVM) were used for modeling and validation of the circulating NETs signature for the diagnosis of AOSD and for identifying patients refractory to low-dose glucocorticoid treatment. The training set was used to build a model, and the validation set was used to test the predictive capacity of the model. All four circulating NET complexes showed similar trends across individuals and could distinguish patients with AOSD from HCs by SVM (AUC value: 0.88). Circulating NETs in plasma were closely correlated with systemic score, laboratory tests, and cytokines. Moreover, circulating NETs had the potential to distinguish patients with liver and cardiopulmonary system involvement. Furthermore, the AUC value of combined NETs for identifying patients refractory to low-dose glucocorticoid was 0.917. In conclusion, the circulating NETs signature provides added clinical value in monitoring AOSD patients. It may provide evidence to predict which patients are likely to be refractory to low-dose glucocorticoid and help formulate efficient therapeutic strategies.

Jia Jinchao, Wang Mengyan, Ma Yuning, Teng Jialin, Shi Hui, Liu Honglei, Sun Yue, Su Yutong, Meng Jianfen, Chi Huihui, Chen Xia, Cheng Xiaobing, Ye Junna, Liu Tingting, Wang Zhihong, Wan Liyan, Zhou Zhuochao, Wang Fan, Yang Chengde, Hu Qiongyi

2020

adult-onset Still’s disease, circulating neutrophil extracellular traps, machine learning, organ involvement, response to glucocorticoid
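
A minimal sketch of the modeling setup above: an SVM trained on four marker values (stand-ins for cell-free DNA, NE-DNA, MPO-DNA, and citH3-DNA) separating patients from healthy controls, scored by ROC AUC on a held-out validation set. Cohort sizes come from the abstract; marker values and effect sizes are simulated assumptions.

```python
# Hedged sketch: SVM on four simulated circulating-NET markers,
# evaluated by ROC AUC on a separate validation set.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_train = (40, 24)   # AOSD patients, healthy controls (training set)
n_valid = (26, 16)   # validation set sizes from the abstract

def simulate(n_pat, n_hc):
    # Patients: assumed +1 SD elevation across the four markers
    X = np.vstack([rng.normal(1.0, 1.0, size=(n_pat, 4)),
                   rng.normal(0.0, 1.0, size=(n_hc, 4))])
    y = np.r_[np.ones(n_pat), np.zeros(n_hc)]
    return X, y

Xtr, ytr = simulate(*n_train)
Xva, yva = simulate(*n_valid)
clf = SVC(probability=True, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yva, clf.predict_proba(Xva)[:, 1])
```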

General General

Soil Bacterial and Fungal Richness Forecast Patterns of Early Pine Litter Decomposition.

In Frontiers in microbiology

Discovering widespread microbial processes that drive unexpected variation in carbon cycling may improve modeling and management of soil carbon (Prescott, 2010; Wieder et al., 2015a, 2018). A first step is to identify community features linked to carbon cycle variation. We addressed this challenge using an epidemiological approach with 206 soil communities decomposing Ponderosa pine litter in 618 microcosms. Carbon flow from litter decomposition was measured over a 6-week incubation. Cumulative CO2 from microbial respiration varied two-fold among microcosms and dissolved organic carbon (DOC) from litter decomposition varied five-fold, demonstrating large functional variation despite constant environmental conditions where strong selection is expected. To investigate microbial features driving DOC concentration, two microbial community cohorts were delineated as "high" and "low" DOC. For each cohort, communities from the original soils and from the final microcosm communities after the 6-week incubation with litter were taxonomically profiled. A logistic model including total biomass, fungal richness, and bacterial richness measured in the original soils or in the final microcosm communities predicted the DOC cohort with 72 (P < 0.05) and 80 (P < 0.001) percent accuracy, respectively. The strongest predictors of the DOC cohort were biomass and either fungal richness (in the original soils) or bacterial richness (in the final microcosm communities). Successful forecasting of functional patterns after lengthy community succession in a new environment reveals strong historical contingencies. Forecasting future community function is a key advance beyond correlation of functional variance with end-state community features. 
The importance of taxon richness-the same feature linked to carbon fate in gut microbiome studies-underscores the need for increased understanding of biotic mechanisms that can shape richness in microbial communities independent of physicochemical conditions.

Albright Michaeline B N, Johansen Renee, Thompson Jaron, Lopez Deanna, Gallegos-Graves La V, Kroeger Marie E, Runde Andreas, Mueller Rebecca C, Washburne Alex, Munsky Brian, Yoshida Thomas, Dunbar John

2020

community features, litter, machine learning, microbiome, modeling, pine, prediction, soil carbon cycling
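
The logistic model described above (total biomass plus fungal and bacterial richness predicting a "high" vs. "low" DOC cohort) can be sketched as follows. Data are synthetic, and the direction and size of each effect are assumptions made only to demonstrate the model form.

```python
# Hedged sketch of the logistic DOC-cohort model with three predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 206                                   # number of soil communities in the study
biomass = rng.normal(size=n)
fungal_rich = rng.normal(size=n)
bacterial_rich = rng.normal(size=n)
# Assumed generative rule: biomass and richness jointly shift DOC odds
logit = 1.2 * biomass + 1.0 * fungal_rich + 0.8 * bacterial_rich
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([biomass, fungal_rich, bacterial_rich])
model = LogisticRegression().fit(X, y)
cv_acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
```

Cross-validated accuracy plays the role of the 72-80% cohort-prediction accuracy reported in the abstract.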

General General

The Spontaneous Activity Pattern of the Middle Occipital Gyrus Predicts the Clinical Efficacy of Acupuncture Treatment for Migraine Without Aura.

In Frontiers in neurology

The purpose of the present study was to explore whether, and to what extent, neuroimaging markers could predict the relief of symptoms of patients with migraine without aura (MWoA) following a 4-week acupuncture treatment period. In study 1, multivariate pattern analysis was applied to perform a classification analysis between 40 patients with MWoA and 40 healthy subjects (HS) based on the z-transformed amplitude of low-frequency fluctuation (zALFF) maps. In study 2, the meaningful classifying features were selected as predicting features, and support vector regression models were constructed to predict the clinical efficacy of acupuncture in reducing the frequency of migraine attacks and headache intensity in 40 patients with MWoA. In study 3, a region-of-interest-based comparison between the pre- and post-treatment zALFF maps was conducted in 33 patients with MWoA to assess the changes in predicting features after the acupuncture intervention. The zALFF values of the foci in the bilateral middle occipital gyrus, right fusiform gyrus, left insula, and left superior cerebellum could discriminate patients with MWoA from HS with higher than 70% accuracy. The zALFF values of the clusters in the right and left middle occipital gyrus could effectively predict the relief of headache intensity (R2 = 0.38 ± 0.059, mean squared error = 2.626 ± 0.325) and frequency of migraine attacks (R2 = 0.284 ± 0.072, mean squared error = 20.535 ± 2.701) after the 4-week acupuncture treatment period. Moreover, the zALFF values of these two clusters were both significantly reduced after treatment. The present study demonstrated the feasibility and validity of applying machine learning technologies and individual cerebral spontaneous activity patterns to predict acupuncture treatment outcomes in patients with MWoA. The data provide a quantitative benchmark for selecting acupuncture for MWoA.

Yin Tao, Sun Guojuan, Tian Zilei, Liu Mailan, Gao Yujie, Dong Mingkai, Wu Feng, Li Zhengjie, Liang Fanrong, Zeng Fang, Lan Lei

2020

acupuncture, amplitude of low-frequency fluctuation, efficacy prediction, machine learning, migraine without aura, multivariate pattern analysis
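
The prediction step (study 2) amounts to support vector regression from baseline zALFF values of two clusters to symptom relief, evaluated by R² and mean squared error. The sketch below simulates that setup; the linear link from baseline activity to relief, and its coefficients, are assumptions.

```python
# Hedged sketch: SVR mapping two simulated zALFF cluster values to relief,
# with cross-validated predictions scored by R^2 and MSE.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(4)
n = 40                                    # patients in the prediction analysis
zalff = rng.normal(size=(n, 2))           # two middle-occipital-gyrus clusters
# Assumed linear link from baseline activity to symptom relief, plus noise
relief = 0.9 * zalff[:, 0] + 0.6 * zalff[:, 1] + 0.4 * rng.normal(size=n)

pred = cross_val_predict(SVR(kernel="linear"), zalff, relief, cv=5)
r2 = r2_score(relief, pred)
mse = mean_squared_error(relief, pred)
```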

General General

Surface Electromyography: What Limits Its Use in Exercise and Sport Physiology?

In Frontiers in neurology

The aim of the present paper is to examine to what extent surface electromyography (sEMG) is adopted on a regular basis by professionals in the field of exercise and, more generally, of human movement. For this purpose, a brief history of the recent developments in modern sEMG techniques will be assessed and evaluated for potential use in exercise physiology and clinical biomechanics. The idea is to understand the limitations that impede the translation of sEMG to applied fields such as exercise physiology. A cost/benefit evaluation will be drawn in order to understand possible causes that prevent sEMG from being routinely adopted. Among the possible causative factors, educational, economic and technical issues will be considered, and possible corrective interventions will be proposed. We will also give an overview of the parameters that can be extracted from the decomposition of high-density sEMG (HDsEMG) signals and how these can be used by professionals for assessing the health and disease of the neuromuscular system. We discuss how the decomposition of surface EMG signals might be adopted as a new non-invasive tool for assessing the status of the neuromuscular system. Recent evidence shows that it is possible to monitor changes in neuromuscular function after training in longitudinally tracked populations of motoneurons, predict the maximal rate of force development of an individual via motoneuron interfacing, and identify possible causal relations between aging and the decrease in motor performance. These technologies will guide our understanding of motor control and provide a new window for the investigation of the underlying physiological processes determining force control, which is essential for the sport and exercise physiologist. We will also illustrate the challenges related to the extraction of neuromuscular parameters from global EMG analysis (i.e., root-mean-square and other global EMG metrics) and when decomposition is needed. We posit that the main limitation in the application of sEMG techniques to the applied field is associated with problems in education and teaching, and that most of the novel technologies are not open source.

Felici Francesco, Del Vecchio Alessandro

2020

EMG, HDsEMG, biomechanics, exercise physiology, motor unit, sport

General General

Machine Learning Analysis of the Cerebrovascular Thrombi Proteome in Human Ischemic Stroke: An Exploratory Study.

In Frontiers in neurology

Objective: Mechanical retrieval of thrombotic material from acute ischemic stroke patients provides a unique entry point for translational research investigations. Here, we resolved the proteomes of cardioembolic and atherothrombotic cerebrovascular human thrombi and applied an artificial intelligence routine to examine protein signatures between the two selected groups. Methods: We specifically used n = 32 cardioembolic and n = 28 atherothrombotic diagnosed thrombi from patients suffering from acute stroke and treated by mechanical thrombectomy. Thrombus proteins were successfully separated by gel electrophoresis. For each thrombus, peptide samples were analyzed by nano-flow liquid chromatography coupled to tandem mass spectrometry (nano-LC-MS/MS) to obtain specific proteomes. Relative protein quantification was performed using a label-free quantification (LFQ) algorithm, and all datasets were analyzed using a support-vector-machine (SVM) learning method. Data are available via ProteomeXchange with identifier PXD020398. Clinical data were also analyzed using SVM, alone or in combination with the proteomes. Results: A total of 2,455 proteins were identified by nano-LC-MS/MS in the samples analyzed, with 438 proteins consistently detected in all samples. SVM analysis of LFQ proteomic data delivered combinations of three proteins achieving a maximum of 88.3% correct classification of the cardioembolic and atherothrombotic samples in our cohort. Coagulation factor XIII appeared in all of the SVM protein trios, associating with cardioembolic thrombi. A combined SVM analysis of the LFQ proteome and clinical data did not deliver a better discriminatory score than the proteome alone. Conclusion: Our results advance the portrayal of the human cerebrovascular thrombi proteome. The exploratory SVM analysis outlined sets of proteins for a proof-of-principle characterization of our cohort's cardioembolic and atherothrombotic samples. The integrated analysis proposed herein could be further developed and retested on a larger patient population to better understand stroke origin and the associated cerebrovascular pathophysiology.

Dargazanli Cyril, Zub Emma, Deverdun Jeremy, Decourcelle Mathilde, de Bock Frédéric, Labreuche Julien, Lefèvre Pierre-Henri, Gascou Grégory, Derraz Imad, Riquelme Bareiro Carlos, Cagnazzo Federico, Bonafé Alain, Marin Philippe, Costalat Vincent, Marchi Nicola

2020

cerebrovascular, mechanical thrombectomy, neuroradiology, proteome, stroke, support vector machine learning, thrombus
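
The trio search implied above can be sketched as an exhaustive evaluation of three-protein combinations, keeping the best cross-validated SVM accuracy. A small synthetic panel stands in for the 438 consistently detected proteins; marking protein 0 as informative (in the spirit of the reported coagulation factor XIII association) is an assumption of the simulation.

```python
# Hedged sketch: exhaustive SVM evaluation of 3-protein combinations.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_cardio, n_athero, n_prot = 32, 28, 8     # cohort sizes from the abstract
y = np.r_[np.ones(n_cardio), np.zeros(n_athero)]
X = rng.normal(size=(n_cardio + n_athero, n_prot))
X[y == 1, 0] += 1.5                        # assumed: protein 0 higher in cardioembolic

best_acc, best_trio = 0.0, None
for trio in combinations(range(n_prot), 3):
    acc = cross_val_score(SVC(), X[:, trio], y, cv=5).mean()
    if acc > best_acc:
        best_acc, best_trio = acc, trio
```

With the real 438-protein panel the search space is far larger (over 13 million trios), which is why such analyses typically prune candidates first.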

General General

Fully Automated Breast Density Segmentation and Classification Using Deep Learning.

In Diagnostics (Basel, Switzerland)

Breast density estimation with visual evaluation is still challenging due to low contrast and significant fluctuations in the mammograms' fatty tissue background. The primary key to breast density classification is to detect the dense tissues in the mammographic images correctly. Many methods have been proposed for breast density estimation; nevertheless, most of them are not fully automated. Moreover, they are adversely affected by low signal-to-noise ratio and by variability of density in appearance and texture. This study develops a fully automated and digitalized breast tissue segmentation and classification pipeline using advanced deep learning techniques. A conditional Generative Adversarial Network (cGAN) is applied to segment the dense tissues in mammograms. To obtain a complete system for breast density classification, we propose a Convolutional Neural Network (CNN) to classify mammograms based on the Breast Imaging-Reporting and Data System (BI-RADS) standard. The classification network is fed with the segmented masks of dense tissues generated by the cGAN. For screening mammography, 410 images of 115 patients from the INbreast dataset were used. The proposed framework can segment the dense regions with an accuracy, Dice coefficient, and Jaccard index of 98%, 88%, and 78%, respectively. Furthermore, we obtained precision, sensitivity, and specificity of 97.85%, 97.85%, and 99.28%, respectively, for breast density classification. This study's findings are promising and show that the proposed deep learning-based techniques can produce a clinically useful computer-aided tool for breast density analysis by digital mammography.

Saffari Nasibeh, Rashwan Hatem A, Abdel-Nasser Mohamed, Kumar Singh Vivek, Arenas Meritxell, Mangina Eleni, Herrera Blas, Puig Domenec

2020-Nov-23

breast cancer, breast density, convolutional neural network, deep learning, generative adversarial networks, mammograms

Pathology Pathology

Identifying Predictors of Psychological Distress During COVID-19: A Machine Learning Approach.

In Frontiers in psychology ; h5-index 92.0

Scientific understanding about the psychological impact of the COVID-19 global pandemic is in its nascent stage. Prior research suggests that demographic factors, such as gender and age, are associated with greater distress during a global health crisis. Less is known about how emotion regulation impacts levels of distress during a pandemic. The present study aimed to identify predictors of psychological distress during the COVID-19 pandemic. Participants (N = 2,787) provided demographics, history of adverse childhood experiences, current coping strategies (use of implicit and explicit emotion regulation), and current psychological distress. The overall prevalence of clinical levels of anxiety, depression, and post-traumatic stress was higher than the prevalence outside a pandemic and was higher than rates reported among healthcare workers and survivors of severe acute respiratory syndrome. Younger participants (<45 years), women, and non-binary individuals reported higher prevalence of symptoms across all measures of distress. A random forest machine learning algorithm was used to identify the strongest predictors of distress. Regression trees were developed to identify individuals at greater risk for anxiety, depression, and post-traumatic stress. Somatization and less reliance on adaptive defense mechanisms were associated with greater distress. These findings highlight the importance of assessing individuals' physical experiences of psychological distress and emotion regulation strategies to help mental health providers tailor assessments and treatment during a global health crisis.

Prout Tracy A, Zilcha-Mano Sigal, Aafjes-van Doorn Katie, Békés Vera, Christman-Cohen Isabelle, Whistler Kathryn, Kui Thomas, Di Giuseppe Mariagrazia

2020

COVID-19 pandemic, anxiety, defense mechanisms, depression, emotion regulation, machine learning, post-traumatic stress, somatization
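
The two-step analysis above (a random forest to rank predictors of distress, then shallow regression trees to flag higher-risk groups) can be sketched with synthetic data. Assigning the signal to "somatization" and "adaptive_defenses" mirrors the reported findings but is an assumption of the simulation, as are all values below.

```python
# Hedged sketch: random-forest feature ranking plus a depth-2 regression tree.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)
n = 2787                                   # sample size from the abstract
features = ["age", "somatization", "adaptive_defenses", "aces"]
X = rng.normal(size=(n, 4))
# Assumed: distress rises with somatization, falls with adaptive defenses
distress = 1.0 * X[:, 1] - 0.8 * X[:, 2] + 0.3 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, distress)
ranked = [features[i] for i in np.argsort(rf.feature_importances_)[::-1]]

# Shallow tree: interpretable splits identifying higher-risk subgroups
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, distress)
```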

General General

Parkinson's Disease Diagnosis and Severity Assessment Using Ground Reaction Forces and Neural Networks.

In Frontiers in physiology

Gait analysis plays a key role in the diagnosis of Parkinson's Disease (PD), as patients generally exhibit abnormal gait patterns compared to healthy controls. Current diagnosis and severity assessment procedures entail manual visual examinations of motor tasks, speech, and handwriting, among numerous other tests, which can vary between clinicians based on their expertise and visual observation of gait tasks. Automating the gait differentiation procedure can serve as a useful tool for early diagnosis and severity assessment of PD while limiting data collection to walking gait alone. In this research, a holistic, non-intrusive method is proposed to diagnose and assess PD severity in its early and moderate stages by using only Vertical Ground Reaction Force (VGRF). From the VGRF data, gait features are extracted and selected as training features for an Artificial Neural Network (ANN) model to diagnose PD using cross-validation. If the diagnosis is positive, another ANN model predicts the Hoehn and Yahr (H&Y) score to assess PD severity using the same VGRF data. PD diagnosis is achieved with a high accuracy of 97.4% using a simple network architecture. Additionally, the results indicate better performance compared to other, more complex machine learning models researched previously. Severity assessment is also performed on the H&Y scale with 87.1% accuracy. The results of this study show that it is plausible to use only VGRF data in diagnosing and assessing early-stage Parkinson's Disease, helping patients manage the symptoms earlier and giving them a better quality of life.

Veeraragavan Srivardhini, Gopalai Alpha Agape, Gouwanda Darwin, Ahmad Siti Anom

2020

Parkinson’s Disease, SMOTE, artificial neural network (ANN), gait analysis, machine learning
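
The diagnosis step can be sketched with a deliberately simple neural network (here scikit-learn's MLPClassifier) classifying PD versus control from VGRF-derived gait features under cross-validation. The features, their number, and the size of the PD-related shift are synthetic assumptions, not the paper's data.

```python
# Hedged sketch: simple ANN on simulated VGRF-derived gait features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, n_feats = 160, 10                       # e.g. stride-time/force statistics
y = rng.integers(0, 2, size=n)             # 1 = PD, 0 = healthy control
X = rng.normal(size=(n, n_feats))
X[y == 1, :3] += 1.2                       # assumed gait-feature shift in PD

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
cv_acc = cross_val_score(clf, X, y, cv=5).mean()
```

A second model of the same form, trained against H&Y labels instead of the binary diagnosis, would correspond to the severity-assessment stage.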

General General

Machine Learning Classification Identifies Cerebellar Contributions to Early and Moderate Cognitive Decline in Alzheimer's Disease.

In Frontiers in aging neuroscience ; h5-index 64.0

Alzheimer's disease (AD) is one of the most common forms of dementia, marked by progressively degrading cognitive function. Although cerebellar changes occur throughout AD progression, the cerebellum's involvement and predictive contribution in the earliest stages, as well as the gray and white matter components involved, remain unclear. We used MRI machine learning-based classification to assess the contribution of two tissue components [volume fraction myelin (VFM) and gray matter (GM) volume] within the whole brain, the neocortex, and the whole cerebellum as well as its anterior and posterior parts, and their predictive contribution to the first two stages of AD versus typically aging controls. While classification accuracy increased with AD stage, VFM was the best predictor for all early stages of dementia when compared with typically aging controls. However, we document overall higher cerebellar prediction accuracy compared to the whole brain, with distinct structural signatures: higher anterior cerebellar contribution to mild cognitive impairment (MCI) and higher posterior cerebellar contribution to mild/moderate stages of AD for each tissue property. Based on these different cerebellar profiles and their unique contribution to early disease stages, we propose a refined model of cerebellar contribution to early AD development.

Bruchhage Muriel M K, Correia Stephen, Malloy Paul, Salloway Stephen, Deoni Sean

2020

Alzheimer’s disease, MCI (mild cognitive impairment), cerebellum, dementia, gray matter (GM), machine learning, mild moderate AD, white matter (WM)

Surgery Surgery

Identifying Epilepsy Based on Deep Learning Using DKI Images.

In Frontiers in human neuroscience ; h5-index 79.0

Epilepsy is a serious hazard to human health. Minimally invasive surgery is currently an extremely effective treatment for refractory epilepsy. However, localizing the lesion is challenging for most patients because they are MRI-negative. Identifying epileptic activity in a local brain region can aid the localization of epileptic foci, because the classification results indicate whether a lesion is present. For simplicity, and given the data we collected, only the hippocampus was segmented as a local brain region and classified in this paper. We recruited 59 children with hippocampal epilepsy and 70 age- and sex-matched normal controls, and collected diffusion kurtosis imaging (DKI) for all subjects, because DKI can capture the pathological changes of local tissues and other regions of epileptic foci at the molecular level. Then, a hippocampus mask was made to segment the hippocampus in the FA, MD, and MK images (the parameter images of DKI) for all subjects, which were used for independent-sample t-tests and for the classification task. Finally, a convolutional neural network (CNN) based on transfer learning was developed to extract features from FA, MD, MK, and the fusion of FA and MK, and a support vector machine was employed to classify epilepsy versus normal controls. The classifier produced 90.8% accuracy for patients vs. normal controls. Experimental results showed that CNN-based feature extraction is very effective, and the high classification accuracy indicates that FA and MK are two remarkable features for identifying epilepsy, suggesting that DKI images can act as an important biomarker for epilepsy from the point of view of clinical diagnosis.

Huang Jianjun, Xu Jiahui, Kang Li, Zhang Tijiang

2020

MRI-negative, deep learning, diffusion kurtosis imaging, epilepsy, hippocampus
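
A reduced sketch of the pipeline's final stage: the paper extracts CNN features from the DKI parameter maps and classifies with an SVM. The CNN step is omitted here; crude per-image summary statistics of two synthetic "parameter maps" (stand-ins for FA and MK) are fused and fed to the SVM. Cohort sizes come from the abstract; everything else is an assumption.

```python
# Hedged sketch: FA + MK feature fusion into an SVM, with summary
# statistics standing in for CNN-extracted features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_pat, n_ctl, voxels = 59, 70, 500         # cohort sizes from the abstract
y = np.r_[np.ones(n_pat), np.zeros(n_ctl)]

def maps(shift):
    m = rng.normal(size=(n_pat + n_ctl, voxels))
    m[:n_pat] += shift                     # assumed hippocampal alteration
    return m

fa, mk = maps(0.10), maps(0.12)

def summarize(m):                          # crude stand-in for CNN features
    return np.column_stack([m.mean(1), m.std(1)])

X = np.hstack([summarize(fa), summarize(mk)])  # FA + MK feature fusion
cv_acc = cross_val_score(SVC(), X, y, cv=5).mean()
```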

General General

Scientific progress and clinical uncertainty.

In European heart journal supplements : journal of the European Society of Cardiology

On the path toward Precision Medicine, two areas are developing rapidly: genetics and artificial intelligence. In genetics, there are two current problems, both of the highest social importance. The first concerns the project, emerging in some countries, of systematically sequencing the genome of the whole population. The problem is that reading the genome is very complex, requires specific knowledge, and the medical profession is currently unprepared. The second problem concerns the now-achieved ability to modify the genome, which might be applied in the treatment of genetic diseases previously considered incurable. The techniques that can be used today are extremely delicate and carry high risks. Artificial intelligence (AI) is a branch of neuroscience ('computational neuroscience') and advanced computer science which aims to combine the operational models of the human mind with the mnemonic and calculating power of advanced cybernetics. It is applied in everything from conventional smartphone 'apps' to the most advanced computers used in various areas of diagnostic and prognostic medicine, image reading, big data management, and design of new pharmacological molecules, up to completely different applications such as spoken language, automatic driving of vehicles, insurance plans, financial strategies, etc., with enormously different degrees of complexity. Will the doctors' role survive?

Tavazzi Luigi

2020-Nov

Artificial intelligence, Deep learning, Gene-editing, Genome, Genotyping, Machine learning

Ophthalmology Ophthalmology

Identifying a Potential Key Gene, TIMP1, Associated with Liver Metastases of Uveal Melanoma by Weight Gene Co-Expression Network Analysis.

In OncoTargets and therapy

Purpose : Uveal melanoma (UM) is a primary intraocular tumor in adults, with a high percentage of metastases to the liver. Identifying potential key genes may provide information for early detection and prognosis of UM metastasis.

Patients and Methods : Differentially expressed genes (DEGs) were identified using the GSE22138 dataset. Weighted gene co-expression network analysis was used to construct co-expression modules. Functional enrichment analysis was performed for DEGs and genes of key modules. Hub genes were screened by co-expression network and protein-protein interaction network (PPI), and validated by survival analysis in The Cancer Genome Atlas database. Gene set enrichment analysis (GSEA) was used to explore the potential metastasis mechanism of UM. Transient transfection was used to investigate the effect of TIMP1 on the proliferation, migration, and invasion of UM cells.

Results : In total, 552 DEGs were identified between primary and metastatic UM, mainly enriched in extracellular matrix, cellular senescence and focal adhesion pathways. A weighted gene co-expression network was built to identify key gene modules associated with UM metastasis (n=36). The turquoise module was positively correlated with metastasis, and genes in this module were mainly enriched in peptidyl-tyrosine autophosphorylation and regulation of organ growth. The hub gene TIMP1 was screened out by co-expression network and PPI analysis. High expression of TIMP1 was related to the p53 pathway by GSEA and to short overall survival time. Experimental results indicated that overexpression of TIMP1 inhibited proliferation and migration, while it had no significant effect on invasion of UM cells.

Conclusion : Our study indicates that TIMP1 might be associated with metastasis in UM, which might have important significance for identifying patients with high risk of metastasis and predicting the prognosis of UM.

Wang Ping, Yang Xuan, Zhou Nan, Wang Jinyuan, Li Yang, Liu Yueming, Xu Xiaolin, Wei Wenbin

2020

GO analysis, WGCNA, liver metastases, pathway analysis, uveal melanoma, weighted gene co-expression network analysis
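
The WGCNA idea underlying this study can be sketched in simplified form: cluster genes by co-expression, summarize each module by its first principal component (the "eigengene"), and correlate eigengenes with the trait of interest (here, metastasis). The expression data, module sizes, and effect directions below are synthetic assumptions; real WGCNA additionally uses soft-thresholding and topological overlap, which are omitted.

```python
# Hedged, simplified co-expression module sketch (not full WGCNA).
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(9)
n_samples, n_genes = 63, 40
metastasis = rng.integers(0, 2, size=n_samples).astype(float)
expr = rng.normal(size=(n_samples, n_genes))
expr[:, :15] += 1.5 * metastasis[:, None]  # one co-regulated, trait-linked module

dissim = 1 - np.abs(np.corrcoef(expr.T))   # co-expression dissimilarity
Z = linkage(squareform(dissim, checks=False), method="average")
modules = fcluster(Z, t=2, criterion="maxclust")

cors = []
for m in np.unique(modules):
    # Module eigengene: first principal component of the module's genes
    eigengene = PCA(n_components=1).fit_transform(expr[:, modules == m]).ravel()
    cors.append(abs(np.corrcoef(eigengene, metastasis)[0, 1]))
```

The module whose eigengene correlates most strongly with the trait plays the role of the turquoise module above; hub genes would then be ranked by connectivity within it.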

Ophthalmology Ophthalmology

Initial Outcomes with Customized Myopic LASIK, Guided by Automated Ray Tracing Optimization: A Novel Technique.

In Clinical ophthalmology (Auckland, N.Z.)

Purpose : To evaluate the safety and efficacy of a novel automated ray tracing optimization for the customization of excimer ablation in myopic LASIK.

Methods : In a consecutive case series, 25 patients (50 eyes) undergoing femtosecond-laser-assisted myopic LASIK were evaluated. The novel artificial-intelligence platform first calculates the ablation profile for each case based on a model eye built from interferometry axial-length data. Low- and high-order aberration calculation is performed by ray tracing based on wavefront and Scheimpflug tomography measurements, all from a single diagnostic device. Visual acuity, refractive error, keratometry, topography, high-order aberrations and contrast sensitivity were evaluated over six months of follow-up.

Results : Change from pre- to 6 months post-operative: mean refractive error improved from -5.06 ± 2.54 diopters (D) (range -8.0 to -0.50 D) to -0.11 ± 0.09 D (range -0.25 to +0.25 D); refractive astigmatism from -1.07 ± 0.91 D (range -4.25 to 0 D) to -0.15 ± 0.04 D (range -0.25 to 0 D); and topographic astigmatism from -1.65 ± 0.85 D to -0.26 ± 0.11 D (range -0.60 to 0 D). About 65% of eyes gained one line of vision and 38% gained two lines. Pre- to post-operative high-order aberration average: RMSh changed from 0.25 μm to 0.35 μm. Contrast sensitivity improved post-operatively.

Conclusion : We report safe and effective preliminary outcomes of a novel excimer laser customization by ray tracing optimization for myopic LASIK treatments, employing several previously independent diagnostics and a customized eye-model reference for each case. By using total-eye aberration data and ray-tracing refraction calculation, the technique has the potential to offer improved and more predictable visual outcomes.

Kanellopoulos Anastasios John

2020

customized excimer ablation, femtosecond-laser assisted myopic LASIK, ray tracing excimer customization, topography-guided, wavefront-guided

Public Health Public Health

Supporting elimination of lymphatic filariasis in Samoa by predicting locations of residual infection using machine learning and geostatistics.

In Scientific reports ; h5-index 158.0

The global elimination of lymphatic filariasis (LF) is a major focus of the World Health Organization. One key challenge is locating residual infections that can perpetuate the transmission cycle. We show how a targeted sampling strategy using predictions from a geospatial model, combining random forests and geostatistics, can improve the sampling efficiency for identifying locations with high infection prevalence. Predictions were made based on the household locations of infected persons identified from previous surveys, and environmental variables relevant to mosquito density. Results show that targeting sampling using model predictions would have allowed 52% of infections to be identified by sampling just 17.7% of households. The odds ratio for identifying an infected individual in a household at a predicted high risk compared to a predicted low risk location was 10.2 (95% CI 4.2-22.8). This study provides evidence that a 'one size fits all' approach is unlikely to yield optimal results when making programmatic decisions based on model predictions. Instead, model assumptions and definitions should be tailored to each situation based on the objective of the surveillance program. When predictions are used in the context of the program objectives, they can result in a dramatic improvement in the efficiency of locating infected individuals.
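The reported odds ratio and its confidence interval follow the standard 2×2-table calculation. As a minimal sketch, with hypothetical household counts (the abstract does not give the underlying table):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table [[a, b], [c, d]] with a Wald 95% CI.

    a: infected in predicted high-risk households, b: not infected (high-risk)
    c: infected in predicted low-risk households,  d: not infected (low-risk)
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only (the paper's raw table is not given).
or_, lo, hi = odds_ratio_ci(40, 200, 10, 500)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # OR = 10.0 (95% CI 4.9-20.4)
```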

Mayfield Helen J, Sturrock Hugh, Arnold Benjamin F, Andrade-Pacheco Ricardo, Kearns Therese, Graves Patricia, Naseri Take, Thomsen Robert, Gass Katherine, Lau Colleen L

2020-Nov-25

Public Health Public Health

A genomic signature for accurate classification and prediction of clinical outcomes in cancer patients treated with immune checkpoint blockade immunotherapy.

In Scientific reports ; h5-index 158.0

Tumor mutational burden (TMB) is associated with clinical response to immunotherapy, but application has been limited to a subset of cancer patients. We hypothesized that advanced machine-learning and proper modeling could identify mutations that classify patients most likely to derive clinical benefits. Training data: Two sets of public whole-exome sequencing (WES) data for metastatic melanoma. Validation data: One set of public non-small cell lung cancer (NSCLC) data. Least Absolute Shrinkage and Selection Operator (LASSO) machine-learning and proper modeling were used to identify a set of mutations (biomarker) with maximum predictive accuracy (measured by AUROC). Kaplan-Meier and log-rank methods were used to test prediction of overall survival. The initial model considered 2139 mutations. After pruning, 161 mutations (11%) were retained. An optimal threshold of 0.41 divided patients into high-weight (HW) or low-weight (LW) TMB groups. Classification for HW-TMB was 100% (AUROC = 1.0) on melanoma learning/testing data; HW-TMB was a prognostic marker for longer overall survival. In validation data, HW-TMB was associated with survival (p = 0.0057) and predicted 6-month clinical benefit (AUROC = 0.83) in NSCLC. In conclusion, we developed and validated a 161-mutation genomic signature with "outstanding" 100% accuracy to classify melanoma patients by likelihood of response to immunotherapy. This biomarker can be adapted for clinical practice to improve cancer treatment and care.
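The AUROC values quoted here can be read as the probability that a randomly chosen responder receives a higher model score than a randomly chosen non-responder (the Mann-Whitney interpretation). A minimal sketch on toy scores, not the paper's data:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability a positive case outranks a negative one
    (Mann-Whitney interpretation); ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# If a threshold (here 0.41, as in the abstract) separates the two groups
# perfectly, AUROC is 1.0; these scores are illustrative only.
responders = [0.9, 0.7, 0.55]
non_responders = [0.35, 0.2, 0.1]
print(auroc(responders, non_responders))  # 1.0
```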

Lu Mei, Wu Kuan-Han Hank, Trudeau Sheri, Jiang Margaret, Zhao Joe, Fan Elliott

2020-Nov-25

Radiology Radiology

Detection and classification of intracranial haemorrhage on CT images using a novel deep-learning algorithm.

In Scientific reports ; h5-index 158.0

A novel deep-learning algorithm for artificial neural networks (ANNs), completely different from the back-propagation method, was developed in a previous study. The purpose of this study was to assess the feasibility of using the algorithm for the detection of intracranial haemorrhage (ICH) and the classification of its subtypes, without employing a convolutional neural network (CNN). For the detection of ICH with the summation of all the computed tomography (CT) images for each case, the area under the ROC curve (AUC) was 0.859, and the sensitivity and specificity were 78.0% and 80.0%, respectively. Regarding ICH localisation, CT images were divided into 10 subdivisions based on intracranial height. For the 41-50% subdivision, the best diagnostic performance for detecting ICH was obtained, with an AUC of 0.903, a sensitivity of 82.5%, and a specificity of 84.1%. For the classification of ICH into subtypes, the accuracy rate for subarachnoid haemorrhage (SAH) was notably high, at 91.7%. This study revealed that our approach can greatly reduce the ICH diagnosis time in an actual emergency situation with fairly good diagnostic performance.

Lee Ji Young, Kim Jong Soo, Kim Tae Yoon, Kim Young Soo

2020-Nov-25

General General

Pre-Existing Cardiovascular Conditions as Clinical Predictors of Myocarditis Reporting with Immune Checkpoint Inhibitors: A VigiBase Study.

In Cancers

Although rare, immune checkpoint inhibitor (ICI)-related myocarditis can be life-threatening, even fatal. In view of increased ICI prescription, identification of clinical risk factors for ICI-related myocarditis is of primary importance. This study aimed to assess whether pre-existing cardiovascular (CV) patient conditions are associated with the reporting of ICI-related myocarditis in VigiBase, the WHO global database of suspected adverse drug reactions (ADRs). In a (retrospective) matched case-control study, 108 cases of ICI-related myocarditis and 108 controls of ICI-related ADRs other than myocarditis were selected from VigiBase. Drugs labeled as treatment for CV conditions (used as a proxy for concomitant CV risk factors and/or CV diseases) were found to be associated more strongly with the reporting of ICI-related myocarditis than with other ICI-related ADRs (McNemar's chi-square test of marginal homogeneity: p = 0.026, Cramer's coefficient of effect size: Φ = 0.214). No significant association was found between pre-existing diabetes and ICI-related myocarditis reporting (McNemar's test of marginal homogeneity: p = 0.752). These findings invite future prospective pharmacoepidemiological studies to assess the causal relationship between pre-existing CV conditions and myocarditis onset in a cohort of cancer patients followed during ICI treatment.
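McNemar's test as used here compares the discordant pairs of a matched case-control table. A minimal sketch with hypothetical discordant counts (the paper's raw table is not given in the abstract):

```python
import math

def mcnemar(b, c):
    """McNemar's chi-square for paired binary data.
    b, c are the discordant counts (case exposed / matched control not,
    and vice versa); the p-value is the chi-square(1 df) upper tail,
    computed via erfc so only the standard library is needed."""
    chi2 = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical discordant pair counts for illustration (not the paper's data):
chi2, p = mcnemar(35, 19)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

Only the discordant cells enter the statistic; concordant pairs carry no information about marginal homogeneity, which is why the function takes just `b` and `c`.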

Noseda Roberta, Ruinelli Lorenzo, Gaag Linda C van der, Ceschi Alessandro

2020-Nov-23

(retrospective) matched case-control study, VigiBase, cardiovascular conditions, immune checkpoint inhibitor, myocarditis

Radiology Radiology

Machine-learning classification of texture features of portable chest X-ray accurately classifies COVID-19 lung infection.

In Biomedical engineering online

BACKGROUND : The large volume and suboptimal image quality of portable chest X-rays (CXRs) produced during the COVID-19 pandemic could pose significant challenges for radiologists and frontline physicians. Deep-learning artificial intelligence (AI) methods have the potential to help improve diagnostic efficiency and accuracy for reading portable CXRs.

PURPOSE : The study aimed at developing an AI imaging analysis tool to classify COVID-19 lung infection based on portable CXRs.

MATERIALS AND METHODS : Public datasets of COVID-19 (N = 130), bacterial pneumonia (N = 145), non-COVID-19 viral pneumonia (N = 145), and normal (N = 138) CXRs were analyzed. Texture and morphological features were extracted. Five supervised machine-learning AI algorithms were used to classify COVID-19 from the other conditions. Two-class and multi-class classification were performed. Statistical analysis was done using unpaired two-tailed t tests with unequal variance between groups. The performance of the classification models was assessed using receiver-operating characteristic (ROC) curve analysis.
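The abstract does not name the specific texture features, but such features are commonly derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch of one generic feature (GLCM contrast) on toy image patches, purely for illustration:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    rows, cols = len(img), len(img[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                m[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """GLCM contrast: sum over (i-j)^2 * p(i,j); higher = coarser texture."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

flat = [[0, 0], [0, 0]]      # uniform patch -> contrast 0
checker = [[0, 3], [3, 0]]   # alternating grey levels -> high contrast
print(contrast(glcm(flat)), contrast(glcm(checker)))  # 0.0 9.0
```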

RESULTS : For the two-class classification, the accuracy, sensitivity and specificity were, respectively, 100%, 100%, and 100% for COVID-19 vs normal; 96.34%, 95.35% and 97.44% for COVID-19 vs bacterial pneumonia; and 97.56%, 97.44% and 97.67% for COVID-19 vs non-COVID-19 viral pneumonia. For the multi-class classification, the combined accuracy and AUC were 79.52% and 0.87, respectively.

CONCLUSION : AI classification of texture and morphological features of portable CXRs accurately distinguishes COVID-19 lung infection in patients in multi-class datasets. Deep-learning methods have the potential to improve diagnostic efficiency and accuracy for portable CXRs.

Hussain Lal, Nguyen Tony, Li Haifang, Abbasi Adeel A, Lone Kashif J, Zhao Zirun, Zaib Mahnoor, Chen Anne, Duong Tim Q

2020-Nov-25

COVID-19, Classification, Feature extraction, Machine learning, Morphological, Texture

General General

Artificial intelligence and thermodynamics help solving arson cases.

In Scientific reports ; h5-index 158.0

In arson cases, evidence such as DNA or fingerprints is often destroyed. One of the most important evidence modalities left is relating fire accelerants to a suspect. When gasoline is used as accelerant, the aim is to find a strong indication that a gasoline sample from a fire scene is related to a sample of a suspect. Gasoline samples from a fire scene are weathered, which prohibits a straightforward comparison. We combine machine learning, thermodynamic modeling, and quantum mechanics to predict the composition of unweathered gasoline samples starting from weathered ones. Our approach predicts the initial (unweathered) composition of the sixty main components in a weathered gasoline sample, with error bars of ca. 4% when weathered up to 80% w/w. This shows that machine learning is a valuable tool for predicting the initial composition of a weathered gasoline, and thereby relating samples to suspects.

Korver Sander, Schouten Eva, Moultos Othonas A, Vergeer Peter, Grutters Michiel M P, Peschier Leo J C, Vlugt Thijs J H, Ramdin Mahinder

2020-Nov-25

Radiology Radiology

Prognostic value of texture analysis from cardiac magnetic resonance imaging in patients with Takotsubo syndrome: a machine learning based proof-of-principle approach.

In Scientific reports ; h5-index 158.0

Cardiac magnetic resonance (CMR) imaging has become an important technique for non-invasive diagnosis of takotsubo syndrome (TTS). The long-term prognostic value of CMR imaging in TTS has not been fully elucidated yet. This study sought to evaluate the prognostic value of texture analysis (TA) based on CMR images in patients with TTS using machine learning. In this multicenter study (InterTAK Registry), we investigated CMR imaging data of 58 patients (56 women, mean age 68 ± 12 years) with TTS. CMR imaging was performed in the acute to subacute phase (median time after symptom onset 4 days) of TTS. TA of the left ventricle was performed using free-hand regions-of-interest in short axis late gadolinium-enhanced and T2-weighted (T2w) images. A total of 608 TA features, plus the parameters age, gender, and body mass index, were included. Dimension reduction was performed by removing TA features with poor intra-class correlation coefficients (ICC ≤ 0.6) and redundant ones (correlation matrix with Pearson correlation coefficient r > 0.8). Five common machine-learning classifiers (artificial neural network Multilayer Perceptron, decision tree J48, NaïveBayes, RandomForest, and Sequential Minimal Optimization) with tenfold cross-validation were applied to assess 5-year outcome including major adverse cardiac and cerebrovascular events (MACCE). Dimension reduction yielded 10 TA features carrying prognostic information, all of which were based on T2w images. The NaïveBayes machine learning classifier showed overall best performance with a sensitivity of 82.9% (confidence interval (CI) 80-86.2), specificity of 83.7% (CI 75.7-92), and an area under the receiver operating characteristics curve of 0.88 (CI 0.83-0.92). This proof-of-principle study is the first to identify unique T2w-derived TA features that predict long-term outcome in patients with TTS. These features might serve as imaging prognostic biomarkers in TTS patients.
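The dimension-reduction step described above (dropping features with pairwise Pearson r > 0.8) can be sketched as a greedy filter; the toy feature table below is illustrative only:

```python
def pearson(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def prune_redundant(features, threshold=0.8):
    """Greedily keep a feature only if |r| <= threshold against all kept ones."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

# Toy feature table; f2 is an exact multiple of f1 (r = 1.0) so it is dropped.
features = {
    "f1": [1, 2, 3, 4, 5],
    "f2": [2, 4, 6, 8, 10],
    "f3": [5, 1, 4, 2, 3],
}
print(prune_redundant(features))  # ['f1', 'f3']
```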

Mannil Manoj, Kato Ken, Manka Robert, von Spiczak Jochen, Peters Benjamin, Cammann Victoria L, Kaiser Christoph, Osswald Stefan, Nguyen Thanh Ha, Horowitz John D, Katus Hugo A, Ruschitzka Frank, Ghadri Jelena R, Alkadhi Hatem, Templin Christian

2020-Nov-25

Radiology Radiology

A National Survey on Safety Management at MR Imaging Facilities in Japan.

In Magnetic resonance in medical sciences : MRMS : an official journal of Japan Society of Magnetic Resonance in Medicine

PURPOSE : To investigate safety management at Japanese facilities performing human MRI studies.

MATERIALS AND METHODS : All Japanese facilities performing human MRI studies were invited to participate in a comprehensive survey that evaluated their MRI safety management. The survey used a questionnaire prepared with the cooperation of the Safety Committee of the Japanese Society for Magnetic Resonance in Medicine. The survey addressed items pertaining to the overall MRI safety management, questions on the occurrence of incidents, and questions specific to facility and MRI scanner or examination. The survey covered the period from October 2017 to September 2018. Automated machine learning was used to identify factors associated with major incidents.

RESULTS : Of 5914 facilities, 2015 (34%) responded to the questionnaire. There was a wide variation in the rate of compliance with MRI safety management items among the participating facilities. Among the facilities responding to this questionnaire, 5% reported major incidents and 27% reported minor incidents related to MRI studies. Most major incidents involved the administration of contrast agents. The most influential factor in major incidents was the total number of MRI studies performed at the facility; this number was significantly correlated with the risk of major incidents (P < 0.0001).

CONCLUSION : There were large variations in the safety standards applied at Japanese facilities performing clinical MRI studies. The total number of MRI studies performed at a facility affected the number of major incidents.

Azuma Minako, Kumamaru Kanako K, Hirai Toshinori, Khant Zaw Aung, Koba Ritsuko, Ijichi Shinpei, Jinzaki Masahiro, Murayama Sadayuki, Aoki Shigeki

2020-Nov-26

accident, examination, magnetic resonance imaging, safety

General General

Trust does not need to be human: it is possible to trust medical AI.

In Journal of medical ethics ; h5-index 34.0

In his recent article 'Limits of trust in medical AI,' Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human-human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI predictions to support his or her decision making.

Ferrario Andrea, Loi Michele, Viganò Eleonora

2020-Nov-25

clinical ethics, information technology, philosophical ethics

General General

Predicting Volume of Distribution in Humans: Performance of in silico Methods for A Large Set of Structurally Diverse Clinical Compounds.

In Drug metabolism and disposition: the biological fate of chemicals

Volume of distribution at steady state (VD,ss) is one of the key pharmacokinetic parameters estimated during the drug discovery process. Despite considerable efforts to predict VD,ss, accuracy and choice of prediction methods remain a challenge, with evaluations constrained to a small set (<150) of compounds. To address these issues, a series of in silico methods for predicting human VD,ss directly from structure were evaluated using a large set of clinical compounds. Machine learning (ML) models were built to predict VD,ss directly, and to predict the input parameters required for mechanistic and empirical VD,ss predictions. In addition, LogD, fraction unbound in plasma (fup) and blood-to-plasma partition ratio (BPR) were measured for 254 compounds to estimate the impact of measured data on the predictive performance of mechanistic models. Furthermore, the impact of novel methodologies such as measuring partition coefficients (Kp) in adipocytes and myocytes (n=189) on VD,ss predictions was also investigated. In predicting VD,ss directly from chemical structure, both mechanistic and empirical scaling using a combination of predicted rat and dog VD,ss demonstrated comparable performance (62-71% within 3-fold). The direct ML model outperformed the other in silico methods (75% within 3-fold, r2=0.5, AAFE=2.2) when built from a larger dataset. Scaling to human from predicted rat or dog VD,ss yielded poor results (<47% within 3-fold). Measured fup and BPR significantly improved the performance of mechanistic VD,ss predictions (81% within 3-fold, r2=0.6, AAFE=2.0). Adipocyte intracellular Kp showed good correlation with VD,ss but was limited in estimating compounds with low VD,ss.

Significance Statement : This work advances the in silico prediction of VD,ss directly from structure and with the aid of in vitro data. A rigorous and comprehensive evaluation of various methods using a large set of clinical compounds (n=956) is presented. The scale of both the techniques and the number of compounds evaluated is far beyond any previously presented. The novel dataset (n=254), generated using a single protocol for each in vitro assay reported in this study, could further aid in advancing VD,ss prediction methodologies.
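The fold-based metrics quoted above (AAFE and percent within 3-fold) have simple definitions; a minimal sketch with illustrative values, not the paper's dataset:

```python
import math

def aafe(pred, obs):
    """Absolute average fold error: 10 ** mean(|log10(pred/obs)|)."""
    n = len(pred)
    return 10 ** (sum(abs(math.log10(p / o)) for p, o in zip(pred, obs)) / n)

def within_fold(pred, obs, fold=3.0):
    """Fraction of predictions within `fold`-fold of the observed value."""
    ok = sum(1 for p, o in zip(pred, obs) if 1 / fold <= p / o <= fold)
    return ok / len(pred)

# Illustrative VD,ss values (L/kg); the last prediction is 4-fold low.
observed  = [0.5, 1.0, 2.0, 8.0]
predicted = [1.0, 1.0, 1.0, 2.0]
print(f"AAFE = {aafe(predicted, observed):.2f}, "
      f"within 3-fold = {within_fold(predicted, observed):.0%}")
```

AAFE of 1.0 would be perfect prediction; an AAFE of 2.0, as in the improved mechanistic model above, means predictions are off by a factor of 2 on average.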

Murad Neha, Pasikanti Kishore K, Madej Benjamin D, Minnich Amanda, McComas Juliet M, Crouch Sabrinia, Polli Joseph W, Weber Andrew D

2020-Nov-25

Mathematical modeling, QSAR, drug discovery, drug distribution, physiologically-based pharmacokinetic modeling/PBPK, physiologically-based pharmacokinetics, plasma protein binding

Radiology Radiology

Machine learning: an approach to preoperatively predict PD-1/PD-L1 expression and outcome in intrahepatic cholangiocarcinoma using MRI biomarkers.

In ESMO open

OBJECTIVE : To investigate the preoperative predictive value of non-invasive imaging biomarkers for programmed cell death protein 1/programmed cell death protein ligand 1 (PD-1/PD-L1) expression and outcome in intrahepatic cholangiocarcinoma (ICC) using machine learning.

METHODS : PD-1/PD-L1 expression in 98 ICC patients was assessed by immunohistochemistry, and their prognostic effects were analysed using Cox regression and Kaplan-Meier analysis. Radiomic features were extracted from MRI in the arterial and portal vein phases, and three sets of Radiomics score (Radscore) with good performance were derived respectively as biomarkers for predicting PD-1, PD-L1 expression and overall survival (OS). PD-1 and PD-L1 expression models were developed using the Radscore (arterial phase), clinico-radiological factors and clinical factors, individually and in combination. The imaging-based OS predictive model was constructed by combining independent predictors among clinico-radiological, clinical factors and OS Radscore. Pathology-based OS model using pathological and clinical factors was also constructed and compared with imaging-based OS model.

RESULTS : The highest areas under the curve of the models predicting PD-1 and PD-L1 expression were 0.897 and 0.890, respectively. PD-1+ and PD-L1+ cases had worse outcomes than negative cases. The 5-year survival rates of PD-1+ and PD-1- cases were 12.5% and 48.3%, respectively (p<0.05), whereas the 5-year survival was 21.9% and 39.4% for PD-L1+ and PD-L1- cases, respectively (p<0.05). The imaging-based OS model involved the predictors of clinico-radiological 'imaging classification', radiomics 'Radscore' from the arterial phase, and carcinoembryonic antigen (CEA) level (C-index: 0.721). It performed better than the pathology-based model (C-index: 0.698) constructed from PD-1/PD-L1 expression status and CEA level. The imaging-based OS model has potential for practice when the pathology assay is unavailable and could divide ICC patients into high-risk and low-risk groups, with 1-year, 3-year and 5-year survival rates of 57.1%, 14.3% and 12.4%, and 87.8%, 63.3% and 55.3%, respectively (p<0.001).
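Survival rates like those reported here come from a Kaplan-Meier estimator. A minimal sketch on toy follow-up data, not the paper's cohort:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from (time, event) pairs;
    event=1 is death, event=0 is censoring. Returns [(t, S(t)), ...]
    with one entry per death time. Deaths are processed before
    censorings at tied times, per the standard convention."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:  # death at t: step the curve down
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve

# Toy follow-up data (months), illustrative only.
times  = [6, 12, 18, 24, 30, 36]
events = [1,  0,  1,  1,  0,  1]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Censored subjects (event=0) leave the risk set without stepping the curve down, which is what distinguishes this estimator from a naive fraction-surviving calculation.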

CONCLUSIONS : MRI radiomics can provide promising, non-invasive biomarkers for evaluating PD-1/PD-L1 expression and the prognosis of ICC patients.

Zhang Jun, Wu Zhenru, Zhang Xin, Liu Siyun, Zhao Jian, Yuan Fang, Shi Yujun, Song Bin

2020-Nov

MRI, PD-1, PD-L1, intrahepatic cholangiocarcinoma, radiomics

Pathology Pathology

Single-cell peripheral immunoprofiling of Alzheimer's and Parkinson's diseases.

In Science advances

Peripheral blood mononuclear cells (PBMCs) may provide insight into the pathogenesis of Alzheimer's disease (AD) or Parkinson's disease (PD). We investigated PBMC samples from 132 well-characterized research participants using seven canonical immune stimulants, mass cytometric identification of 35 PBMC subsets, and single-cell quantification of 15 intracellular signaling markers, followed by machine learning model development to increase predictive power. From these, three main intracellular signaling pathways were identified specifically in PBMC subsets from people with AD versus controls: reduced activation of PLCγ2 across many cell types and stimulations and selectively variable activation of STAT1 and STAT5, depending on stimulant and cell type. Our findings functionally buttress the now multiply-validated observation that a rare coding variant in PLCG2 is associated with a decreased risk of AD. Together, these data suggest enhanced PLCγ2 activity as a potential new therapeutic target for AD with a readily accessible pharmacodynamic biomarker.

Phongpreecha Thanaphong, Fernandez Rosemary, Mrdjen Dunja, Culos Anthony, Gajera Chandresh R, Wawro Adam M, Stanley Natalie, Gaudilliere Brice, Poston Kathleen L, Aghaeepour Nima, Montine Thomas J

2020-Nov

General General

Artificial Intelligence Predicts Drug Treatment Response.

In Cancer discovery ; h5-index 105.0

A new artificial intelligence-based predictive modeling framework called DrugCell could accurately predict effective drugs and treatment combinations based on tumor genotype, according to a proof-of-concept analysis.

**

2020-Nov-25

Public Health Public Health

An overview of methods of fine and ultrafine particle collection for physicochemical characterisation and toxicity assessments.

In The Science of the total environment

Particulate matter (PM) is a crucial health risk factor for respiratory and cardiovascular diseases. The smaller size fractions, ≤2.5 μm (PM2.5; fine particles) and ≤0.1 μm (PM0.1; ultrafine particles), show the highest bioactivity, but acquiring sufficient mass for in vitro and in vivo toxicological studies is challenging. We review the suitability of available instrumentation to collect the PM mass required for these assessments. Five different microenvironments representing the diverse exposure conditions in urban environments are considered in order to establish the typical PM concentrations present. The highest concentrations of PM2.5 and PM0.1 were found near traffic (i.e. roadsides and traffic intersections), followed by indoor environments, parks and behind roadside vegetation. We identify key factors to consider when selecting sampling instrumentation. These include the PM concentration on-site (low concentrations increase sampling time), the nature of sampling sites (e.g. indoors, where noise and space will be an issue), equipment handling and power supply. Physicochemical characterisation requires micro- to milligram quantities of PM, and this may increase according to the processing methods (e.g. digestion or sonication). Toxicological assessments of PM involve numerous mechanisms (e.g. inflammatory processes and oxidative stress) requiring significant amounts of PM to obtain accurate results. Optimising air sampling techniques is therefore important, as is choosing an appropriate collection medium/filter, since these have innate physical properties and the potential to interact with samples. An evaluation of methods and instrumentation used for airborne virus collection concludes that samplers operating cyclone sampling techniques (using centrifugal forces) are effective in collecting airborne viruses. We highlight that predictive modelling can help to identify pollution hotspots in an urban environment for the efficient collection of PM mass.
This review provides guidance to prepare and plan efficient sampling campaigns to collect sufficient PM mass for various purposes in a reasonable timeframe.

Kumar Prashant, Kalaiarasan Gopinath, Porter Alexandra E, Pinna Alessandra, Kłosowski Michał M, Demokritou Philip, Chung Kian Fan, Pain Christopher, Arvind D K, Arcucci Rossella, Adcock Ian M, Dilliway Claire

2020-Nov-06

Artificial intelligence, Mass collection, Particulate matter, Physicochemical characteristics, Toxicological assessments, Ultrafine particles

Cardiology Cardiology

Machine learning integration of circulating and imaging biomarkers for explainable patient-specific prediction of cardiac events: A prospective study.

In Atherosclerosis ; h5-index 71.0

BACKGROUND AND AIMS : We sought to assess the performance of a comprehensive machine learning (ML) risk score integrating circulating biomarkers and computed tomography (CT) measures for the long-term prediction of hard cardiac events in asymptomatic subjects.

METHODS : We studied 1069 subjects (age 58.2 ± 8.2 years, 54.0% males) from the prospective EISNER trial who underwent coronary artery calcium (CAC) scoring CT, serum biomarker assessment, and long-term follow-up. Epicardial adipose tissue (EAT) was quantified from CT using fully automated deep learning software. Forty-eight serum biomarkers, both established and novel, were assayed. An ML algorithm (XGBoost) was trained using clinical risk factors, CT measures (CAC score, number of coronary lesions, aortic valve calcium score, EAT volume and attenuation), and circulating biomarkers, and validated using repeated 10-fold cross validation.

RESULTS : At 14.5 ± 2.0 years, there were 50 hard cardiac events (myocardial infarction or cardiac death). The ML risk score (area under the receiver operator characteristic curve [AUC] 0.81) outperformed the CAC score (0.75) and ASCVD risk score (0.74; both p = 0.02) for the prediction of hard cardiac events. Serum biomarkers provided incremental prognostic value beyond clinical data and CT measures in the ML model (net reclassification index 0.53 [95% CI: 0.23-0.81], p < 0.0001). Among novel biomarkers, MMP-9, pentraxin 3, PIGR, and GDF-15 had highest variable importance for ML and reflect the pathways of inflammation, extracellular matrix remodeling, and fibrosis.
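The net reclassification index (NRI) quoted above measures how often the extended model moves subjects' risk estimates in the right direction. A minimal sketch of the category-free (continuous) NRI with toy scores; whether the paper used the categorical or continuous variant is not stated in the abstract:

```python
def continuous_nri(old, new, events):
    """Category-free net reclassification index.
    old/new: risk scores from the baseline and extended models;
    events: 1 if the subject had an event, else 0.
    NRI = P(up|event) - P(down|event) + P(down|nonevent) - P(up|nonevent)."""
    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for o, n, e in zip(old, new, events):
        if e:
            n_e += 1
            up_e += n > o
            down_e += n < o
        else:
            n_n += 1
            up_n += n > o
            down_n += n < o
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n

# Toy scores: the extended model raises risk for both events and lowers it
# for two of the three non-events, so the NRI is positive.
old_scores = [0.10, 0.20, 0.30, 0.40, 0.50]
new_scores = [0.25, 0.35, 0.20, 0.30, 0.55]
had_event  = [1,    1,    0,    0,    0]
print(continuous_nri(old_scores, new_scores, had_event))
```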

CONCLUSIONS : In this prospective study, ML integration of novel circulating biomarkers and noninvasive imaging measures provided superior long-term risk prediction for cardiac events compared to current risk assessment tools.

Tamarappoo Balaji K, Lin Andrew, Commandeur Frederic, McElhinney Priscilla A, Cadet Sebastien, Goeller Markus, Razipour Aryabod, Chen Xi, Gransar Heidi, Cantu Stephanie, Miller Robert Jh, Achenbach Stephan, Friedman John, Hayes Sean, Thomson Louise, Wong Nathan D, Rozanski Alan, Slomka Piotr J, Berman Daniel S, Dey Damini

2020-Nov-13

Artificial intelligence, Cardiac computed tomography, Cardiovascular risk stratification, Machine learning, Serum biomarkers

Surgery Surgery

The Costs of Breast Reconstruction and Implications for Episode-Based Bundled Payment Models.

In Plastic and reconstructive surgery ; h5-index 62.0

BACKGROUND : Implementation of payment reform for breast reconstruction following mastectomy demands a comprehensive understanding of costs related to the complex process of reconstruction. Bundled payments for services to women with breast cancer may profoundly impact reimbursement and access to breast reconstruction. The authors' objectives were to determine the contribution of cancer therapies, comorbidities, revisions, and complications to costs following immediate reconstruction and the optimal duration of episodes to incentivize cost containment for bundled payment models.

METHODS : The cohort was composed of women who underwent immediate breast reconstruction between 2009 and 2016 from the MarketScan Commercial Claims and Encounters database. Continuous enrollment for 3 months before and 24 months after reconstruction was required. Total costs were calculated within predefined episodes (30 days, 90 days, 1 year, and 2 years). Multivariable models assessed predictors of costs.

RESULTS : Among 15,377 women in the analytic cohort, 11,592 (75 percent) underwent tissue expander, 1279 (8 percent) underwent direct-to-implant, and 2506 (16 percent) underwent autologous reconstruction. Adjuvant therapies increased costs at 1 year [tissue expander, $39,978 (p < 0.001); direct-to-implant, $34,365 (p < 0.001); and autologous, $29,226 (p < 0.001)]. At 1 year, most patients had undergone tissue expander exchange (76 percent) and revisions (81 percent), and a majority of complications had occurred (87 percent). Comorbidities, revisions, and complications increased costs for all episode scenarios.

CONCLUSIONS : Episode-based bundling should consider separate bundles for medical and surgical care with adjustment for procedure type, cancer therapies, and comorbidities to limit the adverse impact on access to reconstruction. The authors' findings suggest that a 1-year time horizon may optimally capture reconstruction events and complications.

Berlin Nicholas L, Chung Kevin C, Matros Evan, Chen Jung-Sheng, Momoh Adeyiza O

2020-Dec

General General

Evaluation of cuff deflation and inflation rates on a deep learning-based automatic blood pressure measurement method: a pilot evaluation study.

In Blood pressure monitoring

OBJECTIVE : The aim of this study was to evaluate the performance of using a deep learning-based method for measuring SBPs and DBPs and the effects of cuff inflation and deflation rates on the deep learning-based blood pressure (BP) measurement (in comparison with the manual auscultatory method).

METHODS : Forty healthy subjects were recruited. SBP and DBP were measured under four conditions (i.e. standard deflation, fast deflation, slow inflation and fast inflation) using both our newly developed deep learning-based method and the reference manual auscultatory method. The BPs measured under each condition were compared between the two methods. The performance of using the deep learning-based method to measure BP changes was also evaluated.

RESULTS : There were no significant BP differences between the two methods (P > 0.05), except for the DBPs measured during the slow and fast inflation conditions. By applying the deep learning-based method, SBPs measured from fast deflation, slow inflation and fast inflation decreased significantly by 3.0, 3.5 and 4.7 mmHg (all P < 0.05), respectively, in comparison with the standard deflation condition. In contrast, the corresponding DBPs measured from the slow and fast inflation conditions increased significantly by 5.0 and 6.8 mmHg, respectively (both P < 0.05). There were no significant differences in BP changes measured by the two methods in most cases (all P > 0.05, except for DBP change in the slow and fast inflation conditions).

CONCLUSION : This study demonstrated that the deep learning-based method can achieve accurate BP measurement under the deflation and inflation conditions with different rates.

Pan Fan, He Peiyu, Chen Fei, Xu Yuhang, Zhao Qijun, Sun Ping, Zheng Dingchang

2020-Nov-23

General General

A Deep Learning Pipeline to Automate High-Resolution Arterial Segmentation with or without Intravenous Contrast.

In Annals of surgery ; h5-index 104.0

BACKGROUND : Existing methods to reconstruct vascular structures from a computerized tomography (CT) angiogram rely on contrast injection to enhance the radio-density within the vessel lumen. However, pathological changes in the vasculature may be present that prevent accurate reconstruction. In aortic aneurysmal disease, a thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in > 90% of cases. These deformations prevent the automatic extraction of vital clinical information by existing image reconstruction methods.

AIM : In this study, a deep learning architecture consisting of a modified U-Net with attention-gating was implemented to establish a high-throughput and automated segmentation pipeline of pathological blood vessels in CT images acquired with or without the use of a contrast agent.

METHODS AND RESULTS : Seventy-five patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ongoing study (Ethics Ref 13/SC/0250), manually annotated and used for model training and evaluation. Data augmentation was implemented to diversify the training data set in a ratio of 10:1. The performance of our Attention-based U-Net in extracting both the inner (blood flow) lumen and the wall structure of the aortic aneurysm from CT angiograms (CTA) was compared against a generic 3-D U-Net and displayed superior results. Implementation of this network within the aortic segmentation pipeline for both contrast and non-contrast CT images has allowed for accurate and efficient extraction of the morphological and pathological features of the entire aortic volume.
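The attention-gating mechanism added to the U-Net re-weights skip-connection features using a coarser gating signal from the decoder. The abstract gives no implementation details, so the following is a minimal NumPy sketch of a generic additive attention gate (all weights, shapes, and the function name are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate: the skip-connection feature map x is
    re-weighted by a coefficient computed jointly from x and the
    gating signal g, suppressing irrelevant activations."""
    q = np.maximum(Wx @ x + Wg @ g, 0.0)         # ReLU of combined features
    alpha = 1.0 / (1.0 + np.exp(-(psi @ q)))     # sigmoid attention coefficient
    return x * alpha                             # attention-weighted features

rng = np.random.default_rng(0)
c, n = 4, 16                      # channels x flattened voxels (toy sizes)
x = rng.normal(size=(c, n))       # skip-connection features
g = rng.normal(size=(c, n))       # coarser gating features (already upsampled)
Wx = rng.normal(size=(c, c))
Wg = rng.normal(size=(c, c))
psi = rng.normal(size=(1, c))     # single-channel projection for the coefficient
out = attention_gate(x, g, Wx, Wg, psi)
print(out.shape)                  # same shape as x, attention-weighted
```

In a U-Net, a gate like this sits on each skip connection before concatenation with the upsampled decoder features.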

CONCLUSION : This extraction method can be used to standardize aneurysmal disease management and sets the foundation for complex geometric and morphological analysis. Furthermore, this pipeline can be extended to other vascular pathologies.

Chandrashekar Anirudh, Handa Ashok, Shivakumar Natesh, Lapolla Pierfrancesco, Uberoi Raman, Grau Vicente, Lee Regent

2020-Nov-23

General General

Correspondence between monkey visual cortices and layers of a saliency map model based on a deep convolutional neural network for representations of natural images.

In eNeuro

Attentional selection is a function that allocates the brain's computational resources to the most important part of a visual scene at a specific moment. Saliency map models have been proposed as computational models to predict attentional selection within a spatial location. Recent saliency map models based on deep convolutional neural networks (DCNNs) exhibit the highest performance for predicting the location of attentional selection and human gaze, which reflect overt attention. Trained DCNNs potentially provide insight into the perceptual mechanisms of biological visual systems. However, the relationship between artificial and neural representations used for determining attentional selection and gaze location remains unknown. To understand the mechanism underlying saliency map models based on DCNNs and the neural system of attentional selection, we investigated the correspondence between layers of a DCNN saliency map model and monkey visual areas for natural image representations. We compared the characteristics of the responses in each layer of the model with those of the neural representation in the primary visual (V1), intermediate visual (V4), and inferior temporal cortices. Regardless of the DCNN layer level, the characteristics of the responses were consistent with those of the neural representation in V1. We found marked peaks of correspondence between V1 and the early level and higher-intermediate-level layers of the model. These results provide insight into the mechanism of the trained DCNN saliency map model and suggest that the neural representations in V1 play an important role in computing the saliency that mediates attentional selection, which supports the V1 saliency hypothesis. Significance Statement: Trained deep convolutional neural networks (DCNNs) potentially provide insight into the perceptual mechanisms of biological visual systems.
However, the relationship between artificial and neural representations for determining attentional selection and gaze location has not been identified. We compared the characteristics of the responses in each layer of a DCNN model for predicting attentional selection with those of the neural representation in visual cortices. We found that the characteristics of the responses in the trained DCNN model for attentional selection were consistent with those of the representation in the primary visual cortex (V1), suggesting that the activities in V1 underlie the neural representations of saliency in the visual field to exogenously guide attentional selection. This study supports the V1 saliency hypothesis.
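A standard way to quantify this kind of layer-to-area correspondence is representational similarity analysis (RSA): compare the stimulus-by-stimulus dissimilarity structure of a model layer with that of a neural population. The paper's exact analysis may differ; below is a hedged NumPy sketch using simulated responses (all data and dimensions are invented for illustration):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns for every pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(layer_resp, neural_resp):
    """Correlate the upper triangles of the two RDMs
    (each input is stimuli x features/neurons)."""
    a, b = rdm(layer_resp), rdm(neural_resp)
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

rng = np.random.default_rng(1)
stim = rng.normal(size=(20, 50))           # 20 stimuli with latent structure
v1 = stim @ rng.normal(size=(50, 30))      # simulated V1 population responses
layer = stim @ rng.normal(size=(50, 40))   # simulated model-layer responses
print(round(rsa_score(layer, v1), 2))      # correspondence between the two RDMs
```

Repeating this for every model layer against V1, V4, and IT responses yields the layer-by-area correspondence profile the study reports.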

Wagatsuma Nobuhiko, Hidaka Akinori, Tamura Hiroshi

2020-Nov-24

Attention, Computational model, Deep learning, Saliency map, V1 saliency hypothesis, Visual system

General General

Prediction of hypotension events with physiologic vital sign signatures in the intensive care unit.

In Critical care (London, England)

BACKGROUND : Even brief hypotension is associated with increased morbidity and mortality. We developed a machine learning model to predict the initial hypotension event among intensive care unit (ICU) patients and designed an alert system for bedside implementation.

MATERIALS AND METHODS : From the Medical Information Mart for Intensive Care III (MIMIC-3) dataset, minute-by-minute vital signs were extracted. A hypotension event was defined as at least five measurements within a 10-min period of systolic blood pressure ≤ 90 mmHg and mean arterial pressure ≤ 60 mmHg. Using time series data from 30-min overlapping time windows, a random forest (RF) classifier was used to predict risk of hypotension every minute. Chronologically, the first half of extracted data was used to train the model, and the second half was used to validate the trained model. The model's performance was measured with area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). Hypotension alerts were generated from the risk score time series using a stacked RF model, and a lockout time was applied for real-life implementation.
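The event definition above (at least five readings with SBP ≤ 90 mmHg and MAP ≤ 60 mmHg inside a 10-minute window of minute-by-minute samples) can be expressed directly as a scan over the vital-sign series. A minimal illustration (the function name and the synthetic data are ours, not the study's code):

```python
def first_hypotension_event(sbp, mapr, window=10, min_hits=5):
    """Return the index of the first 10-min window (minute-by-minute
    samples) containing at least `min_hits` readings with SBP <= 90
    and MAP <= 60; None if no event occurs."""
    hits = [s <= 90 and m <= 60 for s, m in zip(sbp, mapr)]
    for i in range(len(hits) - window + 1):
        if sum(hits[i:i + window]) >= min_hits:
            return i
    return None

# 20 minutes of synthetic vitals: pressures fall after minute 8
sbp = [120] * 8 + [88] * 12
mapr = [80] * 8 + [55] * 12
print(first_hypotension_event(sbp, mapr))  # → 3
```

The window starting at minute 3 is the first to contain five low readings (minutes 8-12), so the event is flagged there.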

RESULTS : We identified 1307 subjects (1580 ICU stays) as the hypotension group and 1619 subjects (2279 ICU stays) as the non-hypotension group. The RF model showed AUROC of 0.93 and 0.88 at 15 and 60 min, respectively, before hypotension, and AUPRC of 0.77 at 60 min before. Risk score trajectories revealed 80% and > 60% of hypotension predicted at 15 and 60 min before the hypotension, respectively. The stacked model with 15-min lockout produced on average 0.79 alerts/subject/hour (sensitivity 92.4%).

CONCLUSION : Clinically significant hypotension events in the ICU can be predicted at least 1 h before the initial hypotension episode. With a highly sensitive and reliable practical alert system, a vast majority of future hypotension could be captured, suggesting potential real-life utility.

Yoon Joo Heung, Jeanselme Vincent, Dubrawski Artur, Hravnak Marilyn, Pinsky Michael R, Clermont Gilles

2020-Nov-25

Artificial intelligence, Hypotension, Machine learning, Prediction

General General

Sequence-Based Deep Learning Frameworks on Enhancer-Promoter Interactions Prediction.

In Current pharmaceutical design ; h5-index 57.0

Enhancer-promoter interactions (EPIs) in the human genome are of great significance to transcriptional regulation, which tightly controls gene expression. Identification of EPIs can help us better decipher gene regulation and understand disease mechanisms. However, experimental methods to identify EPIs are constrained by funding, time and manpower, while computational methods using DNA sequences and genomic features are viable alternatives. Deep learning methods have shown promise in classification tasks and have been applied to identify EPIs. In this survey, we specifically focus on sequence-based deep learning methods and conduct a comprehensive review of the literature. We first briefly introduce existing sequence-based frameworks for EPI prediction and their technical details. After that, we elaborate on the datasets, pre-processing methods and evaluation strategies. Finally, we discuss the challenges these methods face and suggest several future opportunities.

Min Xiaoping, Lu Fengqing, Li Chunyan

2020-11-23

Attention mechanism, Convolutional Neural Network, Deep learning, Enhancer-promoter interactions, Interpretable model, Prediction, Recurrent Neural Network, Sequence features, Word embedding

oncology Oncology

Enhancing digital tomosynthesis (DTS) for lung radiotherapy guidance using patient-specific deep learning model.

In Physics in medicine and biology

Digital tomosynthesis (DTS) has been proposed as a fast low-dose imaging technique for image-guided radiation therapy (IGRT). However, due to the limited scanning angle, DTS reconstructed by the conventional FDK method suffers from significant distortions and poor plane-to-plane resolutions without full volumetric information, which severely limits its capability for image guidance. Although existing deep learning-based methods showed feasibility in restoring volumetric information in DTS, they ignored inter-patient variabilities by training the model on grouped patient data. Consequently, the restored images still suffered from blurred and inaccurate edges. In this study, we presented a DTS enhancement method based on a patient-specific deep learning model to recover the volumetric information in DTS images. The main idea is to use the patient-specific prior knowledge to train the model to learn the patient-specific correlation between DTS and the ground truth volumetric images. To validate the performance of the proposed method, we enrolled both simulated and real on-board projections from lung cancer patient data. Results demonstrated the benefits of the proposed method: (1) qualitatively, DTS enhanced by the proposed method shows CT-like high image quality with accurate and clear edges; (2) quantitatively, the enhanced DTS has low intensity errors and high structural similarity with respect to the ground truth CT images; (3) in the tumor localization study, compared to the ground truth CT-CBCT registration, the enhanced DTS shows 3D localization errors of ≤0.7 mm and ≤1.6 mm for studies using simulated and real projections, respectively; and (4) the DTS enhancement is nearly real-time. Overall, the proposed method is effective and efficient in enhancing DTS to make it a valuable tool for IGRT applications.

Jiang Zhuoran, Yin Fang-Fang, Ge Yun, Ren Lei

2020-Nov-25

digital tomosynthesis, image enhancement, image-guided radiation therapy, limited angle, patient-specific learning

Surgery Surgery

Development of a machine learning algorithm to predict intubation among hospitalized patients with COVID-19.

In Journal of critical care ; h5-index 48.0

PURPOSE : The purpose of this study is to develop a machine learning algorithm to predict future intubation among patients diagnosed or suspected with COVID-19.

MATERIALS AND METHODS : This is a retrospective cohort study of patients diagnosed or under investigation for COVID-19. A machine learning algorithm was trained to predict future presence of intubation based on prior vitals, laboratory, and demographic data. Model performance was compared to the ROX index, a validated prognostic tool for prediction of mechanical ventilation.
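For reference, the ROX index used here as the baseline comparator is commonly computed as the SpO2/FiO2 ratio divided by respiratory rate (Roca et al.); a one-line sketch (function name and example values are ours):

```python
def rox_index(spo2, fio2, resp_rate):
    """ROX index: (SpO2% / FiO2 fraction) / respiratory rate (breaths/min).
    Higher values indicate lower risk of needing mechanical ventilation."""
    return (spo2 / fio2) / resp_rate

# e.g. SpO2 92% on FiO2 0.5, breathing 25 breaths/min
print(round(rox_index(92, 0.5, 25), 2))  # → 7.36
```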

RESULTS : 4087 patients admitted to five hospitals between February 2020 and April 2020 were included. 11.03% of patients were intubated. The machine learning model outperformed the ROX index, demonstrating an area under the receiver operating characteristic curve (AUC) of 0.84 and 0.64, and area under the precision-recall curve (AUPRC) of 0.30 and 0.13, respectively. In the Kaplan-Meier analysis, patients alerted by the model were more likely to require intubation during their admission (p < 0.0001).

CONCLUSION : In patients diagnosed or under investigation for COVID-19, machine learning can be used to predict future risk of intubation based on clinical data which are routinely collected and available in clinical setting. Such an approach may facilitate identification of high-risk patients to assist in clinical care.

Arvind Varun, Kim Jun S, Cho Brian H, Geng Eric, Cho Samuel K

2020-Nov-16

COVID-19, Intubation, Machine learning, Prediction, Respiratory distress

General General

A systematic review of center of pressure measures to quantify gait changes in older adults.

In Experimental gerontology ; h5-index 47.0

Measures of gait center of pressure (COP) can be recorded using simple available technologies in clinical settings and thus can be used to characterize gait quality in older adults and its relationship to falls. The aim of this systematic review was to investigate the association between measures of gait COP and aging and falls. A comprehensive search of electronic databases including MEDLINE, Embase, Cochrane Central Register of Controlled Trials, CINAHL (EBSCO), Ageline (EBSCO) and Scopus was performed. The initial search yielded 2809 papers. After removing duplicates and applying study inclusion/exclusion criteria, 34 papers were included in the review. Gait COP has been examined during three tasks: normal walking, gait initiation, and obstacle negotiation. The majority of studies examined mean COP position and velocity as outcome measures. Overall, gait in older adults was characterized by more medial COP trajectory in normal walking and lower average anterior-posterior and medio-lateral COP displacements and velocity in both gait initiation and obstacle crossing. Moreover, findings suggest that Tai chi training can enhance older adults' balance control during gait initiation as demonstrated by greater COP backward, medial and forward shift in all three phases of gait initiation. These findings should be interpreted cautiously due to inadequacy of evidence as well as methodological limitations of the studies such as small sample size, limited numbers of 'fallers', lack of a control group, and lack of interpretation of COP outcomes with respect to fall risk. COP measures can be adopted to assess fall-related gait changes in older adults but more complex measures of COP that reveal the dynamic nature of COP behavior in step-to-step variations are needed to adequately characterize gait changes in older adults.

Mehdizadeh Sina, Van Ooteghem Karen, Gulka Heidi, Nabavi Hoda, Faieghi Mohammadreza, Taati Babak, Iaboni Andrea

2020-Nov-22

Biomechanics, Geriatrics, Stability, Walking

General General

Comprehensive Mapping of Key Regulatory Networks that Drive Oncogene Expression.

In Cell reports ; h5-index 119.0

Gene expression is controlled by the collective binding of transcription factors to cis-regulatory regions. Deciphering gene-centered regulatory networks is vital to understanding and controlling gene misexpression in human disease; however, systematic approaches to uncovering regulatory networks have been lacking. Here we present high-throughput interrogation of gene-centered activation networks (HIGAN), a pipeline that employs a suite of multifaceted genomic approaches to connect upstream signaling inputs, trans-acting TFs, and cis-regulatory elements. We apply HIGAN to understand the aberrant activation of the cytidine deaminase APOBEC3B, an intrinsic source of cancer hypermutation. We reveal that nuclear factor κB (NF-κB) and AP-1 pathways are the most salient trans-acting inputs, with minor roles for other inflammatory pathways. We identify a cis-regulatory architecture dominated by a major intronic enhancer that requires coordinated NF-κB and AP-1 activity with secondary inputs from distal regulatory regions. Our data demonstrate how integration of cis and trans genomic screening platforms provides a paradigm for building gene-centered regulatory networks.

Lin Lin, Holmes Benjamin, Shen Max W, Kammeron Darnell, Geijsen Niels, Gifford David K, Sherwood Richard I

2020-Nov-24

AP-1 signaling, CRISPR-Cas9 screening, NF-κB signaling, cytidine deaminase, in-cis and in-trans regulation

Radiology Radiology

Artificial intelligence to predict the BRAFV600E mutation in patients with thyroid cancer.

In PloS one ; h5-index 176.0

PURPOSE : To investigate whether a computer-aided diagnosis (CAD) program developed using the deep learning convolutional neural network (CNN) on neck US images can predict the BRAFV600E mutation in thyroid cancer.

METHODS : 469 thyroid cancers in 469 patients were included in this retrospective study. A CAD program recently developed using the deep CNN provided risks of malignancy (0-100%) as well as binary results (cancer or not). Using the CAD program, we calculated the risk of malignancy based on a US image of each thyroid nodule (CAD value). Univariate and multivariate logistic regression analyses were performed including patient demographics, the American College of Radiology (ACR) Thyroid Imaging, Reporting and Data System (TIRADS) categories and risks of malignancy calculated through CAD to identify independent predictive factors for the BRAFV600E mutation in thyroid cancer. The predictive power of the CAD value and final multivariable model for the BRAFV600E mutation in thyroid cancer were measured using the area under the receiver operating characteristic (ROC) curves.
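The modeling step described above, comparing a CAD-value-only predictor against a multivariable logistic model by ROC AUC, can be sketched with scikit-learn. Everything below is synthetic (the effect sizes, distributions, and seed are our stand-ins, not the study's data); it illustrates only the workflow of fitting nested logistic models and comparing their AUCs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 469                               # cohort size as in the study; data simulated
age = rng.normal(50, 12, n)           # hypothetical patient ages
size = rng.normal(15, 6, n)           # hypothetical nodule sizes (mm)
cad = rng.uniform(0, 100, n)          # hypothetical CAD malignancy-risk values
# Labels simulated to loosely follow the reported directions of effect
logit = 0.03 * (age - 50) - 0.05 * (size - 15) + 0.02 * (cad - 50) + 1.4
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_cad = cad.reshape(-1, 1)                    # CAD value alone
X_full = np.column_stack([age, size, cad])    # multivariable model
auc_cad = roc_auc_score(y, LogisticRegression().fit(X_cad, y).predict_proba(X_cad)[:, 1])
auc_full = roc_auc_score(y, LogisticRegression().fit(X_full, y).predict_proba(X_full)[:, 1])
print(round(auc_cad, 2), round(auc_full, 2))
```

In the study itself, the AUC difference between the two models was tested formally rather than compared in-sample as here.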

RESULTS : In this study, 380 (81%) patients were positive and 89 (19%) patients were negative for the BRAFV600E mutation. On multivariate analysis, older age (OR = 1.025, p = 0.018), smaller size (OR = 0.963, p = 0.006), and higher CAD value (OR = 1.016, p = 0.004) were significantly associated with the BRAFV600E mutation. The CAD value yielded an AUC of 0.646 (95% CI: 0.576, 0.716) for predicting the BRAFV600E mutation, while the multivariable model yielded an AUC of 0.706 (95% CI: 0.576, 0.716). The multivariable model showed significantly better performance than the CAD value alone (p = 0.004).

CONCLUSION : Deep learning-based CAD for thyroid US can help us predict the BRAFV600E mutation in thyroid cancer. More multi-center studies with more cases are needed to further validate our study results.

Yoon Jiyoung, Lee Eunjung, Koo Ja Seung, Yoon Jung Hyun, Nam Kee-Hyun, Lee Jandee, Jo Young Suk, Moon Hee Jung, Park Vivian Youngjean, Kwak Jin Young

2020

General General

Identifying transcriptomic correlates of histology using deep learning.

In PloS one ; h5-index 176.0

Linking phenotypes to specific gene expression profiles is an extremely important problem in biology, which has been approached mainly by correlation methods or, more fundamentally, by studying the effects of gene perturbations. However, genome-wide perturbations involve extensive experimental efforts, which may be prohibitive for certain organisms. On the other hand, the characterization of the various phenotypes frequently requires an expert's subjective interpretation, such as a histopathologist's description of tissue slide images in terms of complex visual features (e.g. 'acinar structures'). In this paper, we use Deep Learning to eliminate the inherent subjective nature of these visual histological features and link them to genomic data, thus establishing a more precisely quantifiable correlation between transcriptomes and phenotypes. Using a dataset of whole slide images with matching gene expression data from 39 normal tissue types, we first developed a Deep Learning tissue classifier with an accuracy of 94%. Then we searched for genes whose expression correlates with features inferred by the classifier and demonstrate that Deep Learning can automatically derive visual (phenotypical) features that are well correlated with the transcriptome and therefore biologically interpretable. As we are particularly concerned with interpretability and explainability of the inferred histological models, we also develop visualizations of the inferred features and compare them with gene expression patterns determined by immunohistochemistry. This can be viewed as a first step toward bridging the gap between the level of genes and the cellular organization of tissues.

Badea Liviu, Stănescu Emil

2020

General General

SE-stacking: Improving user purchase behavior prediction by information fusion and ensemble learning.

In PloS one ; h5-index 176.0

Online shopping behavior is characterized by fine-grained feature dimensions and data sparsity, and predicting it is a challenging task in e-commerce. Previous studies on user behavior prediction did not seriously discuss feature selection and ensemble design, which are important to improving the performance of machine learning algorithms. In this paper, we proposed an SE-stacking model based on information fusion and ensemble learning for user purchase behavior prediction. After successfully using the ensemble feature selection method to screen purchase-related factors, we used the stacking algorithm for user purchase behavior prediction. To avoid bias in the prediction results, we optimized the model by selecting ten different types of models as base learners and tuning their parameters individually. Experiments conducted on a publicly available dataset show that the SE-stacking model can achieve a 98.40% F1 score, approximately 0.09% higher than the optimal base models. The SE-stacking model not only performs well in predicting user purchase behavior but also has practical value in real e-commerce settings, and it is of significance for academic research and the development of this field.
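The stacking step, in which heterogeneous base learners feed their out-of-fold predictions to a meta-learner, can be sketched with scikit-learn's StackingClassifier. The base-learner set, data, and parameters below are illustrative only; the paper uses ten tuned base models on screened e-commerce features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for purchase-behavior features (the paper first screens
# real purchase-related factors with ensemble feature selection)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Heterogeneous base learners; a logistic meta-learner combines their
# cross-validated predictions
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression())
stack.fit(Xtr, ytr)
f1 = f1_score(yte, stack.predict(Xte))
print(round(f1, 2))
```

Using diverse base-learner families is the design choice the paper leans on: the meta-learner can exploit their uncorrelated errors.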

Xu Jing, Wang Jie, Tian Ye, Yan Jiangpeng, Li Xiu, Gao Xin

2020

Radiology Radiology

Neuroimaging Markers for Studying Gulf-War Illness: Single-Subject Level Analytical Method Based on Machine Learning.

In Brain sciences

Gulf War illness (GWI) refers to the multitude of chronic health symptoms, spanning from fatigue, musculoskeletal pain, and neurological complaints to respiratory, gastrointestinal, and dermatologic symptoms experienced by about 250,000 GW veterans who served in the 1991 Gulf War (GW). Longitudinal studies showed that the severity of these symptoms often remains unchanged even years after the GW, and these veterans with GWI continue to have poorer general health and more chronic medical conditions than their non-deployed counterparts. For better management and treatment of this condition, there is an urgent need for developing objective biomarkers that can help with simple and accurate diagnosis of GWI. In this study, we applied multiple neuroimaging techniques, including T1-weighted magnetic resonance imaging (T1W-MRI), diffusion tensor imaging (DTI), and novel neurite density imaging (NDI) to perform both a group-level statistical comparison and a single-subject level machine learning (ML) analysis to identify diagnostic imaging features of GWI. Our results supported NDI as the most sensitive in defining GWI characteristics. In particular, our classifier trained with white matter NDI features achieved an accuracy of 90% and F-score of 0.941 for classifying GWI cases from controls after cross-validation. These results are consistent with our previous study, which suggests that NDI measures are sensitive to the microstructural and macrostructural changes in the brain of veterans with GWI, which can be valuable for designing better diagnostic methods and treatment efficacy studies.

Guan Yi, Cheng Chia-Hsin, Chen Weifan, Zhang Yingqi, Koo Sophia, Krengel Maxine, Janulewicz Patricia, Toomey Rosemary, Yang Ehwa, Bhadelia Rafeeque, Steele Lea, Kim Jae-Hun, Sullivan Kimberly, Koo Bang-Bon

2020-Nov-20

Gulf War illness, Kansas case criteria, MRI, diffusion, grey matter, machine learning, neurite density imaging, objective biomarker

General General

Using convolutional neural networks to decode EEG-based functional brain network with different severity of acrophobia.

In Journal of neural engineering ; h5-index 52.0

OBJECTIVE : The prevalence of acrophobia is high, especially with the rise of many high-rise buildings. In recent years, researchers have begun to analyse acrophobia from the neuroscience perspective, especially to improve virtual reality exposure therapy (VRET). Electroencephalography (EEG) is an informative neuroimaging technique, but it is rarely used for acrophobia. The purpose of this study is to evaluate the effectiveness of using EEGs to identify the degree of acrophobia objectively.

APPROACH : EEG data were collected by virtual reality (VR) exposure experiments. We classified all subjects' degrees of acrophobia into three categories, where their questionnaire scores and behavior data showed significant differences. Using synchronization likelihood, we computed the functional connectivity between each pair of channels and then obtained complex networks named functional brain networks (FBNs). Basic topological features and community structure features were extracted from the FBNs. Statistical results demonstrated that FBN features can be used to distinguish different groups of subjects. We trained machine learning (ML) algorithms with FBN features as inputs and trained convolutional neural networks (CNNs) with FBNs directly as inputs.
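The FBN construction step, pairwise channel synchronization thresholded into a graph, can be illustrated as follows. Note the paper uses synchronization likelihood; absolute Pearson correlation is substituted here as a simpler stand-in, and all data are synthetic:

```python
import numpy as np

def functional_brain_network(eeg, threshold=0.5):
    """Build a functional connectivity graph from multichannel EEG:
    pairwise channel connectivity (here |Pearson correlation| as a
    stand-in for synchronization likelihood), thresholded into a
    binary adjacency matrix with no self-loops."""
    conn = np.abs(np.corrcoef(eeg))        # channels x channels connectivity
    adj = (conn >= threshold).astype(int)  # keep strong links as graph edges
    np.fill_diagonal(adj, 0)               # remove self-loops
    return adj

rng = np.random.default_rng(3)
source = rng.normal(size=(1, 512))                    # shared oscillatory source
eeg = 0.7 * source + 0.3 * rng.normal(size=(8, 512))  # 8 correlated channels
adj = functional_brain_network(eeg)
print(adj.sum() // 2)  # number of edges in the resulting FBN
```

Topological features (degree distributions, community structure) are then extracted from adjacency matrices like this one and fed to the classifiers.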

MAIN RESULTS : It turns out that using FBN to identify the severity of acrophobia is feasible. For ML algorithms, the community structure features of some cerebral cortex regions outperform typical topological features of the whole brain, in terms of classification accuracy. However, the performances of CNN algorithms are better than those of the ML algorithms. The CNN with ResNet performs the best (accuracy reached 98.46 ± 0.42%).

SIGNIFICANCE : These observations indicate that community structures of certain cerebral cortex regions could be used to identify the degree of acrophobia. The proposed CNN framework can provide objective feedback, which could help build closed-loop VRET portable systems.

Wang Qiaoxiu, Wang Hong, Hu Fo, Hua Chengcheng

2020-Nov-25

CNN, EEG, acrophobia, functional connectivity, virtual reality

General General

Copula-based Data Augmentation on a Deep Learning Architecture for Cardiac Sensor Fusion.

In IEEE journal of biomedical and health informatics

In the wake of Big Data, traditional Machine Learning techniques are now often integrated in the clinical workflow. Despite being more capable, Deep Learning methods are not equally accepted given their unsatiated need for great amounts of training data and transversal use of the same architectures in fundamentally different areas with weakly-substantiated adaptations. To address the former, a cardiorespiratory signal synthesizer was designed by conditional sampling from a multimodally trained stochastic system of Gaussian copulas integrated in a Markov chain. With respect to the latter, a multi-branch convolutional neural network architecture was conceived to learn the best cardiac sensor-fusion strategy at every abstraction layer. The network was tailored to the tasks of cycle detection and classification for different cardiac modality combinations by a synthesizer-based data augmentation training framework and Bayesian hyperparameter optimization. The synthesizer yielded highly realistic signals in the time, frequency and phase domains for both healthy and pathological heart cycles as well as artifacts of different modalities. Benchmarking suggested that the network is able to surpass previous architectures and data augmentation provided a performance boost in realistic data availability scenarios. These included insufficient training data volume, as low as 150 cycles long, artifact contamination and absence of a classification data type in training.
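The core of such a synthesizer, sampling from a Gaussian copula, maps correlated Gaussian draws through the standard-normal CDF to obtain dependent uniform marginals, which can then feed any inverse-CDF for the desired signal-feature distributions. A minimal NumPy sketch (the correlation matrix and sample size are illustrative; the paper additionally conditions the copulas within a Markov chain):

```python
from math import erf, sqrt

import numpy as np

def gaussian_copula_samples(corr, n, rng):
    """Draw n samples from a Gaussian copula with correlation matrix
    `corr`: correlated normals pushed through the standard-normal CDF
    yield dependent uniforms on (0, 1)."""
    L = np.linalg.cholesky(corr)                   # factor the correlation
    z = rng.normal(size=(n, corr.shape[0])) @ L.T  # correlated Gaussians
    phi = np.vectorize(lambda v: 0.5 * (1 + erf(v / sqrt(2))))
    return phi(z)                                  # dependent uniform marginals

rng = np.random.default_rng(5)
corr = np.array([[1.0, 0.8],        # e.g. two dependent cardiac-signal features
                 [0.8, 1.0]])
u = gaussian_copula_samples(corr, 5000, rng)
print(u.shape, round(np.corrcoef(u.T)[0, 1], 2))
```

The copula separates the dependence structure from the marginals, which is what lets the synthesizer impose realistic correlations across modalities while keeping each modality's own distribution.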

Silva Diogo, Leonhardt Steffen, Hoog Antink Christoph

2020-Nov-25

General General

Artificial Intelligence Based Blood Pressure Estimation From Auscultatory and Oscillometric Waveforms: A Methodological Review.

In IEEE reviews in biomedical engineering

Cardiovascular disease is the number one cause of death globally, with elevated blood pressure (BP) being the single largest risk factor. Hence, BP is an important physiological parameter used as an indicator of cardiovascular health. The use of automated non-invasive blood pressure (NIBP) measurement devices is growing, as measurements can be taken by patients at home. While the oscillometric technique is most common, some automated NIBP measurement methods have been developed based on the auscultatory technique. By utilizing (relatively) large BP data annotated by experts, models can be trained using machine learning and statistical concepts to develop novel NIBP estimation algorithms. Amongst artificial intelligence (AI) techniques, deep learning has received increasing attention in different fields due to its strength in data classification and feature extraction problems. This paper reviews AI-based BP estimation methods with a focus on recent advances in deep learning-based approaches within the field. Various architectures and methodologies proposed to date are discussed to clarify their strengths and weaknesses. Based on the literature reviewed, deep learning brings plausible benefits to the field of BP estimation. We also discuss some limitations which can hinder the widespread adoption of deep learning in the field and suggest frameworks to overcome these challenges.

Argha Ahmadreza, Celler Branko George, Lovell Nigel Hamilton

2020-Nov-25

General General

Two-Dimensional Stockwell Transform and Deep Convolutional Neural Network for Multi-Class Diagnosis of Pathological Brain.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society

Since brain lesion detection and classification is a vital diagnostic task, in this paper the problem of brain magnetic resonance imaging (MRI) classification is investigated. Recent advances in machine learning and deep learning allow researchers to develop robust computer-aided diagnosis (CAD) tools for classification of brain lesions. Feature extraction is an essential step in any machine learning scheme. Time-frequency analysis methods provide localized information that makes them attractive for image classification applications. Owing to the advantages of the two-dimensional discrete orthonormal Stockwell transform (2D DOST), we propose to use it to extract efficient features from brain MRIs and obtain the feature matrix. Since some features are irrelevant, two-directional two-dimensional principal component analysis ((2D)2PCA) is used to reduce the dimension of the feature matrix. Finally, convolutional neural networks (CNNs) are designed and trained for MRI classification. Simulation results indicate that the proposed CAD tool outperforms recently introduced ones and can efficiently diagnose MRI scans.
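The (2D)2PCA reduction step projects each 2-D feature matrix onto the leading eigenvectors of both the row-direction and column-direction covariance matrices, shrinking both dimensions at once. A minimal NumPy sketch on random stand-in feature maps (the sizes and retained-component counts are illustrative, not the paper's settings):

```python
import numpy as np

def two_directional_2dpca(images, k_row, k_col):
    """(2D)^2-PCA sketch: learn a right (row-direction) basis X and a
    left (column-direction) basis Z from image covariance matrices,
    then compress each image A to Z^T A X."""
    A = np.stack(images).astype(float)
    mean = A.mean(axis=0)
    C_row = sum((a - mean).T @ (a - mean) for a in A) / len(A)  # w x w
    C_col = sum((a - mean) @ (a - mean).T for a in A) / len(A)  # h x h
    # eigh returns ascending eigenvalues; reverse to take the top components
    X = np.linalg.eigh(C_row)[1][:, ::-1][:, :k_row]
    Z = np.linalg.eigh(C_col)[1][:, ::-1][:, :k_col]
    return np.array([Z.T @ a @ X for a in A])

rng = np.random.default_rng(9)
imgs = rng.normal(size=(30, 16, 16))   # stand-in for 2D-DOST feature matrices
feats = two_directional_2dpca(imgs, 4, 4)
print(feats.shape)                     # → (30, 4, 4)
```

Working directly on 2-D matrices avoids the vectorization step of classical PCA, which is why (2D)2PCA suits image-like feature maps such as 2D-DOST outputs.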

Soleimani Mohsen, Vahidi Aram, Vaseghi Behrouz

2020-Nov-25

General

Identification of Gait Events in Healthy Subjects and with Parkinson's Disease using Inertial Sensors: An Adaptive Unsupervised Learning Approach.

In IEEE transactions on neural systems and rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology Society

Automatic identification of gait events is an essential component of the control scheme of assistive robotic devices. Many available techniques suffer limitations for real-time implementations and in guaranteeing high performance when identifying events in subjects with gait impairments. Machine learning algorithms offer a solution by enabling the training of different models to represent the gait patterns of different subjects. Here our aim is twofold: to remove the need for training stages using unsupervised learning, and to modify the parameters according to the changes within a walking trial using adaptive procedures. We developed two adaptive unsupervised algorithms for real-time detection of four gait events, using only signals from two single-IMU foot-mounted wearable devices. We evaluated the algorithms using data collected from five healthy adults and seven subjects with Parkinson's disease (PD) walking overground and on a treadmill. Both algorithms obtained high performance in terms of accuracy (F1-score ≥ 0.95 for both groups) and timing agreement using force-sensitive resistors as reference (mean absolute differences of 66±53 msec for the healthy group and 58±63 msec for the PD group). The proposed algorithms demonstrated the potential to learn optimal parameters for a particular participant and to detect gait events without additional sensors, external labeling, or long training stages.
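
As a deliberately simplified sketch of IMU-based event detection (the fixed-fraction threshold and synthetic gyro trace are ours; the paper's algorithms adapt their parameters online and detect four events, not one):

```python
import numpy as np

def detect_midswing_peaks(gyro, fs, min_stride_s=0.8):
    """Illustrative peak detector for a foot-mounted gyroscope channel.

    Flags mid-swing candidates as local maxima above an amplitude
    threshold, with a refractory period between strides; heel-strike
    and toe-off would then be searched around each detected peak.
    """
    thr = 0.5 * np.max(np.abs(gyro))        # crude amplitude threshold
    min_gap = int(min_stride_s * fs)        # refractory period in samples
    peaks, last = [], -min_gap
    for i in range(1, len(gyro) - 1):
        if gyro[i] > thr and gyro[i] >= gyro[i - 1] and gyro[i] > gyro[i + 1]:
            if i - last >= min_gap:
                peaks.append(i)
                last = i
    return np.array(peaks)

# Synthetic sagittal-plane gyro trace: one swing peak per second at 100 Hz
fs = 100
t = np.arange(0, 5, 1 / fs)
sig = np.sin(2 * np.pi * 1.0 * t) ** 3      # sharp positive lobe once per cycle
events = detect_midswing_peaks(sig, fs)     # -> 5 peaks, one per gait cycle
```

The point of the paper is precisely that such fixed thresholds fail on impaired gait, motivating the adaptive, per-subject parameter updates.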

Pérez-Ibarra Juan C, Siqueira Adriano A G, Krebs Hermano I

2020-Nov-25

General

DRCNN: Dynamic Routing Convolutional Neural Network for Multi-View 3D Object Recognition.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

3D object recognition is one of the most important tasks in 3D data processing and has been extensively studied recently. Researchers have proposed various 3D recognition methods based on deep learning, among which view-based approaches form a typical class. However, in view-based methods, the view pooling layer commonly used to fuse multi-view features causes a loss of visual information. To alleviate this problem, in this paper we construct a novel layer called the Dynamic Routing Layer (DRL) by modifying the dynamic routing algorithm of the capsule network to more effectively fuse the features of each view. Concretely, in DRL we use rearrangement and affine transformation to convert features, then leverage the modified dynamic routing algorithm to adaptively choose the converted features, instead of ignoring all but the most active feature as in the view pooling layer. We also illustrate that the view pooling layer is a special case of our DRL. In addition, based on DRL, we further present a Dynamic Routing Convolutional Neural Network (DRCNN) for multi-view 3D object recognition. Our experiments on three 3D benchmark datasets show that the proposed DRCNN outperforms many state-of-the-art methods, demonstrating the efficacy of our approach.
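
A minimal numpy sketch of routing-by-agreement over per-view feature vectors, in the spirit of the DRL described above (the shapes, iteration count, and plain softmax/squash forms are our assumptions, not the paper's exact layer):

```python
import numpy as np

def squash(s, eps=1e-9):
    """Capsule squashing: keeps direction, maps the norm into [0, 1)."""
    n2 = np.sum(s * s)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

def route_views(u, iters=3):
    """Fuse per-view feature vectors u of shape (V, d) by routing.

    Coupling weights start uniform and are sharpened toward views whose
    features agree with the current fused vector -- unlike max-view
    pooling, every view keeps a (soft) contribution.
    """
    b = np.zeros(u.shape[0])             # routing logits, one per view
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum()  # softmax coupling coefficients
        v = squash((c[:, None] * u).sum(axis=0))
        b = b + u @ v                    # agreement update
    return v, c

rng = np.random.default_rng(1)
views = rng.normal(size=(12, 8))         # 12 views, 8-d features each
fused, coupling = route_views(views)
```

View pooling corresponds to the degenerate case where the coupling vector is one-hot on the most active view, which is why the paper can present pooling as a special case of DRL.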

Sun Kai, Zhang Jiangshe, Liu Junmin, Yu Ruixuan, Song Zengjie

2020-Nov-25

General

Quantifying the Roles of Space and Stochasticity in Computer Simulations for Cell Biology and Cellular Biochemistry.

In Molecular biology of the cell ; h5-index 78.0

Most of the fascinating phenomena studied in cell biology emerge from interactions among highly organized multi-molecular structures embedded into complex and frequently dynamic cellular morphologies. For the exploration of such systems, computer simulation has proved to be an invaluable tool, and many researchers in this field have developed sophisticated computational models for application to specific cell biological questions. However, it is often difficult to reconcile conflicting computational results that use different approaches to describe the same phenomenon. To address this issue systematically, we have defined a series of computational test cases ranging from very simple to moderately complex, varying key features of dimensionality, reaction type, reaction speed, crowding, and cell size. We then quantified how explicit spatial and/or stochastic implementations alter outcomes, even when all methods use the same reaction network, rates, and concentrations. For simple cases we generally find minor differences in solutions of the same problem. However, we observe increasing discordance as the effects of localization, dimensionality reduction, and irreversible enzymatic reactions are combined. We discuss the strengths and limitations of commonly used computational approaches for exploring cell biological questions and provide a framework for decision-making by researchers developing new models. As computational power and speed continue to increase at a remarkable rate, the dream of a fully comprehensive computational model of a living cell may be drawing closer to reality, but our analysis demonstrates that it will be crucial to evaluate the accuracy of such models critically and systematically.
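
The deterministic-versus-stochastic discordance the authors quantify can be seen even on the simplest possible test case; a stdlib-only sketch (rates and molecule counts are ours) comparing an ensemble of Gillespie trajectories for A → B with the ODE solution:

```python
import random
import math

def gillespie_decay(n0, k, t_end, seed=0):
    """Gillespie SSA for the unimolecular reaction A -> B at rate k.

    Returns the number of A molecules remaining at t_end for one
    stochastic trajectory; the deterministic ODE gives n0 * exp(-k * t).
    """
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        a = k * n                        # total propensity
        t += rng.expovariate(a)          # exponential waiting time
        if t > t_end:
            break
        n -= 1                           # one A converts to B
    return n

# Stochastic mean over many runs vs. the deterministic solution
k, n0, t_end = 0.5, 200, 2.0
runs = [gillespie_decay(n0, k, t_end, seed=s) for s in range(300)]
stochastic_mean = sum(runs) / len(runs)
ode_value = n0 * math.exp(-k * t_end)    # ~73.6 molecules
```

For this linear reaction the stochastic mean matches the ODE; the paper's point is that the two descriptions diverge once localization, dimensionality reduction, and irreversible enzymatic steps are combined.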

Johnson M E, Chen A, Faeder J R, Henning P, Moraru I I, Meier-Schellersheim M, Murphy R F, Prüstel T, Theriot J A, Uhrmacher A M

2020-Nov-25

Radiology

Multimodality cardiac imaging in the 21st century: evolution, advances and future opportunities for innovation.

In The British journal of radiology

Cardiovascular imaging has significantly evolved since the turn of the century. Progress in the last two decades has been marked by advances in every modality used to image the heart, including echocardiography, cardiac magnetic resonance, cardiac CT and nuclear cardiology. There has also been a dramatic increase in hybrid and fusion modalities that leverage the unique capabilities of two imaging techniques simultaneously, as well as the incorporation of artificial intelligence and machine learning into the clinical workflow. These advances in non-invasive cardiac imaging have guided patient management and improved clinical outcomes. The technological developments of the past 20 years have also given rise to new imaging subspecialities and increased the demand for dedicated cardiac imagers who are cross-trained in multiple modalities. This state-of-the-art review summarizes the evolution of multimodality cardiac imaging in the 21st century and highlights opportunities for future innovation.

Daubert Melissa A, Tailor Tina, James Olga, Shaw Leslee J, Douglas Pamela S, Koweek Lynne

2020-Nov-25

Radiology

Potential role of imaging for assessing acute pancreatitis-induced acute kidney injury.

In The British journal of radiology

Acute kidney injury (AKI) is a common complication of acute pancreatitis (AP) that is associated with increased mortality. Conventional assessment of AKI is based on changes in serum creatinine concentration and urinary output. However, these examinations have limited accuracy and sensitivity for the diagnosis of early-stage AKI. This review summarizes current evidence on the use of advanced imaging approaches and artificial intelligence (AI) for the early prediction and diagnosis of AKI in patients with AP. Computed tomography (CT) scores, CT post-processing technology, Doppler ultrasound, and AI technology provide increasingly valuable information for the diagnosis of AP-induced AKI. Magnetic resonance imaging (MRI) also has potential for the evaluation of AP-induced AKI. For the accurate diagnosis of early-stage AP-induced AKI, more studies are needed that use these new techniques and that use AI in combination with advanced imaging technologies.

Wang Yi, Liu Kaixiang, Xie Xisheng, Song Bin

2020-Nov-25

General

[Endoscopic diagnosis, treatment, and follow-up of polyps of the lower gastrointestinal tract].

In Der Internist

BACKGROUND : The endoscopic management of polyps of the lower gastrointestinal tract (l-GIT) has advanced in recent years as a result of numerous technological innovations. However, proven expertise and experience are essential.

OBJECTIVES : Presentation of novel and standard techniques and best-practice recommendations for the characterization and resection of l‑GIT polyps.

METHODS : Recent specialist literature and current guidelines.

RESULTS : High-definition endoscopy should be the standard when performing colonoscopy. (Virtual) chromoendoscopy can improve detection and characterization of polyps, but always requires special expertise and experience of the endoscopist in advanced endoscopic imaging. In this regard, computer-aided diagnosis (CAD) systems have the potential to support endoscopists in the future. Pedunculated polyps should be removed with a hot snare. Small flat polyps can be resected by cold snare or large forceps. Large, non-pedunculated polyps should be treated in an interdisciplinary approach at a referral center with long-standing experience, depending on their malignancy potential. After complete resection of small adenomas without high-grade dysplasia, surveillance endoscopy is recommended after 5-10 years. Patients with large adenomas or high-grade dysplasia should undergo endoscopy after 3 years, and patients with multiple adenomas earlier than 3 years. After incomplete or piecemeal resection or insufficient bowel preparation, near-term endoscopy is recommended.

CONCLUSIONS : Adequate characterization and treatment are essential for the appropriate management of l‑GIT polyps.

Hollenbach M, Feisthammel J, Hoffmeister A

2020-Nov-25

Adenoma, gastrointestinal, Artificial intelligence, Colonoscopy, Endoscopic mucosal resection, Endoscopic submucosal dissection

General

Association between Structural Connectivity and Generalized Cognitive Spectrum in Alzheimer's Disease.

In Brain sciences

Modeling disease progression through the cognitive scores has become an attractive challenge in the field of computational neuroscience due to its importance for early diagnosis of Alzheimer's disease (AD). Several scores such as Alzheimer's Disease Assessment Scale cognitive total score, Mini Mental State Exam score and Rey Auditory Verbal Learning Test provide a quantitative assessment of the cognitive conditions of the patients and are commonly used as objective criteria for clinical diagnosis of dementia and mild cognitive impairment (MCI). On the other hand, connectivity patterns extracted from diffusion tensor imaging (DTI) have been successfully used to classify AD and MCI subjects with machine learning algorithms proving their potential application in the clinical setting. In this work, we carried out a pilot study to investigate the strength of association between DTI structural connectivity of a mixed ADNI cohort and cognitive spectrum in AD. We developed a machine learning framework to find a generalized cognitive score that summarizes the different functional domains reflected by each cognitive clinical index and to identify the connectivity biomarkers more significantly associated with the score. The results indicate that the efficiency and the centrality of some regions can effectively track cognitive impairment in AD showing a significant correlation with the generalized cognitive score (R = 0.7).

Lombardi Angela, Amoroso Nicola, Diacono Domenico, Monaco Alfonso, Logroscino Giancarlo, De Blasi Roberto, Bellotti Roberto, Tangaro Sabina

2020-Nov-20

alzheimer’s disease, biomarker identification, brain connectivity, diffusion tensor imaging, machine learning

Surgery

Development of a machine learning algorithm to predict intubation among hospitalized patients with COVID-19.

In Journal of critical care ; h5-index 48.0

PURPOSE : The purpose of this study is to develop a machine learning algorithm to predict future intubation among patients diagnosed or suspected with COVID-19.

MATERIALS AND METHODS : This is a retrospective cohort study of patients diagnosed or under investigation for COVID-19. A machine learning algorithm was trained to predict future presence of intubation based on prior vitals, laboratory, and demographic data. Model performance was compared to ROX index, a validated prognostic tool for prediction of mechanical ventilation.

RESULTS : 4087 patients admitted to five hospitals between February 2020 and April 2020 were included. 11.03% of patients were intubated. The machine learning model outperformed the ROX index, demonstrating an area under the receiver operating characteristic curve (AUC) of 0.84 and 0.64, and area under the precision-recall curve (AUPRC) of 0.30 and 0.13, respectively. In the Kaplan-Meier analysis, patients alerted by the model were more likely to require intubation during their admission (p < 0.0001).

CONCLUSION : In patients diagnosed or under investigation for COVID-19, machine learning can be used to predict future risk of intubation based on clinical data that are routinely collected and available in the clinical setting. Such an approach may facilitate identification of high-risk patients to assist in clinical care.
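
For reference, the ROX index used as the study's baseline is a simple bedside ratio; a one-function sketch (the input values are invented, and the decision thresholds reported in the high-flow-oxygen literature are not from this paper):

```python
def rox_index(spo2_pct, fio2_frac, resp_rate):
    """ROX index: (SpO2 / FiO2) / respiratory rate.

    Lower values indicate higher risk of progressing to mechanical
    ventilation; thresholds in the range of roughly 3.85-4.88 have
    been reported in the high-flow nasal cannula literature.
    """
    return (spo2_pct / fio2_frac) / resp_rate

# Hypothetical patient: SpO2 92% on 50% oxygen, breathing 28/min
value = rox_index(92, 0.50, 28)   # -> about 6.57
```

The paper's ML model replaces this single ratio with a learned function of vitals, laboratory values, and demographics, which is where the AUC gain over 0.64 comes from.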

Arvind Varun, Kim Jun S, Cho Brian H, Geng Eric, Cho Samuel K

2020-Nov-16

COVID-19, Intubation, Machine learning, Prediction, Respiratory distress

General

Unsupervised learning for economic risk evaluation in the context of Covid-19 pandemic

ArXiv Preprint

Justifying draconian measures during the Covid-19 pandemic was difficult not only because of the restriction of individual rights, but also because of its economic impact. The objective of this work is to present a machine learning approach to identify regions that should implement similar health policies. To that end, we successfully developed a system that gives a notion of economic impact given the prediction of new incidental cases through unsupervised learning and time series forecasting. This system was built taking into account computational restrictions and low maintenance requirements in order to improve the system's resilience. Finally, this system was deployed as part of a web application for simulation and data analysis of COVID-19 in Colombia, available at https://covid19.dis.eafit.edu.co.
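
A minimal sketch of the unsupervised grouping idea (plain Lloyd's k-means on synthetic incidence curves; the paper's actual features and clustering algorithm may differ):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means over regional case-count time series.

    X: (n_regions, n_days). Regions sharing a final label are, in the
    paper's framing, candidates for similar health policies. First-k
    initialisation is used here purely to keep the sketch deterministic.
    """
    centers = X[:k].copy()
    for _ in range(iters):
        # Distance of every region's curve to every cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two synthetic regimes: steadily rising vs. flat daily-case curves
days = np.arange(30, dtype=float)
rising = np.stack([g * days for g in (1.0, 1.1, 0.9)])
flat = np.stack([np.full(30, c) for c in (5.0, 6.0, 4.0)])
labels = kmeans(np.vstack([rising, flat]), k=2)
```

In the deployed system the cluster assignment would be combined with a per-region forecast of new cases to give the economic-impact notion described above.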

Santiago Cortes, Yullys M. Quintero

2020-11-26

Surgery

[New techniques and training methods for robot-assisted surgery and cost-benefit analysis of Ivor Lewis esophagectomy].

In Der Chirurg; Zeitschrift fur alle Gebiete der operativen Medizen

INTRODUCTION : Robotic surgery was introduced into general surgery more than 20 years ago. Shortly afterwards, Horgan performed the first robotic-assisted esophagectomy in 2003 in Chicago. The aim of this manuscript is to elucidate new developments and training methods in robotic surgery with a cost-benefit analysis for robotic-assisted Ivor Lewis esophagectomy.

METHODS : Systematic literature search regarding new technology and training methods for robotic surgery and cost analysis of intraoperative materials for hybrid and robotic-assisted Ivor Lewis esophagectomy.

RESULTS : Robotic-assisted esophageal surgery is complex and involves an extensive learning curve, which can be shortened with modern teaching methods. New robotic systems aim at the use of image-guided surgery and artificial intelligence. Robotic-assisted surgery of esophageal cancer is significantly more expensive than surgery without this technology.

CONCLUSION : Oncological short-term and long-term benefits need to be further evaluated to support the higher cost of robotic esophageal cancer surgery.

Urbanski Alexander, Babic Benjamin, Schröder Wolfgang, Schiffmann Lars, Müller Dolores T, Bruns Christiane J, Fuchs Hans F

2020-Nov-25

Esophageal cancer, Hybrid esophagectomy, Medical techniques, Minimally invasive surgery, Robotics

General

A neural network for prediction of risk of nosocomial infection at intensive care units: a didactic preliminary model.

In Einstein (Sao Paulo, Brazil)

OBJECTIVE : To propose a preliminary artificial intelligence model, based on artificial neural networks, for predicting the risk of nosocomial infection at intensive care units.

METHODS : An artificial neural network employing supervised learning was designed. The datasets were generated from data derived from the Japanese Nosocomial Infection Surveillance system. We studied how the Java Neural Network Simulator learns to categorize these patients to predict their risk of nosocomial infection. Simulations were performed with several backpropagation learning algorithms and several groups of parameters, comparing their results through the sum of squared errors and mean errors per pattern.

RESULTS : The backpropagation with momentum algorithm showed better performance than the backpropagation algorithm. The performance improved with the xor.README file parameter values compared to the default parameters. There were no failures in the categorization of the patients into their risk of nosocomial infection.

CONCLUSION : While this model is still based on a synthetic dataset, the excellent performance observed with a small number of patterns suggests that using higher numbers of variables and network layers to analyze larger volumes of data can create powerful artificial neural networks, potentially capable of precisely anticipating nosocomial infection at intensive care units. Using a real database during the simulations has the potential to realize the predictive ability of this model.

Nistal-Nuño Beatriz

2020

General

An artificial intelligence process of immunoassay for multiple biomarkers based on logic gates.

In The Analyst

We present a universal platform to synchronously analyze the possible existing state of two protein biomarkers. This platform is based on the integration of three logic gates: NAND, OR and NOT. These logic gates were constructed by the principle of immune recognition and fluorescence quenching between fluorescein-labelled antibodies/antigens and antibody-conjugated graphene oxide (GO). An artificial intelligence (AI) protein analysis process was designed, and accordingly a small program was written in Java. This protein analysis process with its Java code may be applied to give logic judgments on the possible existing state of two protein components. We expect that our fundamental research on multiple biomarker analysis can provide potential application in AI-assisted medical diagnosis with the interface for remote medical treatment.

Liu Wenjie, Liu Jihong, Huang Ao, Shi Shuo, Yao Tianming

2020-Nov-25

General

Deep learning model predicts water interaction sites on the surface of proteins using limited-resolution data.

In Chemical communications (Cambridge, England) ; h5-index 131.0

We develop a residual deep learning model, hotWater (https://pypi.org/project/hotWater/), to identify key water interaction sites on proteins for binding models and drug discovery. This is tested on new crystal structures, as well as cryo-EM and NMR structures from the PDB and in crystallographic refinement with promising results.

Zaucha Jan, Softley Charlotte A, Sattler Michael, Frishman Dmitrij, Popowicz Grzegorz M

2020-Nov-25

General

Machine Learning-Based Upscaling of Finite-Size Molecular Dynamics Diffusion Simulations for Binary Fluids.

In The journal of physical chemistry letters ; h5-index 129.0

Molecular diffusion coefficients calculated using molecular dynamics (MD) simulations suffer from finite-size (i.e., finite box size and finite particle number) effects. Results from finite-sized MD simulations can be upscaled to infinite simulation size by applying a correction factor. For self-diffusion of single-component fluids, this correction has been well studied by many researchers including Yeh and Hummer (YH); for binary fluid mixtures, a modified YH correction was recently proposed for correcting MD-predicted Maxwell-Stefan (MS) diffusion rates. Here we use both empirical and machine learning methods to identify improvements to the finite-size correction factors for both self-diffusion and MS diffusion of binary Lennard-Jones (LJ) fluid mixtures. Using artificial neural networks (ANNs), the error in the corrected LJ fluid diffusion is reduced by an order of magnitude versus existing YH corrections, and the ANN models perform well for mixtures with large dissimilarities in size and interaction energies where the YH correction proves insufficient.
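
For single-component self-diffusion, the YH correction mentioned above has a closed form; a sketch with illustrative numbers of our choosing:

```python
import math

KB = 1.380649e-23          # Boltzmann constant, J/K
XI = 2.837297              # Yeh-Hummer lattice constant for a cubic box

def yeh_hummer_corrected(d_pbc, temperature, viscosity, box_length):
    """Infinite-size self-diffusion estimate from a finite cubic box.

    D_inf = D_PBC + xi * kB * T / (6 * pi * eta * L), the Yeh-Hummer
    hydrodynamic correction. The abstract's ANN models target the
    regimes (binary mixtures, dissimilar components) where this
    analytic form falls short.
    """
    return d_pbc + XI * KB * temperature / (6.0 * math.pi * viscosity * box_length)

# Illustrative numbers (ours): water-like viscosity, 4 nm box
d_inf = yeh_hummer_corrected(
    d_pbc=2.0e-9,           # m^2/s measured in the finite MD box
    temperature=300.0,      # K
    viscosity=8.9e-4,       # Pa*s
    box_length=4.0e-9,      # m
)                           # -> ~2.18e-9 m^2/s
```

Note the correction scales as 1/L, which is why small boxes systematically underestimate diffusion and why the box length enters the ANN feature set naturally.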

Leverant Calen J, Harvey Jacob A, Alam Todd M

2020-Nov-25

Radiology

Cardiovascular disease and stroke risk assessment in patients with chronic kidney disease using integration of estimated glomerular filtration rate, ultrasonic image phenotypes, and artificial intelligence: a narrative review.

In International angiology : a journal of the International Union of Angiology

Chronic kidney disease (CKD) and cardiovascular disease (CVD) together result in an enormous burden on global healthcare. The estimated glomerular filtration rate (eGFR) is a well-established biomarker of CKD and is associated with adverse cardiac events. This review highlights the link between eGFR reduction and atherosclerosis progression, which increases the risk of adverse cardiovascular events. In general, CVD risk assessments are performed using conventional risk prediction models. However, because these conventional models were developed for specific cohorts with unique risk profiles and do not consider atherosclerotic plaque-based phenotypes, they can either underestimate or overestimate the risk of CVD events. This review examines the approaches used for CVD risk assessments in CKD patients using the concept of integrated risk factors. An integrated risk factor approach is one that combines the effect of conventional risk predictors and noninvasive carotid ultrasound image-based phenotypes. Furthermore, this review provides insights into novel artificial intelligence methods, such as machine learning and deep learning algorithms, to carry out accurate and automated CVD risk assessments and survival analyses in patients with CKD.

Jamthikar Ankush D, Puvvula Anudeep, Gupta Deep, Johri Amer M, Nambi Vijay, Khanna Narendra N, Saba Luca, Mavrogeni Sophie, Laird John R, Pareek Gyan, Miner Martin, Sfikakis Petros, Protogerou Athanasios, Kitas George D, Nicolaides Andrew, Sharma Aditya M, Viswanathan Vijay, Rathore Vijay S, Kolluri Raghu, Bhatt Deepak L, Suri Jasjit S

2020-Nov-25

General

Electroencephalography-Derived Prognosis of Functional Recovery in Acute Stroke Through Machine Learning Approaches.

In International journal of neural systems

Stroke, if not lethal, is a primary cause of disability. Early assessment of markers of recovery can allow personalized interventions; however, it is difficult to deliver indexes in the acute phase able to predict recovery. In this perspective, evaluation of electrical brain activity may provide useful information. A machine learning approach was explored here to predict post-stroke recovery relying on multi-channel electroencephalographic (EEG) recordings of a few minutes performed at rest. A data-driven model, based on partial least squares (PLS) regression, was trained on 19-channel EEG recordings performed within 10 days after mono-hemispheric stroke in 101 patients. The band-wise (delta: 1-4 Hz, theta: 4-7 Hz, alpha: 8-14 Hz and beta: 15-30 Hz) EEG effective powers were used as features to predict the recovery at 6 months (based on clinical status evaluated through the NIH Stroke Scale, NIHSS) in an optimized and cross-validated framework. In order to exploit the multimodal contribution to prognosis, the EEG-based prediction of recovery was combined with NIHSS scores in the acute phase and both were fed to a nonlinear support vector regressor (SVR). The prediction performance of EEG was at least as good as that of the acute clinical status scores. A posteriori evaluation of the features exploited by the analysis highlighted a lower delta and higher alpha activity in patients showing a positive outcome, independently of the affected hemisphere. The multimodal approach showed better prediction capabilities compared to the acute NIHSS scores alone ([Formula: see text] versus [Formula: see text], AUC = 0.80 versus AUC = 0.70, [Formula: see text]). The multimodal and multivariate model can be used in the acute phase to infer recovery relying on standard EEG recordings of a few minutes performed at rest together with clinical assessment, to be exploited for early and personalized therapies.
The ease of performing EEG may allow such an approach to become a standard of care and, thanks to the increasing number of labeled samples, to further improve the model's predictive power.
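
A minimal sketch of the band-power feature extraction implied above (a plain periodogram on one synthetic channel; the paper's "effective power" computation may differ in detail):

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 14), "beta": (15, 30)}

def band_powers(eeg, fs):
    """Per-band spectral power of one EEG channel from a periodogram.

    Returns a dict of summed power in each canonical band; per-channel
    values like these, computed band-wise, are the kind of features
    fed to the PLS regression in the study.
    """
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic channel: a 10 Hz (alpha) oscillation plus weak noise
fs, dur = 250, 4
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)
chan = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
powers = band_powers(chan, fs)   # alpha dominates the other bands
```

With 19 channels and 4 bands this yields a compact 76-dimensional feature vector per patient, small enough for the cross-validated PLS framework the authors describe.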

Chiarelli Antonio Maria, Croce Pierpaolo, Assenza Giovanni, Merla Arcangelo, Granata Giuseppe, Giannantoni Nadia Mariagrazia, Pizzella Vittorio, Tecchio Franca, Zappasodi Filippo

2020-Nov-25

Stroke, data-driven models, electroencephalography (EEG), machine learning, recovery prognosis

Radiology

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection.

In Korean journal of radiology

OBJECTIVE : To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD).

MATERIALS AND METHODS : Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated.

RESULTS : The mean dice coefficient scores were 0.958, 0.961, and 0.932 for EA, TL, and FL, respectively. There was a linear relationship between the reference standard and measurement by the manual and deep learning method (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001).
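
The Dice coefficient reported above is straightforward to compute; a toy sketch on binary masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks.

    2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap. Scores such
    as 0.958/0.961/0.932 for EA/TL/FL above are means of this value
    computed per segmented structure.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Toy 2D masks: the prediction misses one of five true pixels
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True
truth[0, 0] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True
score = dice(pred, truth)   # 2*4 / (4 + 5) ≈ 0.889
```

In the study the masks are 3D voxel volumes rather than 2D images, but the formula is identical.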

CONCLUSION : Segmentation and diameter measurement of TBAD based on the 3D deep CNN were accurate, stable, and efficient. This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.

Yu Yitong, Gao Yang, Wei Jianyong, Liao Fangzhou, Xiao Qianjiang, Zhang Jie, Yin Weihua, Lu Bin

2020-Nov-03

Aortic dissection, Deep learning, Tomography, X-ray computed

Radiology

Performance of Prediction Models for Diagnosing Severe Aortic Stenosis Based on Aortic Valve Calcium on Cardiac Computed Tomography: Incorporation of Radiomics and Machine Learning.

In Korean journal of radiology

OBJECTIVE : We aimed to develop a prediction model for diagnosing severe aortic stenosis (AS) using computed tomography (CT) radiomics features of aortic valve calcium (AVC) and machine learning (ML) algorithms.

MATERIALS AND METHODS : We retrospectively enrolled 408 patients who underwent cardiac CT between March 2010 and August 2017 and had echocardiographic examinations (240 patients with severe AS on echocardiography [the severe AS group] and 168 patients without severe AS [the non-severe AS group]). Data were divided into a training set (312 patients) and a validation set (96 patients). Using non-contrast-enhanced cardiac CT scans, AVC was segmented, and 128 radiomics features for AVC were extracted. After feature selection was performed with three ML algorithms (least absolute shrinkage and selection operator [LASSO], random forests [RFs], and eXtreme Gradient Boosting [XGBoost]), model classifiers for diagnosing severe AS on echocardiography were developed in combination with three different model classifier methods (logistic regression, RF, and XGBoost). The performance (c-index) of each radiomics prediction model was compared with predictions based on AVC volume and score.
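
A compact sketch of the LASSO selection step named above (coordinate descent with soft-thresholding on synthetic data of our making; the study applies this to 128 real radiomics features before fitting the classifier):

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """LASSO by cyclic coordinate descent with soft-thresholding.

    Minimises 0.5/n * ||y - Xw||^2 + lam * ||w||_1; features whose
    weight is driven exactly to zero are dropped, which is how a
    LASSO step can shrink a large radiomics feature matrix down to
    the handful of inputs given to the downstream model classifier.
    """
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]        # partial residual
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# 60 samples, 6 features; only features 0 and 1 carry signal
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=60)
w = lasso_cd(X, y, lam=0.2)   # noise features shrink to exactly zero
```

The exact-zero property is what distinguishes LASSO from the RF- and XGBoost-based importance rankings the study compares it against, which rank rather than eliminate features.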

RESULTS : The radiomics scores derived from LASSO were significantly different between the severe AS and non-severe AS groups in the validation set (median, 1.563 vs. 0.197, respectively, p < 0.001). A radiomics prediction model based on feature selection by LASSO + model classifier by XGBoost showed the highest c-index of 0.921 (95% confidence interval [CI], 0.869-0.973) in the validation set. Compared to prediction models based on AVC volume and score (c-indexes of 0.894 [95% CI, 0.815-0.948] and 0.899 [95% CI, 0.820-0.951], respectively), eight and three of the nine radiomics prediction models showed higher discrimination abilities for severe AS. However, the differences were not statistically significant (p > 0.05 for all).

CONCLUSION : Models based on the radiomics features of AVC and ML algorithms may perform well for diagnosing severe AS, but the added value compared to AVC volume and score should be investigated further.

Kang Nam Gyu, Suh Young Joo, Han Kyunghwa, Kim Young Jin, Choi Byoung Wook

2020-Nov-03

Aortic stenosis, Aortic valve calcium, Computed tomography, Machine learning, Radiomics

Radiology

Silver-Coated Disordered Silicon Nanowires Provide Highly Sensitive Label-Free Glycated Albumin Detection through Molecular Trapping and Plasmonic Hotspot Formation.

In Advanced healthcare materials

Glycated albumin (GA) is rapidly emerging as a robust biomarker for screening and monitoring of diabetes. To facilitate its rapid, point-of-care measurement, a label-free surface-enhanced Raman spectroscopy (SERS) sensing platform is reported that leverages the specificity of molecular vibrations and signal amplification on silver-coated silicon nanowires (Ag/SiNWs) for highly sensitive and reproducible quantification of GA. Simulations and experimental measurements demonstrate that the disordered orientation of the nanowires, coupled with the wicking of the analyte molecules during solvent evaporation, facilitates molecular trapping at the generated plasmonic hotspots. Highly sensitive detection of glycated albumin is shown, with the ability to visually detect spectral features at concentrations as low as 500 × 10⁻⁹ M, significantly below the physiological range of GA in body fluids. Combined with chemometric regression models, the spectral data recorded on the Ag/SiNWs also allow accurate prediction of glycated concentration in mixtures of glycated and non-glycated albumin in proportions that reflect those in the bloodstream.

Paria Debadrita, Convertino Annalisa, Mussi Valentina, Maiolo Luca, Barman Ishan

2020-Nov-25

biosensing, diabetes screening, glycated albumin, machine learning, nanowires, plasmonics, surface enhanced Raman spectroscopy (SERS)

Cardiology

A Machine Learning Model Accurately Predicts Ulcerative Colitis Activity at One Year in Patients Treated with Anti-Tumour Necrosis Factor α Agents.

In Medicina (Kaunas, Lithuania)

Background and objectives: Biological treatment is a promising therapeutic option for ulcerative colitis (UC) patients, being able to induce subclinical and long-term remission. However, the relatively high costs and the potential toxicity have led to intense debates over the most appropriate criteria for starting, stopping, and managing biologics in UC. Our aim was to build a machine learning (ML) model for predicting disease activity at one year in UC patients treated with anti-tumour necrosis factor α agents, as a useful tool to assist the clinician in therapeutic decisions. Materials and Methods: Clinical and biological parameters and the endoscopic Mayo score were collected from 55 UC patients at baseline and at one-year follow-up. A neural network model was built using the baseline endoscopic activity and four selected variables as inputs to predict whether a UC patient will have active or inactive endoscopic disease at one year, under the same therapeutic regimen. Results: The classifier achieved excellent performance, predicting disease activity at one year with an accuracy of 90% and an area under the curve (AUC) of 0.92 on the test set, and an accuracy of 100% and an AUC of 1 on the validation set. Conclusions: Our proposed ML solution may prove to be a useful tool in assisting clinicians' decisions to increase the dose or switch to other biologic agents, after the model's validation on independent, external cohorts of patients.

Popa Iolanda Valentina, Burlacu Alexandru, Mihai Catalina, Prelipcean Cristina Cijevschi

2020-Nov-20

artificial intelligence, biological therapy, disease activity, inflammatory bowel diseases, predictive model

General General

Classification of COVID-19 chest X-rays with deep learning: new models or fine tuning?

In Health information science and systems

Background and objectives : Chest X-ray data have been found to be very promising for assessing COVID-19 patients, especially for resolving emergency-department and urgent-care-center overcapacity. Deep-learning (DL) methods in artificial intelligence (AI) play a dominant role as high-performance classifiers in the detection of the disease using chest X-rays. Given that many new DL models have been developed for this purpose, the objective of this study is to investigate the fine tuning of pretrained convolutional neural networks (CNNs) for the classification of COVID-19 using chest X-rays. If fine-tuned pre-trained CNNs can provide equivalent or better classification results than other more sophisticated CNNs, then the deployment of AI-based tools for detecting COVID-19 using chest X-ray data can be more rapid and cost-effective.

Methods : Three pretrained CNNs, which are AlexNet, GoogleNet, and SqueezeNet, were selected and fine-tuned without data augmentation to carry out 2-class and 3-class classification tasks using 3 public chest X-ray databases.

Results : In comparison with other recently developed DL models, the 3 pretrained CNNs achieved very high classification results in terms of accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver-operating-characteristic curve.

Conclusion : AlexNet, GoogleNet, and SqueezeNet require the least training time among pretrained DL models, but with suitable selection of training parameters, excellent classification results can be achieved without data augmentation by these networks. The findings address the urgent need to contain the pandemic by facilitating the deployment of AI tools that are fully automated and readily available in the public domain for rapid implementation.
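Fine-tuning a pretrained CNN reduces, in its simplest form, to keeping the feature extractor frozen and retraining only the classification head. A minimal stand-in sketch, with a fixed random projection playing the role of the frozen backbone (the study itself fine-tunes AlexNet, GoogleNet, and SqueezeNet on real X-rays; the data here is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random projection
# that maps "images" to feature vectors and is never updated.
W_frozen = rng.normal(size=(64 * 64, 128))

def backbone(images):
    # ReLU over a fixed projection plays the role of frozen conv features
    return np.maximum(images.reshape(len(images), -1) @ W_frozen, 0)

# Synthetic 3-class "chest X-ray" data: normal / pneumonia / COVID-19.
X = rng.normal(size=(300, 64, 64))
y = rng.integers(0, 3, 300)
X[y == 1] += 0.5          # give each class a weak, learnable signal
X[y == 2] -= 0.5

# Fine-tuning, reduced to its simplest form: retrain only the
# classification head on features from the frozen backbone.
head = LogisticRegression(max_iter=2000)
head.fit(backbone(X[:200]), y[:200])
acc = head.score(backbone(X[200:]), y[200:])
print(acc)
```

Because only the head is trained, this is far cheaper than training a new architecture from scratch, which is exactly the cost argument the abstract makes.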

Pham Tuan D

2021-Dec

Artificial intelligence, COVID-19, Chest X-rays, Classification, Deep learning

General General

Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking

ArXiv Preprint

The rapid advancement of technology in online communication via social media platforms has led to a prolific rise in the spread of misinformation and fake news. Fake news is especially rampant in the current COVID-19 pandemic, leading to people believing in false and potentially harmful claims and stories. Detecting fake news quickly can alleviate the spread of panic, chaos and potential health hazards. We developed a two stage automated pipeline for COVID-19 fake news detection using state of the art machine learning models for natural language processing. The first model leverages a novel fact checking algorithm that retrieves the most relevant facts concerning user claims about particular COVID-19 claims. The second model verifies the level of truth in the claim by computing the textual entailment between the claim and the true facts retrieved from a manually curated COVID-19 dataset. The dataset is based on a publicly available knowledge source consisting of more than 5000 COVID-19 false claims and verified explanations, a subset of which was internally annotated and cross-validated to train and evaluate our models. We evaluate a series of models based on classical text-based features to more contextual Transformer based models and observe that a model pipeline based on BERT and ALBERT for the two stages respectively yields the best results.
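The two-stage design above can be sketched with lightweight stand-ins: TF-IDF retrieval in place of the paper's BERT-based fact retrieval, and a token-overlap score in place of the fine-tuned ALBERT entailment model. The fact base and claim are invented examples:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy curated fact base (the paper uses >5000 verified COVID-19 explanations).
facts = [
    "masks reduce the transmission of respiratory viruses",
    "5g networks do not spread viral infections",
    "vaccines undergo clinical trials before approval",
]

# Stage 1 (retrieval): find the fact most relevant to the claim.
# The paper uses BERT; TF-IDF cosine similarity is a lightweight stand-in.
vec = TfidfVectorizer().fit(facts)
fact_vecs = vec.transform(facts)

def retrieve(claim):
    sims = cosine_similarity(vec.transform([claim]), fact_vecs).ravel()
    return facts[int(np.argmax(sims))], float(sims.max())

# Stage 2 (verification): score agreement between claim and fact.
# The paper fine-tunes ALBERT for entailment; Jaccard token overlap
# is a crude placeholder for that step.
def entailment_score(claim, fact):
    c, f = set(claim.lower().split()), set(fact.lower().split())
    return len(c & f) / len(c | f)

claim = "5g towers spread viral infections"
fact, sim = retrieve(claim)
print(fact, round(entailment_score(claim, fact), 2))
```

The separation of retrieval from verification is the key design point: stage 1 narrows thousands of facts down to one candidate, so the expensive entailment model only runs once per claim.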

Rutvik Vijjali, Prathyush Potluri, Siddharth Kumar, Sundeep Teki

2020-11-26

General General

Risk-stratified and stepped models of care for back pain and osteoarthritis: are we heading towards a common model?

In Pain reports

The overall quality of care for musculoskeletal pain conditions is suboptimal, partly due to a considerable evidence-practice gap. In osteoarthritis and low back pain, structured models of care exist to help overcome that challenge. In osteoarthritis, focus is on stepped care models, where treatment decisions are guided by response to treatment, and increasingly comprehensive interventions are only offered to people with inadequate response to more simple care. In low back pain, the most widely known approach is based on risk stratification, where patients with higher predicted risk of poor outcome are offered more comprehensive care. For both conditions, the recommended interventions and models of care share many commonalities and there is no evidence that one model of care is more effective than the other. Limitations of existing models of care include a lack of integrated information on social factors, comorbid conditions, and previous treatment experience, and they do not support an interplay between health care, self-management, and community-based activities. Moving forwards, a common model across musculoskeletal conditions seems realistic, which points to an opportunity for reducing the complexity of implementation. We foresee this development will use big data sources and machine-learning methods to combine stepped and risk-stratified care and to integrate self-management support and patient-centred care to a greater extent in future models of care.

Kongsted Alice, Kent Peter, Quicke Jonathan G, Skou Søren T, Hill Jonathan C

Decision making, Low back pain, Models of care, Osteoarthritis, Risk-stratified care, Stepped care

Public Health Public Health

A hybrid AI approach for supporting clinical diagnosis of attention deficit hyperactivity disorder (ADHD) in adults.

In Health information science and systems

Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that includes symptoms such as inattentiveness, hyperactivity and impulsiveness. It is considered an important public health issue, and prevalence of the disorder, as well as demand for diagnosis, has increased as awareness of the condition has grown over the past years. The supply of specialist medical experts has not kept pace with the increasing demand for assessment, due both to financial pressures on health systems and to the difficulty of training new experts, resulting in growing waiting lists. Patients are not being treated quickly enough, causing problems in other areas of health systems (e.g. increased GP visits, increased risk of self-harm and accidents) and more broadly (e.g. time off work, relationship problems). Advances in AI make it possible to support the clinical diagnosis of ADHD based on the analysis of relevant data. This paper reports on findings related to the mental health services of a specialist Trust within the UK's National Health Service (NHS). The analysis studied data of adult patients who underwent diagnosis over the past few years, and developed a hybrid approach consisting of two different models: a machine learning model obtained by training on data of past cases; and a knowledge model capturing the expertise of medical experts through knowledge engineering. The resulting algorithm has an accuracy of 95% on the data currently available, and is being tested in a clinical environment.
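The hybrid architecture (a knowledge model that can abstain, backed by an ML model) can be sketched as follows; the features, rules, and thresholds are invented for illustration and are not the Trust's actual criteria:

```python
from dataclasses import dataclass

# Hypothetical patient features; the real system uses clinical assessment data.
@dataclass
class Patient:
    inattention_score: float    # 0-1, illustrative
    hyperactivity_score: float  # 0-1, illustrative
    childhood_onset: bool

def knowledge_model(p):
    """Rules elicited from clinicians (illustrative, not the actual rules)."""
    if not p.childhood_onset:
        return "unlikely"                      # diagnostic criteria require childhood onset
    if p.inattention_score > 0.8 and p.hyperactivity_score > 0.8:
        return "likely"
    return None                                # rules abstain -> defer to ML

def ml_model(p):
    """Stand-in for a classifier trained on past cases."""
    score = 0.6 * p.inattention_score + 0.4 * p.hyperactivity_score
    return "likely" if score > 0.5 else "unlikely"

def hybrid_diagnosis(p):
    # Knowledge model takes precedence; ML covers the cases it abstains on.
    verdict = knowledge_model(p)
    return verdict if verdict is not None else ml_model(p)

print(hybrid_diagnosis(Patient(0.9, 0.9, True)))   # rules fire
print(hybrid_diagnosis(Patient(0.7, 0.3, True)))   # rules abstain, ML decides
```

Letting the rule base abstain, rather than forcing a verdict, is what makes the combination useful: clear-cut cases get an explainable rule-based answer, while borderline cases fall through to the data-driven model.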

Tachmazidis Ilias, Chen Tianhua, Adamou Marios, Antoniou Grigoris

2021-Dec

ADHD diagnosis, AI in medicine, Decision making support, Knowledge model, Machine learning model

Cardiology Cardiology

Optical coherence tomography-based machine learning for predicting fractional flow reserve in intermediate coronary stenosis: a feasibility study.

In Scientific reports ; h5-index 158.0

Machine learning approaches using intravascular optical coherence tomography (OCT) to predict fractional flow reserve (FFR) have not been investigated. Both OCT and FFR data were obtained for left anterior descending artery lesions in 125 patients. Training and testing groups were partitioned in the ratio of 5:1. The OCT-based machine learning-FFR was derived for the testing group and compared with wire-based FFR in terms of ischemia diagnosis (FFR ≤ 0.8). The OCT-based machine learning-FFR showed good correlation (r = 0.853, P < 0.001) with the wire-based FFR. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the OCT-based machine learning-FFR for the testing group were 100%, 92.9%, 87.5%, 100%, and 95.2%, respectively. The OCT-based machine learning-FFR can be used to simultaneously acquire information on both image and functional modalities using one procedure, suggesting that it may provide optimized treatments for intermediate coronary artery stenosis.
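The reported test-set metrics all derive from a single 2×2 confusion matrix; a minimal sketch, using counts inferred here to be consistent with the reported percentages (they are not taken from the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test summary from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 21-lesion test group (ischaemia = FFR <= 0.8),
# chosen to reproduce the reported 100/92.9/87.5/100/95.2 percentages.
m = diagnostic_metrics(tp=7, fp=1, tn=13, fn=0)
print({k: round(v, 3) for k, v in m.items()})
```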

Cha Jung-Joon, Son Tran Dinh, Ha Jinyong, Kim Jung-Sun, Hong Sung-Jin, Ahn Chul-Min, Kim Byeong-Keuk, Ko Young-Guk, Choi Donghoon, Hong Myeong-Ki, Jang Yangsoo

2020-Nov-24

General General

Dirty engineering data-driven inverse prediction machine learning model.

In Scientific reports ; h5-index 158.0

Most data-driven machine learning (ML) approaches established in metallurgy research fields are focused on a build-up of reliable quantitative models that predict a material property from a given set of material conditions. In general, the input feature dimension (the number of material condition variables) is much higher than the output feature dimension (the number of material properties of concern). Rather than such a forward-prediction ML model, it is necessary to develop so-called inverse-design modeling, wherein required material conditions could be deduced from a set of desired material properties. Here we report a novel inverse design strategy that employs two independent approaches: a metaheuristics-assisted inverse reading of conventional forward ML models and an atypical inverse ML model based on a modified variational autoencoder. These two unprecedented approaches were successful and led to overlapped results, from which we pinpointed several novel thermo-mechanically controlled processed (TMCP) steel alloy candidates that were validated by a rule-based thermodynamic calculation tool (Thermo-Calc.). We also suggested a practical protocol to elucidate how to treat engineering data collected from industry, which is not prepared as independent and identically distributed (IID) random data.
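The "metaheuristics-assisted inverse reading" of a forward model can be sketched as follows, with an analytic toy standing in for a trained forward ML model and SciPy's differential evolution as the metaheuristic (the paper's actual models and search method differ):

```python
import numpy as np
from scipy.optimize import differential_evolution

# Stand-in forward model: predicts a "property" from two processing
# conditions. In the paper this would be a trained ML model.
def forward_model(x):
    temp, time = x
    return np.sin(temp) * np.exp(-0.1 * time) + 0.05 * time

target_property = 0.8

# Inverse reading: search the condition space for inputs whose predicted
# property matches the target, using a metaheuristic optimizer.
result = differential_evolution(
    lambda x: (forward_model(x) - target_property) ** 2,
    bounds=[(0, np.pi), (0, 10)],   # hypothetical processing-condition ranges
    seed=0,
)
print(result.x, forward_model(result.x))
```

This is the appeal of the approach described above: no inverse model is ever trained; the existing forward predictor is simply queried repeatedly inside a global optimizer until conditions matching the desired property are found.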

Lee Jin-Woong, Park Woon Bae, Do Lee Byung, Kim Seonghwan, Goo Nam Hoon, Sohn Kee-Sun

2020-Nov-24

Cardiology Cardiology

Artificial intelligence algorithm for detecting myocardial infarction using six-lead electrocardiography.

In Scientific reports ; h5-index 158.0

Rapid diagnosis of myocardial infarction (MI) using electrocardiography (ECG) is the cornerstone of effective treatment and prevention of mortality; however, conventional interpretation methods have low reliability for detecting MI and are difficult to apply to limb 6-lead ECG-based life-type or wearable devices. We developed and validated a deep learning-based artificial intelligence algorithm (DLA) for detecting MI using 6-lead ECG. A total of 412,461 ECGs were used to develop a variational autoencoder (VAE) that reconstructed precordial 6-lead ECG using limb 6-lead ECG. Data from 9536, 1301, and 1768 ECGs of adult patients who underwent coronary angiography within 24 h of each ECG were used for development, internal validation, and external validation, respectively. During internal and external validation, the areas under the receiver operating characteristic curves of the DLA with VAE using a 6-lead ECG were 0.880 and 0.854, respectively, and the performances were preserved across the territory of the coronary lesion. Our DLA successfully detected MI using a 12-lead ECG or a 6-lead ECG. The results indicate that MI could be detected not only with a conventional 12-lead ECG but also with a life-type 6-lead ECG device that employs our DLA.
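The core reconstruction task (estimating precordial leads from limb leads) can be illustrated with a linear-regression stand-in on synthetic voltages; the paper instead trains a variational autoencoder on real ECGs:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic per-sample voltages: precordial leads modeled as an unknown
# linear mixture of the 6 limb leads plus noise. This linearity is an
# assumption of the sketch, not a claim about real ECG physiology.
n = 2000
limb = rng.normal(size=(n, 6))               # 6 limb-lead voltages
A_true = rng.normal(size=(6, 6))             # hidden body-surface transfer
precordial = limb @ A_true + rng.normal(0, 0.05, (n, 6))

# Learn the limb -> precordial mapping, then reconstruct held-out leads.
model = Ridge(alpha=1e-3).fit(limb[:1500], precordial[:1500])
recon = model.predict(limb[1500:])
rmse = float(np.sqrt(((recon - precordial[1500:]) ** 2).mean()))
print(rmse)   # close to the 0.05 noise floor
```

A VAE replaces the linear map with a learned nonlinear latent representation, but the shape of the problem (6 observed channels in, 6 reconstructed channels out) is the same.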

Cho Younghoon, Kwon Joon-Myoung, Kim Kyung-Hee, Medina-Inojosa Jose R, Jeon Ki-Hyun, Cho Soohyun, Lee Soo Youn, Park Jinsik, Oh Byung-Hee

2020-Nov-24

General General

Mapping wind erosion hazard with regression-based machine learning algorithms.

In Scientific reports ; h5-index 158.0

Land susceptibility to wind erosion hazard in Isfahan province, Iran, was mapped by testing 16 advanced regression-based machine learning methods: Robust linear regression (RLR), Cforest, Non-convex penalized quantile regression (NCPQR), Neural network with feature extraction (NNFE), Monotone multi-layer perceptron neural network (MMLPNN), Ridge regression (RR), Boosting generalized linear model (BGLM), Negative binomial generalized linear model (NBGLM), Boosting generalized additive model (BGAM), Spline generalized additive model (SGAM), Spike and slab regression (SSR), Stochastic gradient boosting (SGB), Support vector machine (SVM), Relevance vector machine (RVM), Cubist, and Adaptive network-based fuzzy inference system (ANFIS). Thirteen factors controlling wind erosion were mapped, and multicollinearity among these factors was quantified using the tolerance coefficient (TC) and variance inflation factor (VIF). Model performance was assessed by RMSE, MAE, MBE, and a Taylor diagram using both training and validation datasets. The results showed that five models (MMLPNN, SGAM, Cforest, BGAM and SGB) are capable of delivering high prediction accuracy for land susceptibility to wind erosion hazard. DEM, precipitation, and vegetation (NDVI) are the most critical factors controlling wind erosion in the study area. Overall, regression-based machine learning models are efficient techniques for mapping land susceptibility to wind erosion hazards.
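The multicollinearity screening step can be sketched directly: the variance inflation factor of a predictor is 1/(1 − R²) from regressing it on the other predictors, and the tolerance coefficient is its reciprocal. A minimal example with a deliberately collinear column:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic predictor matrix: column 2 is nearly collinear with column 0.
X = rng.normal(size=(500, 3))
X[:, 2] = X[:, 0] + rng.normal(0, 0.1, 500)

def vif(X, j):
    """Variance inflation factor: regress column j on the other columns.
    The tolerance coefficient (TC) is simply 1 / VIF."""
    others = np.delete(X, j, axis=1)
    r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
print([round(v, 1) for v in vifs])  # columns 0 and 2 show inflated VIFs
```

A common rule of thumb flags predictors with VIF above 5-10 (TC below 0.1-0.2) for removal before fitting the regression models.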

Gholami Hamid, Mohammadifar Aliakbar, Bui Dieu Tien, Collins Adrian L

2020-Nov-24

General General

Characterization and identification of lysine crotonylation sites based on machine learning method on both plant and mammalian.

In Scientific reports ; h5-index 158.0

Lysine crotonylation (Kcr) is a type of protein post-translational modification (PTM), which plays important roles in a variety of cellular regulation and processes. Several methods have been proposed for the identification of crotonylation. However, most of these methods can predict efficiently only on histone or only on non-histone proteins. Therefore, this work aims to give a more balanced performance across different species; here, plant (non-histone) and mammalian (histone) datasets are involved. SVM (support vector machine) and RF (random forest) were employed in this study. According to the results of cross-validations, the RF classifier based on the EGAAC attribute achieved the best predictive performance, which is competitive with existing methods while being more robust when dealing with imbalanced datasets. Moreover, an independent test was carried out, which compared the performance of this study and existing methods based on the same features or the same classifier. The SVM and RF classifiers achieved their best performances with 92% sensitivity, 88% specificity, 90% accuracy, and an MCC of 0.80 in the mammalian dataset, and 77% sensitivity, 83% specificity, 70% accuracy and 0.54 MCC in a relatively small mammalian dataset and a large-scale plant dataset, respectively. Moreover, cross-species independent testing was also carried out in this study, which demonstrated the species diversity between the plant and mammalian datasets.
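The MCC values quoted above summarize a confusion matrix in a single number that stays meaningful on imbalanced data. A minimal sketch of the computation, with illustrative counts (not from the paper) chosen to match 92% sensitivity and 88% specificity on a balanced set:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: stays informative even when
    the positive and negative classes are heavily imbalanced."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative counts: 100 positives and 100 negatives classified with
# 92% sensitivity and 88% specificity, as in the mammalian dataset.
print(round(mcc(tp=92, tn=88, fp=12, fn=8), 2))  # prints 0.8
```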

Wang Rulan, Wang Zhuo, Wang Hongfei, Pang Yuxuan, Lee Tzong-Yi

2020-Nov-24

General General

On-the-fly closed-loop materials discovery via Bayesian active learning.

In Nature communications ; h5-index 260.0

Active learning-the field of machine learning (ML) dedicated to optimal experiment design-has played a part in science as far back as the 18th century when Laplace used it to guide his discovery of celestial mechanics. In this work, we focus a closed-loop, active learning-driven autonomous system on another major challenge, the discovery of advanced materials against the exceedingly complex synthesis-processes-structure-property landscape. We demonstrate an autonomous materials discovery methodology for functional inorganic compounds which allow scientists to fail smarter, learn faster, and spend less resources in their studies, while simultaneously improving trust in scientific results and machine learning tools. This robot science enables science-over-the-network, reducing the economic impact of scientists being physically separated from their labs. The real-time closed-loop, autonomous system for materials exploration and optimization (CAMEO) is implemented at the synchrotron beamline to accelerate the interconnected tasks of phase mapping and property optimization, with each cycle taking seconds to minutes. We also demonstrate an embodiment of human-machine interaction, where human-in-the-loop is called to play a contributing role within each cycle. This work has resulted in the discovery of a novel epitaxial nanocomposite phase-change memory material.
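A closed-loop active learning cycle of the kind described can be sketched with a Gaussian process surrogate and an upper-confidence-bound acquisition rule; the one-dimensional "measurement" function and kernel settings are illustrative assumptions, not CAMEO's actual implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hidden "material property" landscape the loop must optimize; in CAMEO
# each evaluation is a real measurement at the synchrotron beamline.
def measure(x):
    return float(np.exp(-((x - 0.7) ** 2) / 0.01))

candidates = np.linspace(0, 1, 201).reshape(-1, 1)
X, y = [0.0, 1.0], [measure(0.0), measure(1.0)]     # seed experiments

for _ in range(15):
    # Refit the surrogate on everything measured so far.
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6, optimizer=None)
    gp.fit(np.array(X).reshape(-1, 1), y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Upper confidence bound: trade off exploitation (mu) vs exploration (sigma).
    ucb = mu + 2.0 * sigma
    x_next = float(candidates[np.argmax(ucb)])
    X.append(x_next)
    y.append(measure(x_next))                       # "run the experiment"

print(max(y), X[int(np.argmax(y))])
```

Each loop iteration mirrors one CAMEO cycle: fit a model to all data so far, choose the most informative next experiment, run it, and repeat; the optimum is found in far fewer measurements than a grid sweep would need.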

Kusne A Gilad, Yu Heshan, Wu Changming, Zhang Huairuo, Hattrick-Simpers Jason, DeCost Brian, Sarker Suchismita, Oses Corey, Toher Cormac, Curtarolo Stefano, Davydov Albert V, Agarwal Ritesh, Bendersky Leonid A, Li Mo, Mehta Apurva, Takeuchi Ichiro

2020-Nov-24

General General

The Auto-eFACE: Machine Learning-Enhanced Program Yields Automated Facial Palsy Assessment Tool.

In Plastic and reconstructive surgery ; h5-index 62.0

INTRODUCTION : Facial palsy assessment is non-standardized. Clinician-graded scales are limited by subjectivity and observer bias. Computer-generated facial position and movement analysis would be desirable, to both achieve conformity in facial palsy assessment and to study comparative effectiveness among different medical, surgical, and physical therapies. We compared facial palsy assessment using the clinician-scored eFACE scale, to measures generated automatically ("auto-eFACE") by a machine-learning derived facial analysis program, Emotrics.

METHODS : The MEEI Standard Facial Palsy Dataset was employed. 160 photographs underwent both eFACE evaluation and automated landmark tracking, using a recently trained algorithm. MATLAB code was written to generate a modified, automatically-generated auto-eFACE score. The eFACE scores and auto-eFACE scores were compared for normal, flaccidly paralyzed faces, and synkinetic faces.

RESULTS : Both eFACE and auto-eFACE scores demonstrated the expected difference between normal patients and those with facial palsy. The auto-eFACE scores revealed significantly lower scores than eFACE for normal faces (93.83 [SD 4.37] versus 100.00 [SD 1.58], p = .01). Review of photographs revealed minor facial asymmetries that clinicians had a tendency to disregard when performing eFACE grading on normal faces. The auto-eFACE reported better facial symmetry in patients with both flaccid paralysis (59.96 [SD 5.80]) and severe synkinesis (62.35 [SD 9.35]) than clinician-graded eFACE (52.20 [SD 3.39] and 54.22 [SD 5.35], respectively) (p = .080 and .080, respectively); this result trended toward significance.

CONCLUSION : Auto-eFACE scores are feasible to obtain through machine-learning derived algorithms that assign facial landmarks. The automated system predicted more facial landmark asymmetry in normal patients, and less landmark asymmetry in patients with severe synkinesis and complete flaccid paralysis, compared to clinician grading. The auto-eFACE facial analysis program is a quick and easy-to-use automated assessment tool that holds promise for the standardization of facial palsy outcome measures, and may eliminate the observer bias seen in clinician-graded scales.
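The kind of landmark-based asymmetry measure underlying such automated grading can be sketched as follows; the landmark pairs and midline here are hypothetical, not Emotrics' actual 68-landmark scheme:

```python
import numpy as np

def asymmetry_score(landmarks, midline_x):
    """Mean distance between left-side landmarks and the mirror images of
    their right-side counterparts; 0 means a perfectly symmetric face."""
    left, right = landmarks                        # paired (n, 2) arrays
    mirrored = right.copy()
    mirrored[:, 0] = 2 * midline_x - right[:, 0]   # reflect across midline
    return float(np.linalg.norm(left - mirrored, axis=1).mean())

# Hypothetical paired landmarks (e.g. mouth corners, brow points).
left = np.array([[40.0, 60.0], [35.0, 80.0]])
right_sym = np.array([[60.0, 60.0], [65.0, 80.0]])     # exact mirror images
right_droop = np.array([[60.0, 66.0], [65.0, 87.0]])   # flaccid-side droop

print(asymmetry_score((left, right_sym), midline_x=50.0))    # 0.0
print(asymmetry_score((left, right_droop), midline_x=50.0))  # > 0
```

Because the score is computed from pixel coordinates rather than a clinician's impression, it registers the minor asymmetries in "normal" faces that the abstract notes human graders tend to disregard.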

Miller Matthew Q, Hadlock Tessa A, Fortier Emily, Guarin Diego L

2020-Nov-02

Oncology Oncology

Identifying and understanding determinants of high healthcare costs for breast cancer: a quantile regression machine learning approach.

In BMC health services research

BACKGROUND : To identify and rank the importance of key determinants of high medical expenses among breast cancer patients and to understand the underlying effects of these determinants.

METHODS : The Oncology Care Model (OCM) developed by the Center for Medicare & Medicaid Innovation was used. The OCM data provided to Mount Sinai on 2938 breast-cancer episodes included both baseline periods and three performance periods between Jan 1, 2012 and Jan 1, 2018. We included 11 variables representing information on treatment, demography and socio-economic status, in addition to episode expenditures. OCM data were collected from participating practices and payers. We applied a principled variable selection algorithm using a flexible tree-based machine learning technique, Quantile Regression Forests.

RESULTS : We found that the use of chemotherapy drugs (versus hormonal therapy) and interval of days without chemotherapy predominantly affected medical expenses among high-cost breast cancer patients. The second-tier major determinants were comorbidities and age. Receipt of surgery or radiation, geographically adjusted relative cost and insurance type were also identified as important high-cost drivers. These factors had disproportionally larger effects upon the high-cost patients.

CONCLUSIONS : Data-driven machine learning methods provide insights into the underlying web of factors driving up the costs for breast cancer care management. Results from our study may help inform population health management initiatives and allow policymakers to develop tailored interventions to meet the needs of those high-cost patients and to avoid waste of scarce resource.
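Quantile Regression Forests estimate conditional quantiles rather than means, which is what makes them suited to studying high-cost patients specifically. A crude approximation, using the spread of per-tree predictions from an ordinary random forest on invented episode data (true QRFs instead use the training targets stored in each leaf):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Invented episode costs with heteroscedastic noise: chemotherapy
# episodes are both more expensive and more variable than hormonal ones.
chemo = rng.integers(0, 2, 1000)
age = rng.uniform(40, 90, 1000)
cost = (10_000 + 30_000 * chemo + 100 * age
        + rng.normal(0, 1, 1000) * (2_000 + 8_000 * chemo))
X = np.column_stack([chemo, age])

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, cost)

def predict_quantiles(rf, x, qs=(10, 50, 90)):
    # Approximation: empirical quantiles over individual tree predictions.
    per_tree = np.array([t.predict(x) for t in rf.estimators_])
    return np.percentile(per_tree, qs, axis=0)

q10, q50, q90 = predict_quantiles(rf, np.array([[1, 65.0]]))
print(q10, q50, q90)   # chemotherapy episode: wide cost interval
```

Fitting to the upper quantiles is what lets the analysis ask which factors drive costs *for the most expensive patients*, rather than for the average patient.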

Hu Liangyuan, Li Lihua, Ji Jiayi, Sanderson Mark

2020-Nov-23

Cancer, Machine learning, Medical care costs, Quantile regression

General General

Deep Transfer Learning for COVID-19 Prediction: Case Study for Limited Data Problems.

In Current medical imaging

OBJECTIVE : Automatic prediction of COVID-19 using deep convolutional neural network-based pre-trained transfer models and chest X-ray images.

METHOD : This research employs the advantages of computer vision and medical image analysis to develop an automated model that has the clinical potential for early detection of the disease. Using Deep Learning models, the research aims at evaluating the effectiveness and accuracy of different convolutional neural network models in the automatic diagnosis of COVID-19 from X-ray images as compared to diagnosis performed by experts in the medical community.

RESULT : Because the dataset available for COVID-19 is still limited, the best model to use is InceptionNetV3. Performance results show that the InceptionNetV3 model yielded the highest accuracy of 98.63% (with data augmentation) and 98.90% (without data augmentation) among the three models designed. However, as the dataset gets bigger, InceptionResNetV2 and NASNetLarge will do a better job of classification. All the networks tend to over-fit when data augmentation is not used; this is due to the small amount of data used for training and validation.

CONCLUSION : A deep transfer learning approach is proposed to detect COVID-19 automatically from chest X-rays by training on images obtained from both COVID-19 patients and people with normal chest X-rays. The study aims to help doctors make decisions in their clinical practice through its high performance and effectiveness; it also gives insight into how transfer learning can be used to automatically detect COVID-19.

Albahli Saleh, Albattah Waleed

2020-Nov-23

CNN, Deep transfer learning, X-ray, coronavirus, inceptionetv3, inceptionresnetv2

General General

Performance evaluation of machine learning-based infectious screening flags on the HORIBA Medical Yumizen H550 Haematology Analyzer for vivax malaria and dengue fever.

In Malaria journal ; h5-index 51.0

BACKGROUND : Automated detection of malaria and dengue infection has been actively researched for more than two decades. Although many improvements have been achieved, these solutions remain too expensive for most laboratories and clinics in developing countries. The low range HORIBA Medical Haematology Analyzer, Yumizen H550, now provides dedicated flags 'vivax malaria' and 'dengue fever' in routine blood testing, developed through machine learning methods, to be used as a screening tool for malaria and dengue fever in endemic areas. This study sought to evaluate the effectiveness of these flags under real clinical conditions.

METHODS : A total of 1420 samples were tested using the Yumizen H550 Haematology Analyzer, including 1339 samples from febrile patients among whom 202 were infected with malaria parasites (Plasmodium vivax only: 182, Plasmodium falciparum only: 18, both: 2), 210 were from febrile dengue infected patients, 3 were from afebrile dengue infected patients and 78 were samples from healthy controls, in an outpatient laboratory clinic in Mumbai, India. Microscopic examination was carried out as the confirmatory reference method for detection of malarial parasite, species identification and assessing parasitaemia based on different stages of parasite life cycle. Rapid diagnostic malarial antigen tests were used for additional confirmation. For dengue infection, NS1 antigen detection by ELISA was used as a diagnostic marker.

RESULTS : For the automated vivax malaria flag, the original manufacturer's cut-off yielded a sensitivity and specificity of 65.2% and 98.9%, respectively, with a ROC AUC of 0.9. After optimization of the cut-off value, flag performance improved to 72% sensitivity and 97.9% specificity. Additionally, the flag demonstrated a positive correlation with increasing levels of parasitaemia. The automated dengue fever flag yielded a ROC AUC of 0.82, with 79.3% sensitivity and 71.5% specificity.

CONCLUSIONS : The results demonstrate the potential for effective use of automated infectious screening flags for vivax malaria and dengue infection in a clinical setting.
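The cut-off re-optimization described in the results can be sketched with ROC analysis, picking the threshold that maximizes Youden's J statistic; the score distributions below are synthetic, not the analyzer's actual flag scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic analyzer flag scores: infected samples score higher on average.
infected = rng.normal(2.0, 1.0, 200)
healthy = rng.normal(0.0, 1.0, 800)
scores = np.concatenate([infected, healthy])
labels = np.concatenate([np.ones(200), np.zeros(800)])

# Cut-off optimization: choose the threshold maximizing Youden's J
# (sensitivity + specificity - 1), one common way to re-tune a flag.
fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr
best = float(thresholds[np.argmax(j)])
auc = roc_auc_score(labels, scores)
print(round(auc, 2), round(best, 2))
```

Moving the cut-off trades sensitivity against specificity along the same ROC curve, which is exactly the shift reported above (65.2%/98.9% at the factory cut-off versus 72%/97.9% after optimization).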

Dharap Parag, Raimbault Sebastien

2020-Nov-23

General General

A Review of Piezoelectric and Magnetostrictive Biosensor Materials for Detection of COVID-19 and Other Viruses.

In Advanced materials (Deerfield Beach, Fla.)

The spread of the severe acute respiratory syndrome coronavirus has changed the lives of people around the world with a huge impact on economies and societies. The development of wearable sensors that can continuously monitor the environment for viruses may become an important research area. Here, the state of the art of research on biosensor materials for virus detection is reviewed. A general description of the principles for virus detection is included, along with a critique of the experimental work dedicated to various virus sensors, and a summary of their detection limitations. The piezoelectric sensors used for the detection of human papilloma, vaccinia, dengue, Ebola, influenza A, human immunodeficiency, and hepatitis B viruses are examined in the first section; then the second part deals with magnetostrictive sensors for the detection of bacterial spores, proteins, and classical swine fever. In addition, progress related to early detection of COVID-19 (coronavirus disease 2019) is discussed in the final section, where remaining challenges in the field are also identified. It is believed that this review will guide material researchers in their future work of developing smart biosensors, which can further improve detection sensitivity in monitoring currently known and future virus threats.

Narita Fumio, Wang Zhenjin, Kurita Hiroki, Li Zhen, Shi Yu, Jia Yu, Soutis Constantinos

2020-Nov-24

Internet of Things, artificial intelligence, biosensors, data analytics, detection properties, electromagneto-mechanical design, machine learning, piezoelectric/magnetostrictive materials, virus

General General

No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems

ArXiv Preprint

In real-world classification tasks, each class often comprises multiple finer-grained "subclasses." As the subclass labels are frequently unavailable, models trained using only the coarser-grained class labels often exhibit highly variable performance across different subclasses. This phenomenon, known as hidden stratification, has important consequences for models deployed in safety-critical applications such as medicine. We propose GEORGE, a method to both measure and mitigate hidden stratification even when subclass labels are unknown. We first observe that unlabeled subclasses are often separable in the feature space of deep models, and exploit this fact to estimate subclass labels for the training data via clustering techniques. We then use these approximate subclass labels as a form of noisy supervision in a distributionally robust optimization objective. We theoretically characterize the performance of GEORGE in terms of the worst-case generalization error across any subclass. We empirically validate GEORGE on a mix of real-world and benchmark image classification datasets, and show that our approach boosts worst-case subclass accuracy by up to 22 percentage points compared to standard training techniques, without requiring any information about the subclasses.
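The clustering-then-robust-training idea behind GEORGE can be illustrated in miniature. The sketch below is a toy stand-in (the paper clusters deep-feature embeddings and trains with a group-DRO objective, neither of which is reproduced here): it estimates pseudo-subclass labels with a small k-means and reports the worst per-cluster mean loss, the quantity a group-robust objective would minimize.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Tiny k-means: greedy farthest-point init, then Lloyd iterations."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def worst_cluster_loss(features, losses, k=2):
    """Cluster the feature space to approximate hidden subclasses, then
    report the worst per-cluster mean loss (the group-DRO target)."""
    clusters = kmeans(features, k)
    return max(losses[clusters == c].mean()
               for c in range(k) if (clusters == c).any())

# Toy data: two well-separated "subclasses", one suffering higher loss.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
losses = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
worst = worst_cluster_loss(feats, losses, k=2)
```

Standard average-loss training would report 0.5 here and hide the struggling subclass; the clustered worst-group view surfaces it.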

Nimit S. Sohoni, Jared A. Dunnmon, Geoffrey Angus, Albert Gu, Christopher Ré

2020-11-25

General General

A random forest based biomarker discovery and power analysis framework for diagnostics research.

In BMC medical genomics

BACKGROUND : Biomarker identification is one of the major goals of functional genomics and translational medicine studies. Large-scale omics data are increasingly being accumulated and can provide vital means for the identification of biomarkers for the early diagnosis of complex diseases and/or for advanced patient or disease stratification. These tasks are clearly interlinked, and it is essential that an unbiased and stable methodology is applied in order to address them. Although many biomarker identification approaches, primarily machine learning based, have recently been developed, the exploration of potential associations between biomarker identification and the design of future experiments remains a challenge.

METHODS : In this study, using both simulated and published experimentally derived datasets, we assessed the performance of several state-of-the-art Random Forest (RF) based decision approaches, namely the Boruta method, permutation-based feature selection without correction, permutation-based feature selection with correction, and backward-elimination-based feature selection. Moreover, we conducted a power analysis to estimate the number of samples required for potential future studies.

RESULTS : We present a number of different RF-based stable feature selection methods and compare their performance using simulated as well as published, experimentally derived datasets. Across all of the scenarios considered, we found the Boruta method to be the most stable methodology, whilst the Permutation (Raw) approach offered the largest number of relevant features when allowed to stabilise over a number of iterations. Finally, we developed and made available a web interface ( https://joelarkman.shinyapps.io/PowerTools/ ) to streamline power calculations, thereby aiding the design of potential future studies within a translational medicine context.
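Permutation-based feature selection, one of the RF approaches compared above, scores a feature by the drop in model accuracy when its values are shuffled, breaking the feature's association with the outcome. A minimal, illustrative sketch (toy data; not the authors' implementation, which adds correction and stability iterations):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def permutation_importance_rf(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when one feature's column is shuffled."""
    rng = np.random.default_rng(seed)
    base = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = X[rng.permutation(len(X)), j]  # break feature-outcome link
            drops.append(base - model.score(Xp, y))
        importances[j] = np.mean(drops)
    return importances

# Toy "omics" matrix: 5 features, only feature 0 drives the outcome.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
imp = permutation_importance_rf(rf, X, y)
```

In a real analysis the importances would be computed on held-out data and thresholded against a null distribution (as the "with correction" variant does) rather than eyeballed.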

CONCLUSIONS : We developed an RF-based biomarker discovery framework and provide a web interface for it, termed PowerTools, that supports the design of appropriate and cost-effective future omics studies.

Acharjee Animesh, Larkman Joseph, Xu Yuanwei, Cardoso Victor Roth, Gkoutos Georgios V

2020-Nov-23

Biomarker, Feature selection, Power study, Random forest

Public Health Public Health

COVID-19 Pneumonia Accurately Detected on Chest Radiographs with Artificial Intelligence.

In Intelligence-based medicine

Purpose : To investigate the diagnostic performance of an Artificial Intelligence (AI) system for detection of COVID-19 in chest radiographs (CXR), and compare results to those of physicians working alone or with AI support.

Materials and Methods : An AI system was fine-tuned to discriminate confirmed COVID-19 pneumonia from other viral and bacterial pneumonia and non-pneumonia patients, and used to review 302 CXR images from adult patients retrospectively sourced from nine different databases. Fifty-four physicians, blinded to diagnosis, were invited to interpret images under identical conditions in a test set, and were randomly assigned either to receive or not receive support from the AI system. Comparisons were then made between the diagnostic performance of physicians working with and without AI support. AI system performance was evaluated using the area under the receiver operating characteristic curve (AUROC), and the sensitivity and specificity of physician performance were compared to those of the AI system.

Results : The AI system discriminated COVID-19 pneumonia with an AUROC of 0.96 on the validation set and 0.83 on the external test set. The AI system outperformed physicians in overall AUROC (70% increase in sensitivity and 1% increase in specificity, p<0.0001). When working with AI support, physicians increased their diagnostic sensitivity from 47% to 61% (p<0.001), although specificity decreased from 79% to 75% (p=0.007).

Conclusions : Our results suggest that AI-supported interpretation of chest radiographs (CXR) increases physician diagnostic sensitivity for COVID-19 detection. This human-machine partnership may help expedite triaging efforts and improve resource allocation in the current crisis.

Dorr Francisco, Chaves Hernán, Serra María Mercedes, Ramirez Andrés, Costa Martín Elías, Seia Joaquín, Cejas Claudia, Castro Marcelo, Eyheremendy Eduardo, Slezak Diego Fernández, Farez Mauricio F

2020-Nov-19

AI, artificial intelligence, AUPR, area under the precision-recall, AUROC, area under the receiver operating characteristic, Artificial intelligence, COVID-19, CT, computed tomography, CXR, chest radiographs, Chest, DL, deep learning, Diagnostic performance, RT-PCR, real-time reverse transcriptase–polymerase chain reaction, Radiography

General General

Denmark's Participation in the Search Engine TREC COVID-19 Challenge: Lessons Learned about Searching for Precise Biomedical Scientific Information on COVID-19

ArXiv Preprint

This report describes the participation of two Danish universities, the University of Copenhagen and Aalborg University, in the international search engine competition on COVID-19 (the 2020 TREC-COVID Challenge) organised by the U.S. National Institute of Standards and Technology (NIST) and its Text Retrieval Conference (TREC) division. The aim of the competition was to find the best search engine strategy for retrieving precise biomedical scientific information on COVID-19 from the largest, at that point in time, dataset of curated scientific literature on COVID-19 -- the COVID-19 Open Research Dataset (CORD-19). CORD-19 was the result of a call to action to the tech community by the U.S. White House in March 2020, and was shortly thereafter posted on Kaggle as an AI competition by the Allen Institute for AI, the Chan Zuckerberg Initiative, Georgetown University's Center for Security and Emerging Technology, Microsoft, and the National Library of Medicine at the US National Institutes of Health. CORD-19 contained over 200,000 scholarly articles (of which more than 100,000 had full text) about COVID-19, SARS-CoV-2, and related coronaviruses, gathered from curated biomedical sources. The TREC-COVID challenge asked for the best way to (a) retrieve accurate and precise scientific information, in response to queries formulated by biomedical experts, and (b) rank this information decreasingly by its relevance to the query. In this document, we describe the TREC-COVID competition setup, our participation in it, and our resulting reflections and lessons learned about state-of-the-art technology when faced with the acute task of retrieving precise scientific information from a rapidly growing corpus of literature, in response to highly specialised queries, in the middle of a pandemic.
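As a baseline intuition for the retrieve-and-rank task described above, a toy TF-IDF cosine-similarity ranker is sketched below. Actual TREC-COVID systems used far stronger retrieval models (BM25 variants, neural re-rankers), so this is purely illustrative:

```python
import numpy as np

def tfidf_rank(query, docs):
    """Rank docs by cosine similarity of TF-IDF vectors to the query."""
    vocab = sorted({w for d in docs + [query] for w in d.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}

    def tf(text):
        v = np.zeros(len(vocab))
        for w in text.lower().split():
            v[idx[w]] += 1
        return v

    mat = np.array([tf(d) for d in docs])
    # Inverse document frequency, computed from the document collection only.
    idf = np.log(len(docs) / (1 + (mat > 0).sum(axis=0)))
    mat = mat * idf
    q = tf(query) * idf
    sims = mat @ q / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)          # document indices, best match first

docs = ["covid transmission in aerosols",
        "yeast genome replication origins",
        "covid vaccine trial results"]
order = tfidf_rank("covid aerosol transmission", docs)
```

Even this crude ranker surfaces the limits the report discusses: vocabulary mismatch ("aerosol" vs "aerosols") is invisible to exact-term weighting, which is one reason specialised biomedical queries are hard.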

Lucas Chaves Lima, Casper Hansen, Christian Hansen, Dongsheng Wang, Maria Maistro, Birger Larsen, Jakob Grue Simonsen, Christina Lioma

2020-11-25

General General

Computational prediction of species-specific yeast DNA replication origin via iterative feature representation.

In Briefings in bioinformatics

Deoxyribonucleic acid replication is one of the most crucial tasks taking place in the cell, and it has to be precisely regulated. This process is initiated in the replication origins (ORIs), and thus it is essential to identify such sites for a deeper understanding of the cellular processes and functions related to the regulation of gene expression. Considering the important tasks performed by ORIs, several experimental and computational approaches have been developed for the prediction of such sites. However, existing computational predictors for ORIs have certain limitations, such as building only single-feature encoding models, limited systematic feature engineering efforts and failure to validate model robustness. Hence, we developed a novel species-specific yeast predictor called yORIpred that accurately identifies ORIs in yeast genomes. To develop yORIpred, we first constructed 40 baseline models by exploring eight different sequence-based encodings and five different machine learning classifiers. Subsequently, the predicted probabilities of the 40 models were considered as a novel feature vector, and an iterative feature learning approach was carried out independently using five different classifiers. Our systematic analysis revealed that the feature representation learned by the support vector machine algorithm (yORIpred) could well discriminate the distribution characteristics between ORIs and non-ORIs when compared with the other four algorithms. Comprehensive benchmarking experiments showed that yORIpred achieved superior and stable performance when compared with the existing predictors on the same training datasets. Furthermore, independent evaluation showcased the best and most accurate performance of yORIpred, thus underscoring the significance of iterative feature representation.
To facilitate users in obtaining their desired results without any mathematical, statistical or computational hassle, we developed a web server for the yORIpred predictor, which is available at: http://thegleelab.org/yORIpred.
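The "predicted probabilities as a novel feature vector" step resembles classical stacking: out-of-fold probabilities from base models become the inputs to a second-level classifier. A hedged sketch on toy data (the base models here are illustrative stand-ins, not the paper's 40 encoding/classifier combinations):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy binary problem with a linear boundary.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Base models produce out-of-fold class probabilities, so the meta-learner
# never sees a probability from a model trained on that same sample.
bases = [LogisticRegression(),
         GaussianNB(),
         DecisionTreeClassifier(max_depth=3, random_state=0)]
meta_features = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in bases
])

# Second-level classifier learns from the stacked probability vector.
meta = LogisticRegression().fit(meta_features, y)
acc = meta.score(meta_features, y)
```

The paper iterates this idea (feeding learned representations back through five classifiers); the single pass above just shows the data flow.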

Manavalan Balachandran, Basith Shaherin, Shin Tae Hwan, Lee Gwang

2020-Nov-25

iterative feature representation, machine learning, replication origin, support vector machine

Surgery Surgery

A generic deep learning framework to classify thyroid and breast lesions in ultrasound images.

In Ultrasonics

Breast and thyroid cancers are two of the most common cancers affecting women worldwide. Ultrasonography (US) is a commonly used non-invasive imaging modality to detect breast and thyroid cancers, but its clinical diagnostic accuracy for these cancers is controversial. Both thyroid and breast cancers share some similar high-frequency ultrasound characteristics, such as a taller-than-wide shape ratio, hypo-echogenicity, and ill-defined margins. This study aims to develop an automatic scheme for classifying thyroid and breast lesions in ultrasound images using deep convolutional neural networks (DCNN). In particular, we propose a generic DCNN architecture with transfer learning and the same architectural parameter settings to train models for thyroid and breast cancers (TNet and BNet) respectively, and test the viability of such a generic approach with ultrasound images collected from clinical practices. In addition, the potential of the thyroid model in learning the common features and its performance in classifying both breast and thyroid lesions are investigated. A retrospective dataset of 719 thyroid and 672 breast images captured from US machines of different makes between October 2016 and December 2018 is used in this study. Test results show that both TNet and BNet built on the same DCNN architecture have achieved good classification results (86.5% average accuracy for TNet and 89% for BNet). Furthermore, we used TNet to classify breast lesions, and the model achieves a sensitivity of 86.6% and a specificity of 87.1%, indicating its capability in learning features commonly shared by thyroid and breast lesions. We further tested the diagnostic performance of the TNet model against that of three radiologists. The area under the curve (AUC) for thyroid nodule classification is 0.861 (95% CI: 0.792-0.929) for the TNet model and 0.757-0.854 (95% CI: 0.658-0.934) for the three radiologists.
The AUC for breast cancer classification is 0.875 (95% CI: 0.804-0.947) for the TNet model and 0.698-0.777 (95% CI: 0.593-0.872) for the radiologists, indicating the model's potential in classifying both breast and thyroid cancers with a higher level of accuracy than that of radiologists.

Zhu Yi-Cheng, AlZoubi Alaa, Jassim Sabah, Jiang Quan, Zhang Yuan, Wang Yong-Bing, Ye Xian-De, DU Hongbo

2020-Nov-12

Breast cancer, Cancer recognition, Deep convolutional neural network, Thyroid cancer, Ultrasonography

Radiology Radiology

Deep learning-based classification of primary bone tumors on radiographs: A preliminary study.

In EBioMedicine

BACKGROUND : To develop a deep learning model to classify primary bone tumors from preoperative radiographs and compare performance with radiologists.

METHODS : A total of 1356 patients (2899 images) with histologically confirmed primary bone tumors and pre-operative radiographs were identified from five institutions' pathology databases. Manual cropping was performed by radiologists to label the lesions. Binary discriminatory capacity (benign versus not-benign and malignant versus not-malignant) and three-way classification (benign versus intermediate versus malignant) performance of our model were evaluated. The generalizability of our model was investigated on data from an external test set. Final model performance was compared with interpretation from five radiologists of varying levels of experience using permutation tests.

FINDINGS : For benign vs. not benign, model achieved area under curve (AUC) of 0.894 and 0.877 on cross-validation and external testing, respectively. For malignant vs. not malignant, model achieved AUC of 0.907 and 0.916 on cross-validation and external testing, respectively. For three-way classification, model achieved 72.1% accuracy vs. 74.6% and 72.1% for the two subspecialists on cross-validation (p = 0.03 and p = 0.52, respectively). On external testing, model achieved 73.4% accuracy vs. 69.3%, 73.4%, 73.1%, 67.9%, and 63.4% for the two subspecialists and three junior radiologists (p = 0.14, p = 0.89, p = 0.93, p = 0.02, p < 0.01 for radiologists 1-5, respectively).

INTERPRETATION : Deep learning can classify primary bone tumors on conventional radiographs in a multi-institutional dataset with accuracy similar to that of subspecialists, and with better performance than junior radiologists.

FUNDING : The project described was supported by RSNA Research & Education Foundation, through grant number RSCH2004 to Harrison X. Bai.

He Yu, Pan Ian, Bao Bingting, Halsey Kasey, Chang Marcello, Liu Hui, Peng Shuping, Sebro Ronnie A, Guan Jing, Yi Thomas, Delworth Andrew T, Eweje Feyisope, States Lisa J, Zhang Paul J, Zhang Zishu, Wu Jing, Peng Xianjing, Bai Harrison X

2020-Nov-21

Convolutional neural network, Deep learning, Plain radiograph, Primary bone tumor

General General

Fermented food products in the era of globalization: tradition meets biotechnology innovations.

In Current opinion in biotechnology

Omics tools offer the opportunity to characterize and trace traditional and industrial fermented foods. Bioinformatics, through machine learning and other advanced statistical approaches, is able to disentangle fermentation processes and to predict the evolution and metabolic outcomes of a food microbial ecosystem. By assembling artificial microbial consortia, these biotechnological advances will also be able to enhance the nutritional value and organoleptic characteristics of fermented food, preserving, at the same time, the potential of autochthonous microbial consortia and metabolic pathways, which are difficult to reproduce. Preserving traditional methods contributes to protecting the hidden value of local biodiversity, and exploits its potential in industrial processes with the final aim of guaranteeing food security and safety, even in developing countries.

Galimberti Andrea, Bruno Antonia, Agostinetto Giulia, Casiraghi Maurizio, Guzzetti Lorenzo, Labra Massimo

2020-Nov-21

General General

Fusion based on attention mechanism and context constraint for multi-modal brain tumor segmentation.

In Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society

This paper presents a 3D brain tumor segmentation network for multi-sequence MRI datasets based on deep learning. We propose a three-stage network: generating constraints, fusion under constraints, and final segmentation. In the first stage, an initial 3D U-Net segmentation network is introduced to produce an additional context constraint for each tumor region. Under the obtained constraint, the multi-sequence MRI are then fused using an attention mechanism to achieve three single tumor region segmentations. Considering the location relationship of the tumor regions, a new loss function is introduced to deal with the multiple-class segmentation problem. Finally, a second 3D U-Net network is applied to combine and refine the three single prediction results. In each stage, only 8 initial filters are used, allowing a significant decrease in the number of parameters to be estimated. We evaluated our method on the BraTS 2017 dataset. The results are promising in terms of Dice score, Hausdorff distance, and the amount of memory required for training.
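Attention-based fusion of per-sequence feature maps can be sketched, in its simplest form, as a softmax-weighted sum of modality maps. This toy version omits the learned scoring network and the region context constraints of the actual architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(feature_maps, scores):
    """Fuse per-modality feature maps with softmax attention weights.

    feature_maps: list of same-shaped arrays, one per MRI sequence
    scores: one (here hand-set, in the paper learned) score per modality
    """
    w = softmax(np.asarray(scores, dtype=float))
    return np.tensordot(w, np.asarray(feature_maps), axes=1)

# Toy example: three "MRI sequences" as constant 4x4 feature maps.
maps = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
fused = attention_fuse(maps, scores=[0.0, 0.0, 0.0])  # equal attention
```

With equal scores every modality contributes 1/3; raising one modality's score shifts the fused map toward that sequence, which is the mechanism the network learns per tumor region.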

Zhou Tongxue, Canu Stéphane, Ruan Su

2020-Nov-07

Attention mechanism, Brain tumor segmentation, Context constraint, Fusion

Public Health Public Health

Androgen Signaling Regulates SARS-CoV-2 Receptor Levels and Is Associated with Severe COVID-19 Symptoms in Men.

In Cell stem cell

SARS-CoV-2 infection has led to a global health crisis, and yet our understanding of the disease and potential treatment options remains limited. The infection occurs through binding of the virus with angiotensin converting enzyme 2 (ACE2) on the cell membrane. Here, we established a screening strategy to identify drugs that reduce ACE2 levels in human embryonic stem cell (hESC)-derived cardiac cells and lung organoids. Target analysis of hit compounds revealed androgen signaling as a key modulator of ACE2 levels. Treatment with antiandrogenic drugs reduced ACE2 expression and protected hESC-derived lung organoids against SARS-CoV-2 infection. Finally, clinical data on COVID-19 patients demonstrated that prostate diseases, which are linked to elevated androgen, are significant risk factors and that genetic variants that increase androgen levels are associated with higher disease severity. These findings offer insights on the mechanism of disproportionate disease susceptibility in men and identify antiandrogenic drugs as candidate therapeutics for COVID-19.

Samuel Ryan M, Majd Homa, Richter Mikayla N, Ghazizadeh Zaniar, Zekavat Seyedeh Maryam, Navickas Albertas, Ramirez Jonathan T, Asgharian Hosseinali, Simoneau Camille R, Bonser Luke R, Koh Kyung Duk, Garcia-Knight Miguel, Tassetto Michel, Sunshine Sara, Farahvashi Sina, Kalantari Ali, Liu Wei, Andino Raul, Zhao Hongyu, Natarajan Pradeep, Erle David J, Ott Melanie, Goodarzi Hani, Fattahi Faranak

2020-Nov-17

5-alpha reductase inhibitors, ACE2 regulation, COVID-19 risk factors, COVID-19 sex bias, SARS-CoV-2 infection model, deep learning, drug re-purposing, hPSC-based disease modeling, high content screening, virtual drug screen

Public Health Public Health

Evaluation of incomplete maternal smoking data using machine learning algorithms: a study from the Medical Birth Registry of Norway.

In BMC pregnancy and childbirth ; h5-index 58.0

BACKGROUND : The Medical Birth Registry of Norway (MBRN) provides national coverage of all births. While retrieval of most of the information in the birth records is mandatory, mothers may decline to provide information on their smoking status. The proportion of women with unknown smoking status varied greatly over time, between hospitals, and by demographic groups. We investigated whether incomplete data on smoking in the MBRN may have contributed to a biased smoking prevalence.

METHODS : In a study population of all 904,982 viable singleton births during 1999-2014, we investigated the main predictor variables influencing the mothers' unknown smoking status using linear multivariable regression. Thereafter, we applied machine learning to predict the annual smoking prevalence (95% CI) in the same group with unknown smoking status, assuming missing-not-at-random.

RESULTS : Overall, the proportion of women with unknown smoking status was 14.4%. Compared to women of Nordic origin, women from Europe outside the Nordic region had a 15% (95% CI 12-17%) increased adjusted risk of having unknown smoking status. The corresponding increased risks for women from Asia and Africa were 17% (95% CI 15-19%) and 26% (95% CI 23-29%), respectively. The most important machine learning prediction variables regarding maternal smoking were education, ethnic background, marital status and birth weight. Combining observed and predicted smoking prevalence, we estimated a change from the annual observed smoking prevalence among women with known smoking status in the range of -5.5% to 1.1%.

CONCLUSION : The predicted total smoking prevalence was only marginally modified compared to the observed prevalence in the group with known smoking status. This implies that MBRN-data may be trusted for health surveillance and research.

Grøtvedt Liv, Egeland Grace M, Kvalvik Liv Grimstvedt, Madsen Christian

2020-Nov-23

Birth weight, Education, Ethnic groups, Hospitals, Informed consent, Machine learning, Pregnancy, Smoking

General General

Application of deep learning techniques for detection of COVID-19 cases using chest X-ray images: A comprehensive study.

In Biomedical signal processing and control

The emergence of Coronavirus Disease 2019 (COVID-19) in early December 2019 has caused immense damage to health and global well-being. Currently, there are approximately five million confirmed cases and the novel virus is still spreading rapidly all over the world. Many hospitals across the globe are not yet equipped with an adequate number of testing kits, and the manual Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test is time-consuming and troublesome. It is hence very important to design an automated early diagnosis system which can provide fast decisions and greatly reduce diagnostic error. Chest X-ray images, along with emerging Artificial Intelligence (AI) methodologies, in particular Deep Learning (DL) algorithms, have recently become a worthy choice for early COVID-19 screening. This paper proposes a DL-assisted automated method using X-ray images for early diagnosis of COVID-19 infection. We evaluate the effectiveness of eight pre-trained Convolutional Neural Network (CNN) models, namely AlexNet, VGG-16, GoogleNet, MobileNet-V2, SqueezeNet, ResNet-34, ResNet-50 and Inception-V3, for classification of COVID-19 from normal cases. Also, comparative analyses have been made among these models by considering several important factors such as batch size, learning rate, number of epochs, and type of optimizer, with the aim of finding the best-suited model. The models have been validated on publicly available chest X-ray images, and the best performance is obtained by ResNet-34 with an accuracy of 98.33%. This study will be useful for researchers in designing more effective CNN-based models for early COVID-19 detection.

Nayak Soumya Ranjan, Nayak Deepak Ranjan, Sinha Utkarsh, Arora Vaibhav, Pachori Ram Bilas

2021-Feb

COVID-19, Chest X-ray, Convolutional Neural Networks, Optimization algorithms, SARS-CoV-2

Pathology Pathology

Simple statistical methods for unsupervised brain anomaly detection on MRI are competitive to deep learning methods

ArXiv Preprint

Statistical analysis of magnetic resonance imaging (MRI) can help radiologists to detect pathologies that are otherwise likely to be missed. Deep learning (DL) has shown promise in modeling complex spatial data for brain anomaly detection. However, DL models have major deficiencies: they need large amounts of high-quality training data, are difficult to design and train, and are sensitive to subtle changes in scanning protocols and hardware. Here, we show that simple statistical methods such as voxel-wise (baseline and covariance) models and a linear projection method using spatial patterns can achieve DL-equivalent (3D convolutional autoencoder) performance in unsupervised pathology detection. All methods were trained (N=395) and compared (N=44) on a novel, expert-curated multiparametric (8 sequences) head MRI dataset of healthy and pathological cases, respectively. We show that these simple methods can be more accurate in detecting small lesions and are considerably easier to train and comprehend. The methods were quantitatively compared using AUC and average precision and evaluated qualitatively on clinical use cases comprising brain atrophy, tumors (small metastases) and movement artefacts. Our results demonstrate that while DL methods may be useful, they should show a sufficiently large performance improvement over simpler methods to justify their usage. Thus, simple statistical methods should provide the baseline for benchmarks. Source code and trained models are available on GitHub (https://github.com/vsaase/simpleBAD).
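The voxel-wise baseline model reduces, in its simplest form, to a per-voxel z-score of a test volume against mean/std statistics estimated from healthy training scans. An illustrative sketch on toy volumes (the published method additionally handles covariance and multiparametric input; see the linked repository):

```python
import numpy as np

def voxelwise_anomaly_map(train_vols, test_vol, eps=1e-6):
    """Z-score each voxel of a test volume against healthy-population
    statistics estimated voxel-by-voxel from training volumes."""
    mu = train_vols.mean(axis=0)
    sd = train_vols.std(axis=0) + eps   # eps guards against zero variance
    return np.abs(test_vol - mu) / sd

# Toy 3D volumes: healthy background noise plus one bright "lesion" voxel.
rng = np.random.default_rng(4)
train = rng.normal(0, 1, (30, 8, 8, 8))   # 30 healthy volumes
test = rng.normal(0, 1, (8, 8, 8))
test[4, 4, 4] += 10.0                     # simulated focal lesion
zmap = voxelwise_anomaly_map(train, test)
```

Thresholding the z-map then flags candidate anomalies; the paper's point is that this kind of model is trivial to train and audit compared with a 3D autoencoder.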

Victor Saase, Holger Wenz, Thomas Ganslandt, Christoph Groden, Máté E. Maros

2020-11-25

General General

Deep Learning of Sequence Patterns for CCCTC-Binding Factor-Mediated Chromatin Loop Formation.

In Journal of computational biology : a journal of computational molecular cell biology

The three-dimensional (3D) organization of the human genome is of crucial importance for gene regulation, and the CCCTC-binding factor (CTCF) plays an important role in chromatin interactions. However, it is still unclear what sequence patterns in addition to CTCF motif pairs determine chromatin loop formation. To discover the underlying sequence patterns, we have developed a deep learning model, called DeepCTCFLoop, to predict whether a chromatin loop can be formed between a pair of convergent or tandem CTCF motifs using only the DNA sequences of the motifs and their flanking regions. Our results suggest that DeepCTCFLoop can accurately distinguish the CTCF motif pairs forming chromatin loops from the ones not forming loops. It significantly outperforms CTCF-MP, a machine learning model based on word2vec and boosted trees, when using DNA sequences only. Furthermore, we show that DNA motifs binding to several transcription factors, including ZNF384, ZNF263, ASCL1, SP1, and ZEB1, may constitute the complex sequence patterns for CTCF-mediated chromatin loop formation. DeepCTCFLoop has also been applied to disease-associated sequence variants to identify candidates that may disrupt chromatin loop formation. Therefore, our results provide useful information for understanding the mechanism of 3D genome organization and may also help annotate and prioritize the noncoding sequence variants associated with human diseases.
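Sequence models like DeepCTCFLoop typically consume one-hot-encoded DNA; the exact input pipeline here is an assumption on my part, but the standard encoding of a motif-plus-flank sequence looks like:

```python
import numpy as np

def one_hot_dna(seq):
    """One-hot encode a DNA sequence (A, C, G, T -> 4 channels), the
    standard input representation for sequence CNNs."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq.upper()):
        if base in mapping:            # ambiguous bases (e.g. N) stay all-zero
            out[i, mapping[base]] = 1.0
    return out

enc = one_hot_dna("ACGTN")   # 5 positions x 4 channels
```

A pair of convergent or tandem CTCF motifs with flanking regions would be encoded this way and stacked as the model input; scanning disease-associated variants then amounts to re-encoding the altered sequence and comparing predictions.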

Kuang Shuzhen, Wang Liangjiang

2020-Nov-25

3D genome, CTCF, chromatin loops, deep learning, sequence motifs

General General

Human MicroRNA Target Prediction via Multi-Hypotheses Learning.

In Journal of computational biology : a journal of computational molecular cell biology

MicroRNAs are involved in many critical cellular activities through binding to their mRNA targets, for example, in cell proliferation, differentiation, death, growth control, and developmental timing. Prediction of microRNA targets can assist in efficient experimental investigations of the functional roles of these small noncoding RNAs. Their accurate prediction, however, remains a challenge due to the limited understanding of the underlying processes in recognizing microRNA targets. In this article, we introduce an algorithm that aims not only at predicting microRNA targets accurately but also at assisting in vivo experiments to understand the mechanisms of targeting. The algorithm learns a unique hypothesis for each possible mechanism of microRNA targeting. These hypotheses are utilized to build a superior target predictor and for biologically meaningful partitioning of the data set of microRNA-target duplexes. Experimentally verified features for recognizing targets that are incorporated in the algorithm enable the establishment of hypotheses that can be correlated with target recognition mechanisms. Our results and analysis show that our algorithm outperforms state-of-the-art data-driven approaches such as deep learning models and machine learning algorithms, as well as rule-based methods such as miRanda and RNAhybrid. In addition, feature selection on the partitions provided by our algorithm confirms that the partitioning mechanism is closely related to biological mechanisms of microRNA targeting. The resulting data partitions can potentially be used for in vivo experiments to aid in the discovery of the targeting mechanisms.

Mohebbi Mohammad, Ding Liang, Malmberg Russell L, Cai Liming

2020-Nov-25

data partitioning, machine learning, microRNA, microRNA target prediction, multi-hypotheses learning

Surgery Surgery

Random forest analysis identifies change in serum creatinine and listing status as the most predictive variables of an outcome for young children on liver transplant waitlist.

In Pediatric transplantation ; h5-index 20.0

Young children listed for liver transplant (LT) have high waitlist (WL) mortality, which is not fully predicted by the PELD score. The SRTR database was queried for children < 2 years listed for initial LT during 2002-17 (n = 4973). Subjects were divided into three outcome groups: bad (death or removal for being too sick to transplant), good (spontaneous improvement), and transplant. Demographic, clinical, listing history, and laboratory variables at the time of listing (baseline variables), and changes in variables between listing and prior to outcome (trajectory variables), were analyzed using random forest (RF) analysis. 81.5% of candidates underwent LT, and 12.3% had a bad outcome. The RF model including both baseline and trajectory variables improved prediction compared to the model using baseline variables alone. RF analyses identified change in serum creatinine and listing status as the most predictive variables. 80% of subjects listed with a PELD score at the time of listing and outcome underwent LT, while ~70% of subjects in both the bad and good outcome groups were listed as Status 1 (A or B) prior to an outcome, regardless of initial listing status. An increase in creatinine on the LT waitlist was predictive of a bad outcome. Longer time spent on the WL was predictive of a good outcome. Subjects with biliary atresia, liver tumors, and metabolic disease had an LT rate >85%, while >20% of subjects with acute liver failure had a bad outcome. Change in creatinine, listing status, need for RRT, time spent on the LT waitlist, and diagnoses were the most predictive variables.

Kulkarni Sakil, Chi Lisa, Goss Charles, Lian Qinghua, Nadler Michelle, Stoll Janis, Doyle Maria, Turmelle Yumirle, Khan Adeel

2020-Nov-24

infant, liver transplant, machine learning, outcome, pediatric, random forest analysis, waitlist

Radiology Radiology

Deep-learning algorithms for the interpretation of chest radiographs to aid in the triage of COVID-19 patients: A multicenter retrospective study.

In PloS one ; h5-index 176.0

The recent medical applications of deep-learning (DL) algorithms have demonstrated their clinical efficacy in improving the speed and accuracy of image interpretation. If a DL algorithm achieves a performance equivalent to that of physicians in chest radiography (CR) diagnosis of Coronavirus disease 2019 (COVID-19) pneumonia, automatic interpretation of CR with DL algorithms can significantly reduce the burden on clinicians and radiologists during sudden surges of suspected COVID-19 patients. The aim of this study was to evaluate the efficacy of a DL algorithm for detecting COVID-19 pneumonia on CR compared with formal radiology reports. This is a retrospective study of adult patients who were diagnosed as COVID-19 positive based on reverse transcription polymerase chain reaction (RT-PCR) testing among all patients admitted to five emergency departments and one community treatment center in Korea from February 18, 2020 to May 1, 2020. The CR images were evaluated with a publicly available DL algorithm. For the reference standard, CR images without accompanying chest computed tomography (CT) scans were classified as positive for COVID-19 pneumonia when the radiologist identified ground-glass opacity, consolidation, or other infiltrates on retrospective review. Patients with evidence of pneumonia on chest CT scans were also classified as COVID-19 pneumonia positive. The overall sensitivity and specificity of the DL algorithm for detecting COVID-19 pneumonia on CR were 95.6% and 88.7%, respectively. The area under the curve value of the DL algorithm for the detection of COVID-19 pneumonia was 0.921. The DL algorithm demonstrated a satisfactory diagnostic performance comparable with that of formal radiology reports in the CR-based diagnosis of pneumonia in COVID-19 patients.
The DL algorithm may offer fast and reliable examinations that can facilitate patient screening and isolation decisions, which can reduce the medical staff workload during COVID-19 pandemic situations.
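A minimal sketch of the screening metrics reported above, computed from confusion-matrix counts; the counts below are hypothetical, chosen only to land near the reported 95.6% sensitivity and 88.7% specificity.

```python
# Sensitivity: fraction of true positives among all actually positive.
def sensitivity(tp, fn):
    return tp / (tp + fn)

# Specificity: fraction of true negatives among all actually negative.
def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts (not the study's data).
tp, fn, tn, fp = 87, 4, 102, 13
sens = sensitivity(tp, fn)   # 87 / 91  ~ 0.956
spec = specificity(tn, fp)   # 102 / 115 ~ 0.887
```

For triage, the trade-off between these two numbers is set by the decision threshold applied to the algorithm's output score, which is what the reported AUC summarizes across all thresholds.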

Jang Se Bum, Lee Suk Hee, Lee Dong Eun, Park Sin-Youl, Kim Jong Kun, Cho Jae Wan, Cho Jaekyung, Kim Ki Beom, Park Byunggeon, Park Jongmin, Lim Jae-Kwang

2020

General General

Class Incremental Learning With Few-Shots Based on Linear Programming for Hyperspectral Image Classification.

In IEEE transactions on cybernetics

Hyperspectral imaging (HSI) classification has drawn tremendous attention in the field of Earth observation. In the big data era, explosive growth has occurred in the amount of data obtained by advanced remote sensors. Inevitably, new data classes and refined categories appear continuously, and such data are limited in terms of the timeliness of application. These characteristics motivate us to build an HSI classification model that learns new classifying capability rapidly within a few shots while maintaining good performance on the original classes. To achieve this goal, we propose a linear programming incremental learning classifier (LPILC) that can enable existing deep learning classification models to adapt to new datasets. Specifically, the LPILC learns the new ability by taking advantage of the well-trained classification model within one shot of the new class without any original class data. The entire process requires minimal new class data, computational resources, and time, thereby making LPILC a suitable tool for some time-sensitive applications. Moreover, we utilize the proposed LPILC to implement fine-grained classification via the well-trained original coarse-grained classification model. We demonstrate the success of LPILC with extensive experiments based on three widely used hyperspectral datasets, namely, PaviaU, Indian Pines, and Salinas. The experimental results reveal that the proposed LPILC outperforms state-of-the-art methods under the same data access and computational resources. The LPILC can be integrated into any sophisticated classification model, thereby bringing new insights into incremental learning applied in HSI classification.

Bai Jing, Yuan Anran, Xiao Zhu, Zhou Huaji, Wang Dingchen, Jiang Hongbo, Jiao Licheng

2020-Nov-24

Pathology Pathology

A Deep Learning Approach for Colonoscopy Pathology WSI Analysis: Accurate Segmentation and Classification.

In IEEE journal of biomedical and health informatics

Colorectal cancer (CRC) is one of the most life-threatening malignancies. Colonoscopy pathology examination can identify cells of early-stage colon tumors in small tissue image slices. However, such examination is time-consuming and exhausting on high-resolution images. In this paper, we present a new framework for colonoscopy pathology whole slide image (WSI) analysis, including lesion segmentation and tissue diagnosis. Our framework contains an improved U-shape network with a VGG net as backbone, and two schemes, one for training and one for inference. Based on the characteristics of colonoscopy pathology WSI, we introduce a specific sampling strategy for sample selection and a transfer learning strategy for model training in our training scheme. Besides, we propose a specific loss function, the class-wise DSC loss, to train the segmentation network. In our inference scheme, we apply a sliding-window based sampling strategy for patch generation and a diploid ensemble (data ensemble and model ensemble) for the final prediction. We use the predicted segmentation mask to generate the classification probability for the likelihood of a WSI being malignant. To the best of our knowledge, DigestPath 2019 is the first challenge and the first public dataset available on colonoscopy tissue screening and segmentation, and our proposed framework yields good performance on this dataset. Our new framework achieved a DSC of 0.7789 and an AUC of 1 on the online test dataset, and we won 2nd place in the DigestPath 2019 Challenge (task 2). Our code is available at https://github.com/bhfs9999/colonoscopy_tissue_screen_and_segmentation.
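A sketch of the Dice similarity coefficient (DSC) that the abstract uses both as its reported metric and, per class, as a segmentation loss. Masks here are flat binary lists, a simplification of per-pixel WSI masks, and the 1 - DSC loss form is an assumption; the paper's exact class-wise formulation may differ.

```python
# Dice similarity coefficient between a predicted and a target binary mask.
def dice(pred, target, eps=1e-7):
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

# Common DSC-loss form (assumed, not the paper's published definition).
def dice_loss(pred, target):
    return 1.0 - dice(pred, target)

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 0, 1]
score = dice(pred, target)   # 2*2 / (3 + 2) = 0.8
```

Because Dice is normalized by the total foreground area, it stays informative when lesions occupy only a small fraction of a slide, which is why it is preferred over plain pixel accuracy in pathology segmentation.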

Feng Ruiwei, Liu Xuechen, Chen Jintai, Chen Danny Z, Gao Honghao, Wu Jian

2020-Nov-24

General General

Deep Learning for Diabetes: A Systematic Review.

In IEEE journal of biomedical and health informatics

Diabetes is a chronic metabolic disorder that affects an estimated 463 million people worldwide. Aiming to improve the treatment of people with diabetes, digital health has been widely adopted in recent years and generated a huge amount of data that could be used for further management of this chronic disease. Taking advantage of this, approaches that use artificial intelligence and specifically deep learning, an emerging type of machine learning, have been widely adopted with promising results. In this paper, we present a comprehensive review of the applications of deep learning within the field of diabetes. We conducted a systematic literature search and identified three main areas that use this approach: diagnosis of diabetes, glucose management, and diagnosis of diabetes-related complications. The search resulted in the selection of 40 original research articles, of which we have summarized the key information about the employed learning models, development process, main outcomes, and baseline methods for performance evaluation. Among the analyzed literature, it is to be noted that various deep learning techniques and frameworks have achieved state-of-the-art performance in many diabetes-related tasks by outperforming conventional machine learning approaches. Meanwhile, we identify some limitations in the current literature, such as a lack of data availability and model interpretability. The rapid developments in deep learning and the increase in available data offer the possibility to meet these challenges in the near future and allow the widespread deployment of this technology in clinical settings.

Zhu Taiyu, Li Kezhi, Herrero Pau, Georgiou Pantelis

2020-Nov-24

General General

Variation-Aware Federated Learning with Multi-Source Decentralized Medical Image Data.

In IEEE journal of biomedical and health informatics

Privacy concerns make it infeasible to construct a large medical image dataset by fusing small ones from different sources/institutions. Therefore, federated learning (FL) becomes a promising technique to learn from multi-source decentralized data with privacy preservation. However, the cross-client variation problem in medical image data would be the bottleneck in practice. In this paper, we, for the first time, propose a variation-aware federated learning (VAFL) framework, where the variations among clients are minimized by transforming the images of all clients onto a common image space. We first select the client with the lowest data complexity to define the target image space and synthesize a collection of images based on that client's raw images. Then, a subset of those synthesized images, which effectively capture the characteristics of the raw images and are sufficiently distinct from any raw image, is carefully selected for sharing. For each client, a modified CycleGAN is applied to translate its raw images to the target image space defined by the shared synthesized images. In this way, the cross-client variation problem is addressed with privacy preservation. We apply the framework to automated classification of clinically significant prostate cancer and evaluate it using multi-source decentralized apparent diffusion coefficient (ADC) image data. Experimental results demonstrate that the proposed VAFL framework stably outperforms the current horizontal FL framework. In addition, we discuss, and experimentally validate, the conditions under which VAFL is applicable for training a global model among multiple clients instead of directly training deep learning models locally on each client. Checking whether these conditions are satisfied can guide the decision of whether VAFL or FL should be employed for multi-source decentralized medical image data.
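As background for readers unfamiliar with horizontal FL, the sketch below shows the federated averaging step such frameworks are built on: each client trains locally and the server averages their parameters, weighted by local dataset size. This is a generic FedAvg sketch, not VAFL itself, whose CycleGAN-based image harmonization is not shown.

```python
# Weighted average of per-client parameter vectors (FedAvg server step).
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical clients with 2-parameter models and unequal data sizes.
weights = [[1.0, 0.0], [3.0, 2.0], [2.0, 1.0]]
sizes = [10, 30, 60]
global_w = fedavg(weights, sizes)
# Coord 0: (1*10 + 3*30 + 2*60) / 100 = 2.2
# Coord 1: (0*10 + 2*30 + 1*60) / 100 = 1.2
```

No raw images leave any client in this scheme; the cross-client variation problem the paper targets arises precisely because these locally trained weights come from differently distributed images.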

Yan Zengqiang, Wicaksana Jeffry, Wang Zhiwei, Yang Xin, Cheng Kwang-Ting

2020-Nov-24

General General

Heavy-Tailed Self-Similarity Modeling for Single Image Super Resolution.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

Self-similarity is a prominent characteristic of natural images that can play a major role when it comes to their denoising, restoration or compression. In this paper, we propose a novel probabilistic model that is based on the concept of image patch similarity and applied to the problem of Single Image Super Resolution. Based on this model, we derive a Variational Bayes algorithm, which super-resolves low-resolution images, where the assumed distribution for the quantified similarity between two image patches is heavy-tailed. Moreover, we prove mathematically that the proposed algorithm is both an extended and superior version of the probabilistic Non-Local Means (NLM). Its prime advantage, though, remains that it requires no training. A comparison of the proposed approach with state-of-the-art methods, using various quantitative metrics, shows that it is almost on par, for images depicting rural themes and in terms of the Structural Similarity Index (SSIM), with the best-performing methods that rely on trained deep learning models. On the other hand, it is clearly inferior to them for urban-themed images and in terms of all metrics, especially the Mean-Squared-Error (MSE). In addition, qualitative evaluation of the proposed approach is performed using the Perceptual Index metric, which has been introduced to better mimic human perception of image quality. This evaluation favors our approach when compared to the best-performing method that requires no training, even though they perform equally in quantitative terms, reinforcing the argument that MSE is not always an accurate metric for image quality.

Chantas Giannis, Nikolopoulos Spiros, Kompatsiaris Ioannis

2020-Nov-24

General General

A Pairwise Attentive Adversarial Spatiotemporal Network for Cross-domain Few-shot Action Recognition-R2.

In IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

Action recognition is a popular research topic in the computer vision and machine learning domains. Although many action recognition methods have been proposed, only a few researchers have focused on cross-domain few-shot action recognition, which must often be performed in real security surveillance. Since the problems of action recognition, domain adaptation, and few-shot learning need to be simultaneously solved, the cross-domain few-shot action recognition task is a challenging problem. To solve these issues, in this work, we develop a novel end-to-end pairwise attentive adversarial spatiotemporal network (PASTN) to perform the cross-domain few-shot action recognition task, in which spatiotemporal information acquisition, few-shot learning, and video domain adaptation are realised in a unified framework. Specifically, the Resnet-50 network is selected as the backbone of the PASTN, and a 3D convolution block is embedded in the top layer of the 2D CNN (ResNet-50) to capture the spatiotemporal representations. Moreover, a novel attentive adversarial network architecture is designed to align the spatiotemporal dynamics actions with higher domain discrepancies. In addition, the pairwise margin discrimination loss is designed for the pairwise network architecture to improve the discrimination of the learned domain-invariant spatiotemporal feature. The results of extensive experiments performed on three public benchmarks of the cross-domain action recognition datasets, including SDAI Action I, SDAI Action II and UCF50-OlympicSport, demonstrate that the proposed PASTN can significantly outperform the state-of-the-art cross-domain action recognition methods in terms of both the accuracy and computational time.
Even when only two labelled training samples per category are considered in the office1 scenario of the SDAI Action I dataset, the accuracy of the PASTN is improved by 6.1%, 10.9%, 16.8%, and 14% compared to that of the TA3N, TemporalPooling, I3D, and P3D methods, respectively.

Gao Zan, Guo Leming, Guan Weili, Liu Anan, Ren Tongwei, Chen Shengyong

2020-Nov-24

Radiology Radiology

SPectroscOpic prediction of bRain Tumours (SPORT): study protocol of a prospective imaging trial.

In BMC medical imaging

BACKGROUND : The revised 2016 WHO-Classification of CNS-tumours now integrates molecular information of glial brain tumours for accurate diagnosis as well as for the development of targeted therapies. In this prospective study, our aim is to investigate the predictive value of MR-spectroscopy in order to establish a solid preoperative molecular stratification algorithm of these tumours. We will process a 1H MR-spectroscopy sequence within a radiomics analytics pipeline.

METHODS : Patients treated at our institution with WHO-Grade II, III and IV gliomas will receive preoperative anatomical imaging (T2- and T1-weighted imaging with and without contrast enhancement) and proton MR spectroscopy (MRS) using chemical shift imaging (CSI) (5 × 5 × 15 mm3 voxel size). Tumour regions will be segmented and co-registered to corresponding spectroscopic voxels. Raw signals will be processed by a deep-learning approach for identifying patterns in metabolic data that provide information with respect to the histological diagnosis, as well as patient characteristics and genomic data such as targeted sequencing and transcriptional data.

DISCUSSION : By imaging the metabolic profile of a glioma using a customized chemical shift 1H MR spectroscopy sequence and by processing the metabolic profiles with a machine learning tool we intend to non-invasively uncover the genetic signature of gliomas. This work-up will support surgical and oncological decisions to improve personalized tumour treatment.

TRIAL REGISTRATION : This study was initially registered under another name and was later retrospectively registered under the current name at the German Clinical Trials Register (DRKS) under DRKS00019855.

Franco Pamela, Würtemberger Urs, Dacca Karam, Hübschle Irene, Beck Jürgen, Schnell Oliver, Mader Irina, Binder Harald, Urbach Horst, Heiland Dieter Henrik

2020-Nov-23

1H-MRS, Chemical shift imaging, MR spectroscopy, MRI, MRS, Magnetic resonance spectroscopy, Neuroradiology, Neurosurgery, Radiogenomics

General General

CNN-based transfer learning-BiLSTM network: A novel approach for COVID-19 infection detection.

In Applied soft computing

Coronavirus disease 2019 (COVID-19), which emerged in Wuhan, China in 2019 and has spread rapidly all over the world since the beginning of 2020, has infected millions of people and caused many deaths. For this pandemic, which is still in effect, mobilization has started all over the world, and various restrictions and precautions have been taken to prevent the spread of this disease. In addition, infected people must be identified in order to control the infection. However, due to the inadequate number of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest computed tomography (CT) has become a popular tool to assist the diagnosis of COVID-19. In this study, two deep learning architectures have been proposed that automatically detect positive COVID-19 cases using chest CT images. Lung segmentation (preprocessing) in CT images, which are given as input to these proposed architectures, is performed automatically with Artificial Neural Networks (ANN). Since both architectures contain the AlexNet architecture, the proposed method is a transfer-learning application. However, the second proposed architecture is a hybrid structure, as it contains a Bidirectional Long Short-Term Memory (BiLSTM) layer, which also takes temporal properties into account. While the COVID-19 classification accuracy of the first architecture is 98.14%, this value is 98.70% for the second, hybrid architecture. The results show that the proposed architectures achieve outstanding success in infection detection, and this study therefore contributes to previous studies in terms of both deep architectural design and high classification success.

Aslan Muhammet Fatih, Unlersen Muhammed Fahri, Sabanci Kadir, Durdu Akif

2020-Nov-18

AlexNet, BiLSTM, COVID-19, Hybrid architecture, Transfer learning

Pathology Pathology

Quantifying Explainers of Graph Neural Networks in Computational Pathology

ArXiv Preprint

Explainability of deep learning methods is imperative to facilitate their clinical adoption in digital pathology. However, popular deep learning methods and explainability techniques (explainers) based on pixel-wise processing disregard biological entities' notion, thus complicating comprehension by pathologists. In this work, we address this by adopting biological entity-based graph processing and graph explainers enabling explanations accessible to pathologists. In this context, a major challenge becomes to discern meaningful explainers, particularly in a standardized and quantifiable fashion. To this end, we propose herein a set of novel quantitative metrics based on statistics of class separability using pathologically measurable concepts to characterize graph explainers. We employ the proposed metrics to evaluate three types of graph explainers, namely the layer-wise relevance propagation, gradient-based saliency, and graph pruning approaches, to explain Cell-Graph representations for Breast Cancer Subtyping. The proposed metrics are also applicable in other domains by using domain-specific intuitive concepts. We validate the qualitative and quantitative findings on the BRACS dataset, a large cohort of breast cancer RoIs, by expert pathologists.

Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta-Rodríguez, Florinda Feroce, Anna Maria Anniciello, Tilman Rau, Jean-Philippe Thiran, Maria Gabrani, Orcun Goksel

2020-11-25

General General

Zero-Shot Video Object Segmentation with Co-Attention Siamese Networks.

In IEEE transactions on pattern analysis and machine intelligence ; h5-index 127.0

We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the zero-shot video object segmentation task in a holistic fashion. We exploit the inherent correlation among video frames and incorporate a global co-attention mechanism to further improve the state-of-the-art deep learning based solutions that primarily focus on learning discriminative foreground representations over appearance and motion in short-term temporal segments. The co-attention layers in COSNet provide efficient and competent stages for capturing global correlations and scene context by jointly computing and appending co-attention responses into a joint feature space. COSNet is a unified and end-to-end trainable framework where different co-attention variants can be derived for capturing diverse properties of the learned joint feature space. We train COSNet with pairs (or groups) of video frames, and this naturally augments training data and allows increased learning capacity. During the segmentation stage, the co-attention model encodes useful information by processing multiple reference frames together, which is leveraged to infer the frequently reappearing and salient foreground objects better. Our extensive experiments over three large benchmarks demonstrate that COSNet outperforms the current alternatives by a large margin. Our algorithm implementations have been made publicly available at https://github.com/carrierlxk/COSNet.

Lu Xiankai, Wang Wenguan, Shen Jianbing, Crandall David, Luo Jiebo

2020-Nov-24

Surgery Surgery

Liquid biopsy in head and neck squamous cell carcinoma: circulating tumor cells, circulating tumor DNA, and exosomes.

In Expert review of molecular diagnostics

Introduction: Head and neck squamous cell carcinoma (HNSCC) is one of the most common cancers worldwide. Due to a lack of reliable markers, HNSCC patients are usually diagnosed at a late stage, which leads to a worse outcome. Therefore, it is critical to improve the clinical management of cancer patients. The development of liquid biopsy now enables minimally invasive extraction of molecular information from HNSCCs. Thus, this review aims to outline the clinical value of liquid biopsy in early detection, real-time monitoring, and prognostic evaluation of HNSCC. Areas covered: This comprehensive review focuses on the characteristics as well as clinical applications of three liquid biopsy markers (CTCs, ctDNA, and exosomes) in HNSCC. Moreover, incorporating machine learning and 3D organoid models into the liquid biopsy of HNSCC is promising. Expert opinion: Liquid biopsy provides a noninvasive technique to reflect inter- and intra-lesional heterogeneity through the detection of tumor cells or materials released from primary and secondary tumors. Recently, some evolving technologies have shown the potential to combine with liquid biopsy to improve the clinical management of HNSCC patients.

Yang Wen-Ying, Feng Lin-Fei, Meng Xiang, Chen Ran, Xu Wen-Hua, Zhang Lei, Xu Tao, Hou Jun

2020-Nov-24

Circulating tumor DNA, Circulating tumor cells, Clinical application, Exosomes, Head and neck squamous cell carcinoma, Liquid biopsy

General General

From Memristive Materials to Neural Networks.

In ACS applied materials & interfaces ; h5-index 147.0

Information technologies have been advancing exponentially following Moore's law over the past decades. This has fundamentally changed the ways we work and live. However, further improving data-processing efficiency faces great challenges because of physical and architectural limitations. More powerful computational methodologies are crucial to bridge the technology gap in the post-Moore's-law period. The memristor exhibits promising prospects in information storage, high-performance computing, and artificial intelligence. Since the memristor was theoretically predicted by L. O. Chua in 1971 and experimentally confirmed by HP Laboratories in 2008, it has attracted great attention from researchers worldwide. The intrinsic properties of memristors, such as simple structure, low power consumption, compatibility with the complementary metal oxide-semiconductor (CMOS) process, and the dual functionality of data storage and computation, demonstrate great prospects in many applications. In this review, we cover memristor-relevant computing technologies, from basic materials to in-memory computing and future prospects. First, the materials and mechanisms of the memristor are discussed. Then, we present the development of the memristor in the domains of synapse simulation, in-memory logic computing, deep neural networks (DNNs) and spiking neural networks (SNNs). Finally, the existing technology challenges and the outlook for state-of-the-art applications are discussed.

Guo Tao, Sun Bai, Ranjan Shubham, Jiao Yixuan, Wei Lan, Zhou Y Norman, Wu Yimin A

2020-Nov-24

in-memory logic computing, memristive materials, neural network, neuromorphic computing, synapse

Radiology Radiology

Deep Convolutional Encoder-Decoder algorithm for MRI brain reconstruction.

In Medical & biological engineering & computing ; h5-index 32.0

Compressed sensing magnetic resonance imaging (CS-MRI) is a challenging task, yet it is an efficient technique for fast MRI acquisition that could be highly beneficial for several clinical routines. It can grant better scan quality by reducing the amount of motion artifacts as well as the contrast washout effect. It also offers the possibility of reducing examination cost and patient anxiety. Recently, deep learning (DL) neural networks have been suggested for reconstructing MRI scans while conserving structural details and improving parallel-imaging-based fast MRI. In this paper, we propose a deep convolutional encoder-decoder architecture for CS-MRI reconstruction. Such an architecture bridges the gap between non-learning techniques, which use data from only one image, and approaches that use large training data. The proposed approach is based on an autoencoder architecture divided into two parts: an encoder and a decoder. Both the encoder and the decoder consist essentially of three convolutional blocks. The proposed architecture has been evaluated on two databases: the Hammersmith dataset (for normal scans) and MICCAI 2018 (for pathological MRI). Moreover, we extend our model to cope with noisy pathological MRI scans. The normalized mean square error (NMSE), the peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM) have been adopted as evaluation metrics in order to evaluate the performance of the proposed architecture and to make a comparative study with state-of-the-art reconstruction algorithms. The higher PSNR and SSIM values as well as the lower NMSE values attest that the proposed architecture offers better reconstruction and preserves textural image details. Furthermore, the running time is about 0.8 s, which is suitable for real-time processing. Such results could encourage neurologists to adopt it in their clinical routines. Graphical abstract.

Njeh Ines, Mzoughi Hiba, Ben Slima Mohamed, Ben Hamida Ahmed, Mhiri Chokri, Ben Mahfoudh Kheireddine

2020-Nov-24

Compressed Sensing Magnetic Resonance Imaging (CS-MRI), Convolutional Encoder-Decoder architecture, Deep Learning (DL), Fast MRI, Image reconstruction

General General

Update on benign paroxysmal positional vertigo.

In Journal of neurology

Benign paroxysmal positional vertigo (BPPV) is the most common cause of vertigo worldwide. This review considers recent advances in the diagnosis and management of BPPV including the use of web-based technology and artificial intelligence as well as the evidence supporting the use of vitamin D supplements for patients with BPPV and subnormal serum vitamin D.

Kim Hyo-Jung, Park JaeHan, Kim Ji-Soo

2020-Nov-24

Benign paroxysmal positional vertigo, Dizziness, Vertigo

Radiology Radiology

DeepCOVID-XR: An Artificial Intelligence Algorithm to Detect COVID-19 on Chest Radiographs Trained and Tested on a Large US Clinical Dataset.

In Radiology ; h5-index 91.0

Background There are characteristic findings of Coronavirus Disease 2019 (COVID-19) on chest imaging. An artificial intelligence (AI) algorithm to detect COVID-19 on chest radiographs might be useful for triage or infection control within a hospital setting, but prior reports have been limited by small datasets and/or poor data quality. Purpose To present DeepCOVID-XR, a deep learning AI algorithm for detecting COVID-19 on chest radiographs, trained and tested on a large clinical dataset. Materials and Methods DeepCOVID-XR is an ensemble of convolutional neural networks to detect COVID-19 on frontal chest radiographs using real-time polymerase chain reaction (RT-PCR) as a reference standard. The algorithm was trained and validated on 14,788 images (4,253 COVID-19 positive) from sites across the Northwestern Memorial Healthcare System from February 2020 to April 2020, then tested on 2,214 images (1,192 COVID-19 positive) from a single hold-out institution. Performance of the algorithm was compared with interpretations from 5 experienced thoracic radiologists on 300 random test images using the McNemar test for sensitivity/specificity and DeLong's test for the area under the receiver operating characteristic curve (AUC). Results A total of 5,853 patients (58±19 years, 3,101 women) were evaluated across datasets. On the entire test set, DeepCOVID-XR's accuracy was 83% with an AUC of 0.90. On 300 random test images (134 COVID-19 positive), DeepCOVID-XR's accuracy was 82% compared to individual radiologists (76%-81%) and the consensus of all 5 radiologists (81%). DeepCOVID-XR had a significantly higher sensitivity (71%) than 1 radiologist (60%, p<0.001) and higher specificity (92%) than 2 radiologists (75%, p<0.001; 84% p=0.009). DeepCOVID-XR's AUC was 0.88 compared to the consensus AUC of 0.85 (p=0.13 for comparison). Using the consensus interpretation as the reference standard, DeepCOVID-XR's AUC was 0.95 (0.92-0.98 95%CI). 
Conclusion DeepCOVID-XR, an AI algorithm, detected COVID-19 on chest radiographs with performance similar to a consensus of experienced thoracic radiologists. See also the editorial by van Ginneken.
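The McNemar test used above to compare the algorithm's sensitivity and specificity against individual radiologists depends only on the discordant pairs: cases one reader classified correctly and the other incorrectly. A minimal sketch, with hypothetical counts:

```python
# McNemar chi-square statistic with continuity correction.
# b = cases the algorithm got right and the radiologist got wrong;
# c = the reverse. Concordant cases do not enter the statistic.
def mcnemar_statistic(b, c):
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar_statistic(b=25, c=8)   # (|25 - 8| - 1)^2 / 33
# Compare against the chi-square(1) critical value, 3.84 at p = 0.05.
significant = stat > 3.84
```

The paired design matters here: both readers interpret the same 300 radiographs, so an unpaired proportion test would waste information and misstate the variance.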

Wehbe Ramsey M, Sheng Jiayue, Dutta Shinjan, Chai Siyuan, Dravid Amil, Barutcu Semih, Wu Yunan, Cantrell Donald R, Xiao Nicholas, Allen Bradley D, MacNealy Gregory A, Savas Hatice, Agrawal Rishi, Parekh Nishant, Katsaggelos Aggelos K

2020-Nov-24

Radiology Radiology

Population-Scale CT-based Body Composition Analysis of a Large Outpatient Population Using Deep Learning to Derive Age-, Sex-, and Race-specific Reference Curves.

In Radiology ; h5-index 91.0

Background Although CT-based body composition (BC) metrics may inform disease risk and outcomes, obtaining these metrics has been too resource intensive for large-scale use. Thus, population-wide distributions of BC remain uncertain. Purpose To demonstrate the validity of fully automated, deep learning BC analysis from abdominal CT examinations, to define demographically adjusted BC reference curves, and to illustrate the advantage of use of these curves compared with standard methods, along with their biologic significance in predicting survival. Materials and Methods After external validation and equivalency testing with manual segmentation, a fully automated deep learning BC analysis pipeline was applied to a cross-sectional population cohort that included any outpatient without a cardiovascular disease or cancer who underwent abdominal CT examination at one of three hospitals in 2012. Demographically adjusted population reference curves were generated for each BC area. The z scores derived from these curves were compared with sex-specific thresholds for sarcopenia by using χ2 tests and used to predict 2-year survival in multivariable Cox proportional hazards models that included weight and body mass index (BMI). Results External validation showed excellent correlation (R = 0.99) and equivalency (P < .001) of the fully automated deep learning BC analysis method with manual segmentation. With use of the fully automated BC data from 12 128 outpatients (mean age, 52 years; 6936 [57%] women), age-, race-, and sex-normalized BC reference curves were generated. All BC areas varied significantly with these variables (P < .001 except for subcutaneous fat area vs age [P = .003]). Sex-specific thresholds for sarcopenia demonstrated that age and race bias were not present if z scores derived from the reference curves were used (P < .001). Skeletal muscle area z scores were significantly predictive of 2-year survival (P = .04) in combined models that included BMI. 
Conclusion Fully automated body composition (BC) metrics vary significantly by age, race, and sex. The z scores derived from reference curves for BC parameters better capture the demographic distribution of BC compared with standard methods and can help predict survival. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Summers in this issue.
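The z-score step above is straightforward once a demographically matched reference mean and standard deviation exist. A minimal sketch with a hypothetical reference table (the values and keys are invented, not the paper's):

```python
# Hypothetical demographic reference table for one body-composition metric:
# (sex, age decade) -> (mean skeletal muscle area in cm^2, standard deviation)
REFERENCE = {
    ("F", 50): (110.0, 18.0),
    ("M", 50): (160.0, 22.0),
}

def bc_zscore(value_cm2: float, sex: str, age: int) -> float:
    """z score of a measured area against its (sex, age-decade) reference,
    so that patients are compared with demographically similar peers rather
    than with a single population-wide cutoff."""
    mean, sd = REFERENCE[(sex, (age // 10) * 10)]
    return (value_cm2 - mean) / sd

z = bc_zscore(83.0, "F", 52)  # 1.5 SD below the matched reference mean
```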

Magudia Kirti, Bridge Christopher P, Bay Camden P, Babic Ana, Fintelmann Florian J, Troschel Fabian M, Miskin Nityanand, Wrobel William C, Brais Lauren K, Andriole Katherine P, Wolpin Brian M, Rosenthal Michael H

2020-Nov-24

General General

Neuropsychological assessment could distinguish among different clinical phenotypes of progressive supranuclear palsy: A Machine Learning approach.

In Journal of neuropsychology

Progressive supranuclear palsy (PSP) is a rare, rapidly progressive neurodegenerative disease. Richardson's syndrome (PSP-RS) and predominant parkinsonism (PSP-P) are characterized by a wide range of cognitive and behavioural disturbances, but these variants show similar patterns of cognitive alteration, making differential diagnosis difficult. For this reason, we explored with an Artificial Intelligence approach whether cognitive impairment could differentiate the phenotypes. Forty Parkinson's disease (PD) patients, 25 PSP-P patients, 40 PSP-RS patients, and 34 controls were enrolled, diagnosed according to consensus criteria. Participants were evaluated with a neuropsychological battery covering the cognitive domains. Random Forest models were used to explore the discriminant power of the cognitive tests in distinguishing among the four groups. The classifiers for distinguishing diseases from controls reached high accuracies (86% for PD, 95% for PSP-P, 99% for PSP-RS). Regarding the differential diagnosis, PD was discriminated from PSP-P with 91% accuracy (important variables: HAMA, MMSE, JLO, RAVLT_I, BDI-II) and from PSP-RS with 92% accuracy (important variables: COWAT, JLO, FAB). PSP-P was distinguished from PSP-RS with 84% accuracy (important variables: JLO, WCFST, RAVLT_I, Digit span_F). This study revealed that PSP-P, PSP-RS and PD have distinctive cognitive deficits compared with healthy subjects, from which they were discriminated with excellent accuracy. Moreover, high accuracies were also reached in the differential diagnoses. Most importantly, Machine Learning proved useful in helping the clinical neuropsychologist choose the most appropriate neuropsychological tests for the cognitive evaluation of PSP patients.
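A Random Forest classifier with per-feature importance ranking, of the kind used above to single out the discriminating tests, can be sketched as follows (scikit-learn is assumed available; synthetic data stand in for the neuropsychological scores):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a neuropsychological battery: 8 "tests", 2 groups
X, y = make_classification(n_samples=200, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)  # held-out classification accuracy

# Rank the "tests" by Gini importance, analogous to the "important
# variables" (e.g., JLO, MMSE) reported for each pairwise classifier above
ranking = np.argsort(rf.feature_importances_)[::-1]
```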

Vaccaro Maria Grazia, Sarica Alessia, Quattrone Andrea, Chiriaco Carmelina, Salsone Maria, Morelli Maurizio, Quattrone Aldo

2020-Nov-24

cognitive profile, machine learning, neuropsychological, progressive supranuclear palsy, random forest

Surgery Surgery

Artificial Intelligence, Machine Learning and Calculation of Intraocular Lens Power.

In Klinische Monatsblatter fur Augenheilkunde

BACKGROUND AND PURPOSE : In the last decade, artificial intelligence and machine learning algorithms have become increasingly established for the screening and detection of diseases and pathologies, as well as for describing interactions between measures where classical methods are too complex or fail. The purpose of this paper is to model the measured postoperative position of an intraocular lens implant after cataract surgery, based on preoperatively assessed biometric effect sizes, using machine learning techniques.

PATIENTS AND METHODS : In this study, we enrolled 249 eyes of patients who underwent elective cataract surgery at Augenklinik Castrop-Rauxel. Eyes were measured preoperatively with the IOLMaster 700 (Carl Zeiss Meditec), as well as preoperatively and postoperatively with the Casia 2 OCT (Tomey). Based on the preoperative effect sizes (axial length, corneal thickness, internal anterior chamber depth, thickness of the crystalline lens, mean corneal radius and corneal diameter), a selection of 17 machine learning algorithms was tested for prediction performance in calculating the internal anterior chamber depth (AQD_post) and the axial position of the equatorial plane of the lens in the pseudophakic eye (LEQ_post).

RESULTS : The 17 machine learning algorithms (from four families) varied in root mean squared/mean absolute prediction error between 0.187/0.139 mm and 0.255/0.204 mm (AQD_post), and between 0.183/0.135 mm and 0.253/0.206 mm (LEQ_post), using 5-fold cross-validation. The Gaussian Process Regression Model using an exponential kernel showed the best performance in terms of root mean squared error for prediction of both AQD_post and LEQ_post. When the entire dataset was used (without splitting into training and validation data), a simple multivariate linear regression model showed root mean squared prediction errors for AQD_post/LEQ_post of 0.188/0.187 mm, vs. 0.166/0.159 mm for the best-performing Gaussian Process Regression Model.

CONCLUSION : In this paper we show the principles of supervised machine learning applied to predicting the measured postoperative axial position of the intraocular lens. Based on our limited data pool and the algorithms used in our setting, the benefit of machine learning algorithms appears limited compared to a standard multivariate regression model.
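The comparison described above — a multivariate linear model vs. a Gaussian Process Regression with an exponential kernel — can be sketched as below. The exponential kernel is the Matérn kernel with ν = 0.5; scikit-learn is assumed available, the data are synthetic stand-ins for the biometric predictors, and the noise level `alpha` is an illustrative choice, not a value from the study:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(249, 6))  # stand-ins for the 6 preoperative predictors
# Mostly linear target with a mild nonlinearity and measurement noise
y = X @ rng.normal(size=6) + 0.3 * np.sin(3 * X[:, 0]) \
    + rng.normal(scale=0.1, size=249)

def cv_rmse(model):
    """5-fold cross-validated root mean squared prediction error."""
    neg_mse = cross_val_score(model, X, y, cv=5,
                              scoring="neg_mean_squared_error")
    return float(np.sqrt(-neg_mse).mean())

lin_rmse = cv_rmse(LinearRegression())
# Matern with nu=0.5 is the exponential kernel; alpha models observation noise
gpr_rmse = cv_rmse(GaussianProcessRegressor(kernel=Matern(nu=0.5),
                                            alpha=1e-2, normalize_y=True))
```

On data this close to linear, the two models land near each other, which mirrors the paper's conclusion that the gain over plain regression can be modest.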

Langenbucher Achim, Szentmáry Nóra, Wendelstein Jascha, Hoffmann Peter

2020-Nov-23

General General

Adeno-associated virus characterization for cargo discrimination through nanopore responsiveness.

In Nanoscale ; h5-index 139.0

Solid-state nanopore (SSN)-based analytical methods have found abundant use in genomics and proteomics with fledgling contributions to virology - a clinically critical field with emphasis on both infectious and designer-drug carriers. Here we demonstrate the ability of SSN to successfully discriminate adeno-associated viruses (AAVs) based on their genetic cargo [double-stranded DNA (AAVdsDNA), single-stranded DNA (AAVssDNA) or none (AAVempty)], without digestion steps, through nanopore-induced electro-deformation (characterized by relative current change; ΔI/I0). The deformation order was found to be AAVempty > AAVssDNA > AAVdsDNA. A deep learning algorithm was developed by integrating a support vector machine with an existing neural network, which successfully classified AAVs from SSN resistive pulses (characteristic of genetic cargo) with >95% accuracy - a potential tool for clinical and biomedical applications. Subsequently, the presence of AAVempty in spiked AAVdsDNA was flagged using the ΔI/I0 distribution characteristics of the two types for mixtures composed of ∼75 : 25% and ∼40 : 60% (in concentration) AAVempty : AAVdsDNA.
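The discriminating quantity above is the relative current change ΔI/I0 of a resistive pulse. A toy sketch with invented cutoffs (the study's actual discrimination uses the SVM/neural-network classifier described above, not fixed thresholds):

```python
def relative_blockade(i_open: float, i_blocked: float) -> float:
    """ΔI/I0 for a resistive pulse: open-pore (baseline) current minus the
    current during translocation, normalized by the baseline current."""
    return (i_open - i_blocked) / i_open

def classify_aav(ratio: float) -> str:
    """Bin a pulse by hypothetical ΔI/I0 cutoffs, following the reported
    deformation order AAVempty > AAVssDNA > AAVdsDNA (thresholds invented)."""
    if ratio >= 0.30:
        return "AAVempty"
    if ratio >= 0.20:
        return "AAVssDNA"
    return "AAVdsDNA"

label = classify_aav(relative_blockade(4.0, 2.6))  # ΔI/I0 = 0.35
```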

Karawdeniya Buddini Iroshika, Bandara Y M Nuwan D Y, Khan Aminul Islam, Chen Wei Tong, Vu Hoang-Anh, Morshed Adnan, Suh Junghae, Dutta Prashanta, Kim Min Jun

2020-Nov-24

General General

Autistic traits are associated with the functional connectivity of between-but not within-attention systems in the general population.

In BMC neuroscience

BACKGROUND : Previous studies have demonstrated that individuals with autism spectrum disorder (ASD) exhibit dysfunction in the three attention systems (i.e., alerting, orienting, and executive control) as well as atypical relationships among these systems. Additionally, other studies have reported that individuals with subclinical but high levels of autistic traits show similar attentional tendencies to those observed in ASD. Based on these findings, it was hypothesized that autistic traits would affect the functions and relationships of the three attention systems in a general population. Resting-state functional magnetic resonance imaging (fMRI) was performed in 119 healthy adults to investigate relationships between autistic traits and within- and between-system functional connectivity (FC) among the three attention systems. Twenty-six regions of interest that were defined as components of the three attention systems by a previous task-based fMRI study were examined in terms of within- and between-system FC. We assessed autistic traits using the Autism-Spectrum Quotient.

RESULTS : Correlational analyses revealed that autistic traits were significantly correlated with between-system FC, but not with within-system FC.

CONCLUSIONS : Our results imply that a high autistic trait level, even when subclinical, is associated with the way the three attention systems interact.
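The correlational analysis above reduces to Pearson's r between per-participant trait scores (the AQ) and functional-connectivity values. A minimal stdlib sketch with made-up numbers:

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between paired samples, e.g., each participant's
    AQ score vs. one between-system functional-connectivity value."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustration: perfectly linear paired data give r = 1
r = pearson_r([10, 20, 30, 40], [0.1, 0.2, 0.3, 0.4])
```

In practice such correlations across 26 regions of interest would also need correction for multiple comparisons, as is standard in FC studies.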

Yoshimura Sayaka, Kobayashi Kei, Ueno Tsukasa, Miyagi Takashi, Oishi Naoya, Murai Toshiya, Fujiwara Hironobu

2020-Nov-23

Attention, Attention network, Autistic traits, Functional connectivity, Resting-state functional magnetic resonance imaging

General General

Impact of Covid19 on electricity load in Haryana (India).

In International journal of energy research

The whole world is battling the Corona Virus Disease (COVID-19) pandemic and trying its best to stop the spread. To avoid the spread, several countries, such as China, Italy, Spain and the United States, took strict measures such as nationwide lockdowns or cordoning off areas suspected of community spread. Taking cues from these foreign counterparts, the government of India decided on a nationwide full lockdown on March 25th, which was further extended until May 4th, 2020 (47 days of full lockdown). In view of the situation, the government of India then pushed the lockdown further with eased curbs: the nation was divided into green, orange and red zones, with rapid testing of citizens in containment areas, mandatory wearing of masks and social distancing, among other measures. The outbreak of the pandemic has led to an economic shock to the world of a magnitude not experienced in decades. Moreover, it brought great uncertainty to the worldwide electricity sector: to slow the spread of the virus, many countries issued restrictions, including the closure of malls and educational institutions, halting trains, suspending flights, implementing partial or full lockdowns, and requiring employees to work from home. In this paper, the impact on electricity consumption in the state of Haryana (India) is analysed using conventional machine learning algorithms and an artificial neural network, and the electricity load is forecast one week ahead so as to aid the electricity board in knowing an area's consumption beforehand and restricting electricity production as per requirement. This will help the power system secure electricity supply and scheduling and reduce waste, since electricity is difficult to store. 
For this, datasets from the regional electricity boards of Haryana, that is, Dakshin Haryana Bijli Vitran Nigam and Uttar Haryana Bijli Vitran Nigam, were analysed and the state's electricity loads were predicted using Python; the results show that the artificial neural network outperforms the conventional machine learning models.
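The paper's models are not reproduced here, but any week-ahead load forecast has a natural naive baseline — repeat the same hour from the previous week — that a trained model must beat. A stdlib sketch on invented hourly loads:

```python
import math

def persistence_forecast(history: list[float], horizon: int = 168) -> list[float]:
    """Naive week-ahead load forecast: repeat the last 168 hourly values
    (one week). Any trained model should beat this baseline."""
    return history[-168:][:horizon]

def rmse(actual: list[float], predicted: list[float]) -> float:
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

# Invented two weeks of hourly load (MW) with a pure daily cycle; because the
# series is exactly periodic, the persistence baseline is perfect here
load = [100 + 20 * math.sin(2 * math.pi * h / 24) for h in range(336)]
forecast = persistence_forecast(load[:168])
error = rmse(load[168:], forecast)
```

Real load data carry weather and holiday effects the baseline cannot track, which is the gap the conventional models and the artificial neural network compete to close.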

Gulati Payal, Kumar Anil, Bhardwaj Raghav

2020-Oct-12

Pathology Pathology

Supercharging Imbalanced Data Learning With Causal Representation Transfer

ArXiv Preprint

Dealing with severe class imbalance poses a major challenge for real-world applications, especially when the accurate classification and generalization of minority classes is of primary interest. In computer vision, learning from long-tailed datasets is a recurring theme, especially for natural image datasets. While existing solutions mostly appeal to sampling or weighting adjustments to alleviate the pathological imbalance, or impose inductive bias to prioritize non-spurious associations, we take a novel perspective to promote sample efficiency and model generalization based on the invariance principles of causality. Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions. Such a causal assumption enables efficient knowledge transfer from the dominant classes to their under-represented counterparts, even if the respective feature distributions show apparent disparities. This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes. Our development is orthogonal to existing extreme classification techniques and thus can be seamlessly integrated. The utility of our proposal is validated with an extensive set of synthetic and real-world computer vision tasks against SOTA solutions.

Junya Chen, Zidi Xiu, Benjamin Goldstein, Ricardo Henao, Lawrence Carin, Chenyang Tao

2020-11-25

General General

Deep Transfer Learning for COVID-19 Prediction: Case Study for Limited Data Problems.

In Current medical imaging

OBJECTIVE : Automatic prediction of COVID-19 using deep convolutional neural network based pre-trained transfer models and chest X-ray images.

METHOD : This research employs the advantages of computer vision and medical image analysis to develop an automated model that has the clinical potential for early detection of the disease. Using Deep Learning models, the research aims at evaluating the effectiveness and accuracy of different convolutional neural networks models in the automatic diagnosis of COVID-19 from X-ray images as compared to diagnosis performed by experts in the medical community.

RESULT : Because the dataset available for COVID-19 is still limited, the best model to use is InceptionNetV3. Performance results show that the InceptionNetV3 model yielded the highest accuracy, 98.63% (with data augmentation) and 98.90% (without data augmentation), among the three models designed. However, as the dataset grows, Inception ResNetV2 and NASNetlarge will do a better job of classification. All the networks tend to overfit when data augmentation is not used; this is due to the small amount of data used for training and validation.

CONCLUSION : A deep transfer learning approach is proposed to detect COVID-19 automatically from chest X-rays by training on X-ray images from both COVID-19 patients and people with normal chest X-rays. The study aims to help doctors make decisions in clinical practice, given the model's high performance and effectiveness, and it also gives insight into how transfer learning was used to automatically detect COVID-19.

Albahli Saleh, Albattah Waleed

2020-Nov-23

CNN, Deep transfer learning, X-ray, coronavirus, inceptionetv3, inceptionresnetv2

General General

Statistical Characterization of the Morphologies of Nanoparticles through Machine Learning Based Electron Microscopy Image Analysis.

In ACS nano ; h5-index 203.0

Although transmission electron microscopy (TEM) may be one of the most efficient techniques available for studying the morphological characteristics of nanoparticles, analyzing them quantitatively in a statistical manner is exceedingly difficult. Herein, we report a method for mass-throughput analysis of the morphologies of nanoparticles by applying a genetic algorithm to an image analysis technique. The proposed method enables the analysis of over 150,000 nanoparticles with a high precision of 99.75% and a low false discovery rate of 0.25%. Furthermore, we clustered nanoparticles with similar morphological shapes into several groups for diverse statistical analyses. We determined that at least 1,500 nanoparticles are necessary to represent the total population of nanoparticles at a 95% credible interval. In addition, the number of TEM measurements and the average number of nanoparticles in each TEM image should be considered to ensure a satisfactory representation of nanoparticles using TEM images. Moreover, the statistical distribution of polydisperse nanoparticles plays a key role in accurately estimating their optical properties. We expect this method to become a powerful tool and aid in expanding nanoparticle-related research into the statistical domain for use in big data analysis.
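The ~1,500-particle figure above can be sanity-checked against the standard margin-of-error calculation for a mean, n ≈ (1.96·σ/E)². A stdlib sketch with illustrative numbers (σ and E below are invented, not the paper's):

```python
import math

def required_sample_size(sigma: float, margin: float, z: float = 1.96) -> int:
    """Particles needed so that the 95% interval for the mean size is within
    ±margin, assuming a population standard deviation of sigma."""
    return math.ceil((z * sigma / margin) ** 2)

# Illustration: a 10 nm size spread, tolerating ±0.5 nm on the mean size,
# yields a requirement on the order of 1,500 particles
n = required_sample_size(10.0, 0.5)
```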

Lee Byoungsang, Yoon Seokyoung, Lee Jin Woong, Kim Yunchul, Chang Junhyuck, Yun Jaesub, Ro Jae Chul, Lee Jong-Seok, Lee Jung Heon

2020-Nov-24

big data, image analysis, machine learning, morphological properties, statistics, transmission electron microscope (TEM)

General General

A Review of Piezoelectric and Magnetostrictive Biosensor Materials for Detection of COVID-19 and Other Viruses.

In Advanced materials (Deerfield Beach, Fla.)

The spread of the severe acute respiratory syndrome coronavirus has changed the lives of people around the world with a huge impact on economies and societies. The development of wearable sensors that can continuously monitor the environment for viruses may become an important research area. Here, the state of the art of research on biosensor materials for virus detection is reviewed. A general description of the principles for virus detection is included, along with a critique of the experimental work dedicated to various virus sensors, and a summary of their detection limitatio