Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

General

Machine learning in the electrocardiogram.

In Journal of electrocardiology

The electrocardiogram (ECG) is the most widely used diagnostic tool for recording the electrical activity of the heart, and its use in identifying markers for early diagnosis and detection is therefore of paramount importance. In recent years, the huge increase in electronic health records containing systematised collections of different types of digitalised medical data, together with new tools to analyse this large amount of data efficiently, has revived the field of machine learning in healthcare innovation. This review describes the most recent machine learning-based systems applied to the electrocardiogram, as well as the pros and cons of these techniques. Machine learning, including deep learning, has been shown to be a powerful tool for aiding clinicians in patient screening and risk stratification tasks. However, these methods do not provide the physiological basis of their classification outcomes. Computational modelling and simulation can help in the interpretation and understanding of key physiologically meaningful ECG biomarkers extracted by machine learning techniques.

Mincholé Ana, Camps Julià, Lyon Aurore, Rodríguez Blanca

2019-Aug-08

General

Automatic monitoring system for individual dairy cows based on a deep learning framework that provides identification via body parts and estimation of body condition score.

In Journal of dairy science

Body condition score (BCS) is a common tool for indirectly estimating the mobilization of energy reserves in the fat and muscle of cattle, and it meets the requirements of animal welfare and precision livestock farming for the effective monitoring of individual animals. However, previous studies on automatic BCS systems have used manual scoring for data collection, and traditional image extraction methods have limited model accuracy. In addition, the radio frequency identification device systems commonly used in ranching have the disadvantages of misreadings and damage to bovine bodies. Therefore, the aim of this research was to develop and validate an automatic system for identifying individuals and assessing BCS using a deep learning framework. This work developed a linear regression model of BCS using ultrasound backfat thickness to determine BCS for the training sets, and tested a system based on convolutional neural networks with 3 channels (depth, gray, and phase congruency) to analyze the back images of 686 cows. After analysing image model performance, online verification was used to evaluate the accuracy and precision of the system. The results showed that the selected linear regression model had a high coefficient of determination (0.976), and the correlation coefficient between manual BCS and ultrasonic BCS was 0.94. Although the overall accuracy of the BCS estimations was high (0.45, 0.77, and 0.98 within 0, 0.25, and 0.5 unit, respectively), validation for actual BCS values from 3.25 to 3.5 was weak (F1 scores of only 0.6 and 0.57, respectively, within the 0.25-unit range). Overall, individual identification and BCS assessment performed well in the online measurement, with accuracies of 0.937 and 0.409, respectively. A system for individual identification and BCS assessment was thus developed, and a convolutional neural network using depth, gray, and phase congruency channels to interpret image features exhibited advantages for monitoring thin cows.
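The first step the abstract describes, calibrating BCS labels against ultrasound backfat thickness with a linear regression, can be sketched as follows. The data and the assumed slope/intercept below are simulated for illustration only; the paper reports a coefficient of determination of 0.976 for its fitted model but does not give the coefficients:

```python
import numpy as np

# Hypothetical sketch: fit BCS = a + b * backfat thickness (BFT, mm) by
# ordinary least squares to generate reference labels for a training set.
rng = np.random.default_rng(0)
bft_mm = rng.uniform(5.0, 30.0, size=200)             # simulated backfat thickness
bcs_true = 2.0 + 0.06 * bft_mm                        # assumed linear relation
bcs_obs = bcs_true + rng.normal(0.0, 0.05, size=200)  # manual-scoring noise

# Least squares via the design matrix [1, BFT]
X = np.column_stack([np.ones_like(bft_mm), bft_mm])
coef, *_ = np.linalg.lstsq(X, bcs_obs, rcond=None)
intercept, slope = coef

# Coefficient of determination (R^2) of the fit
pred = X @ coef
ss_res = np.sum((bcs_obs - pred) ** 2)
ss_tot = np.sum((bcs_obs - bcs_obs.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

With low scoring noise, the recovered slope and intercept approach the assumed relation and R-squared is close to 1, mirroring the strong linear fit the study reports.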

Yukun Sun, Pengju Huo, Yujie Wang, Ziqi Cui, Yang Li, Baisheng Dai, Runze Li, Yonggen Zhang

2019-Sep-11

backfat thickness, body condition score, convolutional neural network, individual identification

Radiology

Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice.

In Clinical radiology

AIM : To test the diagnostic performance of a deep learning-based system for the detection of clinically significant pulmonary nodules/masses on chest radiographs.

MATERIALS AND METHODS : Using a retrospective study of 100 patients (47 with clinically significant pulmonary nodules/masses and 53 control subjects without pulmonary nodules), two radiologists verified clinically significant pulmonary nodules/masses according to chest computed tomography (CT) findings. Computer-aided diagnosis (CAD) software using a deep-learning approach was used to detect pulmonary nodules/masses and to determine the diagnostic performance of four algorithms (heat map, abnormal probability, nodule probability, and mass probability).

RESULTS : A total of 100 cases were included in the analysis. Among the four algorithms, the mass probability algorithm achieved 76.6% sensitivity (36/47, 11 false negatives) and 88.68% specificity (47/53, six false positives) in the detection of pulmonary nodules/masses at the optimal probability score cut-off of 0.2884. Compared with the other three algorithms, the mass probability algorithm had the best predictive ability for pulmonary nodule/mass detection at this cut-off (AUC(Mass) 0.916 versus AUC(Heat map) 0.682, p<0.001; AUC(Mass) 0.916 versus AUC(Abnormal) 0.810, p=0.002; AUC(Mass) 0.916 versus AUC(Nodule) 0.813, p=0.014).

CONCLUSION : The deep-learning-based computer-aided diagnosis system will likely play a vital role in the early detection and diagnosis of pulmonary nodules/masses on chest radiographs. In future applications, these algorithms could support the triage workflow via double reading to improve sensitivity and specificity during the diagnostic process.
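The operating point reported in the results (sensitivity 36/47, specificity 47/53 at a probability cut-off of 0.2884) follows directly from counting scores on either side of the threshold. The score arrays below are fabricated so that the confusion counts match the reported figures; they are not the study's data:

```python
# Sketch: sensitivity/specificity of a probability-score classifier at a
# fixed cut-off, matching the mass-probability operating point above.
CUTOFF = 0.2884

# 47 nodule/mass cases: 36 score above the cut-off, 11 below (false negatives)
positives = [0.9] * 36 + [0.1] * 11
# 53 controls: 47 score below the cut-off, 6 above (false positives)
negatives = [0.05] * 47 + [0.8] * 6

tp = sum(s >= CUTOFF for s in positives)   # true positives
fn = len(positives) - tp                   # false negatives
tn = sum(s < CUTOFF for s in negatives)    # true negatives
fp = len(negatives) - tn                   # false positives

sensitivity = tp / (tp + fn)   # 36/47 ~= 0.766
specificity = tn / (tn + fp)   # 47/53 ~= 0.887
```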

Liang C-H, Liu Y-C, Wu M-T, Garcia-Castro F, Alberich-Bayarri A, Wu F-Z

2019-Sep-11

General

Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.

In Artificial intelligence in medicine

OBJECTIVE : The neonatal period of a child is considered the most crucial phase of its physical development and future health. According to the World Health Organization, India has the highest number of pre-term births [1], with over 3.5 million babies born prematurely; up to 40% of them have low birth weights and are highly prone to a multitude of diseases such as jaundice, sepsis, apnea, and other metabolic disorders. Apnea is the primary concern for caretakers of neonates in intensive care units. Real-time medical data are known to be noisy and nonlinear; to address the resultant complexity in the classification and prediction of diseases, learning models need to be optimized to maximize predictive performance. Our study attempts to optimize neural network architectures to predict the occurrence of apneic episodes in neonates after the first week of admission to the Neonatal Intensive Care Unit (NICU). The primary contribution of this study is the formulation and description of a set of generic steps for selecting model-specific, training, and hyper-parametric optimization algorithms, as well as model architectures, for optimal predictive performance on complex and noisy medical datasets.

METHODS : As the data used for the study are inherently complex and noisy, Kernel Principal Component Analysis (Kernel PCA) is used to reduce dataset dimensionality for analyses such as interpretation and visualization of the dataset. Hyper-parametric and parametric optimizations in different categories are considered, including learning rate updater algorithms, regularization methods, activation functions, gradient descent algorithms, and depth of the network, based on their performance on the validation set, to obtain a holistically optimized neural network that best models the given complex medical dataset. Deep neural network architectures such as deep multilayer perceptrons, stacked auto-encoders, and deep belief networks are employed to model the dataset, and their performance is compared to that of the optimized neural network obtained from the parametric exploration. Further, the results are compared with Support Vector Machine (SVM), K Nearest Neighbor, Decision Tree (DT), and Random Forest (RF) algorithms.
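The kernel PCA step described above can be sketched in a few lines. This is a generic RBF-kernel implementation with invented data and parameters, not the study's pipeline; the `gamma` value and component count are illustrative:

```python
import numpy as np

# Minimal kernel PCA sketch (RBF kernel): the nonlinear
# dimensionality-reduction step used before visualization/interpretation.
def kernel_pca(X, n_components=2, gamma=0.5):
    # Pairwise squared Euclidean distances
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)                      # RBF kernel matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # pick the leading components
    # Project training points onto the leading components
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))   # stand-in for the noisy clinical features
Z = kernel_pca(X, n_components=2)
```

The projected coordinates `Z` could then feed the visualization or the downstream classifier comparison described in the abstract.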

RESULTS : The results indicate that the optimized eight-layer Multilayer Perceptron (MLP) model, with Adam decay and Stochastic Gradient Descent (AUC 0.82), can outperform the conventional machine learning models and performs comparably to the deep auto-encoder model (AUC 0.83) in predicting the presence of apnea in neonates.

CONCLUSION : The study shows that an MLP model can achieve significant improvements in predictive performance through the proposed step-wise optimization. The optimized MLP proved to be as accurate as deep neural network models such as deep belief networks and deep auto-encoders on noisy and nonlinear data sets, and outperformed all conventional models, i.e., the Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbor, and Random Forest (RF) algorithms. The generic nature of the proposed step-wise optimization provides a framework for optimizing neural networks on such complex nonlinear datasets. The investigated models can serve neonatologists as a diagnostic aid.

Shirwaikar Rudresh Deepak, Acharya U Dinesh, Makkithaya Krishnamoorthi, M Surulivelrajan, Srivastava Shikhar, Lewis U Leslie Edward S

2019-Jul

Deep autoencoders, Deep belief networks, Deep network architectures, Multi-layer perceptron, Optimizing neural network

Radiology

Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks.

In Artificial intelligence in medicine

Manual annotation is considered the "gold standard" in medical imaging analysis. However, medical imaging datasets that include expert manual segmentation are scarce, as this step is time-consuming and therefore expensive. Moreover, single-rater manual annotation is most often used in data-driven approaches, biasing the network towards that single expert. In this work, we propose a CNN for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming the consensus of a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with "silver standard" masks as input, and (2) implementing a tri-planar method using parallel 2D U-Net-based convolutional neural networks (CNNs), referred to as CONSNet. This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis using three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results showed that we outperformed (i.e., achieved larger Dice coefficients than) the current state-of-the-art skull-stripping methods without using gold standard annotation for the CNN training stage. CONSNet is the first deep learning approach that is fully trained using silver standard data and is, thus, more generalizable. Using these masks, we eliminate the cost of manual annotation, decrease inter-/intra-rater variability, and avoid the CNN segmentation overfitting towards one specific manual annotation guideline that can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume on a modern high-end GPU; in contrast, many competitive methods have processing times on the order of minutes.
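The paper fuses the eight automatic masks with STAPLE, which weights each rater by its estimated performance. As a simplified stand-in, the consensus idea can be illustrated with a per-voxel majority vote over binary masks; the random masks and volume size below are invented for the sketch:

```python
import numpy as np

# Simplified consensus sketch: fuse several binary brain masks by per-voxel
# majority vote. STAPLE (used in the paper) additionally estimates each
# method's sensitivity/specificity; this vote is only a rough approximation.
rng = np.random.default_rng(2)
masks = rng.integers(0, 2, size=(8, 16, 16, 16))   # 8 methods, 16^3 volume

votes = masks.sum(axis=0)                  # per-voxel agreement count (0..8)
consensus = (votes >= 5).astype(np.uint8)  # strict majority of the 8 methods
```

The resulting `consensus` volume plays the role of a "silver standard" label map that a CNN could then be trained against.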

Lucena Oeslle, Souza Roberto, Rittner Letícia, Frayne Richard, Lotufo Roberto

2019-Jul

Convolutional neural network (CNN), Data augmentation, Silver standard masks, Skull-stripping

General

Deep multiphysics: Coupling discrete multiphysics with machine learning to attain self-learning in-silico models replicating human physiology.

In Artificial intelligence in medicine

OBJECTIVES : The objective of this study is to devise a modelling strategy for attaining in-silico models replicating human physiology and, in particular, the activity of the autonomic nervous system.

METHOD : Discrete Multiphysics (a multiphysics modelling technique) and Reinforcement Learning (a machine learning algorithm) are combined to achieve an in-silico model with the ability to self-learn and replicate feedback loops occurring in human physiology. Computational particles, used in Discrete Multiphysics to model biological systems, are associated with (computational) neurons: Reinforcement Learning trains these neurons to behave as they would in real biological systems.

RESULTS : As a benchmark/validation case, we use peristalsis in the oesophagus. Results show that the in-silico model effectively learns by itself how to propel the bolus along the oesophagus.

CONCLUSIONS : The combination of first-principles modelling (e.g. multiphysics) and machine learning (e.g. Reinforcement Learning) represents a powerful new tool for in-silico modelling of human physiology. Biological feedback loops occurring, for instance, in peristaltic or metachronal motion, which until now could not be accounted for in in-silico models, can be tackled by the proposed technique.
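The reinforcement-learning half of this coupling can be illustrated with a toy version of the peristalsis benchmark: a tabular Q-learning agent learns which "muscle segment" to contract to push a bolus along a 1-D oesophagus. The environment, reward, and hyper-parameters below are invented for the sketch; the paper couples the learner to a Discrete Multiphysics particle model rather than this toy grid:

```python
import random

# Toy Q-learning sketch: states are bolus positions 0..N-1 (N-1 is the goal),
# actions contract one of N segments, and contracting the segment at the
# bolus pushes it one step forward. Reward 1 on reaching the end.
N = 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = [[0.0] * N for _ in range(N)]   # Q[state][action]

random.seed(0)
for _ in range(500):                # training episodes
    s = 0
    while s < N - 1:
        # Epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(N)
        else:
            a = max(range(N), key=lambda i: Q[s][i])
        s2 = s + 1 if a == s else s            # physics: only the right segment propels
        r = 1.0 if s2 == N - 1 else 0.0
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: which segment to contract at each position
policy = [max(range(N), key=lambda i: Q[s][i]) for s in range(N - 1)]
```

After training, the greedy policy contracts the segment at the bolus position at every step, i.e., the agent has learned the peristaltic wave by itself, which is the qualitative behaviour the paper reports for its coupled model.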

Alexiadis Alessio

2019-Jul

Coupling first-principles models with machine learning, Discrete multiphysics, Particle-based computational methods, Reinforcement Learning