
General

Ethical Machine Learning in Health

ArXiv Preprint

The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.

Irene Y. Chen, Emma Pierson, Sherri Rose, Shalmali Joshi, Kadija Ferryman, Marzyeh Ghassemi


General

Isolating cost drivers in interstitial lung disease treatment using nonparametric Bayesian methods.

In Biometrical journal. Biometrische Zeitschrift

Mixture modeling is a popular approach to accommodate overdispersion, skewness, and multimodality features that are very common for health care utilization data. However, mixture modeling tends to rely on subjective judgment regarding the appropriate number of mixture components or some hypothesis about how to cluster the data. In this work, we adopt a nonparametric, variational Bayesian approach to allow the model to select the number of components while estimating their parameters. Our model allows for a probabilistic classification of observations into clusters and simultaneous estimation of a Gaussian regression model within each cluster. When we apply this approach to data on patients with interstitial lung disease, we find distinct subgroups of patients with differences in means and variances of health care costs, health and treatment covariates, and relationships between covariates and costs. The subgroups identified are readily interpretable, suggesting that this nonparametric variational approach to inference can discover valid insights into the factors driving treatment costs. Moreover, the learning algorithm we employed is very fast and scalable, which should make the technique accessible for a broad range of applications.
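The core idea above, letting the data choose the number of mixture components via variational inference with a nonparametric prior, can be sketched with scikit-learn's `BayesianGaussianMixture` (a Dirichlet-process mixture similar in spirit to the paper's approach; the data and thresholds below are illustrative, not the authors' code):

```python
# Illustrative sketch: a Dirichlet-process Gaussian mixture prunes unused
# components during variational inference, so the "number of clusters" is
# inferred rather than fixed in advance. Synthetic cost data, not the paper's.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two latent patient subgroups with very different cost levels.
costs = np.concatenate([
    rng.normal(2_000, 300, 400),    # low-cost subgroup
    rng.normal(15_000, 2_000, 200), # high-cost subgroup
]).reshape(-1, 1)

# Fit with a generous upper bound on components; the stick-breaking prior
# concentrates weight on the components the data actually supports.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(costs)

# Components with non-negligible weight are the "discovered" clusters.
effective = int((dpgmm.weights_ > 0.01).sum())
print(effective)  # usually 2 for this well-separated toy data
```

In the paper this idea is extended so that each cluster also carries its own Gaussian regression of costs on covariates; the sketch shows only the automatic selection of the number of components.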

Kurz Christoph F, Stafford Seth


Bayesian statistics, health care costs, lung disease, mixture model, nonparametric models, variational Bayes

Radiology

A machine learning approach for magnetic resonance image-based mouse brain modeling and fast computation in controlled cortical impact.

In Medical & biological engineering & computing; h5-index 32.0

Computational modeling of the brain is crucial for the study of traumatic brain injury. An anatomically accurate model with refined details can provide the most accurate computational results. However, computational models with fine mesh details can require prolonged computation time, which impedes their clinical translation. Therefore, a way to construct a model with low computational cost while maintaining accuracy comparable with that of a high-fidelity model is desired. In this study, we constructed magnetic resonance (MR) image-based finite element (FE) models of a mouse brain for simulations of controlled cortical impact (CCI). Anatomical details were preserved by mapping each image voxel to a corresponding FE mesh element. We constructed a super-resolution neural network that produces the computational results of a refined FE model with a mesh size of 70 μm from a coarse FE model with a mesh size of 280 μm. The peak signal-to-noise ratio of the reconstructed results was 33.26 dB, while the computational speed was increased 50-fold. This proof-of-concept study showed that, using machine learning techniques, MR image-based computational modeling can be applied and evaluated in a timely fashion, paving the way for fast FE modeling and computation based on MR images. The results also support potential clinical applications of MR image-based computational modeling of the human brain in scenarios such as brain impact and intervention.

Graphical abstract: MR image-based FE models with different mesh sizes were generated for CCI. The training and testing datasets were computed with 5 different impact locations and 3 different impact velocities. High-resolution strain maps were estimated using a super-resolution neural network with greatly reduced computational cost.
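The 33.26 dB figure reported above is a peak signal-to-noise ratio. As a quick reference, a minimal PSNR implementation (illustrative only; array names and the choice of the reference maximum as the peak are assumptions, not the authors' code):

```python
# Hedged sketch: PSNR in dB, the metric used to score reconstructed
# high-resolution results against the ground-truth fine-mesh solution.
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """PSNR in dB, taking the peak as the reference array's maximum value."""
    mse = np.mean((reference - reconstructed) ** 2)
    if mse == 0:
        return float("inf")  # identical arrays
    peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a "fine-mesh strain map" and a slightly noisy reconstruction.
rng = np.random.default_rng(1)
truth = rng.random((64, 64))
approx = truth + rng.normal(0, 0.01, truth.shape)
print(round(psnr(truth, approx), 1))  # roughly 40 dB for this noise level
```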

Lai Changxin, Chen Yu, Wang Tianyao, Liu Jun, Wang Qian, Du Yiping, Feng Yuan


Controlled cortical impact, Image-based modeling, Mouse model, Neural network, Super resolution

General

A memory optimization method combined with adaptive time-step method for cardiac cell simulation based on multi-GPU.

In Medical & biological engineering & computing; h5-index 32.0

Cardiac electrophysiological simulation is a complex computational process that can be run on a graphics processing unit (GPU) to greatly reduce computational cost. The use of an adaptive time step can further speed up the simulation of heart cells. However, when the adaptive time-step method is applied on GPU, it suffers from synchronization problems that weaken its acceleration. Previous work running on a single GPU with an adaptive time step achieved only 1.5 times (× 1.5) the speed of the fixed time step. This study proposes a memory allocation method that effectively implements the adaptive time-step method on GPU. The proposed method focuses on the stimulus point and the arrangement of potential values in memory in order to achieve optimal storage efficiency. All calculation is implemented on GPU: large matrices such as the potential are arranged in column order, and the cells on the left are stimulated. The Luo-Rudy passive (LR1) and dynamic (LRd) ventricular action potential models are used with adaptive time-step methods, namely the traditional hybrid method (THM) and Chen-Chen-Luo's (CCL) "quadratic adaptive algorithm". When LR1 is solved by the THM or CCL on a single GPU, the acceleration is × 34 and × 75, respectively, compared with the fixed time step. With 2 or 4 GPUs, the acceleration of the THM and CCL is × 34 or × 35 and × 73 or × 75, but it would decrease to × 5 or × 3 and × 20 or × 15 without optimization. For the LRd model, the acceleration reaches × 27 or × 85 when solved by the THM or CCL compared with the fixed time step on multi-GPU, with a linear speed-up versus the number of GPUs. Without optimization, however, the acceleration of the THM and CCL continuously weakens as the number of GPUs increases. A mixed root mean square error (MRMSE) lower than 5% is enforced to ensure the accuracy of the simulation. The results show that the proposed memory arrangement method substantially reduces computational cost and greatly speeds up heart simulation.

Graphical abstract: Acceleration ratio compared with CPU with fixed time step (dt = 0.001 ms).
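The basic payoff of an adaptive time step for cardiac cells is that the step can shrink during the fast action-potential upstroke (large dV/dt) and grow elsewhere. A minimal CPU sketch of this idea, using the FitzHugh-Nagumo model as a toy stand-in for LR1/LRd (all parameters and thresholds here are illustrative; this is not the paper's GPU code or its THM/CCL schemes):

```python
# Illustrative sketch: dV/dt-controlled adaptive forward-Euler time step
# on the FitzHugh-Nagumo model, counting steps to estimate the speed-up
# over a fixed fine time step.

def fhn(v, w, I=0.5, a=0.7, b=0.8, eps=0.08):
    """FitzHugh-Nagumo right-hand side: fast voltage v, slow recovery w."""
    dv = v - v**3 / 3 - w + I
    dw = eps * (v + a - b * w)
    return dv, dw

def simulate(t_end=100.0, dt_min=0.001, dt_max=0.25, thresh=0.5):
    """Integrate to t_end, returning the number of steps taken."""
    v, w, t, steps = -1.0, 1.0, 0.0, 0
    while t < t_end:
        dv, dw = fhn(v, w)
        # Small dt during the fast upstroke, large dt on the plateau:
        dt = dt_min if abs(dv) > thresh else dt_max
        v += dt * dv
        w += dt * dw
        t += dt
        steps += 1
    return steps

fixed_steps = int(100.0 / 0.001)  # fixed fine time step everywhere
adaptive_steps = simulate()
print(fixed_steps / adaptive_steps)  # speed-up factor greater than 1
```

The paper's contribution sits on top of this idea: arranging the potential matrices in column order and placing the stimulus so that divergent step sizes do not force GPU threads to wait on each other.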

Luo Ching-Hsing, Ye Haiyi, Chen Xingji


Adaptive time-step method, Computer simulation, High performance computing, Memory optimization, Ventricular cell

General

Discriminating electrocardiographic responses to His-bundle pacing using machine learning.

In Cardiovascular digital health journal

Background: His-bundle pacing (HBP) has emerged as an alternative to conventional ventricular pacing because of its ability to deliver physiological ventricular activation. Pacing at the His bundle produces different electrocardiographic (ECG) responses: selective His-bundle pacing (S-HBP), non-selective His-bundle pacing (NS-HBP), and myocardium-only capture (MOC). These 3 capture types must be distinguished from each other, which can be challenging and time-consuming even for experts.

Objective: The purpose of this study was to use artificial intelligence (AI), in the form of supervised machine learning with a convolutional neural network (CNN), to automate HBP ECG interpretation.

Methods: We identified patients who had undergone HBP and extracted raw 12-lead ECG data during S-HBP, NS-HBP, and MOC. A CNN was trained, using 3-fold cross-validation, on 75% of the segmented QRS complexes labeled with their capture type. The remaining 25% was kept aside as a testing dataset.

Results: The CNN was trained with 1297 QRS complexes from 59 patients. Cohen kappa for the neural network's performance on the 17-patient testing set was 0.59 (95% confidence interval 0.30 to 0.88; P < .0001), with an overall accuracy of 75%. The CNN's accuracy in the 17-patient testing set was 67% for S-HBP, 71% for NS-HBP, and 84% for MOC.

Conclusion: We demonstrated proof of concept that a neural network can be trained to automate discrimination between HBP ECG responses. When a larger dataset is trained to higher accuracy, automated AI ECG analysis could facilitate HBP implantation and follow-up and prevent complications resulting from incorrect HBP ECG analysis.
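The study reports Cohen kappa rather than raw accuracy alone because kappa corrects for chance agreement across the three capture classes. A minimal sketch of the metric using scikit-learn (the labels below are made up for illustration; they are not the study's data):

```python
# Hedged sketch: Cohen's kappa, the chance-corrected agreement metric the
# study reports for three-class capture predictions (S-HBP / NS-HBP / MOC).
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = ["S-HBP", "S-HBP", "NS-HBP", "NS-HBP", "MOC", "MOC", "MOC", "S-HBP"]
y_pred = ["S-HBP", "NS-HBP", "NS-HBP", "NS-HBP", "MOC", "MOC", "S-HBP", "S-HBP"]

acc = accuracy_score(y_true, y_pred)      # fraction of exact matches
kappa = cohen_kappa_score(y_true, y_pred) # agreement beyond chance
print(round(acc, 2), round(kappa, 2))     # kappa is lower than raw accuracy
```

Note how kappa comes out lower than accuracy on the same predictions, which is why it is the more conservative figure to report for an imbalanced three-class problem.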

Arnold Ahran D, Howard James P, Gopi Aiswarya A, Chan Cheng Pou, Ali Nadine, Keene Daniel, Shun-Shin Matthew J, Ahmad Yousif, Wright Ian J, Ng Fu Siong, Linton Nick W F, Kanagaratnam Prapa, Peters Nicholas S, Rueckert Daniel, Francis Darrel P, Whinnett Zachary I

Artificial intelligence, Conduction system pacing, Electrocardiography, His-bundle pacing, Machine learning, Neural networks, Pacemakers

General

Prediction of conversion to Alzheimer's disease using deep survival analysis of MRI images.

In Brain communications

The prediction of the conversion of healthy individuals and those with mild cognitive impairment to active Alzheimer's disease is a challenging task. Recently, survival analysis based on deep learning was developed to enable predictions of the timing of an event in datasets containing censored data. Here, we investigated whether a deep survival analysis could similarly predict conversion to Alzheimer's disease. We selected individuals with mild cognitive impairment and cognitively normal subjects and used the grey matter volumes of brain regions in these subjects as predictive features. We then compared the prediction performance of the standard Cox proportional-hazards model, the DeepHit model, and our deep survival model based on a Weibull distribution. Our model achieved a maximum concordance index of 0.835, which was higher than that of the Cox model and comparable to that of the DeepHit model. To the best of our knowledge, this is the first report to describe the application of a deep survival model to brain magnetic resonance imaging data. Our results demonstrate that this type of analysis can successfully predict the time of an individual's conversion to Alzheimer's disease.

Nakagawa Tomonori, Ishida Manabu, Naito Junpei, Nagai Atsushi, Yamaguchi Shuhei, Onoda Keiichi


Alzheimer’s disease, deep survival analysis, mild cognitive impairment, prediction of conversion