
Public Health

Machine Learning to Differentiate Risk of Suicide Attempt and Self-harm After General Medical Hospitalization of Women With Mental Illness.

In Medical care

BACKGROUND : Suicide prevention is a public health priority, but risk factors for suicide after medical hospitalization remain understudied. This problem is critical for women, for whom suicide rates in the United States are disproportionately increasing.

OBJECTIVE : To differentiate the risk of suicide attempt and self-harm following general medical hospitalization among women with depression, bipolar disorder, and chronic psychosis.

METHODS : We developed a machine learning algorithm that identified risk factors of suicide attempt and self-harm after general hospitalization using electronic health record data from 1628 women in the University of California Los Angeles Integrated Clinical and Research Data Repository. To assess replicability, we applied the algorithm to a larger sample of 140,848 women in the New York City Clinical Data Research Network.

RESULTS : The classification tree algorithm identified risk groups in the University of California Los Angeles Integrated Clinical and Research Data Repository (area under the curve 0.73, sensitivity 73.4%, specificity 84.1%, accuracy 0.84), and predictor combinations characterizing key risk groups were replicated in the New York City Clinical Data Research Network (area under the curve 0.71, sensitivity 83.3%, specificity 82.2%, accuracy 0.84). Predictors included medical comorbidity, history of pregnancy-related mental illness, age, and history of suicide-related behavior. Women with antecedent medical illness and a history of pregnancy-related mental illness were at high risk (6.9%-17.2% readmitted for suicide-related behavior), as were women younger than 55 years without antecedent medical illness (4.0%-7.5% readmitted).
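As a rough illustration only (not the authors' pipeline, which uses protected EHR data), the sketch below shows how a classification tree and the metrics reported above (AUC, sensitivity, specificity, accuracy) could be computed with scikit-learn on hypothetical, randomly generated predictors named after those in the abstract.

```python
# Minimal sketch (not the authors' pipeline): fit a classification tree on
# hypothetical EHR-derived features and report AUC, sensitivity, specificity, accuracy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
n = 1628  # cohort size reported for the UCLA repository

# Hypothetical predictors, named after those reported in the abstract.
X = np.column_stack([
    rng.integers(0, 2, n),      # medical comorbidity (yes/no)
    rng.integers(0, 2, n),      # history of pregnancy-related mental illness
    rng.integers(18, 90, n),    # age
    rng.integers(0, 2, n),      # prior suicide-related behavior
])
y = rng.integers(0, 2, n)       # placeholder outcome: readmission for suicide-related behavior

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

prob = tree.predict_proba(X_te)[:, 1]
pred = tree.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("AUC:", roc_auc_score(y_te, prob))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy:", accuracy_score(y_te, pred))
```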

CONCLUSIONS : Prevention of suicide attempt and self-harm among women following acute medical illness may be improved by screening for sex-specific predictors including perinatal mental health history.

Edgcomb Juliet B, Thiruvalluru Rohith, Pathak Jyotishman, Brooks John O

2021-Feb-01

Dermatology

Integrated analysis of multi-omics data on epigenetic changes caused by combined exposure to environmental hazards.

In Environmental toxicology

Humans are readily exposed to environmentally hazardous factors at industrial sites and in daily life, and exposure typically involves multiple substances rather than a single harmful one. However, research on the effects of combined exposure on humans is limited. Therefore, this study examined the effects of combined exposure to volatile organic compounds (VOCs) on the human body. We separated 193 participants into four groups according to their work-related exposure (non-exposure; toluene; toluene and xylene; and toluene, ethylbenzene, and xylene). We then measured methylation levels and long noncoding RNA (lncRNA) levels by omics analyses and performed an integrated analysis to examine changes in gene expression. The effects of combined exposure to environmental hazards on the human body were then investigated and analyzed. Exposure to VOCs was found to negatively affect the development and maintenance of the nervous system. In particular, the lncRNA MALAT1 was significantly reduced in the combined-exposure group, and eight genes were significantly downregulated by DNA hypermethylation. The downregulation of these genes could reduce the density of synapses as well as the number and density of dendrites and spines. In summary, we found that increasing combined exposure to environmental hazards could lead to additional epigenetic changes and, consequently, abnormal dendrites, spines, and synapses, which could impair motor learning or spatial memory. Thus, the lncRNA MALAT1 or FMR1 could be novel biomarkers of neurotoxicity for identifying the negative health effects of combined VOC exposure.
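The abstract does not spell out the integration step, but one common approach in methylation-expression studies is to correlate per-gene promoter methylation with expression and flag genes that are both hypermethylated and downregulated in the exposed group. The sketch below is only an illustration of that generic step on synthetic data, not the authors' workflow; the gene names and thresholds are placeholders.

```python
# Generic multi-omics integration sketch (synthetic data, not the study's workflow):
# flag genes that are hypermethylated and downregulated in the exposed group.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(1)
genes = [f"GENE{i}" for i in range(200)]        # placeholder gene identifiers
group = np.array([0] * 50 + [1] * 50)           # 0 = non-exposed, 1 = combined exposure

meth = pd.DataFrame(rng.uniform(0, 1, (100, 200)), columns=genes)  # promoter beta values
expr = pd.DataFrame(rng.normal(8, 2, (100, 200)), columns=genes)   # log2 expression

hits = []
for g in genes:
    rho, _ = spearmanr(meth[g], expr[g])        # methylation-expression correlation
    # One-sided test: is expression lower in the exposed group?
    _, p = mannwhitneyu(expr[g][group == 1], expr[g][group == 0], alternative="less")
    hyper = meth[g][group == 1].mean() > meth[g][group == 0].mean()
    if rho < 0 and p < 0.05 and hyper:          # a real analysis would also correct for multiple testing
        hits.append(g)

print("candidate hypermethylation-driven downregulated genes:", hits[:10])
```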

Yu So Yeon, Koh Eun Jung, Kim Seung Hwan, Lee So Yul, Lee Ji Su, Son Sang Wook, Hwang Seung Yong

2021-Jan-13

combined exposure, epigenetic, long noncoding RNA, long-term depression, synapse

Oncology

High quality proton portal imaging using deep learning for proton radiation therapy: a phantom study.

In Biomedical physics & engineering express

PURPOSE : For shoot-through proton treatments such as FLASH radiotherapy, protons exit the patient and can be used for proton portal imaging (PPI), revealing valuable information for validating tumor location in the beam's-eye-view at native gantry angles. However, PPI has poor inherent contrast and spatial resolution. To address this, we propose a deep-learning-based method that uses kV digitally reconstructed radiographs (DRRs) to improve PPI image quality.

METHODS : We used a residual generative adversarial network (GAN) framework to learn the nonlinear mapping between PPIs and DRRs. Residual blocks were used to force the model to focus on the structural differences between DRR and PPI. To assess the accuracy of our method, we used 149 images for training and 30 images for testing. PPIs were acquired using a double-scattered proton beam. The DRRs reconstructed from CT served as learning targets during training and were used to evaluate the proposed method with a six-fold cross-validation scheme.

RESULTS : Qualitatively, the corrected PPIs showed enhanced spatial resolution and captured fine details present in the DRRs that are missed in the original PPIs. Quantitatively, the corrected PPIs showed an average normalized mean error (NME), normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index of -0.1%, 0.3%, 39.14 dB, and 0.987, respectively.

CONCLUSION : The results indicate that the proposed method can generate high-quality corrected PPIs and show the potential of a deep-learning model to make PPI available in proton radiotherapy. This would allow beam's-eye-view (BEV) imaging with the particle used for treatment, providing a valuable alternative to orthogonal x-rays or cone-beam CT for patient position verification.
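The evaluation metrics above (NME, NMAE, PSNR, SSIM) are straightforward to reproduce on an image pair. The sketch below uses placeholder arrays rather than the study data, and, since the abstract does not define the normalization for NME/NMAE, it assumes normalization by the mean of the reference DRR.

```python
# Sketch of the reported image-quality metrics on placeholder arrays (not the study data).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(2)
drr = rng.uniform(0, 1, (256, 256))                      # reference DRR (placeholder)
ppi_corrected = np.clip(drr + rng.normal(0, 0.02, drr.shape), 0, 1)  # stand-in for a corrected PPI

nme = np.mean(ppi_corrected - drr) / drr.mean()            # normalized mean error (assumed normalization)
nmae = np.mean(np.abs(ppi_corrected - drr)) / drr.mean()   # normalized mean absolute error
psnr = peak_signal_noise_ratio(drr, ppi_corrected, data_range=1.0)
ssim = structural_similarity(drr, ppi_corrected, data_range=1.0)
print(f"NME {nme:.4f}  NMAE {nmae:.4f}  PSNR {psnr:.2f} dB  SSIM {ssim:.3f}")
```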

Charyyev Serdar, Lei Yang, Harms Joseph, Eaton Bree, McDonald Mark, Curran Walter J, Liu Tian, Zhou Jun, Zhang Rongxiao, Yang Xiaofeng

2020-Apr-27

General

Human respiration monitoring using infrared thermography and artificial intelligence.

In Biomedical physics & engineering express

The respiration rate (RR) is among the most vital parameters used to assess human health. The most widely adopted techniques for monitoring the RR are contact-based and have many drawbacks. This paper reports the use of infrared thermography to monitor the RR reliably in a contact-less and non-invasive way. A thermal camera is used to continuously monitor the variation in nasal temperature during respiration. The nostrils (region of interest) are tracked during head motion and object occlusion by a computer vision algorithm that uses the 'Histogram of oriented gradients' and a 'Support vector machine' (SVM). The signal-to-noise ratio (SNR) of the acquired breathing signals is very low; hence they are subjected to appropriate filtering, and the filters are compared on performance metrics such as SNR and mean square error. The breaths per minute are obtained without any manual intervention by a 'Breath detection algorithm' (BDA). The BDA was applied to 150 breathing signals, and its performance was assessed using precision, sensitivity, spurious cycle rate, and missed cycle rate, obtained as 98.6%, 97.2%, 1.4%, and 2.8%, respectively. The parameters obtained from the BDA are fed to k-nearest neighbour (k-NN) and SVM classifiers, which determine whether the human volunteers have normal or abnormal respiration, bradypnea (slow breathing), or tachypnea (fast breathing). The validation accuracies are 96.25% and 99.5%, with training accuracies of 97.75% and 99.4%, for the SVM and k-NN classifiers, respectively. The testing accuracies of the fully built SVM and k-NN classifiers are 96% and 99%, respectively. Sensitivity, specificity, precision, G-mean, and F-measure are also calculated for every class for both classifiers. Finally, the standard deviations of the SVM and k-NN classifiers are 0.022 and 0.007, respectively; the k-NN classifier shows better performance than the SVM classifier. The pattern among the data points fed to the classifiers is visualized using the t-distributed stochastic neighbor embedding (t-SNE) algorithm; these plots show that the separation between data points belonging to different classes improves, with minimal overlap, as the perplexity value and number of iterations increase.
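The published BDA is not reproduced here, but the core idea of recovering breaths per minute from a filtered nasal-temperature trace can be sketched with simple peak counting. The example below is a simplified stand-in on a synthetic signal; the frame rate and the bradypnea/tachypnea cut-offs (12 and 20 breaths per minute) are assumptions, not values taken from the paper.

```python
# Simplified stand-in for breath detection (not the published BDA): low-pass filter a
# synthetic nasal-temperature trace, count breathing peaks, and bin the resulting rate.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 30.0                                   # assumed thermal-camera frame rate, Hz
t = np.arange(0, 60, 1 / fs)                # one minute of signal
true_rr = 16                                # breaths per minute for the synthetic trace
signal = 0.3 * np.sin(2 * np.pi * (true_rr / 60) * t) + 0.05 * np.random.randn(t.size)

b, a = butter(4, 1.0 / (fs / 2), btype="low")      # keep components below 1 Hz
filtered = filtfilt(b, a, signal)

peaks, _ = find_peaks(filtered, distance=fs * 1.5)  # at most one breath per 1.5 s
rr = len(peaks) * 60 / (t[-1] - t[0])               # breaths per minute

if rr < 12:
    label = "bradypnea"
elif rr > 20:
    label = "tachypnea"
else:
    label = "normal"
print(f"estimated RR: {rr:.1f} breaths/min -> {label}")
```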

Jagadev Preeti, Giri Lalat Indu

2020-Mar-13

General

Machine learning-based motor assessment of Parkinson's disease using postural sway, gait and lifestyle features on crowdsourced smartphone data.

In Biomedical physics & engineering express

OBJECTIVES : Remote assessment of gait in patients' homes has become a valuable tool for monitoring the progression of Parkinson's disease (PD). However, these measurements are often not as accurate or reliable as clinical evaluations because it is challenging to objectively distinguish the unique gait characteristics of PD. We explore the inference of patients' stage of PD from their gait using machine learning analyses of data gathered from their smartphone sensors. Specifically, we investigate supervised machine learning (ML) models to classify the severity of the motor part of the UPDRS (MDS-UPDRS 2.10-2.13). Our goals are to facilitate remote monitoring of PD patients and to answer the following questions: (1) What is the patient PD stage based on their gait? (2) Which features are best for understanding and classifying PD gait severities? (3) Which ML classifier types best discriminate PD patients from healthy controls (HC)? and (4) Which ML classifier types can discriminate the severity of PD gait anomalies?

METHODOLOGY : Our work uses smartphone sensor data gathered from 9520 participants in the mPower study, of whom 3101 uploaded gait recordings; 344 PD subjects and 471 controls uploaded at least 3 walking activities. We selected 152 PD patients who performed at least 3 recordings before and 3 recordings after taking medications and 304 HC who performed at least 3 walking recordings. From the accelerometer and gyroscope sensor data, we extracted statistical, time, wavelet, and frequency domain features; additional lifestyle features were derived directly from participants' survey data. We conducted supervised classification experiments using 10-fold cross-validation and measured model precision, accuracy, and area under the curve (AUC).
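A minimal sketch of this kind of pipeline is shown below, using synthetic accelerometer windows rather than mPower data: a handful of statistical features per window and a random forest evaluated with 10-fold cross-validation. The feature names and window parameters are illustrative assumptions, not the study's feature set.

```python
# Sketch (synthetic signals, not mPower data): statistical features from accelerometer
# windows, then a random forest evaluated with 10-fold CV for accuracy and AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate, StratifiedKFold

rng = np.random.default_rng(3)

def features(window):
    # A few of the many statistical/time-domain features such studies extract.
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean(),
            window.max() - window.min()]        # the last resembles a "MinMaxDiff"-style feature

# Synthetic accelerometer windows: label 1 (PD-like, noisier) vs 0 (control-like).
X, y = [], []
for label in (0, 1):
    for _ in range(150):
        w = rng.normal(0, 1 + 0.5 * label, 300)  # ~10 s at 30 Hz, placeholder
        X.append(features(w))
        y.append(label)
X, y = np.array(X), np.array(y)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(RandomForestClassifier(n_estimators=200, random_state=0),
                        X, y, cv=cv, scoring=("accuracy", "roc_auc"))
print("accuracy:", scores["test_accuracy"].mean(), "AUC:", scores["test_roc_auc"].mean())
```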

RESULTS : The best classification model, best feature, highest classification accuracy, and AUC were (1) random forest and entropy rate, 93% and 0.97, respectively, for walking balance (MDS-UPDRS-2.12); (2) bagged trees and MinMaxDiff, 95% and 0.92, respectively, for shaking/tremor (MDS-UPDRS-2.10); (3) bagged trees and entropy rate, 98% and 0.98, respectively, for freeze of gait; and (4) random forest and MinMaxDiff, 95% and 0.99, respectively, for distinguishing PD patients from HC.

CONCLUSION : Machine learning classification was challenging due to the use of data that were subjectively labeled based on patients' answers to the MDS-UPDRS survey questions. However, with use of a significantly larger number of subjects than in prior work and clinically validated gait features, we were able to demonstrate that automatic patient classification based on smartphone sensor data can be used to objectively infer the severity of PD and the extent of specific gait anomalies.

Abujrida Hamza, Agu Emmanuel, Pahlavan Kaveh

2020-Mar-04

General

Automated segmentation of the left ventricle from MR cine imaging based on deep learning architecture.

In Biomedical physics & engineering express

BACKGROUND : Magnetic resonance cine imaging is the accepted standard for cardiac functional assessment. Left ventricular (LV) segmentation plays a key role in volumetric functional quantification of the heart. Conventional manual analysis is time-consuming and observer-dependent. Automated segmentation approaches are needed to improve the clinical workflow of cardiac functional quantification. Recently, deep-learning networks have shown promise for efficient LV segmentation.

PURPOSE : The routinely used V-Net is a convolutional network that segments images by passing features from encoder to decoder. In this study, this method was advanced as DenseV-Net by replacing the convolutional block with densely connected blocks and dense computations to alleviate the vanishing-gradient problem, prevent exploding gradients, and strengthen feature propagation. Thirty patients were scanned with a 3 Tesla MR imager. ECG-free, free-breathing, real-time cines were acquired with a balanced steady-state free precession technique. Linear regression and the Dice similarity coefficient (DSC) were used to evaluate the LV segmentation performance of the classic neural networks FCN, UNet, and V-Net and the proposed DenseV-Net method, using manual analysis as the reference. Slice-based LV function was compared among the four methods.
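The dense connectivity described here can be illustrated with a generic DenseNet-style block: each layer receives the concatenation of all preceding feature maps, which shortens gradient paths and encourages feature reuse. The PyTorch sketch below is only that generic block, not the authors' DenseV-Net; the channel counts and layer count are arbitrary.

```python
# Generic DenseNet-style 2D block (illustration of dense connectivity, not the authors'
# DenseV-Net): each layer sees the concatenation of all preceding feature maps.
import torch
import torch.nn as nn

class DenseBlock2d(nn.Module):
    def __init__(self, in_channels, growth_rate=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1, bias=False)))
            ch += growth_rate                    # next layer sees all previous outputs
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseBlock2d(in_channels=32)
print(block(torch.randn(1, 32, 64, 64)).shape)   # -> torch.Size([1, 96, 64, 64])
```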

RESULTS : Thirty slices from eleven patients were randomly selected (each slice contained 73 images), and the LVs were segmented using manual analysis, UNet, FCN, V-Net, and the proposed DenseV-Net method. A strong correlation of the left ventricular areas was observed between the proposed DenseV-Net network and manual segmentation (R = 0.92), with a mean DSC of 0.90 ± 0.12. Weaker correlations were found between the routine V-Net, UNet, and FCN methods and manual segmentation (R = 0.77, 0.74, and 0.76, respectively), with lower mean DSCs (0.85 ± 0.13, 0.84 ± 0.16, and 0.79 ± 0.17, respectively). Additionally, the proposed DenseV-Net method was more strongly correlated with the manual analysis in slice-based LV function quantification than the state-of-the-art neural network methods V-Net, UNet, and FCN.
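The DSC used above is a simple overlap measure between an automated mask and the manual reference; a minimal sketch with placeholder masks:

```python
# Dice similarity coefficient on binary masks (placeholder masks, not study data).
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

auto = np.zeros((128, 128), dtype=bool); auto[40:90, 40:90] = True       # automated LV mask
manual = np.zeros((128, 128), dtype=bool); manual[42:92, 42:92] = True   # manual reference
print(f"DSC = {dice(auto, manual):.3f}")
```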

CONCLUSION : The proposed DenseV-Net method outperforms the classic convolutional networks V-Net, UNet, and FCN in automated LV segmentation, providing a novel way for efficient heart functional quantification and the diagnosis of cardiac diseases using cine MRI.

Qin Wenjian, Wu Yin, Li Siyue, Chen Yucheng, Yang Yongfeng, Liu Xin, Zheng Hairong, Liang Dong, Hu Zhanli

2020-Feb-18