
General

Prioritizing and characterizing functionally relevant genes across human tissues.

In PLoS computational biology

Knowledge of genes that are critical to a tissue's function remains difficult to ascertain and presents a major bottleneck toward a mechanistic understanding of genotype-phenotype links. Here, we present FUGUE, the first machine learning model to combine transcriptional and network features to predict tissue-relevant genes across 30 human tissues. FUGUE achieves an average cross-validation auROC of 0.86 and auPRC of 0.50 (expected 0.09). In independent datasets, FUGUE accurately distinguishes tissue- or cell-type-specific genes, significantly outperforming the conventional metric based on tissue-specific expression alone. Comparison of tissue-relevant transcription factors across tissues recapitulates their developmental relationships. Interestingly, the tissue-relevant genes cluster on the genome within topologically associated domains and, furthermore, are highly enriched for differentially expressed genes in the corresponding cancer type. We provide the prioritized gene lists for 30 human tissues and open-source software to prioritize genes in a novel context given multi-sample transcriptomic data.
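
For readers unfamiliar with the reported metrics, the sketch below shows how cross-validated auROC and auPRC are computed for a gene-prioritization classifier, and why the "expected" auPRC equals the positive-class prevalence (~0.09 here). The random-forest choice and the synthetic features are illustrative assumptions, not the FUGUE pipeline.

```python
# Hedged sketch: cross-validated auROC/auPRC for a tissue-relevance classifier.
# Synthetic data; the classifier and features are assumptions, not FUGUE itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_genes = 2000
X = rng.normal(size=(n_genes, 5))        # stand-ins for expression/network features
y = rng.random(n_genes) < 0.09           # ~9% positives, matching the paper's prior

proba = cross_val_predict(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, method="predict_proba")[:, 1]

print("auROC:", roc_auc_score(y, proba))           # ~0.5 here, since features are random
print("auPRC:", average_precision_score(y, proba))
print("expected auPRC = prevalence:", y.mean())    # ~0.09, the baseline in the abstract
```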

Somepalli Gowthami, Sahoo Sarthak, Singh Arashdeep, Hannenhalli Sridhar

2021-Jul

Surgery

Using wearable technology to detect prescription opioid self-administration.

In Pain ; h5-index 71.0

Appropriate monitoring of opioid use in patients with pain conditions is paramount, yet it remains a very challenging task. The current work examined the use of a wearable sensor to detect self-administration of opioids after dental surgery using machine learning. Participants were recruited from an oral and maxillofacial surgery clinic. Participants were 46 adult patients (26 female) receiving opioids after dental surgery. Participants wore Empatica E4 sensors during the period in which they self-administered opioids. The E4 collected physiological parameters including accelerometer x-, y-, and z-axes, heart rate, and electrodermal activity. Four machine learning models provided validation accuracies greater than 80%, but the bagged-tree model provided the highest combination of validation accuracy (83.7%) and area under the receiver operating characteristic curve (0.92). The trained model had a validation sensitivity of 82%, a specificity of 85%, a positive predictive value of 85%, and a negative predictive value of 83%. A subsequent test of the trained model on withheld data had a sensitivity of 81%, a specificity of 88%, a positive predictive value of 87%, and a negative predictive value of 82%. Results from training and testing the machine learning model indicated that opioid self-administration could be identified with reasonable accuracy, suggesting considerable potential for wearable technology to advance prevention and treatment.
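
For readers who want to see how the reported metrics arise, the sketch below trains a bagged-tree classifier (scikit-learn's BaggingClassifier, whose default base learner is a decision tree) on synthetic stand-ins for the E4 features and derives sensitivity, specificity, PPV, and NPV from the confusion matrix. It illustrates the evaluation only; it is not the study's code or the Empatica data pipeline.

```python
# Hedged sketch: bagged-tree classification on synthetic wearable features
# (acc_x, acc_y, acc_z, heart rate, EDA) and the four metrics from the abstract.
import numpy as np
from sklearn.ensemble import BaggingClassifier   # default base learner: decision tree
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 5))                               # synthetic sensor features
y = (X[:, 3] + 0.5 * X[:, 4] + rng.normal(size=n)) > 0    # toy "self-administration" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = BaggingClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp))
print("NPV:", tn / (tn + fn))
```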

Salgado García Francisco I, Indic Premananda, Stapp Joshua, Chintha Keerthi K, He Zhaomin, Brooks Jeffrey H, Carreiro Stephanie, Derefinko Karen J

2021-Jun-14

Radiology

Machine Learning and Deep Learning in Oncologic Imaging: Potential Hurdles, Opportunities for Improvement, and Solutions - Abdominal Imagers' Perspective.

In Journal of computer assisted tomography

The applications of machine learning in clinical radiology practice, and in oncologic imaging practice in particular, are steadily evolving. However, there are several potential hurdles to the widespread implementation of machine learning in oncologic imaging, including the lack of large annotated data sets and the lack of consistent methodology and terminology for reporting findings on staging and follow-up imaging studies across a wide spectrum of solid tumors. This short review discusses some potential hurdles to the implementation of machine learning in oncologic imaging, opportunities for improvement, and potential solutions that can facilitate robust machine learning from the vast number of radiology reports and annotations generated by dictating radiologists.

Yedururi Sireesha, Morani Ajaykumar C, Katabathina Venkata Subbiah, Jo Nahyun, Rachamallu Medhini, Prasad Srinivasa, Marcal Leonardo

2021-Jul-13

General

Machine learning algorithms vs. thresholding to segment ischemic regions in patients with acute ischemic stroke.

In IEEE journal of biomedical and health informatics

OBJECTIVE : A computed tomography (CT) scan is a fast and widely used modality for early assessment of patients with symptoms of cerebral ischemic stroke. CT perfusion (CTP) is often added to the protocol and is used by radiologists to assess the severity of the stroke. Standard parametric maps are calculated from the CTP datasets. Based on combinations of parametric values, ischemic regions are separated into the presumed infarct core (irreversibly damaged tissue) and the penumbra (tissue at risk). Different thresholding approaches have been suggested to segment the parametric maps into these areas. The purpose of this study is to compare fully automated methods based on machine learning with thresholding approaches for segmenting the hypoperfused regions in patients with ischemic stroke.

METHODS : We test two different architectures with three mainstream machine learning algorithms. We use parametric maps as input features and manual annotations made by two expert neuroradiologists as ground truth.

RESULTS : The best results are produced with the random forest (RF) classifier and the Single-Step approach; we achieve average Dice coefficients of 0.68 for penumbra and 0.26 for core across the three groups analysed. We also achieve average volume differences of 25.1 ml for penumbra and 7.8 ml for core.

CONCLUSIONS : Our best RF-based method outperforms the classical thresholding approaches at segmenting both ischemic regions in a group of patients, regardless of the severity of vessel occlusion.

SIGNIFICANCE : Correct visualization of the ischemic regions will better guide treatment decisions.
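
The two metrics reported in RESULTS are straightforward to compute from binary masks; the sketch below gives one possible implementation of the Dice coefficient and the volume difference. The masks and voxel size are synthetic placeholders, not the study's CTP data.

```python
# Hedged sketch: Dice coefficient and volume difference between a predicted
# segmentation and an expert annotation. Synthetic masks, illustrative voxel size.
import numpy as np

def dice(pred, truth):
    """Dice = 2*|pred & truth| / (|pred| + |truth|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def volume_difference_ml(pred, truth, voxel_volume_mm3):
    """Absolute volume difference in ml (1 ml = 1000 mm^3)."""
    return abs(int(pred.sum()) - int(truth.sum())) * voxel_volume_mm3 / 1000.0

rng = np.random.default_rng(2)
truth = rng.random((64, 64, 16)) > 0.90                      # toy "penumbra" annotation
pred = np.logical_or(truth, rng.random(truth.shape) > 0.97)  # noisy over-segmentation

print("Dice:", dice(pred, truth))
print("volume difference (ml):", volume_difference_ml(pred, truth, voxel_volume_mm3=2.0))
```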

Tomasetti Luca, Høllesli Liv Jorunn, Engan Kjersti, Kurz Kathinka Dæhli, Kurz Martin Wilhelm, Khanmohammadi Mahdieh

2021-Jul-16

General

CLEAR: Comprehensive Learning Enabled Adversarial Reconstruction for Subtle Structure Enhanced Low-Dose CT Imaging.

In IEEE transactions on medical imaging ; h5-index 74.0

X-ray computed tomography (CT) is of great clinical significance in medical practice because it can provide anatomical information about the human body without invasion, while its radiation risk has continued to attract public concern. Reducing the radiation dose may introduce noise and artifacts into the reconstructed images, which will interfere with the judgments of radiologists. Previous studies have confirmed that deep learning (DL) is promising for improving low-dose CT imaging. However, almost all DL-based methods suffer from subtle structure degeneration and blurring after aggressive denoising, which has become a general challenge in the field. This paper develops the Comprehensive Learning Enabled Adversarial Reconstruction (CLEAR) method to tackle these problems. CLEAR achieves subtle-structure-enhanced low-dose CT imaging through a progressive improvement strategy. First, the generator, established on the comprehensive domain, can extract more features than one built on degraded CT images alone and directly maps raw projections to high-quality CT images, which is significantly different from routine GAN practice. Second, a multi-level loss is assigned to the generator to push all the network components to be updated towards high-quality reconstruction, preserving consistency between generated images and gold-standard images. Finally, following the WGAN-GP framework, CLEAR can transfer real statistical properties to the generated images to alleviate over-smoothing. Qualitative and quantitative analyses demonstrate the competitive performance of CLEAR in terms of noise suppression, structural fidelity, and visual perception improvement.
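
The WGAN-GP framework mentioned in the abstract constrains the critic by penalizing gradient norms away from 1 on interpolates between real and generated images; a minimal PyTorch sketch of that penalty follows. It illustrates the generic WGAN-GP term only, not CLEAR's generator, multi-level loss, or weighting.

```python
# Hedged sketch: the generic WGAN-GP gradient penalty (Gulrajani et al., 2017),
# the mechanism CLEAR adopts to keep generated CT images from over-smoothing.
import torch

def gradient_penalty(critic, real, fake):
    """E[(||grad_x critic(x)||_2 - 1)^2] on random interpolates x of real/fake batches."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)  # per-sample mixing weight
    x = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(outputs=critic(x).sum(), inputs=x, create_graph=True)
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

# Typical critic loss (lambda = 10 in the original WGAN-GP paper):
# loss_d = critic(fake).mean() - critic(real).mean() + 10.0 * gradient_penalty(critic, real, fake)
```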

Zhang Yikun, Hu Dianlin, Zhao Qianlong, Quan Guotao, Liu Jin, Liu Qiegen, Zhang Yi, Coatrieux Gouenou, Chen Yang, Yu Hengyong

2021-Jul-16

General

STSRNet: Deep Joint Space-Time Super-Resolution for Vector Field Visualization.

In IEEE computer graphics and applications

We propose STSRNet, a deep learning based joint space-time super-resolution model for time-varying vector field data. Our method is designed to reconstruct a high temporal resolution (HTR) and high spatial resolution (HSR) vector field sequence from the corresponding low-resolution key frames. For large-scale simulations, only data from a subset of time steps, at reduced spatial resolution, can be stored for post-hoc analysis. In this paper, we leverage a deep learning model to capture the non-linear, complex changes of vector field data with a two-stage architecture: the first stage deforms a pair of low spatial resolution (LSR) key frames forward and backward to generate the intermediate LSR frames, and the second stage performs spatial super-resolution to output the high-resolution sequence. Our method is scalable and can handle different data sets. We demonstrate the effectiveness of our framework on several data sets through quantitative and qualitative evaluations.
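
To make the two-stage design concrete, here is a schematic PyTorch skeleton: a temporal stage that predicts k intermediate LSR frames from a pair of key frames, followed by a spatial stage that upsamples every frame with sub-pixel convolution. The layer choices and direct frame prediction are illustrative placeholders, not the published STSRNet architecture (which uses forward/backward deformation in its first stage).

```python
# Hedged sketch: a two-stage space-time super-resolution skeleton for 2D vector
# fields (C=2 components). Layer sizes are placeholders, not STSRNet's design.
import torch
import torch.nn as nn

class TemporalStage(nn.Module):
    """Predict k intermediate LSR frames from two LSR key frames."""
    def __init__(self, channels=2, k=3):
        super().__init__()
        self.k, self.c = k, channels
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, k * channels, 3, padding=1))

    def forward(self, f0, f1):                        # each (B, C, H, W)
        out = self.net(torch.cat([f0, f1], dim=1))
        return out.view(-1, self.k, self.c, *f0.shape[-2:])   # (B, k, C, H, W)

class SpatialStage(nn.Module):
    """Upsample each frame by factor r via sub-pixel convolution (PixelShuffle)."""
    def __init__(self, channels=2, r=4):
        super().__init__()
        self.r = r
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels * r * r, 3, padding=1), nn.PixelShuffle(r))

    def forward(self, frames):                        # (B, k, C, H, W)
        b, k, c, h, w = frames.shape
        hsr = self.net(frames.reshape(b * k, c, h, w))
        return hsr.view(b, k, c, h * self.r, w * self.r)

f0, f1 = torch.randn(1, 2, 32, 32), torch.randn(1, 2, 32, 32)
lsr_seq = TemporalStage()(f0, f1)                     # (1, 3, 2, 32, 32)
hsr_seq = SpatialStage()(lsr_seq)                     # (1, 3, 2, 128, 128)
print(lsr_seq.shape, hsr_seq.shape)
```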

An Yifei, Shen Han-Wei, Shan Guihua, Li Guan, Liu Jun

2021-Jul-16