
General

A deep intronic TCTN2 variant activating a cryptic exon predicted by SpliceRover in a patient with Joubert syndrome.

In Journal of human genetics

The recent introduction of genome sequencing into genetic analysis has led to the identification of pathogenic variants located in deep introns, and several new tools have emerged to predict the impact of variants on splicing. Here, we present a Japanese boy with Joubert syndrome and biallelic TCTN2 variants. Exome sequencing identified only a heterozygous maternal nonsense TCTN2 variant (NM_024809.5:c.916C>T, p.(Gln306Ter)). Subsequent genome sequencing identified a deep intronic variant (c.1033+423G>A) inherited from his father. The machine learning algorithms SpliceAI, Squirls, and Pangolin were unable to predict altered splicing from the c.1033+423G>A variant. SpliceRover, a splice site prediction tool that takes FASTA sequence as input, detected a cryptic exon 85 bp away from the variant and within the inverted Alu sequence; SpliceRover scores for these splice sites showed a slight increase (donor) or decrease (acceptor) between the reference and mutant sequences. RNA sequencing and RT-PCR using urinary cells confirmed inclusion of the cryptic exon. The patient showed major symptoms of TCTN2-related disorders, such as developmental delay, dysmorphic facial features, and polydactyly. He also showed uncommon features such as retinal dystrophy, exotropia, an abnormal pattern of respiration, and periventricular heterotopia, confirming these as features of TCTN2-related disorders. Our study highlights the usefulness of genome sequencing and RNA sequencing using urinary cells for the molecular diagnosis of genetic disorders, and suggests that a database of cryptic splice sites predicted in introns by SpliceRover from reference sequences can help extract candidate variants from the large numbers of intronic variants found in genome sequencing.

Hiraide Takuya, Shimizu Kenji, Okumura Yoshinori, Miyamoto Sachiko, Nakashima Mitsuko, Ogata Tsutomu, Saitsu Hirotomo

2023-Mar-10
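The comparison described above scores a reference sequence and a mutant sequence side by side. A minimal sketch of preparing such a paired input, assuming a toy sequence and variant position that are NOT the real TCTN2 intron, might look like this:

```python
# Sketch: build matched reference and mutant FASTA records for a deep
# intronic single-nucleotide variant, so both can be scored with a
# splice-site predictor such as SpliceRover. The sequence and the variant
# offset below are invented for illustration only.

def apply_snv(sequence: str, offset: int, ref: str, alt: str) -> str:
    """Return the sequence with a single-nucleotide variant applied.

    offset is 0-based; the reference base is checked before substitution.
    """
    if sequence[offset] != ref:
        raise ValueError(f"expected {ref} at offset {offset}, found {sequence[offset]}")
    return sequence[:offset] + alt + sequence[offset + 1:]

def to_fasta(name: str, sequence: str, width: int = 60) -> str:
    """Format a sequence as a FASTA record with a fixed line width."""
    lines = [sequence[i:i + width] for i in range(0, len(sequence), width)]
    return ">" + name + "\n" + "\n".join(lines) + "\n"

# Toy intronic context (not real TCTN2 sequence).
reference = "ACGTACGTGGGTAAGTACGTACGTACGT"
mutant = apply_snv(reference, offset=10, ref="G", alt="A")

fasta = to_fasta("reference", reference) + to_fasta("mutant", mutant)
```

Running the predictor on both records and diffing the per-position donor and acceptor scores is what surfaces the kind of slight donor increase and acceptor decrease the study reports.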

Radiology

Deep Learning Body Region Classification of MRI and CT Examinations.

In Journal of digital imaging

This study demonstrates the high performance of deep learning in identifying body regions covering the entire human body from magnetic resonance (MR) and computed tomography (CT) axial images across diverse acquisition protocols and modality manufacturers. Pixel-based analysis of the anatomy contained in image sets can provide accurate anatomic labeling. For this purpose, a convolutional neural network (CNN)-based classifier was developed to identify body regions in CT and MRI studies. Seventeen CT (18 MRI) body regions covering the entire human body were defined for the classification task. Three retrospective datasets were built for AI model training, validation, and testing, with a balanced distribution of studies per body region. The test datasets originated from a different healthcare network than the training and validation datasets. Sensitivity and specificity of the classifier were evaluated across patient age, patient sex, institution, scanner manufacturer, contrast, slice thickness, MRI sequence, and CT kernel. The data included a retrospective cohort of 2891 anonymized CT cases (training, 1804 studies; validation, 602 studies; test, 485 studies) and 3339 anonymized MRI cases (training, 1911 studies; validation, 636 studies; test, 792 studies). Twenty-seven institutions, including primary care hospitals, community hospitals, and imaging centers, contributed to the test datasets. The data included all sexes in equal proportions and subjects aged 18 to over 90 years. Image-level weighted sensitivity of 92.5% (92.1-92.8) for CT and 92.3% (92.0-92.5) for MRI, and weighted specificity of 99.4% (99.4-99.5) for CT and 99.2% (99.1-99.2) for MRI, were achieved. Deep learning models can classify CT and MR images by body region, including the lower and upper extremities, with high accuracy.

Raffy Philippe, Pambrun Jean-François, Kumar Ashish, Dubois David, Patti Jay Waldron, Cairns Robyn Alexandra, Young Ryan

2023-Mar-09

Anatomy, CT, Classification, Deep learning, MRI, Machine learning, Medical imaging
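The "weighted" sensitivity and specificity quoted above are support-weighted one-vs-rest averages over the body-region classes. A small sketch of that calculation from a confusion matrix, using an invented 3-class matrix rather than the paper's 17/18 regions:

```python
# Sketch: support-weighted one-vs-rest sensitivity and specificity for a
# multi-class body-region classifier. The 3x3 confusion matrix below is
# made up for illustration; the study used 17 CT / 18 MRI regions.

def weighted_sensitivity_specificity(confusion):
    """confusion[i][j] = images of true class i predicted as class j."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    w_sens = w_spec = 0.0
    for i in range(n):
        support = sum(confusion[i])            # images truly in class i
        tp = confusion[i][i]
        fn = support - tp
        fp = sum(confusion[r][i] for r in range(n)) - tp
        tn = total - tp - fn - fp
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        w_sens += sens * support / total       # weight each class by support
        w_spec += spec * support / total
    return w_sens, w_spec

cm = [[90, 5, 5],
      [4, 92, 4],
      [2, 3, 95]]
sens, spec = weighted_sensitivity_specificity(cm)
```

Specificity is much higher than sensitivity in such reports because, with many classes, the true negatives for any one region dominate its one-vs-rest contingency table.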

General

Preciseness of artificial intelligence for lateral cephalometric measurements.

In Journal of orofacial orthopedics = Fortschritte der Kieferorthopadie : Organ/official journal Deutsche Gesellschaft fur Kieferorthopadie

BACKGROUND: The aim of this study was to assess the accuracy and efficiency of a new artificial intelligence (AI) method for performing lateral cephalometric radiographic measurements.

MATERIALS AND METHODS: A total of 200 lateral cephalometric radiographs were assessed for quality and included. Three methods were used to perform the cephalometric measurements: (1) the AI method, using WebCeph software (AssembleCircle Corp., Gyeonggi-do, Republic of Korea); (2) the modified AI method, using WebCeph software after manual modification of the landmarks' positions; and (3) the manual method, using OnyxCeph software (Image Instruments GmbH, Chemnitz, Germany) with manual landmark identification and digital measurement generation. The measurements produced by the three methods were compared, as was the time each method required to generate them.

RESULTS: Statistically significant differences were detected between the measurements produced by the three methods, with fewer differences between the modified AI method and the OnyxCeph method. The AI method produced the measurements fastest, followed by the modified AI method and then the OnyxCeph method.

CONCLUSIONS: For the AI software evaluated, AI followed by manual tuning of landmark positions may be an accurate method for lateral cephalometric analysis. AI alone is still not fully reliable at locating the different landmarks on lateral cephalometric radiographs.

El-Dawlatly Mostafa, Attia Khaled Hazem, Abdelghaffar Ahmed Yehia, Mostafa Yehya Ahmed, Abd El-Ghafour Mohamed

2023-Mar-09

Accuracy, Artificial intelligence, Diagnosis, Lateral cephalometry, Orthodontics
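Agreement between an AI method and a manual reference like the comparison above is often summarised by the mean paired difference and its limits of agreement. A sketch with invented angle values (not study data):

```python
# Sketch: Bland-Altman-style agreement between a hypothetical AI
# cephalometric measurement and a manual digital tracing of the same
# angle. All values are invented for illustration.
import statistics

ai     = [81.2, 79.8, 83.1, 80.5, 82.0]   # hypothetical AI values (degrees)
manual = [80.9, 80.4, 82.6, 80.1, 82.5]   # hypothetical manual values

diffs = [a - m for a, m in zip(ai, manual)]
bias = statistics.mean(diffs)             # systematic offset between methods
sd = statistics.stdev(diffs)              # spread of the disagreement
limits = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

A near-zero bias with narrow limits would correspond to the "fewer differences" the study observed for the manually tuned AI method.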

General

Blood pressure stratification using photoplethysmography and light gradient boosting machine.

In Frontiers in physiology

Introduction: Globally, hypertension (HT) is a substantial risk factor for cardiovascular disease and mortality; hence, rapid identification and treatment of HT is crucial. In this study, we tested the light gradient boosting machine (LightGBM) method for blood pressure stratification based on photoplethysmography (PPG), which is used in most wearable devices. Methods: We used 121 records of PPG and arterial blood pressure (ABP) signals from the Medical Information Mart for Intensive Care III public database. PPG, velocity plethysmography, and acceleration plethysmography were used to estimate blood pressure; the ABP signals were used to determine the blood pressure stratification categories. Seven feature sets were established and used to train the Optuna-tuned LightGBM model. Three trials compared normotension (NT) vs. prehypertension (PHT), NT vs. HT, and NT + PHT vs. HT. Results: The F1 scores for these three classification trials were 90.18%, 97.51%, and 92.77%, respectively. The results showed that combining multiple features from PPG and its derivatives led to a more accurate classification of HT classes than using features from only the PPG signal. Discussion: The proposed method showed high accuracy in stratifying HT risks, providing a noninvasive, rapid, and robust method for the early detection of HT, with promising applications in the field of wearable cuffless blood pressure measurement.

Hu Xudong, Yin Shimin, Zhang Xizhuang, Menon Carlo, Fang Cheng, Chen Zhencheng, Elgendi Mohamed, Liang Yongbo

2023

Optuna-tuned LightGBM, blood pressure monitoring, hypertension evaluation, machine learning, photoplethysmography, wearable devices
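The velocity and acceleration plethysmograms mentioned above are simply the first and second derivatives of the PPG waveform. A minimal sketch using finite differences on a synthetic waveform (a real signal would come from a sensor, and the 125 Hz sampling rate is an assumption):

```python
# Sketch: deriving velocity (VPG) and acceleration (APG) plethysmogram
# signals from a PPG waveform via central differences - the kind of
# derivative signals the feature sets in the study are built from.
# The waveform is synthetic; the sampling rate is assumed.
import math

fs = 125.0                                  # assumed sampling rate in Hz
ppg = [math.sin(2 * math.pi * 1.2 * i / fs) for i in range(250)]  # ~1.2 Hz pulse

def derivative(signal, fs):
    """Central-difference derivative of a uniformly sampled signal."""
    return [(signal[i + 1] - signal[i - 1]) * fs / 2.0
            for i in range(1, len(signal) - 1)]

vpg = derivative(ppg, fs)                   # first derivative: velocity PPG
apg = derivative(vpg, fs)                   # second derivative: acceleration PPG
```

Morphological features (peak amplitudes, timings, ratios) extracted from ppg, vpg, and apg together would then feed the gradient-boosted classifier.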

Public Health

Machine learning for accurate estimation of fetal gestational age based on ultrasound images.

In NPJ digital medicine

Accurate estimation of gestational age is an essential component of good obstetric care and informs clinical decision-making throughout pregnancy. As the date of the last menstrual period is often unknown or uncertain, ultrasound measurement of fetal size is currently the best method for estimating gestational age. The calculation assumes an average fetal size at each gestational age. The method is accurate in the first trimester, but less so in the second and third trimesters as growth deviates from the average and variation in fetal size increases. Consequently, fetal ultrasound late in pregnancy has a wide margin of error of at least ±2 weeks' gestation. Here, we utilise state-of-the-art machine learning methods to estimate gestational age using only image analysis of standard ultrasound planes, without any measurement information. The machine learning model is based on ultrasound images from two independent datasets: one for training and internal validation, and another for external validation. During validation, the model was blinded to the ground truth of gestational age (based on a reliable last menstrual period date and confirmatory first-trimester fetal crown rump length). We show that this approach compensates for increases in size variation and is even accurate in cases of intrauterine growth restriction. Our best machine-learning-based model estimates gestational age with a mean absolute error of 3.0 (95% CI, 2.9-3.2) and 4.3 (95% CI, 4.1-4.5) days in the second and third trimesters, respectively, which outperforms current ultrasound-based clinical biometry at these gestational ages. Our method for dating the pregnancy in the second and third trimesters is, therefore, more accurate than published methods.

Lee Lok Hin, Bradburn Elizabeth, Craik Rachel, Yaqub Mohammad, Norris Shane A, Ismail Leila Cheikh, Ohuma Eric O, Barros Fernando C, Lambert Ann, Carvalho Maria, Jaffer Yasmin A, Gravett Michael, Purwar Manorama, Wu Qingqing, Bertino Enrico, Munim Shama, Min Aung Myat, Bhutta Zulfiqar, Villar Jose, Kennedy Stephen H, Noble J Alison, Papageorghiou Aris T

2023-Mar-09
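The headline metric above is a mean absolute error in days with a 95% confidence interval. A sketch of how such a figure can be computed, using made-up predictions and a percentile bootstrap over subjects (not the study's data or its exact CI method):

```python
# Sketch: mean absolute error (MAE) of gestational-age estimates in days,
# with a percentile-bootstrap 95% CI. All values below are invented;
# the paper's own CI procedure may differ.
import random

truth = [190, 201, 215, 224, 238, 245, 252, 260, 266, 273]   # days
pred  = [193, 198, 219, 221, 242, 244, 255, 257, 263, 276]   # days

def mae(t, p):
    return sum(abs(a - b) for a, b in zip(t, p)) / len(t)

def bootstrap_ci(t, p, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the MAE, resampling subject pairs."""
    rng = random.Random(seed)
    n = len(t)
    stats = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        stats.append(mae([t[i] for i in sample], [p[i] for i in sample]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

point = mae(truth, pred)        # 3.0 days for these toy values
ci = bootstrap_ci(truth, pred)  # interval around the point estimate
```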

Cardiology

ECG-guided non-invasive estimation of pulmonary congestion in patients with heart failure.

In Scientific reports ; h5-index 158.0

Quantifying hemodynamic severity in patients with heart failure (HF) is an integral part of clinical care. A key indicator of hemodynamic severity is the mean pulmonary capillary wedge pressure (mPCWP), which is ideally measured invasively. Accurate non-invasive estimates of the mPCWP in patients with heart failure would help identify individuals at the greatest risk of an HF exacerbation. We developed a deep learning model, HFNet, that uses the 12-lead electrocardiogram (ECG) together with age and sex to identify when the mPCWP > 18 mmHg in patients who have a prior diagnosis of HF. The model was developed using retrospective data from the Massachusetts General Hospital and evaluated on both an internal test set and an independent external validation set from another institution. We also developed an uncertainty score that identifies when model performance is likely to be poor, thereby helping clinicians gauge when to trust a given model prediction. The HFNet AUROC for estimating mPCWP > 18 mmHg was 0.8 ± 0.01 and 0.[…] ± 0.01 on the internal and external datasets, respectively. The AUROC on predictions with the highest uncertainty was 0.50 ± 0.02 (internal) and 0.[…] ± 0.04 (external), while the AUROC on predictions with the lowest uncertainty was 0.86 ± 0.01 (internal) and 0.82 ± 0.01 (external). Using estimates of the prevalence of mPCWP > 18 mmHg in patients with reduced ventricular function, and a decision threshold corresponding to 80% sensitivity, the calculated positive predictive value (PPV) is 0.[…] ± 0.01 when the corresponding chest x-ray (CXR) is consistent with interstitial edema. When the CXR is not consistent with interstitial edema, the estimated PPV is 0.[…] ± 0.02, again at an 80% sensitivity threshold. HFNet can accurately predict elevated mPCWP in patients with HF using the 12-lead ECG and age/sex. The method also identifies cohorts in which the model is more or less likely to produce accurate outputs.

Raghu Aniruddh, Schlesinger Daphne, Pomerantsev Eugene, Devireddy Srikanth, Shah Pinak, Garasic Joseph, Guttag John, Stultz Collin M

2023-Mar-09
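The PPV figures above are derived from sensitivity, specificity, and an estimated prevalence rather than measured directly, which is a straightforward application of Bayes' rule. A sketch, with the specificity and prevalence values assumed for illustration (only the 80% sensitivity operating point comes from the abstract):

```python
# Sketch: positive predictive value from a model's sensitivity and
# specificity plus disease prevalence, as used for the paper's PPV
# estimates at an 80% sensitivity operating point. The specificity and
# prevalence below are assumptions, not the study's numbers.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: P(elevated mPCWP | positive prediction)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At 80% sensitivity, with an assumed 70% specificity and 40% prevalence:
estimate = ppv(0.80, 0.70, 0.40)
```

This is why the abstract reports different PPVs depending on the chest x-ray findings: the CXR shifts the effective prevalence of elevated mPCWP in the subgroup, which changes the PPV even at a fixed operating point.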