
Cardiology

Role of artificial intelligence and machine learning in interventional cardiology.

In Current problems in cardiology

Driven by two decades of technological progress and remodeling, the dynamic quality of healthcare data combined with growing computational power has enabled rapid progress in artificial intelligence (AI). In interventional cardiology, AI has shown potential for data interpretation and automated analysis of the electrocardiogram (ECG), echocardiography, computed tomography angiography (CTA), magnetic resonance imaging (MRI), and electronic patient data. Clinical decision support has the potential to improve patient safety and to inform prognostic and diagnostic judgements in interventional cardiology procedures. Robot-assisted percutaneous coronary intervention (R-PCI), along with functional and quantitative assessment of coronary artery ischemia and plaque burden on intravascular ultrasound (IVUS), are the major applications of AI. Machine learning (ML) algorithms underpin these applications and have the potential to bring a paradigm shift to intervention. Recently, deep learning (DL), an efficient branch of ML, has emerged for numerous cardiovascular (CV) applications. However, the impact of DL on the future of cardiology practice is not yet clear. Predictive models based on DL have several limitations, including low generalizability and opaque decision processes when applied to cardiac anatomy.

Subhan Shoaib, Malik Jahanzeb, Haq Abair Ul, Qadeer Muhammad Saad, Zaidi Syed Muhammad Jawad, Orooj Fizza, Zaman Hafsa, Mehmoodi Amin, Majeedi Umaid

2023-Mar-13

Deep learning, cardiology, coronary angiography, neural network

Oncology

Machine learning predicts cardiovascular events in patients with diabetes: The Silesia Diabetes-Heart Project.

In Current problems in cardiology

We aimed to develop a machine learning (ML) model for predicting cardiovascular (CV) events in patients with diabetes (DM). In this prospective, observational study, clinical data of patients with diabetes hospitalized in a diabetology center in Poland between 2015 and 2020 were analyzed using ML. New CV events occurring after discharge were recorded over a follow-up of up to 5 years and 9 months. We proposed an end-to-end ML technique that exploits neighborhood component analysis to derive discriminative predictors, followed by a hybrid sampling/boosting classification algorithm, multiple logistic regression, or unsupervised hierarchical clustering. Among 1735 patients with diabetes (53% female), 150 (8.65%) had a new CV event during follow-up. The twelve most discriminative patient parameters were coronary artery disease, heart failure, peripheral artery disease, stroke, diabetic foot disease, chronic kidney disease, eosinophil count, serum potassium level, and treatment with clopidogrel, heparin, a proton pump inhibitor, and a loop diuretic. Using these variables, the area under the receiver operating characteristic curve (AUC) ranged from 0.62 (95% confidence interval [CI] 0.56-0.68, p<0.01) to 0.72 (95% CI 0.66-0.77, p<0.01) across five non-overlapping test folds, whereas multiple logistic regression correctly identified 111/150 (74.00%) high-risk patients and 989/1585 (62.40%) low-risk patients, giving 1100/1735 (63.40%) correctly classified patients (AUC: 0.72, 95% CI 0.66-0.77). ML algorithms can identify patients with diabetes at high risk of new CV events based on a small number of interpretable and easy-to-obtain patient parameters.
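
To make the recipe above concrete, the following minimal Python sketch (not the study's code; the placeholder data, the class-weighted logistic regression standing in for the hybrid sampling/boosting classifier, and all parameter choices are assumptions) chains a neighborhood component analysis transform with a classifier and evaluates AUC over five stratified test folds, mirroring the evaluation described in the abstract.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1735, 12))              # 12 clinical predictors (placeholder data)
y = (rng.random(1735) < 0.0865).astype(int)  # ~8.65% event rate, as in the cohort

model = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=8, random_state=0)),  # discriminative transform
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),      # class-weighted classifier
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("AUC per test fold:", np.round(aucs, 2))  # chance-level here, since the data are random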

Nabrdalik Katarzyna, Kwiendacz Hanna, Drożdż Karolina, Irlik Krzysztof, Hendel Mirela, Wijata Agata M, Nalepa Jakub, Correa Elon, Hajzler Weronika, Janota Oliwia, Wójcik Wiktoria, Gumprecht Janusz, Lip Gregory Y H

2023-Mar-13

Cardiovascular Disease, Diabetes, Machine Learning, Prediction Model, Risk Factors

Oncology

Investigation of liquid biopsy analytes in peripheral blood of individuals after SARS-CoV-2 infection.

In EBioMedicine

BACKGROUND: Post-acute COVID-19 syndrome (PACS) is linked to severe organ damage. Identifying and stratifying at-risk SARS-CoV-2-infected individuals is vital to providing appropriate care. This exploratory study looks for a potential liquid biopsy signal for PACS using both manual and machine learning approaches.

METHODS: Using a high-definition single cell assay (HDSCA) workflow for liquid biopsy, we analysed 100 Post-COVID patients and 19 pre-pandemic normal donor (ND) controls. Within the patient cohort, 73 had received at least one dose of vaccine prior to SARS-CoV-2 infection. We stratified the COVID-19 patients into three groups: 25 asymptomatic, 22 symptomatic but not suspected of PACS, and 53 suspected of PACS. All COVID-19 patients investigated in this study were diagnosed between April 2020 and January 2022, with a median of 243 days (range 16-669) from diagnosis to blood draw. We performed a histopathological examination of rare events in the peripheral blood and used a machine learning model to evaluate predictors of PACS.

FINDINGS: Manual classification found rare cellular and acellular events consistent with features of endothelial cells and platelet structures in the PACS-suspected cohort. The three categories encompassing the hypothesised events were observed at a significantly higher incidence in the PACS-suspected cohort than in the NDs (p < 0.05). The machine learning classifier performed well when separating the NDs from the Post-COVID patients, with an accuracy of 90.1%, but poorly when separating patients suspected and not suspected of PACS, with an accuracy of 58.7%.
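
As a rough illustration of the classification step reported above, the Python sketch below (an assumption, not the authors' pipeline) treats per-patient counts of rare-event categories as features for a binary normal donor vs. Post-COVID classifier; the Poisson-distributed counts and the random forest model are placeholders chosen for the example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
n_nd, n_covid = 19, 100
# Placeholder features: counts of three hypothesised rare-event categories per donor.
X = np.vstack([
    rng.poisson(lam=[1.0, 0.5, 0.2], size=(n_nd, 3)),     # normal donors: fewer events
    rng.poisson(lam=[3.0, 1.5, 0.8], size=(n_covid, 3)),  # Post-COVID: more events
])
y = np.array([0] * n_nd + [1] * n_covid)  # 0 = normal donor, 1 = Post-COVID

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("accuracy per fold:", cross_val_score(clf, X, y, cv=cv, scoring="accuracy"))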

INTERPRETATION: Both the manual analysis and the machine learning model found differences between the Post-COVID cohort and the NDs, suggesting the existence of a liquid biopsy signal after active SARS-CoV-2 infection. More research is needed to stratify PACS and its subsyndromes.

FUNDING: This work was funded in whole or in part by Fulgent Genetics, Kathy and Richard Leventhal, and the Vassiliadis Research Fund. This work was also supported by the National Cancer Institute (U54CA260591).

Qi Elizabeth, Courcoubetis George, Liljegren Emmett, Herrera Ergueen, Nguyen Nathalie, Nadri Maimoona, Ghandehari Sara, Kazemian Elham, Reckamp Karen L, Merin Noah M, Merchant Akil, Mason Jeremy, Figueiredo Jane C, Shishido Stephanie N, Kuhn Peter

2023-Mar-13

COVID-19, Liquid biopsy, Long COVID, Post-COVID sequelae, Post-acute COVID-19 syndrome (PACS), SARS-CoV-2

Radiology

A microstructure estimation Transformer inspired by sparse representation for diffusion MRI.

In Medical image analysis

Diffusion magnetic resonance imaging (dMRI) is an important tool for characterizing tissue microstructure based on biophysical models, which are typically multi-compartment models with mathematically complex and highly non-linear forms. Resolving microstructure from these models with conventional optimization techniques is prone to estimation errors and requires dense sampling of the q-space with a long scan time. Deep learning based approaches have been proposed to overcome these limitations. Motivated by the superior performance of the Transformer over convolutional structures in feature extraction, in this work we present a learning-based framework built on the Transformer, namely a Microstructure Estimation Transformer with Sparse Coding (METSC), for dMRI-based microstructural parameter estimation. To take advantage of the Transformer while addressing its requirement for large training data, we explicitly introduce an inductive bias (model bias) into the Transformer using a sparse coding technique to facilitate training. METSC is thus composed of three stages: an embedding stage, a sparse representation stage, and a mapping stage. The embedding stage is a Transformer-based structure that encodes the signal in a high-level space to ensure the core voxel of a patch is represented effectively. In the sparse representation stage, a dictionary is constructed by solving a sparse reconstruction problem that unfolds the iterative hard thresholding (IHT) process. The mapping stage is essentially a decoder that computes the microstructural parameters from the output of the second stage, based on a weighted sum of normalized dictionary coefficients in which the weights are also learned. We tested our framework on two dMRI models with downsampled q-space data: the intravoxel incoherent motion (IVIM) model and the neurite orientation dispersion and density imaging (NODDI) model. The proposed method achieved up to an 11.25-fold acceleration while retaining high fitting accuracy for NODDI fitting, reducing the mean squared error (MSE) by up to 70% compared with the previous q-space learning approach. METSC outperformed other state-of-the-art learning-based methods, including model-free and model-based methods. The network also showed robustness against noise and generalizability across different datasets. The superior performance of METSC indicates its potential to improve dMRI acquisition and model fitting in clinical applications.
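
The sparse representation stage described above unfolds iterative hard thresholding (IHT) into network layers. The following PyTorch sketch (an illustration under assumptions, not the released METSC code) shows one way such an unrolled IHT block can look, with a learnable dictionary, a learnable step size, and a top-k hard-thresholding step; dimensions and hyperparameters are arbitrary.

import torch
import torch.nn as nn

class UnrolledIHT(nn.Module):
    def __init__(self, signal_dim: int, dict_size: int, n_iters: int = 5, sparsity: int = 8):
        super().__init__()
        self.D = nn.Parameter(torch.randn(signal_dim, dict_size) * 0.1)  # learnable dictionary
        self.step = nn.Parameter(torch.tensor(0.1))                      # learnable step size
        self.n_iters, self.sparsity = n_iters, sparsity

    def hard_threshold(self, x: torch.Tensor) -> torch.Tensor:
        # Keep only the k largest-magnitude coefficients per sample.
        kth = x.abs().topk(self.sparsity, dim=-1).values[..., -1:]
        return torch.where(x.abs() >= kth, x, torch.zeros_like(x))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, signal_dim) embedded dMRI signal; returns sparse codes of shape (batch, dict_size).
        x = torch.zeros(y.shape[0], self.D.shape[1], device=y.device)
        for _ in range(self.n_iters):
            residual = y - x @ self.D.t()                    # reconstruction error in signal space
            x = self.hard_threshold(x + self.step * residual @ self.D)
        return x

codes = UnrolledIHT(signal_dim=60, dict_size=128)(torch.randn(4, 60))
print(codes.shape)  # torch.Size([4, 128]), with at most 8 non-zeros per row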

Zheng Tianshu, Yan Guohui, Li Haotian, Zheng Weihao, Shi Wen, Zhang Yi, Ye Chuyang, Wu Dan

2023-Mar-01

Diffusion MRI, Microstructural model, Sparse coding, Transformer

General

An unsupervised learning approach to diagnosing Alzheimer's disease using brain magnetic resonance imaging scans.

In International journal of medical informatics; h5-index 49.0

BACKGROUND: Alzheimer's disease (AD) is the most common cause of dementia, characterised by behavioural and cognitive impairment. Because manual diagnosis by doctors is often ineffective, machine learning is now being applied to diagnose AD in many recent studies. Most research developing machine learning algorithms to diagnose AD uses supervised learning to classify magnetic resonance imaging (MRI) scans. However, supervised learning requires a considerable volume of labelled data, and MRI scans are difficult to label.

OBJECTIVE: This study applied a statistical method and unsupervised learning methods to discriminate between scans from cognitively normal (CN) individuals and people with AD using a limited number of labelled structural MRI scans.

METHODS: We used two-sample t-tests to detect AD-relevant regions, then employed an unsupervised learning neural network to extract features from these regions. Finally, a clustering algorithm was implemented to discriminate between CN and AD data based on the extracted features. The approach was tested on baseline brain structural MRI scans from 429 individuals in the Alzheimer's Disease Neuroimaging Initiative (ADNI), of whom 231 were CN and 198 had AD.
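
A minimal Python sketch of this three-step recipe (voxelwise two-sample t-tests, unsupervised feature extraction, clustering) is given below; the synthetic data, the MLPRegressor used as an autoencoder stand-in, and the thresholds are assumptions for illustration rather than the study's implementation.

import numpy as np
from scipy.stats import ttest_ind
from sklearn.neural_network import MLPRegressor  # stand-in for the unsupervised autoencoder
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_cn = rng.normal(size=(231, 1000))           # synthetic "voxel" features, CN group
X_ad = rng.normal(loc=0.3, size=(198, 1000))  # synthetic AD group with a small mean shift
X = np.vstack([X_cn, X_ad])
y = np.array([0] * 231 + [1] * 198)

# 1) Two-sample t-tests to keep only group-discriminating regions (p < 0.001 in the paper).
_, p = ttest_ind(X_cn, X_ad, axis=0)
X_sel = X[:, p < 0.001]

# 2) Unsupervised feature extraction: a small autoencoder (input -> 32-unit bottleneck -> input).
ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(X_sel, X_sel)
features = np.maximum(X_sel @ ae.coefs_[0] + ae.intercepts_[0], 0)  # ReLU bottleneck activations

# 3) Cluster into two groups and compare against the CN/AD labels for evaluation only.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
acc = max((labels == y).mean(), (labels != y).mean())  # clusters carry no fixed label order
print(f"clustering agreement with CN/AD labels: {acc:.2f}")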

RESULTS: Abnormal regions around the lower parts of the limbic system were identified as AD-relevant based on the two-sample t-tests (p < 0.001), and the proposed method yielded an accuracy of 0.84 in discriminating between CN and AD.

CONCLUSION: The study combined statistical and unsupervised learning methods to identify scans of people with AD. This method can detect AD-relevant regions and could be used to diagnose AD accurately; it does not require large numbers of labelled MRI scans. Our research could help in the automatic diagnosis of AD and provide a basis for diagnosing stable mild cognitive impairment (stable MCI) and progressive mild cognitive impairment (progressive MCI).

Liu Yuyang, Mazumdar Suvodeep, Bath Peter A

2023-Mar-02

Alzheimer’s disease, Deep learning, MRI, Machine learning, Unsupervised learning

General

OCT2Former: A retinal OCT-angiography vessel segmentation transformer.

In Computer methods and programs in biomedicine

BACKGROUND AND OBJECTIVE: Retinal vessel segmentation plays an important role in automatic retinal disease screening and diagnosis. Segmenting thin vessels and maintaining vessel connectivity are the key challenges of the retinal vessel segmentation task. Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. To make full use of this high resolution, a new end-to-end transformer-based network named OCT2Former (OCT-a Transformer) is proposed to segment retinal vessels accurately in OCTA images.

METHODS: The proposed OCT2Former is based on an encoder-decoder structure consisting mainly of a dynamic transformer encoder and a lightweight decoder. The dynamic transformer encoder comprises a dynamic token aggregation transformer and an auxiliary convolution branch: the dynamic token aggregation transformer, built on multi-head dynamic token aggregation attention, is designed to capture global retinal vessel context from the first layer throughout the network, while the auxiliary convolution branch compensates for the transformer's lack of inductive bias and assists efficient feature extraction. A convolution-based lightweight decoder is proposed to decode features efficiently and reduce the complexity of OCT2Former.
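
To illustrate the overall encoder-decoder idea (not the actual OCT2Former architecture, which uses dynamic token aggregation attention and an auxiliary convolution branch), the PyTorch sketch below pairs a plain patch-embedding transformer encoder with a lightweight convolutional decoder that restores a per-pixel vessel map; all layer sizes are assumptions.

import torch
import torch.nn as nn

class TinyTransformerSeg(nn.Module):
    def __init__(self, in_ch: int = 1, dim: int = 64, patch: int = 8, depth: int = 2, heads: int = 4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)  # patch embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)        # global self-attention
        self.decoder = nn.Sequential(                                        # lightweight conv decoder
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
            nn.Conv2d(dim, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),                                             # per-pixel vessel logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.embed(x)                                  # (B, dim, H/patch, W/patch)
        b, c, h, w = tokens.shape
        seq = self.encoder(tokens.flatten(2).transpose(1, 2))   # tokens as a sequence: (B, N, dim)
        tokens = seq.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(tokens)                             # (B, 1, H, W)

logits = TinyTransformerSeg()(torch.randn(2, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 1, 128, 128])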

RESULTS: The proposed OCT2Former was validated on three publicly available datasets, i.e. OCTA-SS, ROSE-1, and OCTA-500 (subsets OCTA-6M and OCTA-3M). The Jaccard indexes of OCT2Former on these datasets were 0.8344, 0.7855, 0.8099, and 0.8513, respectively, outperforming the best convolution-based network by 1.43%, 1.32%, 0.75%, and 1.46%, respectively.

CONCLUSION: The experimental results demonstrate that the proposed OCT2Former can achieve competitive performance on retinal OCTA vessel segmentation tasks.

Tan Xiao, Chen Xinjian, Meng Qingquan, Shi Fei, Xiang Dehui, Chen Zhongyue, Pan Lingjiao, Zhu Weifang

2023-Mar-05

Deep learning, Dynamic token aggregation, Optical coherence tomography angiography, Retinal vessel segmentation, Transformer