
Pathology

Uncovering spatiotemporal patterns of atrophy in progressive supranuclear palsy using unsupervised machine learning.

In Brain communications

To better understand the pathological and phenotypic heterogeneity of progressive supranuclear palsy and the links between the two, we applied a novel unsupervised machine learning algorithm (Subtype and Stage Inference) to the largest MRI data set to date of people with clinically diagnosed progressive supranuclear palsy (including progressive supranuclear palsy-Richardson and variant progressive supranuclear palsy syndromes). Our cohort comprised 426 progressive supranuclear palsy cases, of which 367 had at least one follow-up scan, and 290 controls. Of the progressive supranuclear palsy cases, 357 were clinically diagnosed with progressive supranuclear palsy-Richardson, 52 with a progressive supranuclear palsy-cortical variant (progressive supranuclear palsy-frontal, progressive supranuclear palsy-speech/language, or progressive supranuclear palsy-corticobasal), and 17 with a progressive supranuclear palsy-subcortical variant (progressive supranuclear palsy-parkinsonism or progressive supranuclear palsy-progressive gait freezing). Subtype and Stage Inference was applied to volumetric MRI features extracted from baseline structural (T1-weighted) MRI scans and then used to subtype and stage follow-up scans. The subtypes and stages at follow-up were used to validate the longitudinal consistency of subtype and stage assignments. We further compared the clinical phenotypes of each subtype to gain insight into the relationship between progressive supranuclear palsy pathology, atrophy patterns, and clinical presentation. The data supported two subtypes, each with a distinct progression of atrophy: a 'subcortical' subtype, in which early atrophy was most prominent in the brainstem, ventral diencephalon, superior cerebellar peduncles, and the dentate nucleus, and a 'cortical' subtype, in which there was early atrophy in the frontal lobes and the insula alongside brainstem atrophy.
There was a strong association between clinical diagnosis and the Subtype and Stage Inference subtype: 82% of progressive supranuclear palsy-subcortical cases and 81% of progressive supranuclear palsy-Richardson cases were assigned to the subcortical subtype, and 82% of progressive supranuclear palsy-cortical cases were assigned to the cortical subtype. Increasing stage was associated with worsening clinical scores, whilst the 'subcortical' subtype was associated with worse clinical severity scores (progressive supranuclear palsy rating scale and Unified Parkinson's Disease Rating Scale) than the 'cortical' subtype. Validation experiments showed that subtype assignment was longitudinally stable (95% of scans were assigned to the same subtype at follow-up) and that individual staging was longitudinally consistent, with 90% of cases remaining at the same stage or progressing to a later stage at follow-up. In summary, we applied Subtype and Stage Inference to structural MRI data and empirically identified two distinct subtypes of spatiotemporal atrophy in progressive supranuclear palsy. These image-based subtypes were differentially enriched for progressive supranuclear palsy clinical syndromes and showed different clinical characteristics. Being able to accurately subtype and stage progressive supranuclear palsy patients at baseline has important implications for screening patients on entry to clinical trials, as well as for tracking disease progression.
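
The longitudinal staging validation described above reduces to a simple check: what fraction of cases are at the same stage or a later stage at follow-up? A minimal sketch (the function name and toy data are hypothetical, not from the paper):

```python
# Sketch: longitudinal consistency of stage assignments, in the spirit of
# the paper's validation. Names and data below are illustrative only.

def stage_consistency(baseline_stages, followup_stages):
    """Fraction of cases whose follow-up stage is the same as or later
    than their baseline stage (i.e. non-regressing)."""
    pairs = list(zip(baseline_stages, followup_stages))
    consistent = sum(1 for b, f in pairs if f >= b)
    return consistent / len(pairs)

# Toy example: 4 of 5 cases stay at the same stage or progress.
baseline = [2, 5, 3, 7, 4]
followup = [2, 6, 3, 6, 5]
print(stage_consistency(baseline, followup))  # 0.8
```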

Scotton William J, Shand Cameron, Todd Emily, Bocchetta Martina, Cash David M, VandeVrede Lawren, Heuer Hilary, Young Alexandra L, Oxtoby Neil, Alexander Daniel C, Rowe James B, Morris Huw R, Boxer Adam L, Rohrer Jonathan D, Wijeratne Peter A

2023

Subtype and Stage Inference, biomarkers, disease progression, machine learning, progressive supranuclear palsy

Ophthalmology

CMS-NET: deep learning algorithm to segment and quantify the ciliary muscle in swept-source optical coherence tomography images.

In Therapeutic advances in chronic disease

BACKGROUND : The ciliary muscle plays a role in changing the shape of the crystalline lens to maintain a clear retinal image during near work. Studying the dynamic changes of the ciliary muscle during accommodation is necessary for understanding the mechanism of presbyopia. Optical coherence tomography (OCT) has been frequently used to image the ciliary muscle and its changes during accommodation in vivo. However, the segmentation process is cumbersome and time-consuming due to the large image data sets and the impact of low imaging quality.

OBJECTIVES : This study aimed to establish a fully automatic method for segmenting and quantifying the ciliary muscle from optical coherence tomography (OCT) images.

DESIGN : A prospective cross-sectional study.

METHODS : In this study, 3500 annotated images were used to develop a deep learning system. A novel deep learning algorithm combining the widely used U-net with a full-resolution residual network was created to realize automatic segmentation and quantification of the ciliary muscle. Finally, the algorithm-predicted results and manual annotations were compared.

RESULTS : For segmentation performed by the system, the total mean pixel value difference (PVD) was 1.12, and the Dice coefficient, intersection over union (IoU), and sensitivity values were 93.8%, 88.7%, and 93.9%, respectively. The performance of the system was comparable with that of experienced specialists. The system could also successfully segment ciliary muscle images and quantify ciliary muscle thickness changes during accommodation.
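
The overlap metrics reported above (Dice coefficient and IoU) have standard definitions on binary segmentation masks. A minimal sketch with toy arrays (not the paper's data):

```python
import numpy as np

# Standard overlap metrics for binary segmentation masks, as reported
# in the RESULTS section; the masks below are toy examples.

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def iou(pred, truth):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 1, 0], [0, 0, 1]], dtype=bool)
print(dice(pred, truth))  # 2*2/(3+3) ≈ 0.667
print(iou(pred, truth))   # 2/4 = 0.5
```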

CONCLUSION : We developed an automatic segmentation framework for the ciliary muscle that can be used to analyze the morphological parameters of the ciliary muscle and its dynamic changes during accommodation.

Chen Wen, Yu Xiangle, Ye Yiru, Gao Hebei, Cao Xinyuan, Lin Guangqing, Zhang Riyan, Li Zixuan, Wang Xinmin, Zhou Yuheng, Shen Meixiao, Shao Yilei

2023

accommodation, ciliary muscle, deep learning, optical coherence tomography (OCT), presbyopia

General

Automated ICD coding for coronary heart diseases by a deep learning method.

In Heliyon

Automated ICD coding via machine learning that focuses on specific diseases has been a hot topic. As one of the leading causes of death, coronary heart diseases (CHD) have seldom been specifically studied in related research, probably due to a lack of data concretely targeting these diseases. Based on Fuwai-CHD and MIMIC-III-CHD, which are a private dataset from Fuwai Hospital and the CHD-related subset of the public MIMIC-III dataset respectively, this study aimed at automated CHD coding by a deep learning method that mainly consists of three modules. The first is a BERT variant module responsible for encoding clinical text. In this module, we fine-tuned BERT variants with a masked language model objective on clinical text, and proposed a truncation method to tackle the problem that BERT variants generally cannot handle sequences containing more than 512 tokens. The second is a word2vec module for encoding code titles, and the third is a label-attention module for integrating the embeddings of clinical text and code titles. We named the method BW_att. We compared BW_att against several widely studied baselines and found that it performed best in most of the coding missions. Specifically, BW_att reached a Macro-F1 of 96.2% and a Macro-AUC of 98.9% for the top-100 most frequent codes in Fuwai-CHD, which covered 89.2% of the total code occurrences. When predicting the top-50 most frequent codes in MIMIC-III-CHD, BW_att reached a Macro-F1 of 40.5% and a Macro-AUC of 66.1%. Moreover, BW_att was capable of locating informative tokens in clinical text for predicting the target codes. In summary, BW_att can not only suggest CHD codes accurately but also offers robust interpretability, and hence has great potential for facilitating CHD coding in practice.
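
The abstract does not specify the exact truncation method, but a common way to work around BERT's 512-token limit is to split long token sequences into overlapping windows and pool the per-window embeddings afterwards. A hedged sketch of that general idea (the function and parameters below are illustrative, not the paper's method):

```python
# One common approach to sequences longer than BERT's 512-token limit:
# split the token sequence into overlapping fixed-size windows, encode
# each window separately, then pool. Illustrative sketch only.

def split_into_windows(tokens, max_len=512, stride=256):
    """Split a token list into windows of at most max_len tokens,
    each window starting `stride` tokens after the previous one."""
    if len(tokens) <= max_len:
        return [tokens]
    windows = []
    for start in range(0, len(tokens), stride):
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return windows

# A 1000-token document becomes three overlapping 512/512/488-token windows.
toks = list(range(1000))
print([len(w) for w in split_into_windows(toks)])  # [512, 512, 488]
```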

Zhao Shuai, Diao Xiaolin, Xia Yun, Huo Yanni, Cui Meng, Wang Yuxin, Yuan Jing, Zhao Wei

2023-Mar

BERT, Coronary heart diseases, Deep learning, ICD coding, Interpretability

Surgery

Interactions between silica and titanium nanoparticles and oral and gastrointestinal epithelia: Consequences for inflammatory diseases and cancer.

In Heliyon

Engineered nanoparticles (NPs) composed of elements such as silica and titanium, smaller than 100 nm in diameter, and their aggregates are found in consumer products such as cosmetics, food, antimicrobials and drug delivery systems, and oral health products such as toothpaste and dental materials. They may also interact accidentally with epithelial tissues in the intestines and oral cavity, where they can aggregate into larger particles and induce inflammation through pathways such as inflammasome activation. Persistent inflammation can lead to precancerous lesions. Both the particles and lesions are difficult to detect in biopsies, especially in clinical settings that screen large numbers of patients. As diagnosis of early stages of disease can be lifesaving, there is growing interest in better understanding interactions between NPs and epithelium and in developing rapid imaging techniques that could detect foreign particles and markers of inflammation in epithelial tissues. NPs can be labelled with fluorescence or radioactive isotopes, but it is challenging to detect unlabeled NPs with conventional imaging techniques. Current imaging techniques, such as synchrotron radiation X-ray fluorescence spectroscopy, are discussed here. Improvements in imaging techniques, coupled with the use of machine learning tools, are needed before diagnosis of particles in biopsies by automated imaging could move usefully into the clinic.

Coutinho Almeida-da-Silva Cássio Luiz, Cabido Leticia Ferreira, Chin Wei-Chun, Wang Ge, Ojcius David M, Li Changqing

2023-Mar

Cancer, Cytotoxicity, DAMP, damage-associated molecular pattern, Epithelium, FBG, foreign body gingivitis, Imaging, Inflammasome, Inflammation, NP, nanoparticle, Nanoparticles, PAMP, pathogen-associated molecular pattern, ROS, reactive oxygen species

General

COVID-19 and pneumonia diagnosis from chest X-ray images using convolutional neural networks.

In Network modeling and analysis in health informatics and bioinformatics

X-ray is a useful imaging modality widely utilized for diagnosing COVID-19, which has infected a large number of people around the world. Manual examination of these X-ray images can be problematic, especially when there is a lack of medical staff. Deep learning models are known to be helpful for automated diagnosis of COVID-19 from X-ray images. However, widely used convolutional neural network architectures typically have many layers, making them computationally expensive. To address these problems, this study aims to design a lightweight differential diagnosis model based on convolutional neural networks. The proposed model is designed to classify X-ray images into one of four classes: Healthy, COVID-19, viral pneumonia, and bacterial pneumonia. To evaluate model performance, accuracy, precision, recall, and F1-score were calculated. The performance of the proposed model was compared with that obtained by applying transfer learning to widely used convolutional neural network models. The results showed that the proposed model, with its low number of computational layers, outperforms the pre-trained benchmark models, achieving an accuracy of 89.89%, while the best pre-trained model (EfficientNet-B2) achieved an accuracy of 85.7%. In conclusion, the proposed lightweight model achieved the best overall result in classifying lung diseases, allowing it to be used on devices with limited computational power. On the other hand, all the models showed poor precision on the viral pneumonia class and confusion in distinguishing it from the bacterial pneumonia class, leading to a decrease in overall accuracy.
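
The evaluation metrics named above (accuracy, per-class precision, recall, F1) all follow from a 4-class confusion matrix. A minimal sketch with a toy matrix (the numbers are invented to mirror the reported viral-vs-bacterial confusion, not the paper's results):

```python
import numpy as np

# Standard multi-class metrics from a confusion matrix
# (rows = true class, columns = predicted class). Toy data only.

def per_class_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # per predicted class
    recall = tp / cm.sum(axis=1)      # per true class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Classes: Healthy, COVID-19, viral pneumonia, bacterial pneumonia.
cm = [[40, 1, 2, 2],
      [2, 42, 0, 1],
      [1, 0, 30, 14],   # viral pneumonia often confused with bacterial
      [1, 0, 10, 34]]
acc, prec, rec, f1 = per_class_metrics(cm)
print(round(acc, 3))  # 0.811
```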

Hariri Muhab, Avşar Ercan

2023

COVID-19, Classification, Convolutional neural networks, Deep learning, Lung diseases, Transfer learning

General

An accessible and versatile deep learning-based sleep stage classifier.

In Frontiers in neuroinformatics

Manual sleep scoring for research purposes and for the diagnosis of sleep disorders is labor-intensive and often varies significantly between scorers, which has motivated many attempts to design automatic sleep stage classifiers. With the recent introduction of large, publicly available hand-scored polysomnographic data sets, and concomitant advances in machine learning methods for solving complex classification problems with supervised learning, the problem has received new attention, and a number of new classifiers now provide excellent accuracy. Most of these, however, have non-trivial barriers to use. We introduce the Greifswald Sleep Stage Classifier (GSSC), which is free, open source, and can be relatively easily installed and used on any moderately powered computer. In addition, the GSSC has been trained to perform well on a large variety of electrode set-ups, allowing high-performance sleep staging with portable systems. The GSSC can also be readily integrated into brain-computer interfaces for real-time inference. These innovations were achieved while simultaneously reaching a level of accuracy equal to, or exceeding, recent state-of-the-art classifiers and human experts, making the GSSC an excellent choice for researchers in need of reliable, automatic sleep staging.

Hanna Jevri, Flöel Agnes

2023

EEG, classification, deep learning, machine learning, sleep