
Public Health

A Machine Learning Prediction Model for Immediate Graft Function After Deceased Donor Kidney Transplantation.

In Transplantation; h5-index 56.0

BACKGROUND : After kidney transplantation (KTx), the graft can evolve from excellent immediate graft function (IGF) to total absence of function requiring dialysis. Recipients with IGF do not seem to derive long-term benefit from machine perfusion, an expensive procedure, compared with cold storage. This study aims to develop a prediction model for IGF in deceased donor KTx recipients using machine learning algorithms.

METHODS : Unsensitized recipients who received their first deceased donor KTx between January 1, 2010, and December 31, 2019, were classified according to the course of renal function after transplantation. Variables related to the donor, recipient, kidney preservation, and immunology were used. The patients were randomly divided into 2 groups: 70% were assigned to the training group and 30% to the test group. Popular machine learning algorithms were used: eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine, Gradient Boosting classifier, Logistic Regression, CatBoost classifier, AdaBoost classifier, and Random Forest classifier. Comparative performance analysis on the test dataset was performed using AUC, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score.
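
As a rough illustration of how such a benchmark is typically set up (this is not the authors' code), the sketch below compares several off-the-shelf classifiers on a stratified 70/30 split and reports the same metrics; synthetic data stands in for the real donor, recipient, preservation, and immunology variables, and LightGBM/CatBoost would be added analogously.

```python
# Illustrative sketch, not the authors' code: benchmark several classifiers on a
# stratified 70/30 split and report the metrics used in the abstract. Synthetic
# data stands in for the real donor/recipient/preservation/immunology variables.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              AdaBoostClassifier)
from xgboost import XGBClassifier  # LightGBM/CatBoost would be added analogously

X, y = make_classification(n_samples=859, n_features=20, weights=[0.78],
                           random_state=0)  # placeholder for the real dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

models = {
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "GradientBoosting": GradientBoostingClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "RandomForest": RandomForestClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    pred = (proba >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print(f"{name}: AUC={roc_auc_score(y_test, proba):.2f} "
          f"Sens={tp / (tp + fn):.2f} Spec={tn / (tn + fp):.2f} "
          f"PPV={tp / (tp + fp):.2f} NPV={tn / (tn + fn):.2f} "
          f"F1={f1_score(y_test, pred):.2f}")
```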

RESULTS : Of the 859 patients, 21.7% (n = 186) had IGF. The best predictive performance resulted from the eXtreme Gradient Boosting model (AUC, 0.78; 95% CI, 0.71-0.84; sensitivity, 0.64; specificity, 0.78). Five variables with the highest predictive value were identified.

CONCLUSIONS : Our results indicate that a model for predicting IGF is feasible, which could improve the selection of patients who would benefit from an expensive treatment such as machine perfusion preservation.

Quinino Raquel M, Agena Fabiana, Modelli de Andrade Luis Gustavo, Furtado Mariane, Chiavegatto Filho Alexandre D P, David-Neto Elias

2023-Mar-06

General

Coronavirus diagnosis using cough sounds: Artificial intelligence approaches.

In Frontiers in artificial intelligence

INTRODUCTION : The Coronavirus disease 2019 (COVID-19) pandemic has caused irreparable damage to the world. To prevent further spread of the infection, it is necessary to identify infected people for quarantine and treatment. The use of artificial intelligence and data mining approaches can help prevent infection and reduce treatment costs. The purpose of this study is to create data mining models to diagnose COVID-19 from the sound of coughing.

METHOD : In this research, supervised classification algorithms were used, including Support Vector Machine (SVM), random forest, and artificial neural networks; the neural networks comprised a standard fully connected network, Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) recurrent neural networks. The data used in this research came from the online site sorfeh.com/sendcough/en, which collected cough recordings during the spread of COVID-19.
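
As a minimal, hypothetical sketch of the deep-learning side of such a pipeline (not the study's actual model), the PyTorch snippet below defines a small CNN that classifies cough recordings represented as log-mel spectrograms; the input shape, layer sizes, and two-class output are illustrative assumptions.

```python
# Minimal sketch (not the study's model): a small CNN classifying cough clips
# represented as log-mel spectrograms into COVID-positive vs. negative.
# Input shape and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CoughCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mels, time_frames)
        return self.classifier(self.features(x).flatten(1))

# Example forward pass: a batch of 8 spectrograms with 64 mel bands, 128 frames.
logits = CoughCNN()(torch.randn(8, 1, 64, 128))
```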

RESULT : With the data collected (about 40,000 people), the different networks reached acceptable accuracies.

CONCLUSION : These findings demonstrate the reliability of this method for developing a screening and early-diagnosis tool for COVID-19. Acceptable results can be expected even with simple artificial intelligence networks: based on the findings, the average accuracy was 83% and the best model reached 95%.

Askari Nasab Kazem, Mirzaei Jamal, Zali Alireza, Gholizadeh Sarfenaz, Akhlaghdoust Meisam

2023

artificial intelligence, coronavirus, cough, deep learning, machine learning, respiratory sounds

Pathology

Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI

ArXiv Preprint

The use of supervised deep learning techniques to detect pathologies in brain MRI scans can be challenging due to the diversity of brain anatomy and the need for annotated data sets. An alternative approach is to use unsupervised anomaly detection, which only requires sample-level labels of healthy brains to create a reference representation. This reference representation can then be compared to unhealthy brain anatomy in a pixel-wise manner to identify abnormalities. To accomplish this, generative models are needed to create anatomically consistent MRI scans of healthy brains. While recent diffusion models have shown promise in this task, accurately generating the complex structure of the human brain remains a challenge. In this paper, we propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy, using spatial context to guide and improve reconstruction. We evaluate our approach on data containing tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared with existing baselines.
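
The pixel-wise comparison step described above can be sketched as follows; `reconstruct_healthy` is a hypothetical stand-in for the patched diffusion model, and the residual threshold is an arbitrary illustrative value rather than a detail from the paper.

```python
# Illustrative sketch of the pixel-wise comparison step in reconstruction-based
# unsupervised anomaly detection. `reconstruct_healthy` is a hypothetical
# stand-in for the patched diffusion model; the threshold is arbitrary.
import numpy as np

def anomaly_map(image: np.ndarray, reconstruct_healthy) -> np.ndarray:
    """image: 2D MRI slice scaled to [0, 1]; returns a pixel-wise residual map."""
    reconstruction = reconstruct_healthy(image)   # pseudo-healthy estimate
    return np.abs(image - reconstruction)         # large values = likely anomaly

def segment_anomalies(residual: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Binarize the residual map into an anomaly mask."""
    return (residual > threshold).astype(np.uint8)

# Example with a trivial placeholder "model" that just dims the input slice.
if __name__ == "__main__":
    slice_ = np.random.rand(128, 128)
    mask = segment_anomalies(anomaly_map(slice_, lambda x: x * 0.9))
```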

Finn Behrendt, Debayan Bhattacharya, Julia Krüger, Roland Opfer, Alexander Schlaefer

2023-03-07

General

Application of deep learning in recognition of accrued earnings management.

In Heliyon

We use sample data from the Chinese capital market to compare the performance of a Deep Belief Network, a Deep Convolutional Generative Adversarial Network, a Generalized Regression Neural Network, and the modified Jones model in measuring earnings management. We find that the Deep Belief Network performs best, the Deep Convolutional Generative Adversarial Network offers no significant advantage, and the Generalized Regression Neural Network and the modified Jones model differ little in measurement performance. This paper provides empirical evidence that neural networks based on deep learning and other artificial intelligence technologies can be widely applied to measure earnings management in the future.
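
For context, the modified Jones model baseline estimates discretionary accruals as the residuals of a cross-sectional regression; the sketch below shows one common formulation, with DataFrame column names that are assumptions rather than details from the paper.

```python
# Hedged sketch of the modified Jones model baseline: discretionary accruals are
# the residuals of an OLS regression of scaled total accruals on scaled revenue
# changes (net of receivables) and PPE. Column names are assumptions.
import pandas as pd
import statsmodels.api as sm

def discretionary_accruals(df: pd.DataFrame) -> pd.Series:
    """df columns (assumed): total_accruals, lag_assets, d_rev, d_rec, ppe."""
    X = pd.DataFrame({
        "inv_assets": 1.0 / df["lag_assets"],
        "scaled_drev": (df["d_rev"] - df["d_rec"]) / df["lag_assets"],
        "scaled_ppe": df["ppe"] / df["lag_assets"],
    })
    y = df["total_accruals"] / df["lag_assets"]
    model = sm.OLS(y, sm.add_constant(X)).fit()
    return model.resid  # larger absolute residuals suggest more earnings management
```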

Li Jia, Sun Zhoutianyang

2023-Mar

Artificial intelligence, Deep belief network, Deep learning, Earnings management, Jones model

Surgery

Benchmarking performance of an automatic polysomnography scoring system in a population with suspected sleep disorders.

In Frontiers in neurology

AIM : The current gold standard for measuring sleep disorders is polysomnography (PSG), which is manually scored by a sleep technologist. Scoring a PSG is time-consuming and tedious, with substantial inter-rater variability. A deep-learning-based sleep analysis software module can perform autoscoring of PSG. The primary objective of the study is to validate the accuracy and reliability of the autoscoring software. The secondary objective is to measure workflow improvements in terms of time and cost via a time motion study.

METHODOLOGY : The performance of automatic PSG scoring software was benchmarked against that of two independent sleep technologists on PSG data collected from patients with suspected sleep disorders. Technologists at the hospital clinic and at a third-party scoring company scored the PSG records independently, and their scores were then compared with those of the automatic scoring system. An observational study was also performed in which the time taken by sleep technologists at the hospital clinic to manually score PSGs was tracked, along with the time taken by the automatic scoring software, to assess potential time savings.

RESULTS : Pearson's correlation between the manually scored apnea-hypopnea index (AHI) and the automatically scored AHI was 0.962, demonstrating a near-perfect agreement. The autoscoring system demonstrated similar results in sleep staging. The agreement between automatic staging and manual scoring was higher in terms of accuracy and Cohen's kappa than the agreement between experts. The autoscoring system took an average of 42.7 s to score each record compared with 4,243 s for manual scoring. Following a manual review of the auto scores, an average time savings of 38.6 min per PSG was observed, amounting to 0.25 full-time equivalent (FTE) savings per year.
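
The agreement statistics reported above can be computed with standard tools; the sketch below assumes per-record manual and automatic AHI values and per-epoch sleep-stage labels are already available, and is offered only as an illustration of the metrics, not as the study's analysis code.

```python
# Sketch of the agreement metrics reported above (not the study's analysis code),
# assuming per-record AHI values and per-epoch sleep-stage labels are available.
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score, cohen_kappa_score

def ahi_agreement(manual_ahi, auto_ahi):
    """Pearson correlation between manually and automatically scored AHI."""
    r, _ = pearsonr(manual_ahi, auto_ahi)   # the study reports r = 0.962
    return r

def staging_agreement(manual_stages, auto_stages):
    """Epoch-by-epoch accuracy and Cohen's kappa for hypnogram labels."""
    return (accuracy_score(manual_stages, auto_stages),
            cohen_kappa_score(manual_stages, auto_stages))
```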

CONCLUSION : The findings indicate a potential for a reduction in the burden of manual scoring of PSGs by sleep technologists and may be of operational significance for sleep laboratories in the healthcare setting.

Choo Bryan Peide, Mok Yingjuan, Oh Hong Choon, Patanaik Amiya, Kishan Kishan, Awasthi Animesh, Biju Siddharth, Bhattacharjee Soumya, Poh Yvonne, Wong Hang Siang

2023

AI sleep scoring, automatic sleep scoring, machine learning, sleep staging, sleep-disordered breathing

General

mRisk: Continuous Risk Estimation for Smoking Lapse from Noisy Sensor Data with Incomplete and Positive-Only Labels.

In Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies

Passive detection of risk factors (that may influence unhealthy or adverse behaviors) via wearable and mobile sensors has created new opportunities to improve the effectiveness of behavioral interventions. A key goal is to find opportune moments for intervention by passively detecting rising risk of an imminent adverse behavior. However, this has been difficult due to substantial noise in the data collected by sensors in the natural environment and a lack of reliable label assignment of low- and high-risk states to the continuous stream of sensor data. In this paper, we propose an event-based encoding of sensor data to reduce the effect of noise and then present an approach to efficiently model the historical influence of recent and past sensor-derived contexts on the likelihood of an adverse behavior. Next, to circumvent the lack of any confirmed negative labels (i.e., time periods with no high-risk moment) and the availability of only a few positive labels (i.e., detected adverse behaviors), we propose a new loss function. We use 1,012 days of sensor and self-report data collected from 92 participants in a smoking cessation field study to train deep learning models that produce a continuous risk estimate for the likelihood of an impending smoking lapse. The risk dynamics produced by the model show that risk peaks an average of 44 minutes before a lapse. Simulations on field study data show that using our model can create intervention opportunities for 85% of lapses, with 5.5 interventions per day.
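
The paper's loss function for learning from positive-only labels is not reproduced here; as one generic way to handle that label setting, the sketch below implements a standard non-negative positive-unlabeled (nnPU) risk estimator, with the class prior and tensor names as illustrative assumptions.

```python
# The authors' loss is not reproduced here; this is a standard non-negative
# positive-unlabeled (nnPU) risk estimator (Kiryo et al., 2017), shown as one
# generic way to train with positive-only labels. The class prior is assumed.
import torch
import torch.nn.functional as F

def nnpu_loss(scores_pos, scores_unl, prior=0.1):
    """scores_pos: logits on positive (lapse) windows;
    scores_unl: logits on unlabeled windows; prior: assumed positive-class rate."""
    risk_pos = F.binary_cross_entropy_with_logits(
        scores_pos, torch.ones_like(scores_pos))    # positives treated as label 1
    risk_pos_as_neg = F.binary_cross_entropy_with_logits(
        scores_pos, torch.zeros_like(scores_pos))   # positives scored against label 0
    risk_unl_as_neg = F.binary_cross_entropy_with_logits(
        scores_unl, torch.zeros_like(scores_unl))   # unlabeled scored against label 0
    negative_risk = risk_unl_as_neg - prior * risk_pos_as_neg
    return prior * risk_pos + torch.clamp(negative_risk, min=0.0)
```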

Ullah Md Azim, Chatterjee Soujanya, Fagundes Christopher P, Lam Cho, Nahum-Shani Inbal, Rehg James M, Wetter David W, Kumar Santosh

2022-Sep

Behavioral Intervention, Human-centered computing, Risk prediction, Smoking Cessation, Ubiquitous and mobile computing design and evaluation methods, Wearable Sensors, mHealth