
General

[Artificial intelligence: An introduction for clinicians].

In Revue des maladies respiratoires

Artificial intelligence (AI) is a growing field that has the potential to transform many areas of society, including healthcare. For a physician, it is important to understand the basics of AI and its potential applications in medicine. AI refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as pattern recognition, learning from data, and decision-making. This technology can be used to analyze large amounts of patient data and to identify trends and patterns that are difficult for human physicians to detect, helping doctors manage their workload more efficiently and provide better care for their patients. All in all, AI has the potential to dramatically improve the practice of medicine and patient outcomes. In this work, the definition and key principles of AI are outlined, with particular focus on machine learning, a field undergoing considerable development in medicine, in order to give clinicians an in-depth understanding of the principles underlying the new technologies that promise improved health care.

Briganti G

2023-Mar-07

Machine learning, Deep learning, Data science, Innovation, Medical informatics, Statistics

Pathology

Artificial intelligence in clinical multiparameter flow cytometry and mass cytometry-key tools and progress.

In Seminars in diagnostic pathology

There are many research studies and emerging tools using artificial intelligence (AI) and machine learning to augment flow and mass cytometry workflows. Emerging AI tools can quickly identify common cell populations with continuously improving accuracy, uncover patterns in high-dimensional cytometric data that are undetectable by human analysis, facilitate the discovery of cell subpopulations, perform semi-automated immune cell profiling, and demonstrate potential to automate aspects of the clinical multiparameter flow cytometry (MFC) diagnostic workflow. Utilizing AI in the analysis of cytometry samples can reduce subjective variability and assist in breakthroughs in understanding diseases. Here we review the diverse types of AI being applied to clinical cytometry data and how AI is driving advances in data analysis to improve diagnostic sensitivity and accuracy. We review supervised and unsupervised clustering algorithms for cell population identification, various dimensionality reduction techniques and their utility in visualization and machine learning pipelines, and supervised learning approaches for classifying entire cytometry samples. Understanding the AI landscape will enable pathologists to better utilize open-source and commercially available tools, plan exploratory research projects to characterize diseases, and work with machine learning and data scientists to implement clinical data analysis pipelines.
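A minimal sketch (not from the review) of the two analysis families it covers: unsupervised clustering for cell-population identification and dimensionality reduction for visualization. The 10-marker data below are synthetic stand-ins for cytometry measurements; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Simulate 3 "cell populations" in 10-marker space
centers = rng.normal(0, 5, size=(3, 10))
cells = np.vstack([c + rng.normal(0, 1, size=(500, 10)) for c in centers])

# Unsupervised clustering: recover candidate populations
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cells)

# Dimensionality reduction: project to 2-D for visualization
embedding = PCA(n_components=2).fit_transform(cells)
print(embedding.shape)  # (1500, 2)
```

In practice the same pattern applies with cytometry-specific tools (e.g., UMAP or t-SNE in place of PCA, density-based clustering in place of k-means).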

Fuda Franklin, Chen Mingyi, Chen Weina, Cox Andrew

2023-Mar-05

Artificial intelligence, Flow cytometry, Machine learning, Mass cytometry, Hematology laboratory

General

APOC1 as a novel diagnostic biomarker for DN based on machine learning algorithms and experiment.

In Frontiers in endocrinology ; h5-index 55.0

INTRODUCTION : Diabetic nephropathy is the leading cause of end-stage renal disease, which imposes a huge economic burden on individuals and society, but effective and reliable diagnostic markers are still not available.

METHODS : Differentially expressed genes (DEGs) were characterized and functional enrichment analysis was performed in DN patients. Meanwhile, a weighted gene co-expression network (WGCNA) was also constructed. Furthermore, the Lasso and SVM-RFE algorithms were applied to screen for the core secreted genes in DN. Lastly, WB, IHC, IF, and ELISA experiments were applied to demonstrate the hub gene expression in DN, and the research results were confirmed in mouse models and clinical specimens.
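The two feature-selection steps named in the methods can be sketched as follows, taking the consensus of an L1-penalized (Lasso-style) model and SVM-RFE. The expression matrix and gene names below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))          # 100 samples x 20 candidate genes
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=100) > 0).astype(int)
genes = [f"GENE{i}" for i in range(20)]

# Lasso-style selection: keep genes with nonzero L1 coefficients
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
lasso_hits = {g for g, c in zip(genes, lasso.coef_[0]) if c != 0}

# SVM-RFE: recursively eliminate features using a linear SVM's weights
rfe = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X, y)
svm_hits = {g for g, keep in zip(genes, rfe.support_) if keep}

hub_genes = lasso_hits & svm_hits        # consensus candidate genes
print(sorted(hub_genes))
```

Intersecting the two selections, as the study does, trades recall for robustness: a gene must survive both selection criteria to be called a hub candidate.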

RESULTS : 17 hub secretory genes were identified in this research by analyzing the DEGs, the important module genes in WGCNA, and the secreted genes. Six hub secretory genes (APOC1, CCL21, INHBA, RNASE6, TGFBI, VEGFC) were obtained by the Lasso and SVM-RFE algorithms. APOC1 was discovered to exhibit elevated expression in renal tissue of a DN mouse model, and APOC1 is probably a core secretory gene in DN. Clinical data demonstrate that APOC1 expression is significantly associated with proteinuria and GFR in DN patients. APOC1 expression in the serum of DN patients was 1.358±0.1292 μg/ml, compared to 0.3683±0.08119 μg/ml in the healthy population; APOC1 was significantly elevated in the sera of DN patients and the difference was statistically significant (P < 0.001). The ROC curve of APOC1 in DN gave an AUC = 92.5%, sensitivity = 95%, and specificity = 97% (P < 0.001).
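A brief sketch of how ROC statistics like those reported can be computed from serum measurements: simulate DN and control APOC1 levels around the abstract's group means, then read off AUC and the sensitivity/specificity at the Youden-optimal cutoff. The simulated values are illustrative only, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(2)
dn = rng.normal(1.358, 0.13, 50)       # DN patients, ug/ml (simulated)
ctrl = rng.normal(0.368, 0.08, 50)     # healthy controls, ug/ml (simulated)
y = np.r_[np.ones(50), np.zeros(50)]
scores = np.r_[dn, ctrl]

fpr, tpr, thresholds = roc_curve(y, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Youden's J statistic picks the cutoff maximizing sensitivity + specificity - 1
j = np.argmax(tpr - fpr)
print(f"cutoff = {thresholds[j]:.3f}, sens = {tpr[j]:.2f}, spec = {1 - fpr[j]:.2f}")
```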

CONCLUSIONS : Our research indicates for the first time that APOC1 might be a novel diagnostic biomarker for diabetic nephropathy, and suggests that APOC1 may serve as a candidate intervention target for DN.

Yu Kuipeng, Li Shan, Wang Chunjie, Zhang Yimeng, Li Luyao, Fan Xin, Fang Lin, Li Haiyun, Yang Huimin, Sun Jintang, Yang Xiangdong

2023

APOC1, DN, biomarker, diagnostic, machine learning algorithms

General

Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction.

In International journal of medical informatics ; h5-index 49.0

BACKGROUND : Data drift can negatively impact the performance of machine learning algorithms (MLAs) that were trained on historical data. As such, MLAs should be continuously monitored and tuned to overcome the systematic changes that occur in the distribution of data. In this paper, we study the extent of data drift and provide insights about its characteristics for sepsis onset prediction. This study will help elucidate the nature of data drift for prediction of sepsis and similar diseases. This may aid with the development of more effective patient monitoring systems that can stratify risk for dynamic disease states in hospitals.

METHODS : We devise a series of simulations that measure the effects of data drift in patients with sepsis, using electronic health records (EHR). We simulate multiple scenarios in which data drift may occur, namely the change in the distribution of the predictor variables (covariate shift), the change in the statistical relationship between the predictors and the target (concept shift), and the occurrence of a major healthcare event (major event) such as the COVID-19 pandemic. We measure the impact of data drift on model performances, identify the circumstances that necessitate model retraining, and compare the effects of different retraining methodologies and model architecture on the outcomes. We present the results for two different MLAs, eXtreme Gradient Boosting (XGB) and Recurrent Neural Network (RNN).
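The covariate-shift scenario described above can be sketched as follows: train a gradient-boosted model on "historical" data, shift the predictor distribution while keeping the predictor-target relationship fixed, and compare AUROC for the stale model against one retrained on drifted data. scikit-learn's `GradientBoostingClassifier` stands in for XGBoost here, and all data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def make_data(n, shift=0.0):
    """Simulate patients; `shift` moves the first predictor's distribution."""
    X = rng.normal(size=(n, 5))
    X[:, 0] += shift
    # Fixed predictor-target relationship (covariate shift, not concept shift)
    p = 1 / (1 + np.exp(-(X[:, 0] - X[:, 1])))
    return X, (rng.random(n) < p).astype(int)

X_hist, y_hist = make_data(2000)              # historical cohort
X_new, y_new = make_data(2000, shift=1.5)     # drifted cohort

stale = GradientBoostingClassifier(random_state=0).fit(X_hist, y_hist)
retrained = GradientBoostingClassifier(random_state=0).fit(X_new, y_new)

X_test, y_test = make_data(2000, shift=1.5)
print("stale    :", roc_auc_score(y_test, stale.predict_proba(X_test)[:, 1]))
print("retrained:", roc_auc_score(y_test, retrained.predict_proba(X_test)[:, 1]))
```

Tree-based models extrapolate poorly outside their training range, which is one reason the stale model's discrimination can degrade under covariate shift even when the underlying relationship is unchanged.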

RESULTS : Our results show that the properly retrained XGB models outperform the baseline models in all simulation scenarios, hence signifying the existence of data drift. In the major event scenario, the area under the receiver operating characteristic curve (AUROC) at the end of the simulation period is 0.811 for the baseline XGB model and 0.868 for the retrained XGB model. In the covariate shift scenario, the AUROC at the end of the simulation period for the baseline and retrained XGB models is 0.853 and 0.874 respectively. In the concept shift scenario and under the mixed labeling method, the retrained XGB models perform worse than the baseline model for most simulation steps. However, under the full relabeling method, the AUROC at the end of the simulation period for the baseline and retrained XGB models is 0.852 and 0.877 respectively. The results for the RNN models were mixed, suggesting that retraining based on a fixed network architecture may be inadequate for an RNN. We also present the results in the form of other performance metrics such as the ratio of observed to expected probabilities (calibration) and the normalized rate of positive predictive values (PPV) by prevalence, referred to as lift, at a sensitivity of 0.8.
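The "lift" metric mentioned above (PPV normalized by prevalence, evaluated at a fixed sensitivity) can be sketched as follows. The scores, labels, and 10% prevalence are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
y = (rng.random(5000) < 0.1).astype(int)   # ~10% sepsis prevalence (assumed)
scores = rng.normal(loc=y, scale=1.0)      # positives score higher on average

# Threshold achieving a sensitivity (recall) of 0.8 on the positive class
thresh = np.quantile(scores[y == 1], 1 - 0.8)
pred = scores >= thresh

ppv = (pred & (y == 1)).sum() / pred.sum()
prevalence = y.mean()
print(f"PPV = {ppv:.3f}, lift = {ppv / prevalence:.2f}")
```

A lift above 1 means the model's positive predictions are enriched for true cases relative to alerting on patients at random.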

CONCLUSION : Our simulations reveal that retraining periods of a couple of months or using several thousand patients are likely to be adequate to monitor machine learning models that predict sepsis. This indicates that a machine learning system for sepsis prediction will probably need less infrastructure for performance monitoring and retraining compared to other applications in which data drift is more frequent and continuous. Our results also show that in the event of a concept shift, a full overhaul of the sepsis prediction model may be necessary because it indicates a discrete change in the definition of sepsis labels, and mixing the labels for the sake of incremental training may not produce the desired results.

Rahmani Keyvan, Thapa Rahul, Tsou Peiling, Casie Chetty Satish, Barnes Gina, Lam Carson, Foon Tso Chak

2022-Nov-19

Clinical decision support, Data drift, Machine learning, Sepsis

General

Predicting Corrosion Damage in the Human Body Using Artificial Intelligence: In Vitro Progress and Future Applications.

In The Orthopedic clinics of North America

Artificial intelligence (AI) is used in the clinic to improve patient care. While these successes illustrate AI's impact, few studies have led to improved clinical outcomes. In this review, we focus on how AI models implemented in nonorthopedic fields of corrosion science may apply to the study of orthopedic alloys. We first define and introduce fundamental AI concepts and models, as well as physiologically relevant corrosion damage modes. We then systematically review the corrosion/AI literature. Finally, we identify several AI models that may be implemented to study fretting, crevice, and pitting corrosion of titanium and cobalt-chrome alloys.

Kurtz Michael A, Yang Ruoyu, Elapolu Mohan S R, Wessinger Audrey C, Nelson William, Alaniz Kazzandra, Rai Rahul, Gilbert Jeremy L

2023-Apr

Artificial intelligence, Corrosion, Fretting, Machine learning, Neural network, Orthopedic biomaterials, Pitting, Support vector machine, Systematic review

General

Principles and Validations of an Artificial Intelligence-Based Recommender System Suggesting Acceptable Food Changes.

In The Journal of nutrition ; h5-index 61.0

BACKGROUND : Along with the popularity of smartphones, artificial intelligence-based personalized suggestions can be seen as promising ways to change eating habits toward more desirable diets.

OBJECTIVES : Two issues raised by such technologies were addressed in this study. The first hypothesis tested is that a recommender system based on automatically learning simple association rules between dishes of the same meal makes it possible to identify substitutions that are plausible to the consumer. The second hypothesis tested is that, for an identical set of dietary-swap suggestions, the more the user is, or believes themselves to be, involved in the process of identifying the suggestion, the higher their probability of accepting it.

METHODS : Three studies are presented in this article. First, we present the principles of an algorithm to mine plausible substitutions from a large food consumption database. Second, we evaluate the plausibility of these automatically mined suggestions through the results of online tests conducted with a group of 255 adult participants. Finally, we investigated the persuasiveness of 3 methods of presenting such recommendations in a population of 27 healthy adult volunteers, through a custom-designed smartphone application.
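A toy sketch of the first study's idea: learn simple association rules between dishes of the same meal from a consumption log, then propose a substitution whose rule confidence with the rest of the meal is highest. The meal log and dish names are invented for illustration; the actual algorithm is described in the paper.

```python
from collections import Counter
from itertools import combinations

meals = [
    {"steak", "fries", "soda"},
    {"steak", "fries", "water"},
    {"fish", "rice", "water"},
    {"fish", "rice", "tea"},
    {"steak", "rice", "water"},
]

# Count single-dish occurrences and within-meal co-occurrences
pair_counts = Counter()
item_counts = Counter()
for meal in meals:
    item_counts.update(meal)
    pair_counts.update(frozenset(p) for p in combinations(sorted(meal), 2))

def confidence(a, b):
    """P(b in meal | a in meal): strength of the association rule a -> b."""
    return pair_counts[frozenset((a, b))] / item_counts[a]

# A substitution for "soda" is the drink that best fits meals containing "steak"
candidates = ["water", "tea"]
best = max(candidates, key=lambda d: confidence("steak", d))
print(best)  # "water": it co-occurs with steak more often than tea does
```

Ranking candidate swaps by rule confidence is what lets the system respect the consumption context (the rest of the meal) rather than suggesting substitutes in isolation.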

RESULTS : The results first indicated that a method based on automatic learning of substitution rules between foods performed relatively well at identifying plausible swap suggestions. Regarding the form the suggestion should take, we found that when users are involved in selecting the most appropriate recommendation, the resulting suggestions were more readily accepted (OR = 3.168; P < 0.0004).

CONCLUSIONS : This work indicates that food recommendation algorithms can gain efficiency by taking into account the consumption context and user engagement in the recommendation process. Further research is warranted to identify nutritionally relevant suggestions.

Vandeputte Jules, Herold Pierrick, Kuslii Mykyt, Viappiani Paolo, Muller Laurent, Martin Christine, Davidenko Olga, Delaere Fabien, Manfredotti Cristina, Cornuéjols Antoine, Darcel Nicolas

2023-Feb

artificial intelligence, behavior change, decision sciences, food recommendation algorithms, healthy diets