Public Health

Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios.

In Journal of Medical Systems; h5-index 48.0

This paper aims to highlight the potential applications and limitations of a large language model (LLM) in healthcare. ChatGPT is a recently developed LLM that was trained on a massive dataset of text for dialogue with users. Although AI-based language models like ChatGPT have demonstrated impressive capabilities, it is uncertain how well they will perform in real-world scenarios, particularly in fields such as medicine where high-level and complex thinking is necessary. Furthermore, while the use of ChatGPT in writing scientific articles and other scientific outputs may have potential benefits, important ethical concerns must also be addressed. Consequently, we investigated the feasibility of ChatGPT in clinical and research scenarios: (1) support of clinical practice, (2) scientific production, (3) misuse in medicine and research, and (4) reasoning about public health topics. Results indicated that it is important to recognize and promote education on the appropriate use and potential pitfalls of AI-based LLMs in medicine.

Cascella Marco, Montomoli Jonathan, Bellini Valentina, Bignami Elena

2023-Mar-04

Artificial intelligence, ChatGPT, Clinical research, Medicine

Public Health

Should substitution monotherapy or combination therapy be used after failure of the first antiseizure medication? Observations from a 30-year cohort study.

In Epilepsia

OBJECTIVES : To assess the temporal trends in the use of second antiseizure medication (ASM) regimens and compare the efficacy of substitution monotherapy and combination therapy after failure of initial monotherapy in people with epilepsy.

METHODS : This was a longitudinal observational cohort study conducted at the Epilepsy Unit of the Western Infirmary in Glasgow, Scotland. We included patients who were newly treated for epilepsy with ASMs between July 1982 and October 2012. All patients were followed up for a minimum of 2 years. Seizure freedom was defined as no seizure for at least 1 year on unchanged medication at the last follow-up.

RESULTS : During the study period, 498 patients were treated with a second ASM regimen after failure of the initial ASM monotherapy, of whom 346 (69%) were prescribed combination therapy and 152 (31%) were given substitution monotherapy. The proportion of patients receiving the second regimen as combination therapy increased during the study period, from 46% in the first epoch (1985 to 1994) to 78% in the last (2005 to 2015) (RR=1.66, 95% CI: 1.17-2.36, corrected-p=0.010). Overall, 21% (104/498) of the patients achieved seizure freedom on the second ASM regimen, which was less than half of the seizure-free rate on the initial ASM monotherapy (45%, p<0.001). Patients who received substitution monotherapy had a seizure-free rate similar to those who received combination therapy (RR=1.17, 95% CI: 0.81-1.69, p=0.41). Individual ASMs used, either alone or in combination, had similar efficacy. However, the subgroup analysis was limited by small sample sizes.
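The risk ratios with 95% confidence intervals reported above can be reproduced from raw 2x2 counts using the standard log-normal approximation; a minimal sketch (the counts below are illustrative, not taken from the study):

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs. group B with a 95% CI
    (log-normal approximation via the delta method)."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper
```

A CI that spans 1.0, as in the monotherapy-versus-combination comparison (0.81-1.69), indicates no statistically significant difference between the groups.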

SIGNIFICANCE : The choice of second regimen, made on the basis of clinical judgement, was not associated with treatment outcome in patients whose initial monotherapy failed due to poor seizure control. Alternative approaches such as machine learning should be explored to aid individualized selection of the second ASM regimen.

Hakeem Haris, Alsfouk Bshra Ali A, Kwan Patrick, Brodie Martin J, Chen Zhibin

2023-Mar-04

add-on therapy, antiseizure medication, efficacy, second regimen

General

Artificial intelligence in endoscopic imaging for detection of malignant biliary strictures and cholangiocarcinoma: a systematic review.

In Annals of Gastroenterology

BACKGROUND : Artificial intelligence (AI), when applied to computer vision using a convolutional neural network (CNN), is a promising tool in "difficult-to-diagnose" conditions such as malignant biliary strictures and cholangiocarcinoma (CCA). The aim of this systematic review is to summarize the available data on the diagnostic utility of endoscopic AI-based imaging for malignant biliary strictures and CCA.

METHODS : In this systematic review, the PubMed, Scopus and Web of Science databases were searched for studies published from January 2000 to June 2022. Extracted data included type of endoscopic imaging modality, AI classifiers, and performance measures.

RESULTS : The search yielded 5 studies involving 1465 patients. Of the 5 included studies, 4 (n=934; 3,775,819 images) used CNN in combination with cholangioscopy, while one study (n=531; 13,210 images) used CNN with endoscopic ultrasound (EUS). The average image processing speed of CNN with cholangioscopy was 7-15 msec per frame while that of CNN with EUS was 200-300 msec per frame. The highest performance metrics were observed with CNN-cholangioscopy (accuracy 94.9%, sensitivity 94.7%, and specificity 92.1%). CNN-EUS was associated with the greatest clinical performance application, providing station recognition and bile duct segmentation; thus reducing procedure length and providing real-time feedback to the endoscopist.
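The performance metrics quoted above (accuracy, sensitivity, specificity) all derive from the same 2x2 confusion matrix; a minimal sketch (the counts below are illustrative, not drawn from the included studies):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion matrix.
    tp: malignant strictures correctly flagged, tn: benign correctly cleared,
    fp: benign flagged as malignant, fn: malignant missed."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity
```

For a screening use case such as this one, sensitivity is usually the metric to protect, since a missed malignant stricture (false negative) is costlier than an extra work-up (false positive).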

CONCLUSIONS : Our results suggest that there is increasing evidence to support a role for AI in the diagnosis of malignant biliary strictures and CCA. CNN-based machine learning of cholangioscopy images appears to be the most promising, while CNN-EUS has the best clinical performance application.

Njei Basile, McCarty Thomas R, Mohan Babu P, Fozo Lydia, Navaneethan Udayakumar

2023

Artificial intelligence, cholangiocarcinoma, cholangioscopy, endoscopic ultrasound, malignant biliary strictures

General

Structural causal model with expert augmented knowledge to estimate the effect of oxygen therapy on mortality in the ICU.

In Artificial Intelligence in Medicine; h5-index 34.0

Recent advances in causal inference techniques, more specifically in the theory of structural causal models, provide a framework for identifying causal effects from observational data in cases where the causal graph is identifiable, i.e., the data generation mechanism can be recovered from the joint distribution. However, no such studies have been performed to demonstrate this concept with a clinical example. We present a complete framework to estimate causal effects from observational data by augmenting expert knowledge in the model development phase, demonstrated with a practical clinical application. Our clinical application entails a timely and essential research question: the effect of oxygen therapy intervention in the intensive care unit (ICU). The result of this project is helpful in a variety of disease conditions, including severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) patients in the ICU. We used data from the MIMIC-III database, a widely used health care database in the machine learning community with 58,976 admissions from an ICU in Boston, MA, to estimate the effect of oxygen therapy on mortality. We also identified the model's covariate-specific effect on oxygen therapy for more personalized intervention.
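Once the causal graph renders the effect identifiable, estimation reduces to the backdoor adjustment formula, ATE = sum over z of P(Z=z) * (E[Y | T=1, Z=z] - E[Y | T=0, Z=z]). A minimal sketch with a single discrete confounder (the variable names are illustrative, not the MIMIC-III covariates or the paper's estimator):

```python
from collections import defaultdict

def adjusted_ate(records):
    """Average treatment effect via backdoor adjustment over one confounder.
    `records` is a list of (treatment, confounder, outcome) tuples with
    binary treatment (0/1)."""
    by_z = defaultdict(lambda: {1: [], 0: []})
    for t, z, y in records:
        by_z[z][t].append(y)
    n = len(records)
    ate = 0.0
    for z, groups in by_z.items():
        if not groups[1] or not groups[0]:
            continue  # stratum lacks overlap; this sketch simply skips it
        p_z = (len(groups[1]) + len(groups[0])) / n
        # Weighted difference of stratum-specific outcome means
        ate += p_z * (sum(groups[1]) / len(groups[1])
                      - sum(groups[0]) / len(groups[0]))
    return ate
```

Unlike a naive difference of means, this estimate reweights each confounder stratum by its population frequency, which is what makes the result causal under the identifiability assumptions the paper verifies with expert knowledge.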

Gani Md Osman, Kethireddy Shravan, Adib Riddhiman, Hasan Uzma, Griffin Paul, Adibuzzaman Mohammad

2023-Mar

Causal inference, Critical care, Expert augmented knowledge, Oxygen therapy, Structural causal model

General

Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes.

In Artificial Intelligence in Medicine; h5-index 34.0

Medical experts may use Artificial Intelligence (AI) systems with greater trust if these are supported by 'contextual explanations' that let the practitioner connect system inferences to their context of use. However, the importance of such explanations in improving model usage and understanding has not been extensively studied. Hence, we consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions. We explore how relevant information for such dimensions can be extracted from medical guidelines to answer typical questions from clinical practitioners. We identify this as a question-answering (QA) task and employ several state-of-the-art Large Language Models (LLMs) to present contexts around risk prediction model inferences and evaluate their acceptability. Finally, we study the benefits of contextual explanations by building an end-to-end AI pipeline including data cohorting, AI risk modeling, and post-hoc model explanations, and by prototyping a visual dashboard to present the combined insights from different context dimensions and data sources, while predicting and identifying the drivers of risk of Chronic Kidney Disease (CKD) - a common type-2 diabetes (T2DM) comorbidity. All of these steps were performed in deep engagement with medical experts, including a final evaluation of the dashboard results by an expert medical panel. We show that LLMs, in particular BERT and SciBERT, can be readily deployed to extract some relevant explanations to support clinical usage. To understand the value-add of the contextual explanations, the expert panel evaluated these regarding actionable insights in the relevant clinical setting. Overall, our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case. Our findings can help improve clinicians' usage of AI models.
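The QA-over-guidelines step can be pictured as retrieving the guideline passage most relevant to a clinician's question. A deliberately crude token-overlap scorer sketches the idea; the study itself deployed BERT and SciBERT models, not this heuristic, and the question and sentences below are hypothetical:

```python
def best_guideline_answer(question, guideline_sentences):
    """Rank candidate guideline sentences by token overlap with the
    clinician's question and return the best match. A toy stand-in for
    the transformer-based QA models used in the study."""
    q_tokens = set(question.lower().split())

    def score(sentence):
        s_tokens = set(sentence.lower().split())
        # Fraction of the sentence's tokens that also occur in the question
        return len(q_tokens & s_tokens) / max(len(s_tokens), 1)

    return max(guideline_sentences, key=score)
```

In the paper's pipeline the retrieved passage is then presented alongside the risk prediction and its post-hoc explanation, so the clinician sees model output, explanation, and guideline context together.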

Chari Shruthi, Acharya Prasant, Gruen Daniel M, Zhang Olivia, Eyigoz Elif K, Ghalwash Mohamed, Seneviratne Oshani, Saiz Fernando Suarez, Meyer Pablo, Chakraborty Prithwish, McGuinness Deborah L

2023-Mar

Clinical explainability, Contextual explanations, Question-answering approach, Type-2 diabetes comorbidity risk prediction, User-driven

General

XAIRE: An ensemble-based methodology for determining the relative importance of variables in regression tasks. Application to a hospital emergency department.

In Artificial Intelligence in Medicine; h5-index 34.0

Nowadays, it is increasingly important in many applications to understand how different factors influence a variable of interest in a predictive modeling process. This task becomes particularly important in the context of Explainable Artificial Intelligence. Knowing the relative impact of each variable on the output allows us to acquire more information about the problem and about the output provided by a model. This paper proposes a new methodology, XAIRE, that determines the relative importance of input variables in a prediction environment, considering multiple prediction models in order to increase generality and avoid the bias inherent in a particular learning algorithm. Concretely, we present an ensemble-based methodology that aggregates results from several prediction methods to obtain a relative importance ranking. The methodology also incorporates statistical tests to reveal significant differences between the relative importance of the predictor variables. As a case study, XAIRE is applied to the arrival of patients at a hospital emergency department, which has resulted in one of the largest sets of different predictor variables in the literature. Results show the extracted knowledge related to the relative importance of the predictors involved in the case study.
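The ensemble idea above can be sketched as rank aggregation: each model ranks the variables by its own importance scores, and the ranks are averaged across models. This is a simplified illustration of the approach, not XAIRE itself, which additionally applies statistical tests to the resulting ranks; the variable names are hypothetical:

```python
def ensemble_importance_ranking(importance_by_model):
    """Aggregate per-model importance scores into one relative-importance
    ranking by averaging each variable's rank across models.
    `importance_by_model` is a list of {variable: score} dicts."""
    rank_sums = {}
    for scores in importance_by_model:
        # Rank variables within this model: 1 = most important
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, var in enumerate(ordered, start=1):
            rank_sums[var] = rank_sums.get(var, 0) + rank
    n_models = len(importance_by_model)
    # Lowest average rank first = most important overall
    return sorted(rank_sums, key=lambda v: rank_sums[v] / n_models)
```

Averaging ranks rather than raw scores sidesteps the fact that importance scales are not comparable across learning algorithms, which is what lets the ensemble reduce single-model bias.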

Rivera A J, Muñoz J Cobo, Pérez-Goody M D, de San Pedro B Sáenz, Charte F, Elizondo D, Rodríguez C, Abolafia M L, Perea A, Del Jesus M J

2023-Mar

Explainable artificial intelligence, Hospital emergency department, Regression analysis, Relative importance of variables, Time series forecasting