
In Artificial Intelligence in Medicine; h5-index 34.0

In recent years, machine learning methods have been rapidly adopted in the medical domain. However, current state-of-the-art medical mining methods usually produce opaque, black-box models. To address this lack of model transparency, substantial attention has been given to developing interpretable machine learning models. In the medical domain, counterfactuals can provide example-based explanations for predictions and show practitioners the modifications required to change a prediction from an undesired to a desired state. In this paper, we propose MedSeqCF, a counterfactual solution for preventing mortality in three cohorts of ICU patients, which represents their electronic health records as medical event sequences and generates counterfactuals by adapting a text style-transfer technique. We propose three model augmentations for MedSeqCF that integrate additional medical knowledge to generate more trustworthy counterfactuals. Experimental results on the MIMIC-III dataset strongly suggest that augmented style-transfer methods can be effectively adapted to the problem of counterfactual explanation in healthcare and can further improve model performance in terms of validity, BLEU-4, local outlier factor, and edit distance. In addition, a qualitative analysis of the results in consultation with medical experts suggests that our style-transfer solutions can generate clinically relevant and actionable counterfactual explanations.
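To make two of the evaluation metrics named above concrete, here is a minimal Python sketch (not the authors' implementation) of how validity and edit distance could be computed over medical event sequences; the event codes and the placeholder classifier are hypothetical illustrations.

```python
# Sketch of two counterfactual-evaluation metrics mentioned in the abstract:
# validity (fraction of counterfactuals that flip the model's prediction)
# and edit distance between original and counterfactual event sequences.
# Event codes and the placeholder classifier below are made up for illustration.
from typing import Callable, Sequence

def edit_distance(a: Sequence[str], b: Sequence[str]) -> int:
    """Levenshtein distance over medical event tokens (single-row DP)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                         # delete from a
                        dp[j - 1] + 1,                     # insert into a
                        prev + (a[i - 1] != b[j - 1]))     # substitute/match
            prev = cur
    return dp[n]

def validity(pairs, predict: Callable[[Sequence[str]], int]) -> float:
    """Fraction of (original, counterfactual) pairs whose predicted
    class differs, i.e., the counterfactual changes the outcome."""
    flipped = sum(predict(cf) != predict(orig) for orig, cf in pairs)
    return flipped / len(pairs)

if __name__ == "__main__":
    # Toy event sequences with invented codes.
    orig = ["ICD9_428", "LAB_HIGH_CREAT", "DRUG_FUROSEMIDE"]
    cf = ["ICD9_428", "LAB_NORMAL_CREAT", "DRUG_FUROSEMIDE"]
    print(edit_distance(orig, cf))  # -> 1

    # Placeholder mortality classifier, standing in for a trained model.
    dummy_predict = lambda seq: int("LAB_HIGH_CREAT" in seq)
    print(validity([(orig, cf)], dummy_predict))  # -> 1.0
```

A lower edit distance indicates a more actionable counterfactual (fewer changes to the patient's record), while higher validity indicates that the generated sequences actually change the predicted outcome.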

Zhendong Wang, Isak Samsten, Vasiliki Kougia, Panagiotis Papapetrou

January 2023

Counterfactual explanations, Deep learning, Explainable models, Mortality prediction