
In Artificial Intelligence in Medicine; h5-index 34.0

Despite advances in machine learning-based clinical prediction models, only a few such models are actually deployed in clinical contexts, due among other reasons to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for predicting acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset, when applied to an external cohort from an American research hospital. To help account for the observed performance differences, we used interpretability methods based on feature importance, which allowed experts to scrutinize model behavior at both the global and local levels and thereby gain further insight into why the model did not behave as expected on the validation cohort. The knowledge gleaned during derivation can help guide model updates during validation, yielding simpler and more generalizable models. We argue that practitioners should consider interpretability methods as an additional tool for explaining performance differences and informing model updates in validation studies.
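The abstract does not specify which feature-importance method was used. As a minimal, hypothetical sketch of the kind of global interpretability technique it describes, permutation importance shuffles one feature at a time and measures the resulting drop in predictive accuracy; the model and data below are toy stand-ins, not the paper's actual AKI model or cohort:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Global feature importance: shuffle each column in turn and
    record the mean drop in accuracy relative to the baseline."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target link
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy setup: the "model" thresholds feature 0 and ignores feature 1,
# so feature 0 should score high and feature 1 should score ~0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)

imp = permutation_importance(predict, X, y)
```

Comparing such importance profiles between the derivation and validation cohorts is one way experts can spot features whose influence does not transfer.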

da Cruz Harry Freitas, Pfahringer Boris, Martensen Tom, Schneider Frederic, Meyer Alexander, Böttinger Erwin, Schapranow Matthieu-P

2021-Jan

Clinical predictive modeling, Interpretability methods, Nephrology, Validation