
In Journal of Biomedical Informatics; h5-index 55

Despite the creation of thousands of machine learning (ML) models, the promise of improving patient care with ML remains largely unrealized. Adoption into clinical practice is lagging, in large part due to disconnects between how ML practitioners evaluate models and what is required for their successful integration into care delivery. Models are just one component of care delivery workflows, whose constraints determine clinicians' ability to act on models' outputs. However, methods to evaluate the usefulness of models in the context of their corresponding workflows are currently limited. To bridge this gap, we developed APLUS, a reusable framework for quantitatively assessing, via simulation, the utility gained from integrating a model into a clinical workflow. We describe the APLUS simulation engine and workflow specification language, and apply it to evaluate a novel ML-based screening pathway for detecting peripheral artery disease at Stanford Health Care.
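To make the core idea concrete, here is a minimal, hypothetical sketch of simulating an ML screening workflow to estimate utility under capacity constraints. This is not the APLUS engine or its specification language; all parameter names (prevalence, sensitivity, daily review capacity, benefit/cost weights) are illustrative assumptions, and the model's flagging behavior is reduced to fixed sensitivity/specificity rates.

```python
import random

def simulate_screening(n_patients=1000, prevalence=0.1,
                       sensitivity=0.9, specificity=0.8,
                       daily_capacity=20, benefit=1.0, cost=0.2,
                       seed=0):
    """Toy simulation of an ML screening workflow (illustrative only).

    Patients arrive one per time step (100 per simulated day); the model
    flags some of them, and clinicians can work up at most
    `daily_capacity` flags per day. Utility is accrued as `benefit` per
    true positive acted on, minus `cost` per false positive worked up.
    """
    rng = random.Random(seed)  # seeded for reproducible runs
    utility = 0.0
    acted_today = 0
    day = 0
    for t in range(n_patients):
        if t // 100 != day:  # a new "day" begins: reset clinician capacity
            day = t // 100
            acted_today = 0
        diseased = rng.random() < prevalence
        # The model flags diseased patients at `sensitivity` and
        # non-diseased patients at a (1 - specificity) false-positive rate.
        if diseased:
            flagged = rng.random() < sensitivity
        else:
            flagged = rng.random() > specificity
        # Workflow constraint: flags beyond daily capacity go unreviewed,
        # so they contribute no utility even if the model was correct.
        if flagged and acted_today < daily_capacity:
            acted_today += 1
            utility += benefit if diseased else -cost
    return utility
```

Sweeping `daily_capacity` in a loop like this shows the point the abstract makes: a model's realized utility depends on the workflow around it, not just its sensitivity and specificity.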

Michael Wornow, Elsie Gyang Ross, Alison Callahan, Nigam H. Shah

2023-Feb-13

Clinical workflows, Discrete-event simulation, Machine learning, Model deployment, Usefulness assessment, Utility