Studies in Health Technology and Informatics
The explosion of interest in exploiting machine learning techniques in healthcare has brought the issue of inferring causation from observational data to centre stage. In our work supporting the health decisions of the individual person/patient-as-person at the point of care, we cannot avoid deciding which options are to be included in, or excluded from, a decision support tool. Should the researcher's routine injunction to use their findings 'with caution', because of methodological limitations, lead to inclusion or exclusion? The task is one of deciding, first on causal plausibility, and then on causality. Like all decisions, both are sensitive to error preferences (trade-offs). We engage selectively with the Artificial Intelligence (AI) literature on the causality challenge and on the closely associated issue of the 'explainability' now demanded of 'black box' AI. Our commitment to embracing 'lifestyle' as well as 'medical' options for the individual person leads us to highlight the key issue as that of who is to make the preference-sensitive decisions on causal plausibility and causality.
Rajput Vije Kumar, Kaltoft Mette Kjer, Dowie Jack
2022-Nov-03
Artificial Intelligence (AI), black box, causability, causality, clinical decision support, explainability, individualisation, machine learning, personalization, unsupervised