arXiv Preprint
The automation of medical evidence acquisition and diagnosis has recently
attracted increasing attention as a way to reduce doctors' workload and
democratize access to medical care. However, most work in the machine learning
literature focuses solely on improving the accuracy of pathology prediction. We
argue that this objective alone is insufficient for such systems to be accepted
by doctors. In their initial interaction
with patients, doctors do not focus only on identifying the pathology a patient
is suffering from; they instead generate a differential diagnosis (in the form
of a short list of plausible diseases) because the medical evidence collected
from patients is often insufficient to establish a final diagnosis. Moreover,
doctors explicitly explore severe pathologies before potentially ruling them
out from the differential, especially in acute care settings. Finally, for
doctors to trust a system's recommendations, they need to understand how the
gathered evidence led to the predicted diseases. In particular, interactions
between a system and a patient need to emulate the reasoning of doctors. We
therefore propose to model the evidence acquisition and automatic diagnosis
tasks using a deep reinforcement learning framework that considers three
essential aspects of a doctor's reasoning, namely generating a differential
diagnosis using an exploration-confirmation approach while prioritizing severe
pathologies. We propose metrics for evaluating interaction quality based on
these three aspects. We show that our approach outperforms existing models on
these metrics while maintaining competitive pathology prediction accuracy.
Arsene Fansi Tchango, Rishab Goel, Julien Martel, Zhi Wen, Gaetan Marceau Caron, Joumana Ghosn
2022-10-13
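
To make the setup concrete, below is a minimal, hypothetical sketch of the interaction loop the abstract describes: an agent alternates between acquiring evidence from a (simulated) patient and, once it stops, emitting a differential diagnosis ranked by plausibility. All names here (EvidenceEnv, the placeholder policy, the toy disease and evidence lists) are illustrative assumptions, not the paper's actual implementation; in particular, a trained policy would use the severity flags to explore and rule out severe pathologies first, which this random placeholder does not.

```python
import random

# Toy setup (assumed, not from the paper): pathology -> "is severe" flag.
# Severity is what a learned policy would prioritize; unused by the random
# placeholder below.
PATHOLOGIES = {"flu": False, "pneumonia": True, "cold": False, "pe": True}
EVIDENCES = ["fever", "cough", "chest_pain", "dyspnea"]

class EvidenceEnv:
    """Simulated patient: answers evidence queries about a hidden pathology."""
    def __init__(self):
        self.truth = random.choice(list(PATHOLOGIES))
        self.collected = {}

    def ask(self, evidence):
        # Toy answer model; a real simulator would condition the answer on
        # the underlying pathology.
        answer = random.random() < 0.5
        self.collected[evidence] = answer
        return answer

def policy(collected):
    """Placeholder policy: either query an unseen piece of evidence, or stop
    and output a differential (here, a random ranking over pathologies)."""
    remaining = [e for e in EVIDENCES if e not in collected]
    if remaining and random.random() < 0.8:
        return ("ask", random.choice(remaining))
    differential = sorted(PATHOLOGIES, key=lambda _: random.random())
    return ("diagnose", differential)

env = EvidenceEnv()
while True:
    action, arg = policy(env.collected)
    if action == "ask":
        print(f"ask {arg} -> {env.ask(arg)}")
    else:
        print(f"differential: {arg} (ground truth: {env.truth})")
        break
```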