ArXiv Preprint
Neural models, with their ability to provide novel representations, have
shown promising results in prediction tasks in healthcare. However, patient
demographics, medical technology, and quality of care change over time. This
often leads to a drop in the performance of neural models on prospective
patients, especially in terms of their calibration. The deep kernel learning
(DKL) framework may be robust to such changes as it combines neural models with
Gaussian processes, which are aware of prediction uncertainty. Our hypothesis
is that out-of-distribution test points will result in probabilities closer to
the global mean and hence prevent overconfident predictions. This, in turn,
we hypothesise will result in better calibration on prospective data.
This paper investigates DKL's behaviour when facing a temporal shift, which
was naturally introduced when an information system that feeds a cohort
database was changed. We compare DKL's performance to that of a neural baseline
based on recurrent neural networks. We show that DKL indeed produced
better-calibrated predictions, and we confirm that its predictions were, as
expected, less sharp. In addition, DKL's discrimination ability even improved:
its AUC was 0.746 (± 0.014 std), compared to 0.739 (± 0.028 std) for the
baseline. This paper demonstrates the importance of incorporating uncertainty
in neural models, especially for their prospective use.
Miguel Rios, Ameen Abu-Hanna
2022-12-01