ArXiv Preprint
Artificial intelligence (AI) continues to transform data analysis in many
domains. Progress in each domain is driven by a growing body of annotated data,
increased computational resources, and technological innovations. In medicine,
the sensitivity of the data, the complexity of the tasks, the potentially high
stakes, and a requirement of accountability give rise to a particular set of
challenges. In this review, we focus on three key methodological approaches
that address some of the particular challenges in AI-driven medical decision
making. (1) Explainable AI aims to produce a human-interpretable justification
for each output. Such models increase confidence if the results appear
plausible and match the clinicians' expectations. However, the absence of a
plausible explanation does not imply an inaccurate model. Especially in highly
non-linear, complex models that are tuned to maximize accuracy, such
interpretable representations only reflect a small portion of the
justification. (2) Domain adaptation and transfer learning enable AI models to
be trained and applied across multiple domains, for example, a classification
task based on images acquired on different acquisition hardware. (3) Federated
learning enables learning large-scale models without exposing sensitive
personal health information. Unlike centralized AI learning, where a single
learning machine has access to the entire training data set, the federated
learning process iteratively updates models across multiple sites by
exchanging only parameter updates, not personal health data. This narrative
review covers the basic concepts, highlights relevant cornerstone and
state-of-the-art research in the field, and discusses perspectives.
Ahmad Chaddad, Qizong Lu, Jiali Li, Yousef Katib, Reem Kateb, Camel Tanougast, Ahmed Bouridane, Ahmed Abdulkadir
2022-11-17
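To make the federated update scheme mentioned in the abstract concrete, the following is a minimal, self-contained sketch of federated-averaging-style training on synthetic data. It is only an illustration under simplifying assumptions (a linear model fit by gradient descent and a weighted parameter average as the aggregation rule); the function names, the model, and the data are hypothetical and are not the implementation discussed in the review.

```python
# Minimal federated-averaging sketch (illustrative only).
# Each site trains locally on its own data and shares ONLY its
# updated parameters; raw patient data never leaves the site.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        w -= lr * grad
    return w

def federated_averaging(site_data, rounds=20, dim=3):
    """Server loop: broadcast weights, collect local updates, average them."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        local_ws, sizes = [], []
        for X, y in site_data:                   # computed at each site
            local_ws.append(local_update(global_w, X, y))
            sizes.append(len(y))
        # Aggregate: average of parameters weighted by local sample count.
        global_w = np.average(np.stack(local_ws), axis=0,
                              weights=np.array(sizes, dtype=float))
    return global_w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    # Three hospitals with differently sized, locally held data sets.
    site_data = []
    for n in (50, 120, 80):
        X = rng.normal(size=(n, 3))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        site_data.append((X, y))
    w = federated_averaging(site_data)
    print("recovered weights:", np.round(w, 3))
```

The property illustrated here is that only the parameter vectors returned by local_update cross site boundaries, while the per-site arrays X and y stay local.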