ArXiv Preprint
The shift from symbolic AI systems to black-box, sub-symbolic, and
statistical ones has motivated a rapid increase of interest in explainable
AI (XAI), i.e. approaches that make black-box AI systems explainable to human
decision makers, with the aim of making these systems more acceptable and
usable tools for decision support. However, we argue that, rather
than always making black boxes transparent, these approaches are at risk of
\emph{painting the black boxes white}, thus failing to provide a level of
transparency that would increase these systems' usability and comprehensibility,
or even of generating new errors, in what we term the
\emph{white-box paradox}. To address these usability-related issues, in this
work we focus on the cognitive dimension of users' perception of explanations
and XAI systems. To this aim, we designed and conducted a questionnaire-based
experiment involving 44 cardiology residents and specialists in an
AI-supported ECG reading task. In doing so, we investigated several research
questions concerning the relationship between users' characteristics (e.g.
expertise) and their perception of AI and XAI systems, including their trust,
the perceived quality of the explanations, and their tendency to defer the decision
process to automation (i.e. technology dominance), as well as the mutual
relationships among these different dimensions. Our findings contribute to
the evaluation of AI-based support systems from a Human-AI
interaction-oriented perspective and lay the groundwork for further investigation
of XAI and its effects on decision making and user experience.
Federico Cabitza, Matteo Cameli, Andrea Campagner, Chiara Natali, Luca Ronzio
2022-10-27