International Journal of Medical Informatics (h5-index: 49.0)
BACKGROUND : Artificial intelligence (AI) is increasingly being developed to support clinical decisions and improve the quality of health services, but its adoption in hospitals has not been as widespread as expected. A possible reason is that unclear AI explainability (XAI) affects physicians' willingness to adopt such models.
PURPOSE : To propose and validate a conceptual model of physicians' intention to use AI, with XAI as an antecedent of technology trust (TT) and perceived value (PV).
METHODS : A questionnaire survey was conducted to collect data from physicians at three hospitals in Taiwan. Structural equation modeling (SEM) was used to validate the proposed model and test the hypotheses.
RESULTS : A total of 295 valid questionnaires were collected. The results showed that physicians expressed a high intention to use AI. XAI was found to be of great importance, with a significant effect on both TT in AI and PV. TT in AI, in turn, had a significant effect on PV. Moreover, physicians' PV and TT in AI both had a significant effect on their behavioral intention to use AI (BI). However, a direct effect of XAI on BI was not supported.
CONCLUSIONS : The conceptual model developed in this study provides empirical evidence that can serve as a guideline for exploring physicians' intention to use medical AI, with XAI as an antecedent. Our findings contribute crucial insights into human-AI interaction in health care research.
Liu Chung-Feng, Chen Zhih-Cherng, Kuo Szu-Chen, Lin Tzu-Chi
AI explainability (XAI), Artificial intelligence (AI), Behavioral intention, Perceived value, Physician, Technology trust