In Journal of Healthcare Informatics Research
Recent advances in artificial intelligence have led to the rapid development of computer-aided skin cancer diagnosis applications that perform on par with dermatologists. However, the black-box nature of such applications makes it difficult for physicians to trust the predicted decisions, which in turn prevents their proliferation in the clinical workflow. In this work, we aim to address this challenge by developing an interpretable skin cancer diagnosis approach using clinical images. Accordingly, a skin cancer diagnosis model consolidated with two interpretability methods is developed. The first interpretability method integrates skin cancer diagnosis domain knowledge, characterized by a skin lesion taxonomy, into model development, whereas the other method focuses on visualizing the decision-making process by highlighting the dominant regions of interest in skin lesion images. The proposed model is trained and validated on clinical images, since these are easily obtainable by non-specialist healthcare providers. The results demonstrate the effectiveness of incorporating lesion taxonomy in improving model classification accuracy: our model predicts the skin lesion origin as melanocytic or non-melanocytic with an accuracy of 87%, predicts lesion malignancy with 77% accuracy, and provides a disease diagnosis with an accuracy of 71%. In addition, the implemented interpretability methods help in understanding the model's decision-making process and in detecting misdiagnoses. This work is a step toward achieving interpretability in skin cancer diagnosis using clinical images. The developed approach can assist general practitioners in making an early diagnosis, thus reducing the redundant referrals that expert dermatologists receive for further investigations.
Rezk Eman, Eltorki Mohamed, El-Dakhakhni Wael
2023-Mar
Artificial intelligence, Clinical images, Domain knowledge, Interpretability, Skin cancer, Skin lesion taxonomy
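Illustrative sketch (not the authors' implementation): the abstract describes a classifier organized around the skin lesion taxonomy (lesion origin, malignancy, disease diagnosis) together with a visualization of the regions driving each prediction. The code below shows one plausible way to realize such a design in PyTorch: a shared CNN backbone with three taxonomy-aligned heads and a Grad-CAM-style heatmap. The backbone choice, class counts, and all layer sizes are assumptions for illustration only.

```python
# Hypothetical sketch of a taxonomy-guided classifier with Grad-CAM-style saliency.
# Class counts, backbone, and sizes are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models

class TaxonomyGuidedClassifier(nn.Module):
    def __init__(self, num_diagnoses: int = 6):
        super().__init__()
        backbone = models.resnet18(weights=None)                     # pretrained weights optional
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # conv feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = backbone.fc.in_features
        # Three heads following the lesion taxonomy hierarchy.
        self.origin_head = nn.Linear(feat_dim, 2)       # melanocytic / non-melanocytic
        self.malignancy_head = nn.Linear(feat_dim, 2)   # benign / malignant
        self.diagnosis_head = nn.Linear(feat_dim, num_diagnoses)

    def forward(self, x):
        fmap = self.features(x)                          # spatial feature map, kept for Grad-CAM
        pooled = self.pool(fmap).flatten(1)
        return {
            "feature_map": fmap,
            "origin": self.origin_head(pooled),
            "malignancy": self.malignancy_head(pooled),
            "diagnosis": self.diagnosis_head(pooled),
        }

def gradcam_heatmap(model, image, head="diagnosis", target_class=None):
    """Grad-CAM-style saliency: weight the feature map by the gradient of the
    chosen head's score, then sum over channels and normalize."""
    model.eval()
    out = model(image)
    fmap = out["feature_map"]
    fmap.retain_grad()                                   # keep gradients of a non-leaf tensor
    logits = out[head]
    cls = logits.argmax(dim=1) if target_class is None else torch.tensor([target_class])
    score = logits.gather(1, cls.view(-1, 1)).sum()
    score.backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = torch.relu((weights * fmap).sum(dim=1))        # weighted combination of channels
    cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
    return cam.detach()

if __name__ == "__main__":
    model = TaxonomyGuidedClassifier(num_diagnoses=6)
    dummy = torch.randn(1, 3, 224, 224)                  # stand-in for a clinical image
    preds = model(dummy)
    print({k: v.shape for k, v in preds.items() if k != "feature_map"})
    cam = gradcam_heatmap(model, dummy, head="origin")
    print("heatmap shape:", cam.shape)                   # upsample and overlay for display
```

The multi-head layout mirrors the hierarchy reported in the abstract (origin, then malignancy, then diagnosis); the heatmap corresponds to the second interpretability method, which highlights the dominant regions of interest in the lesion image.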