
In MethodsX

Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. Multi-modal medical images, which capture different aspects of the same underlying regions of interest, are pervasively acquired in practice to support clinical decision-making. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. We adapt commonly used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories of gradient- and perturbation-based methods.
• Gradient-based explanation methods, such as Guided BackProp and DeepLIFT, use the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and Kernel SHAP, use input-output sampling pairs to estimate feature importance.
• We describe the implementation details needed to make these methods work for multi-modal image input, and we make the implementation code available (a brief sketch follows below).
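
A minimal sketch of how both method families can be applied to a multi-modal input, using the Captum library with co-registered modalities stacked as input channels. This is an illustration under stated assumptions, not the authors' released implementation: the toy model, input size of 128x128, four MRI modalities, target class, and occlusion window/stride sizes are all hypothetical choices.

# Sketch: per-modality feature attribution with Captum (assumptions noted above).
import torch
import torch.nn as nn
from captum.attr import GuidedBackprop, Occlusion

# Hypothetical classifier taking 4 co-registered modalities as 4 input channels
# (e.g., T1, T1c, T2, FLAIR MRI slices of size 128x128).
model = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
model.eval()

x = torch.randn(1, 4, 128, 128, requires_grad=True)  # one multi-modal input
target_class = 1

# Gradient-based: one attribution map per modality channel.
gbp = GuidedBackprop(model)
gbp_attr = gbp.attribute(x, target=target_class)  # shape (1, 4, 128, 128)

# Perturbation-based: occlude one modality patch at a time so that each
# modality receives its own importance estimate.
occ = Occlusion(model)
occ_attr = occ.attribute(
    x,
    target=target_class,
    sliding_window_shapes=(1, 16, 16),  # one channel (modality) per window
    strides=(1, 8, 8),
)

# Per-modality importance can then be summarized, e.g., by summing magnitudes
# over the spatial dimensions of each modality's attribution map.
per_modality_importance = gbp_attr.abs().sum(dim=(2, 3))
print(per_modality_importance)

Because the modalities are stacked as channels, each attribution tensor retains one map per modality, which can be visualized separately or aggregated as above to compare modality contributions.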

Weina Jin, Xiaoxiao Li, Mostafa Fatehi, Ghassan Hamarneh

2023

Explainable artificial intelligence, Interpretable machine learning, Medical image analysis, Multi-modal medical image, Post-hoc explanation, Post-hoc feature attribution map explanation methods