
In: Ophthalmology

OBJECTIVE : To illustrate what is inside the "black box" of deep learning models (DLMs), so that clinicians can have greater confidence in the conclusions of artificial intelligence, by evaluating the ability of adversarial explanation to explain the rationale of DLM decisions for glaucoma and glaucoma-related findings. Adversarial explanation generates adversarial examples (AEs), images that have been modified to gain or lose pathology-specific traits, in order to explain the DLM's rationale.
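The abstract does not specify how the AEs are generated; the sketch below shows one common gradient-based approach (an iterative FGSM-style perturbation) in PyTorch, purely as an assumed illustration. The function name, the binary-classifier interface, and all parameters are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumption, not the authors' method): iteratively perturb a
# fundus image so a hypothetical binary glaucoma classifier `model` moves
# toward a chosen target label, i.e., the image "gains" or "loses"
# pathology-specific traits.
import torch

def generate_adversarial_example(model, image, target, step_size=0.01, steps=10):
    # `image`: tensor of shape (C, H, W) in [0, 1]
    # `target`: torch.tensor(1.0) to add pathology, torch.tensor(0.0) to remove it
    adv = image.clone().detach().requires_grad_(True)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(steps):
        logit = model(adv.unsqueeze(0)).squeeze()  # single-logit output assumed
        loss = loss_fn(logit, target)
        model.zero_grad()
        if adv.grad is not None:
            adv.grad.zero_()
        loss.backward()
        with torch.no_grad():
            adv -= step_size * adv.grad.sign()  # step toward the target class
            adv.clamp_(0.0, 1.0)                # keep a valid image range
    return adv.detach()
```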

DESIGN : Evaluation of explanation methods for DLMs.

PARTICIPANTS : Health screening participants (n=1,653) at the Seoul National University Hospital Health Promotion Center.

METHODS : We evaluated 6,430 retinal fundus images for referable glaucoma (RG), increased cup-to-disc ratio (ICDR), disc rim narrowing (DRN), and retinal nerve fiber layer defect (RNFLD), and trained a DLM for each diagnosis and finding. Surveys consisting of explanations using AEs and gradient-weighted class activation mapping (GradCAM), a conventional heatmap-based explanation method, were generated for 400 pathologic and normal patient-eyes. For each method, board-trained glaucoma specialists rated location explainability (the ability to pinpoint decision-relevant areas in the image) and rationale explainability (the ability to inform the user of the model's reasoning for the decision based on pathological features). Scores were compared using the paired Wilcoxon signed-rank test.
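As a small illustration of the paired comparison described above, the following sketch applies SciPy's Wilcoxon signed-rank test to hypothetical per-image survey scores for the two explanation methods; the score values are invented for demonstration only.

```python
# Paired Wilcoxon signed-rank test on illustrative (not real) survey scores.
from scipy.stats import wilcoxon

ae_scores      = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]   # adversarial-explanation ratings
gradcam_scores = [2, 3, 2, 3, 2, 3, 1, 3, 2, 2]   # GradCAM ratings for the same images

stat, p_value = wilcoxon(ae_scores, gradcam_scores)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```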

MAIN OUTCOME MEASURES : Area under the receiver operating characteristic curve (AUC), sensitivities, and specificities of the DLMs; visualization of clinical pathology changes in AEs; survey scores for location and rationale explainability.

RESULTS : The AUCs were 0.90, 0.99, 0.95, and 0.79, and sensitivities at 0.90 specificity were 0.79, 1.00, 0.82, and 0.55, for the RG, ICDR, DRN, and RNFLD DLMs, respectively. Generated AEs showed valid clinical feature changes. Survey scores for location explainability were 3.94 ± 1.33 for AEs and 2.55 ± 1.24 for GradCAM, out of a maximum of 5 points; scores for rationale explainability were 3.97 ± 1.31 and 2.10 ± 1.25 for AEs and GradCAM, respectively. AEs provided better location and rationale explainability than GradCAM (P < 0.001).
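The sketch below shows one way metrics of the kind reported above can be computed with scikit-learn: the AUC, plus sensitivity read off the ROC curve at a fixed 0.90 specificity. The label and score arrays are hypothetical stand-ins, not the study data.

```python
# Illustrative computation of AUC and sensitivity at 0.90 specificity
# from hypothetical ground-truth labels and predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6, 0.85, 0.15])

auc = roc_auc_score(y_true, y_score)

# Sensitivity at 0.90 specificity: take the ROC point with false-positive
# rate <= 0.10 and report its true-positive rate.
fpr, tpr, _ = roc_curve(y_true, y_score)
sens_at_90_spec = tpr[fpr <= 0.10].max()

print(f"AUC = {auc:.2f}, sensitivity at 0.90 specificity = {sens_at_90_spec:.2f}")
```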

CONCLUSIONS : Adversarial explanation improved explainability compared with GradCAM, a conventional heatmap-based explanation method. Adversarial explanations may help medical professionals more clearly understand the rationale of DLMs when using them for clinical decisions.

Chang Jooyoung, Lee Jinho, Ha Ahnul, Han Young Soo, Bak Eunoo, Choi Seulggie, Yun Jae Moon, Kang Uk, Shin Il Hyung, Shin Joo Young, Ko Taehoon, Bae Ye Seul, Oh Baek-Lok, Park Ki Ho, Park Sang Min

2020-Jun-26