ArXiv Preprint
Deep learning classifiers provide the most accurate means of automatically
diagnosing diabetic retinopathy (DR) based on optical coherence tomography
(OCT) and its angiography (OCTA). The power of these models is attributable in
part to the inclusion of hidden layers that provide the complexity required to
achieve a desired task. However, hidden layers also render algorithm outputs
difficult to interpret. Here we introduce a novel biomarker activation map
(BAM) framework based on generative adversarial learning that allows clinicians
to verify and understand a classifier's decision-making. A data set of 456
macular scans was graded as non-referable or referable DR based on current
clinical standards. A DR classifier used to evaluate our BAM framework was first
trained on this data set. The BAM generation framework was designed by
combining two U-shaped generators to provide meaningful interpretability to this
classifier. The main generator was trained to take referable scans as input and
produce an output that would be classified by the classifier as non-referable.
The BAM is then constructed as the difference image between the output and
input of the main generator. To ensure that the BAM highlights only
classifier-utilized biomarkers, an assistant generator was trained to do the
opposite, producing outputs from non-referable scans that the classifier would
classify as referable. The generated BAMs highlighted known pathologic
features, including nonperfusion areas and retinal fluid. A fully
interpretable classifier based on these highlights could help clinicians better
utilize and verify automated DR diagnosis.
Pengxiao Zang, Tristan T. Hormel, Jie Wang, Yukun Guo, Steven T. Bailey, Christina J. Flaxel, David Huang, Thomas S. Hwang, Yali Jia
2022-12-13
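
As a rough illustration of the difference-image construction described in the abstract, the sketch below assumes a pretrained U-shaped main generator that maps referable OCT/OCTA scans toward a non-referable appearance. The function and variable names (`main_generator`, `compute_bam`, `scan`) are hypothetical placeholders; the actual architectures, training objectives, and the role of the assistant generator are detailed in the paper itself.

```python
import torch


def compute_bam(main_generator: torch.nn.Module,
                scan: torch.Tensor) -> torch.Tensor:
    """Construct a biomarker activation map (BAM) as the difference image
    between the main generator's output and its input.

    `scan` is a batch of referable OCT/OCTA scans with shape (B, C, H, W);
    the generator is assumed to map them toward a non-referable appearance.
    """
    main_generator.eval()
    with torch.no_grad():
        generated = main_generator(scan)   # "non-referable" version of the input scan
    bam = (scan - generated).abs()         # regions the generator had to alter

    # Normalize each map to [0, 1] for visualization.
    flat = bam.flatten(1)
    mins = flat.min(dim=1, keepdim=True).values
    maxs = flat.max(dim=1, keepdim=True).values
    flat = (flat - mins) / (maxs - mins + 1e-8)
    return flat.view_as(scan)
```

This covers only the inference-time step of taking the difference between the main generator's output and input; in the full framework, the assistant generator (trained in the opposite direction) constrains training so that the resulting maps highlight only classifier-utilized biomarkers.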