Ophthalmology Science
PURPOSE : Rare disease diagnosis is challenging for medical image-based artificial intelligence because datasets are naturally class imbalanced, leading to biased prediction models. Inherited retinal diseases (IRDs) are a research domain that particularly faces this issue. This study investigates the applicability of synthetic data in improving artificial intelligence-enabled diagnosis of IRDs using generative adversarial networks (GANs).
DESIGN : Diagnostic study of gene-labeled fundus autofluorescence (FAF) IRD images using deep learning.
PARTICIPANTS : Moorfields Eye Hospital (MEH) dataset of 15 692 FAF images obtained from 1800 patients with confirmed genetic diagnosis of 1 of 36 IRD genes.
METHODS : A StyleGAN2 model was trained on the IRD dataset to generate images at 512 × 512 resolution. Convolutional neural networks were trained for classification on different synthetically augmented datasets: real IRD images plus either 1800 or 3600 synthetic images, and a fully rebalanced dataset. We also performed an experiment using only synthetic data. All models were compared against a baseline convolutional neural network trained only on real data.
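As a rough, hypothetical sketch of the rebalancing strategy described in METHODS (not the authors' code), the following Python snippet pads each under-represented gene class with GAN-generated images until it matches the size of the largest class. The directory layout (real_faf/<gene>/*.png, synthetic_faf/<gene>/*.png) and the helper name build_rebalanced_index are assumptions made purely for illustration.

```python
# Minimal sketch (hypothetical paths and labels): rebalance a gene-labelled FAF
# image dataset by topping up each under-represented class with synthetic images.
from collections import defaultdict
from pathlib import Path
import random

def build_rebalanced_index(real_root: Path, synthetic_root: Path, seed: int = 0):
    """Return (path, gene_label) pairs where each class is padded with
    synthetic images up to the size of the largest real class."""
    rng = random.Random(seed)
    real = defaultdict(list)
    for img in real_root.glob("*/*.png"):          # assumes real_root/<gene>/<image>.png
        real[img.parent.name].append(img)
    target = max(len(v) for v in real.values())    # size of the largest real class

    index = []
    for gene, images in real.items():
        index += [(p, gene) for p in images]
        deficit = target - len(images)
        pool = list((synthetic_root / gene).glob("*.png"))  # GAN outputs for this gene
        if deficit > 0 and pool:
            index += [(p, gene) for p in rng.choices(pool, k=deficit)]
    rng.shuffle(index)
    return index

if __name__ == "__main__":
    pairs = build_rebalanced_index(Path("data/real_faf"), Path("data/synthetic_faf"))
    print(f"{len(pairs)} training samples after rebalancing")
```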
MAIN OUTCOME MEASURES : We evaluated synthetic data quality using a Visual Turing Test conducted with 4 ophthalmologists from MEH. Synthetic and real images were compared using feature space visualization, similarity analysis to detect memorized images, and Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score for no-reference-based quality evaluation. Convolutional neural network diagnostic performance was determined on a held-out test set using the area under the receiver operating characteristic curve (AUROC) and Cohen's Kappa (κ).
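For concreteness, below is a minimal sketch of how the two headline diagnostic metrics (multi-class AUROC and Cohen's κ) could be computed with scikit-learn; the label and probability arrays are random placeholders, not study data.

```python
# Minimal sketch of the reported metrics using scikit-learn (placeholder data,
# not the study's predictions).
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 36                       # 36 IRD gene classes

# Ensure every class appears at least once in the placeholder labels.
y_true = np.concatenate([np.arange(n_classes),
                         rng.integers(0, n_classes, size=n_samples - n_classes)])
probs = rng.dirichlet(np.ones(n_classes), size=n_samples)   # predicted class probabilities
y_pred = probs.argmax(axis=1)                                # hard predictions for kappa

# One-vs-rest, macro-averaged AUROC over the 36 gene classes.
auroc = roc_auc_score(y_true, probs, multi_class="ovr", average="macro",
                      labels=np.arange(n_classes))
kappa = cohen_kappa_score(y_true, y_pred)
print(f"AUROC = {auroc:.2f}, Cohen's kappa = {kappa:.2f}")
```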
RESULTS : An average true recognition rate of 63% and fake recognition rate of 47% was obtained from the Visual Turing Test. Thus, a considerable proportion of the synthetic images were classified as real by clinical experts. Similarity analysis showed that the synthetic images were not copies of the real images, indicating that the GAN did not simply memorize its training data and was able to generalize. However, BRISQUE score analysis indicated that synthetic images were of significantly lower quality overall than real images (P < 0.05). Comparing the rebalanced model (RB) with the baseline (R), no significant change in the average AUROC and κ was found (R-AUROC = 0.86 [0.85-0.88], RB-AUROC = 0.88 [0.86-0.89], R-κ = 0.51 [0.49-0.53], and RB-κ = 0.52 [0.50-0.54]). The model trained only on synthetic data (S) achieved performance similar to the baseline (S-AUROC = 0.86 [0.85-0.87], S-κ = 0.48 [0.46-0.50]).
CONCLUSIONS : Synthetic generation of realistic IRD FAF images is feasible. Synthetic data augmentation did not deliver improvements in classification performance. However, synthetic data alone delivered performance similar to real data, and hence may be useful as a proxy for real data. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references.
Veturi Yoga Advaith, Woof William, Lazebnik Teddy, Moghul Ismail, Woodward-Court Peter, Wagner Siegfried K, Cabral de Guimarães Thales Antonio, Daich Varela Malena, Liefers Bart, Patel Praveen J, Beck Stephan, Webster Andrew R, Mahroo Omar, Keane Pearse A, Michaelides Michel, Balaskas Konstantinos, Pontikos Nikolas
2023-Jun
AUROC, area under the receiver operating characteristic curve; BRISQUE, Blind/Referenceless Image Spatial Quality Evaluator; Class imbalance; Clinical Decision-Support Model; DL, deep learning; Deep Learning; FAF, fundus autofluorescence; FRR, Fake Recognition Rate; GAN, generative adversarial network; Generative Adversarial Networks; IRD, inherited retinal disease; Inherited Retinal Diseases; MEH, Moorfields Eye Hospital; R, baseline model; RB, rebalanced model; S, synthetic data trained model; Synthetic data; TRR, True Recognition Rate; UMAP, Uniform Manifold Approximation and Projection