
Ophthalmology

Microvasculature Segmentation and Intercapillary Area Quantification of the Deep Vascular Complex Using Transfer Learning.

In Translational vision science & technology

Purpose : Optical coherence tomography angiography (OCT-A) permits visualization of the changes to the retinal circulation due to diabetic retinopathy (DR), a microvascular complication of diabetes. We demonstrate accurate segmentation of the vascular morphology for the superficial capillary plexus (SCP) and deep vascular complex (DVC) using a convolutional neural network (CNN) for quantitative analysis.

Methods : The main CNN training dataset consisted of retinal OCT-A with a 6 × 6-mm field of view (FOV), acquired using a Zeiss PlexElite. Multiple-volume acquisition and averaging enhanced the vasculature contrast used for constructing the ground truth for neural network training. We used transfer learning from a CNN trained on smaller FOVs of the SCP acquired using different OCT instruments. Quantitative analysis of perfusion was performed on the resulting automated vasculature segmentations in representative patients with DR.
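
The transfer-learning step described above can be pictured with a short sketch. The following Python/Keras fragment is illustrative only, not the authors' code: the small encoder-decoder architecture is a stand-in for their segmentation CNN, the pretrained weights are assumed to come from the smaller-FOV SCP training, and which layers are frozen during fine-tuning is an assumption (the abstract does not specify).

```python
# Minimal transfer-learning sketch for OCT-A vessel segmentation (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_segmenter(input_shape=(1024, 1024, 1)):
    # Small encoder-decoder stand-in for the paper's segmentation CNN (architecture assumed).
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same", name="enc_conv1")(inputs)
    x = layers.MaxPooling2D(name="enc_pool1")(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name="enc_conv2")(x)
    x = layers.UpSampling2D(name="dec_up1")(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same", name="dec_conv1")(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid", name="dec_out")(x)
    return models.Model(inputs, outputs)

def fine_tune(pretrained_weights_path, images, masks):
    model = build_segmenter()
    model.load_weights(pretrained_weights_path)   # weights from the smaller-FOV SCP training
    for layer in model.layers:
        if layer.name.startswith("enc_"):         # freeze the encoder; which layers were
            layer.trainable = False               # actually retrained is not stated in the abstract
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(images, masks, batch_size=2, epochs=50, validation_split=0.1)
    return model
```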

Results : The automated segmentations of the OCT-A images maintained the distinct morphologies of the SCP and DVC. The network segmented the SCP with an accuracy of 0.8599 and a Dice index of 0.8618, and the DVC with an accuracy of 0.7986 and a Dice index of 0.8139. For comparison, the inter-rater agreement on the SCP had an accuracy of 0.8300 and a Dice index of 0.6700, and on the DVC an accuracy of 0.6874 and a Dice index of 0.7416.

Conclusions : Transfer learning reduces the number of manually annotated images required while producing high-quality automatic segmentations of the SCP and DVC that exceed inter-rater comparisons. The resulting intercapillary area quantification provides a tool for in-depth clinical analysis of retinal perfusion.

Translational Relevance : Accurate retinal microvasculature segmentation with the CNN results in improved perfusion analysis in diabetic retinopathy.

Lo Julian, Heisler Morgan, Vanzan Vinicius, Karst Sonja, Matovinović Ivana Zadro, Lončarić Sven, Navajas Eduardo V, Beg Mirza Faisal, Šarunić Marinko V

2020-Jul

angiography, diabetic retinopathy, machine learning, neural networks, optical coherence tomography

Ophthalmology

Convolutional Neural Network Based on Fluorescein Angiography Images for Retinopathy of Prematurity Management.

In Translational vision science & technology

Purpose : The purpose of this study was to explore the use of fluorescein angiography (FA) images in a convolutional neural network (CNN) in the management of retinopathy of prematurity (ROP).

Methods : The dataset comprised a total of 835 FA images of 149 eyes (90 patients), where each eye was associated with a binary outcome (57 "untreated" eyes and 92 "treated"; 308 "untreated" images, 527 "treated"). The resolution of the images was 1600 × 1200 px in 20% of cases, whereas the remaining 80% had a resolution of 640 × 480 px. All the images were resized to 640 × 480 px before training, and no other preprocessing was applied. A CNN with four convolutional layers was trained on a randomly chosen 90% of the images (n = 752). The accuracy of the prediction was assessed on the remaining 10% of images (n = 83). Keras version 2.2.0 for R with TensorFlow backend version 1.11.0 was used for the analysis.
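
As a rough illustration of the architecture described above, a four-convolutional-layer binary classifier on 640 × 480 inputs might look like the sketch below. The study used Keras 2.2.0 for R with a TensorFlow backend; this fragment is Python/tf.keras, and the filter counts, dense head, and batch size are assumptions rather than the authors' settings.

```python
# Illustrative four-conv-layer CNN for "treated" vs. "untreated" FA images (assumptions noted above).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_rop_cnn(input_shape=(480, 640, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 128):          # four convolutional blocks (filter counts assumed)
        model.add(layers.Conv2D(filters, 3, activation="relu", padding="same"))
        model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))   # binary outcome: treated vs. untreated
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Training as in the abstract: 90/10 random split, 100 epochs (batch size assumed).
# model = build_rop_cnn()
# model.fit(x_train, y_train, epochs=100, batch_size=16, validation_data=(x_val, y_val))
```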

Results : The validation accuracy after 100 epochs was 0.88, whereas training accuracy was 0.97. The receiver operating characteristic (ROC) presented an area under the curve (AUC) of 0.91.

Conclusions : Our study showed, we believe for the first time, the applicability of artificial intelligence (CNN) technology to FA-driven ROP management. Further studies are needed to explore different fields of application for this technology.

Translational Relevance : This algorithm is the basis for a system that could be applied to both ROP and experimental oxygen-induced retinopathy.

Lepore Domenico, Ji Marco H, Pagliara Monica M, Lenkowicz Jacopo, Capocchiano Nikola D, Tagliaferri Luca, Boldrini Luca, Valentini Vincenzo, Damiani Andrea

2020-Jul

deep learning, fluorescein angiography, retinopathy of prematurity

Radiology

Color Doppler Ultrasound Improves Machine Learning Diagnosis of Breast Cancer.

In Diagnostics (Basel, Switzerland)

Color Doppler is used in the clinic to visually assess the vascularity of breast masses on ultrasound and thereby aid in determining the likelihood of malignancy. In this study, quantitative color Doppler radiomics features were algorithmically extracted from breast sonograms for machine learning, producing a diagnostic model for breast cancer with higher performance than models based on grayscale features and the clinical category from the Breast Imaging Reporting and Data System for ultrasound (BI-RADSUS). Ultrasound images of 159 solid masses were analyzed. Algorithms extracted nine grayscale features and two color Doppler features. These features, along with patient age and BI-RADSUS category, were used to train an AdaBoost ensemble classifier. Although training on computer-extracted grayscale features and on color Doppler features each significantly increased performance over models trained on clinical features alone, as measured by the area under the receiver operating characteristic (ROC) curve, training on color Doppler and grayscale features together increased the ROC area further, from 0.925 ± 0.022 to 0.958 ± 0.013. Pruning low-confidence cases at a rate of 20% improved this to 0.986 ± 0.007 with 100% sensitivity, whereas 64% of the cases had to be pruned to reach this performance without color Doppler. Machine learning on color Doppler features thus achieved both fewer borderline diagnoses and higher ROC performance for ultrasound-based diagnostic models of breast cancer.
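
A hedged sketch of the modeling pipeline described above: an AdaBoost classifier trained on combined grayscale and color Doppler radiomics features (plus age and BI-RADSUS category), scored by ROC AUC, with the least confident 20% of cases pruned before re-scoring. The base estimator, train/test split, and exact pruning rule are assumptions, not the authors' protocol.

```python
# Illustrative AdaBoost + ROC-AUC pipeline with 20% low-confidence pruning (assumptions noted above).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def evaluate_with_pruning(X, y, prune_fraction=0.20):
    # X: radiomics features + age + BI-RADSUS category; y: benign/malignant labels.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    proba = clf.predict_proba(X_te)[:, 1]
    auc_all = roc_auc_score(y_te, proba)

    # Prune the least confident cases (probabilities closest to 0.5) and re-score the rest.
    confidence = np.abs(proba - 0.5)
    keep = confidence >= np.quantile(confidence, prune_fraction)
    auc_pruned = roc_auc_score(np.asarray(y_te)[keep], proba[keep])
    return auc_all, auc_pruned
```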

Moustafa Afaf F, Cary Theodore W, Sultan Laith R, Schultz Susan M, Conant Emily F, Venkatesh Santosh S, Sehgal Chandra M

2020-Aug-25

breast cancer, color Doppler, machine learning, radiomics, ultrasound

Public Health

Big Data Analytics in the Fight against Major Public Health Incidents (Including COVID-19): A Conceptual Framework.

In International journal of environmental research and public health ; h5-index 73.0

Major public health incidents such as COVID-19 are typically sudden, uncertain, and hazardous. If a government can effectively accumulate big data from various sources and apply appropriate analytical methods, it may respond quickly enough to reach optimal public health decisions, thereby ameliorating the negative impacts of the incident and restoring normality sooner. Although there are many reports and studies examining how to use big data for epidemic prevention, there is still no effective review of, or framework for, the application of big data in the fight against major public health incidents such as COVID-19 that governments could use as a reference. This paper provides clear information on the characteristics of COVID-19 and key big data resources, and on big data for the visualization of pandemic prevention and control, close contact screening, online public opinion monitoring, virus host analysis, and pandemic forecast evaluation. A framework is provided as a multidimensional reference for the effective use of big data analytics technology to prevent and control epidemics (or pandemics). The challenges of, and suggestions for, applying big data to fight COVID-19 are also discussed.

Jia Qiong, Guo Yue, Wang Guanlin, Barnes Stuart J

2020-Aug-25

COVID-19, big data analysis, deep learning, epidemic prevention and control, major public health incidents, predictive analysis, visual analysis

Ophthalmology

Transfer Learning for Automated OCTA Detection of Diabetic Retinopathy.

In Translational vision science & technology

Purpose : To test the feasibility of using deep learning for optical coherence tomography angiography (OCTA) detection of diabetic retinopathy.

Methods : A deep-learning convolutional neural network (CNN) architecture, VGG16, was employed for this study. A transfer learning process was implemented to retrain the CNN for robust OCTA classification. One dataset, consisting of images of 32 healthy eyes, 75 eyes with diabetic retinopathy (DR), and 24 eyes with diabetes but no DR (NoDR), was used for training and cross-validation. A second dataset consisting of 20 NoDR and 26 DR eyes was used for external validation. To demonstrate the feasibility of using artificial intelligence (AI) screening of DR in clinical environments, the CNN was incorporated into a graphical user interface (GUI) platform.

Results : With the last nine layers retrained, the CNN architecture achieved the best performance for automated OCTA classification. The cross-validation accuracy of the retrained classifier for differentiating among healthy, NoDR, and DR eyes was 87.27%, with 83.76% sensitivity and 90.82% specificity. The AUC metrics for binary classification of healthy, NoDR, and DR eyes were 0.97, 0.98, and 0.97, respectively. The GUI platform enabled easy validation of the method for AI screening of DR in a clinical environment.
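
The retraining scheme reported above can be sketched in Python/Keras: load ImageNet-pretrained VGG16, freeze all but the last nine layers, and add a three-class softmax head for healthy, NoDR, and DR. The input size, optimizer, and classification head below are assumptions; only the VGG16 backbone and the "last nine layers retrained" detail come from the abstract.

```python
# Illustrative VGG16 transfer-learning setup for three-class OCTA classification (assumptions noted above).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_octa_classifier(input_shape=(224, 224, 3), n_classes=3, n_retrain=9):
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    for layer in base.layers[:-n_retrain]:   # keep the early layers fixed
        layer.trainable = False
    for layer in base.layers[-n_retrain:]:   # retrain only the last nine layers
        layer.trainable = True

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),   # healthy, NoDR, DR
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```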

Conclusions : With a transfer-learning process for retraining, a CNN can be used for robust OCTA classification of healthy, NoDR, and DR eyes. The AI-based OCTA classification platform may provide a practical way to reduce the burden on experienced ophthalmologists in the mass screening of DR patients.

Translational Relevance : Deep-learning-based OCTA classification can alleviate the need for manual graders and improve DR screening efficiency.

Le David, Alam Minhaj, Yao Cham K, Lim Jennifer I, Hsieh Yi-Ting, Chan Robison V P, Toslak Devrim, Yao Xincheng

2020-Jul

artificial intelligence, deep learning, detection, diabetic retinopathy, screening

Pathology

Clinically applicable histopathological diagnosis system for gastric cancer detection using deep learning.

In Nature communications ; h5-index 260.0

The early detection and accurate histopathological diagnosis of gastric cancer increase the chances of successful treatment. The worldwide shortage of pathologists offers a unique opportunity for the use of artificial intelligence assistance systems to alleviate the workload and increase diagnostic accuracy. Here, we report a clinically applicable system developed at the Chinese PLA General Hospital, China, using a deep convolutional neural network trained with 2,123 pixel-level annotated H&E-stained whole slide images. The model achieves a sensitivity near 100% and an average specificity of 80.6% on a real-world test dataset with 3,212 whole slide images digitized by three scanners. We show that the system could aid pathologists in improving diagnostic accuracy and preventing misdiagnoses. Moreover, we demonstrate that our system performs robustly with 1,582 whole slide images from two other medical centres. Our study suggests the feasibility and benefits of using histopathological artificial intelligence assistance systems in routine practice scenarios.
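
Although the authors' system is not described at the code level, patch-based whole-slide inference is commonly implemented as a sliding window over the slide; the sketch below only illustrates that general pattern. The patch size, stride, aggregation by maximum patch probability, and the model.predict call are hypothetical and not taken from the paper.

```python
# Generic sliding-window inference over a whole slide image (illustrative pattern only).
import numpy as np
import openslide

def slide_max_tumor_probability(slide_path, model, patch_size=512, stride=512, level=0):
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.level_dimensions[level]
    max_prob = 0.0
    for y in range(0, height - patch_size + 1, stride):
        for x in range(0, width - patch_size + 1, stride):
            patch = slide.read_region((x, y), level, (patch_size, patch_size)).convert("RGB")
            # Hypothetical patch-level classifier call; input scaling and output shape are assumptions.
            prob = float(model.predict(np.asarray(patch)[None] / 255.0, verbose=0)[0, 0])
            max_prob = max(max_prob, prob)   # flag the slide by its most suspicious patch
    return max_prob
```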

Song Zhigang, Zou Shuangmei, Zhou Weixun, Huang Yong, Shao Liwei, Yuan Jing, Gou Xiangnan, Jin Wei, Wang Zhanbo, Chen Xin, Ding Xiaohui, Liu Jinhong, Yu Chunkai, Ku Calvin, Liu Cancheng, Sun Zhuo, Xu Gang, Wang Yuefeng, Zhang Xiaoqing, Wang Dandan, Wang Shuhao, Xu Wei, Davis Richard C, Shi Huaiyin

2020-Aug-27