Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

Ophthalmology

Characterization of the retinal vasculature in fundus photos using the PanOptic iExaminer system.

In Eye and vision (London, England)

Background : The goal was to characterize retinal vasculature by quantitative analysis of arteriole-to-venule (A/V) ratio and vessel density in fundus photos taken with the PanOptic iExaminer System.

Methods : The PanOptic ophthalmoscope equipped with a smartphone was used to acquire fundus photos centered on the optic nerve head. Two fundus photos were taken of each of 19 eyes from 10 subjects. Retinal vessels were analyzed to obtain the A/V ratio. In addition, the vessel tree was extracted using a deep learning U-Net, and vessel density was computed as the percentage of vessel pixels over the entire image.
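The two metrics described above are simple to compute once a binary vessel segmentation and per-vessel caliber measurements are available. The sketch below is illustrative only, assuming a thresholded U-Net output as the vessel mask; the function names and toy values are hypothetical, not from the paper.

```python
import numpy as np

def vessel_density(vessel_mask: np.ndarray) -> float:
    """Vessel density as the percentage of image pixels classified as vessel.

    vessel_mask: binary (0/1) segmentation map, e.g. a thresholded U-Net output.
    """
    return 100.0 * vessel_mask.sum() / vessel_mask.size

def av_ratio(arteriole_widths, venule_widths) -> float:
    """Arteriole-to-venule ratio from measured vessel calibers (same units)."""
    return float(np.mean(arteriole_widths) / np.mean(venule_widths))

# Toy example: a 4x4 mask with a single vessel pixel -> 1/16 = 6.25% density.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 2] = 1
print(vessel_density(mask))                 # → 6.25
print(av_ratio([95, 100], [125, 135]))      # → 0.75
```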

Results : All images were successfully processed for the A/V ratio and vessel density. There was no significant difference in averaged A/V ratio between the first (0.77 ± 0.09) and second (0.77 ± 0.10) measurements (P = 0.53). There was no significant difference in averaged vessel density (%) between the first (6.11 ± 1.39) and second (6.12 ± 1.40) measurements (P = 0.85).

Conclusions : Quantitative analysis of the retinal vasculature was feasible in fundus photos taken with the PanOptic ophthalmoscope. The device appears to provide sufficient image quality for analyzing A/V ratio and vessel density, with the benefits of portability, easy data transfer, and low cost, and could be used for pre-clinical screening of systemic, cerebral and ocular diseases.

Hu Huiling, Wei Haicheng, Xiao Mingxia, Jiang Liqiong, Wang Huijuan, Jiang Hong, Rundek Tatjana, Wang Jianhua


Arteriovenous ratio, Deep learning, Image analysis, Retina, Smartphone ophthalmoscope, Vessel density

Public Health

Applying machine learning on health record data from general practitioners to predict suicidality.

In Internet interventions

Background : Suicidal behaviour is difficult to detect in general practice. Machine learning (ML) algorithms using routinely collected data might support general practitioners (GPs) in the detection of suicidal behaviour. In this paper, we applied machine learning techniques to support GPs in recognizing suicidal behaviour in primary care patients, using routinely collected general practice data.

Methods : This case-control study used data from a nationally representative primary care database including over 1.5 million patients (Nivel Primary Care Database). Patients with a suicide (attempt) in 2017 were selected as cases (N = 574), and an at-risk control group (N = 207,308) was selected from patients with psychological vulnerability but without a suicide attempt in 2017. A RandomForest model was trained on a small subsample of the data (training set) and evaluated on unseen data (test set).

Results : Almost two-thirds (65%) of the cases visited their GP within the 30 days before the suicide (attempt). RandomForest showed a positive predictive value (PPV) of 0.05 (0.04-0.06), a sensitivity of 0.39 (0.32-0.47), and an area under the curve (AUC) of 0.85 (0.81-0.88). Almost all controls were accurately labeled as controls (specificity = 0.98 (0.97-0.98)). Among a sample of 650 at-risk primary care patients, the algorithm would label 20 patients as high-risk. Of those, one would be an actual case, and one additional case would be missed.
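The low PPV despite high specificity follows directly from the very low prevalence of cases among at-risk patients. The arithmetic can be checked from confusion-matrix counts; the counts below are illustrative, chosen to mirror the abstract's 650-patient example (20 flagged, 1 true case among them, 1 case missed), and happen to reproduce the reported PPV of 0.05.

```python
def screening_metrics(tp, fp, fn, tn):
    """Confusion-matrix counts -> (sensitivity, specificity, PPV)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Illustrative counts for 650 at-risk patients: 20 flagged as high-risk,
# of whom 1 is an actual case; 1 further case among the 630 unflagged.
sens, spec, ppv = screening_metrics(tp=1, fp=19, fn=1, tn=629)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
# → sensitivity=0.50 specificity=0.97 PPV=0.05
```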

Conclusion : In this study, we applied machine learning to predict suicidal behaviour using general practice data. Our results showed that these techniques can be used as a complementary step in the identification and stratification of patients at risk of suicidal behaviour. The results are encouraging and provide a first step to use automated screening directly in clinical practice. Additional data from different social domains, such as employment and education, might improve accuracy.

van Mens Kasper, Elzinga Elke, Nielen Mark, Lokkerbol Joran, Poortvliet Rune, Donker Gé, Heins Marianne, Korevaar Joke, Dückers Michel, Aussems Claire, Helbich Marco, Tiemens Bea, Gilissen Renske, Beekman Aartjan, de Beurs Derek


Electronic health records, General practice, Machine learning, Suicide

General

Generative-Discriminative Complementary Learning.

In Proceedings of the ... AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence

The majority of state-of-the-art deep learning methods are discriminative approaches, which model the conditional distribution of labels given input features. The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases. In this paper, we study the complementary learning problem. Unlike ordinary labels, complementary labels are easy to obtain because an annotator only needs to provide a yes/no answer to a randomly chosen candidate class for each instance. We propose a generative-discriminative complementary learning method that estimates the ordinary labels by modeling both the conditional (discriminative) and instance (generative) distributions. Our method, which we call Complementary Conditional GAN (CCGAN), improves the accuracy of predicting ordinary labels and is able to generate high-quality instances in spite of weak supervision. In addition to extensive empirical studies, we also show theoretically that our model can recover the true conditional distribution from complementarily-labeled data.
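The annotation protocol described above can be simulated in a few lines. This is a minimal sketch of complementary-label collection, not the paper's method: it assumes a uniformly random candidate class and a perfect annotator, and the function name is hypothetical. A "no" answer yields a complementary label, i.e. a class the instance is known not to belong to.

```python
import random

def query_annotator(true_label: int, num_classes: int, rng: random.Random):
    """Simulate one complementary-labelling query.

    A candidate class is drawn uniformly at random and the annotator answers
    yes/no. "Yes" reveals the ordinary label; "no" yields a complementary
    label: a class the instance does NOT belong to.
    """
    candidate = rng.randrange(num_classes)
    if candidate == true_label:
        return ("ordinary", candidate)       # annotator said yes
    return ("complementary", candidate)      # annotator said no

rng = random.Random(0)
answers = [query_annotator(true_label=3, num_classes=10, rng=rng) for _ in range(5)]
print(answers)
```

With many classes, most queries return complementary labels, which is why this supervision is cheap to collect but weak per query.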

Xu Yanwu, Gong Mingming, Chen Junxiang, Liu Tongliang, Zhang Kun, Batmanghelich Kayhan


General

Screening for obstructive sleep apnea with novel hybrid acoustic smartphone app technology.

In Journal of thoracic disease ; h5-index 52.0

Background : Obstructive sleep apnea (OSA) has a high prevalence, with an estimated 425 million adults having an apnea-hypopnea index (AHI) of ≥15 events/hour, and is significantly underdiagnosed. This is a significant pain point both for sufferers and for healthcare systems, particularly in a post-COVID-19-pandemic world. As such, it presents an opportunity for new technologies that can enable screening in both developing and developed countries. In this work, the performance of a non-contact OSA screener app that runs on both Apple and Android smartphones is presented.

Methods : The subtle breathing patterns of a person in bed can be measured via a smartphone using the "Firefly" app technology platform [and its underpinning software development kit (SDK)], which utilizes advanced digital signal processing (DSP) and artificial intelligence (AI) algorithms to identify detailed sleep stages, respiration rate, snoring, and OSA patterns. The smartphone is simply placed adjacent to the subject, such as on a bedside table, nightstand or shelf, during the sleep session. The system was trained on a set of 128 overnights recorded at a sleep laboratory, where volunteers underwent simultaneous full polysomnography (PSG) and "Firefly" smartphone app analysis. A separate independent test set of 120 recordings was collected across a range of Apple iOS and Android smartphones and withheld for performance evaluation by a different team. An operating point tuned for mid-sensitivity (i.e., balancing sensitivity and specificity) was chosen for the screener.

Results : The performance on the test set is comparable to ambulatory OSA screeners, and other smartphone screening apps, with a sensitivity of 88.3% and specificity of 80.0% [with receiver operating characteristic (ROC) area under the curve (AUC) of 0.92], for a clinical threshold for the AHI of ≥15 events/hour of detected sleep time.
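Choosing a "mid-sensitivity" operating point as described above amounts to scanning candidate thresholds over the screener's scores and picking the one where sensitivity and specificity are closest. The sketch below illustrates this on synthetic scores and labels; it is a generic ROC-threshold heuristic, not the "Firefly" team's actual tuning procedure, and all names and values are hypothetical.

```python
def pick_balanced_threshold(scores, labels, thresholds):
    """Return the threshold minimising |sensitivity - specificity|.

    scores: per-subject screener outputs (higher = more likely OSA);
    labels: 1 if PSG-derived AHI >= 15 events/hour, else 0.
    """
    best_t, best_gap = None, float("inf")
    for t in thresholds:
        preds = [s >= t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fn = sum((not p) and y for p, y in zip(preds, labels))
        tn = sum((not p) and (not y) for p, y in zip(preds, labels))
        fp = sum(p and (not y) for p, y in zip(preds, labels))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        if abs(sens - spec) < best_gap:
            best_t, best_gap = t, abs(sens - spec)
    return best_t

# Synthetic example: positives tend to score higher than negatives.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
print(pick_balanced_threshold(scores, labels, [0.15, 0.35, 0.55, 0.75]))  # → 0.55
```

At 0.55 both sensitivity and specificity are 0.75 on this toy data; lower thresholds trade specificity for sensitivity, as in any screener.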

Conclusions : The "Firefly" app-based sensing technology offers the potential to significantly lower the barrier to entry for OSA screening, as no hardware other than the user's personal smartphone is required. Additionally, multi-night analysis is possible in the home environment, without requiring the subject to wear a portable PSG or other home sleep test (HST).

Tiron Roxana, Lyon Graeme, Kilroy Hannah, Osman Ahmed, Kelly Nicola, O’Mahony Niall, Lopes Cesar, Coffey Sam, McMahon Stephen, Wren Michael, Conway Kieran, Fox Niall, Costello John, Shouldice Redmond, Lederer Katharina, Fietze Ingo, Penzel Thomas


Sleep-disordered breathing (SDB), apnea hypopnea index (AHI), obstructive sleep apnea (OSA), screening, smartphone

General

In vivo identification of apoptotic and extracellular vesicle-bound live cells using image-based deep learning.

In Journal of extracellular vesicles

The in vivo detection of dead cells remains a major challenge due to technical hurdles. Here, we present a novel method in which injection of fluorescent milk fat globule-EGF factor 8 protein (MFG-E8) in vivo, combined with imaging flow cytometry and deep learning, allows the identification of dead cells based on their surface exposure of phosphatidylserine (PS) and other image parameters. A convolutional autoencoder (CAE) was trained on defined pictures and successfully used to identify apoptotic cells in vivo. However, unexpectedly, these analyses also revealed that the great majority of PS+ cells were not apoptotic, but rather live cells associated with PS+ extracellular vesicles (EVs). During acute viral infection, apoptotic cells increased slightly, while up to 30% of lymphocytes were decorated with PS+ EVs of antigen-presenting cell (APC) exosomal origin. The combination of recombinant fluorescent MFG-E8 and the CAE method will greatly facilitate analyses of cell death and EVs in vivo.

Kranich Jan, Chlis Nikolaos-Kosmas, Rausch Lisa, Latha Ashretha, Schifferer Martina, Kurz Tilman, Foltyn-Arfa Kia Agnieszka, Simons Mikael, Theis Fabian J, Brocker Thomas


Extracellular Vesicles, apoptosis, dendritic cells, exosomes, irradiation, viral Infection

Surgery

Machine intelligence for nerve conduit design and production.

In Journal of biological engineering

Nerve guidance conduits (NGCs) have emerged from recent advances within tissue engineering as a promising alternative to autografts for peripheral nerve repair. NGCs are tubular structures made of engineered biomaterials, which guide axonal regeneration from the injured proximal nerve to the distal stump. NGC design can synergistically combine multiple properties to enhance proliferation of stem and neuronal cells, improve nerve migration, attenuate inflammation and reduce scar tissue formation. The aim of most laboratories fabricating NGCs is the development of an automated process that incorporates patient-specific features and complex tissue blueprints (e.g. a neurovascular conduit) that serve as the basis for more complicated muscular and skin grafts. One of the major limitations for tissue engineering is the lack of guidance for generating tissue blueprints and the absence of streamlined manufacturing processes. With the rapid expansion of machine intelligence, high-dimensional image analysis, and computational scaffold design, optimized tissue templates for 3D bioprinting (3DBP) are feasible. In this review, we examine the translational challenges to peripheral nerve regeneration and where machine intelligence can innovate around bottlenecks in neural tissue engineering.

Stewart Caleb E, Kan Chin Fung Kelvin, Stewart Brody R, Sanicola Henry W, Jung Jangwook P, Sulaiman Olawale A R, Wang Dadong


Artificial intelligence, Bioprinting, Computer vision, Data science, Machine learning, Nerve regeneration, Tissue engineering