
General

A Deep Learning-Based System (Microscan) for the Identification of Pollen Development Stages and Its Application to Obtaining Doubled Haploid Lines in Eggplant.

In Biology

The development of doubled haploids (DHs) is a straightforward path for obtaining pure lines but has multiple bottlenecks. Among them is the determination of the optimal stage of pollen induction for androgenesis. In this work, we developed Microscan, a deep learning-based system for the detection and recognition of the stages of pollen development. In a first experiment, the algorithm was developed by adapting the RetinaNet predictive model, using microspores of different eggplant accessions as samples. A mean average precision of 86.30% was obtained. In a second experiment, the range of anther sizes to be cultivated in vitro was determined in three eggplant genotypes by applying the Microscan system. The anthers were subsequently cultivated following two different androgenesis protocols (Cb and E6). A response was observed only in the anther size range predicted by Microscan, with the best results obtained under the E6 protocol. The plants obtained were characterized by flow cytometry and with the Single Primer Enrichment Technology high-throughput genotyping platform, yielding a high rate of confirmed haploid and doubled haploid plants. Microscan thus proves to be a tool for efficient, high-throughput analysis of microspore samples, as exemplified in eggplant by the increase it provided in the yield of DH production.

García-Fortea Edgar, García-Pérez Ana, Gimeno-Páez Esther, Sánchez-Gimeno Alfredo, Vilanova Santiago, Prohens Jaime, Pastor-Calle David

2020-Sep-05

RetinaNet, Solanum melongena, androgenesis, anther culture, microspores
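The 86.30% mean average precision reported for the RetinaNet-based detector can be illustrated with a minimal numpy sketch of how average precision is computed from IoU-matched detections. The boxes below are hypothetical, and this is a simplified 11-point AP, not the exact evaluation protocol used in the paper:

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(preds, gts, iou_thr=0.5):
    # preds: list of (confidence, box); gts: list of ground-truth boxes
    preds = sorted(preds, key=lambda p: -p[0])  # most confident first
    matched = set()
    tp, fp = np.zeros(len(preds)), np.zeros(len(preds))
    for i, (_, box) in enumerate(preds):
        # Greedily match each prediction to the best unmatched ground truth
        ious = [(iou(box, g), j) for j, g in enumerate(gts) if j not in matched]
        best = max(ious, default=(0, -1))
        if best[0] >= iou_thr:
            tp[i] = 1
            matched.add(best[1])
        else:
            fp[i] = 1
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(len(gts), 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # 11-point interpolated average precision
    return float(np.mean([precision[recall >= t].max() if (recall >= t).any() else 0
                          for t in np.linspace(0, 1, 11)]))
```

A single perfect detection of one ground-truth microspore yields an AP of 1.0; mean AP averages this quantity over the pollen development stage classes.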

General

Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm.

In Soft computing

The novel coronavirus infection (COVID-19) that was first identified in China in December 2019 has spread across the globe rapidly, infecting over ten million people. The World Health Organization (WHO) declared it a pandemic on March 11, 2020. What makes it even more critical is the lack of vaccines available to control the disease, although many pharmaceutical companies and research institutions all over the world are working toward developing effective solutions to battle this life-threatening disease. X-ray and computed tomography (CT) imaging is one of the most promising research areas: it can help provide early diagnosis of the disease and gives both quick and precise results. In this study, convolutional neural networks are used for binary pneumonia classification, comparing fine-tuned VGG-19, Inception_V2, and a decision tree model on an X-ray and CT scan image dataset containing 360 images. The fine-tuned VGG-19 model shows highly satisfactory performance, with training and validation accuracy rising to 91%, compared with the Inception_V2 (78%) and decision tree (60%) models.

Dansana Debabrata, Kumar Raghvendra, Bhattacharjee Aishik, Hemanth D Jude, Gupta Deepak, Khanna Ashish, Castillo Oscar

2020-Aug-28

CNN, COVID-19, CT scan, Decision tree, Inception_V2, VGG-16, X-ray images
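The fine-tuning setup described above typically freezes the pretrained convolutional backbone and retrains only the classification head. A minimal numpy sketch of that last step, using random synthetic vectors as a hypothetical stand-in for frozen VGG-19 features of the 360 images (this is an illustration, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: 360 images reduced to 512-d feature vectors by a
# frozen backbone (random and linearly separable here, for illustration).
n, d = 360, 512
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)  # 0 = normal, 1 = pneumonia/COVID-19

# Train only the classification head (logistic regression) by gradient
# descent, mirroring fine-tuning with convolutional weights kept frozen.
w = np.zeros(d)
for _ in range(500):
    z = np.clip(X @ w, -30, 30)         # clip logits for numerical stability
    p = 1 / (1 + np.exp(-z))            # sigmoid activation
    w -= 0.1 * X.T @ (p - y) / n        # gradient step on log loss

train_acc = float(np.mean((X @ w > 0) == (y == 1)))
```

On this toy separable data the head reaches high training accuracy; in practice the 91% figure above refers to accuracy on real X-ray/CT images.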

General

Performance of object recognition in wearable videos

ArXiv Preprint

Wearable technologies are enabling plenty of new applications of computer vision, from life logging to health assistance. Many of them are required to recognize the elements of interest in the scene captured by the camera. This work studies the problem of object detection and localization on videos captured by this type of camera. Wearable videos are a much more challenging scenario for object detection than standard images or even other types of video, due to the lower image quality (e.g., poor focus) and the high clutter and occlusion common in wearable recordings. Existing work typically focuses on detecting the objects of focus or those being manipulated by the user wearing the camera. We perform a more general evaluation of the task of object detection in this type of video, because numerous applications, such as marketing studies, also need to detect objects that are not in the user's focus. This work presents a thorough study of the well-known YOLO architecture, which offers an excellent trade-off between accuracy and speed, for the particular case of object detection in wearable video. We focus our study on the public ADL Dataset, but we also use additional public data for complementary evaluations. We run an exhaustive set of experiments with different variations of the original architecture and its training strategy. Our experiments lead to several conclusions about the most promising directions for our goal and point us to further research steps to improve detection in wearable videos.

Alberto Sabater, Luis Montesano, Ana C. Murillo

2020-09-10
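The clutter and occlusion mentioned above make the post-processing of raw YOLO detections especially important: overlapping boxes for the same object are pruned with non-maximum suppression. A minimal numpy sketch of greedy NMS with hypothetical boxes (a generic illustration, not the authors' exact configuration):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.45):
    """Greedy non-maximum suppression as used in YOLO-style detectors.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences."""
    order = np.argsort(scores)[::-1]  # most confident first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of the kept box with all remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Suppress candidates that overlap the kept box too much
        order = rest[iou < iou_thr]
    return keep
```

Lowering `iou_thr` prunes more aggressively, which trades missed detections in cluttered wearable frames against fewer duplicates.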

General

COVID CT-Net: Predicting Covid-19 From Chest CT Images Using Attentional Convolutional Network

ArXiv Preprint

The novel coronavirus disease (COVID-19) pandemic has caused a major outbreak in more than 200 countries around the world, leading to a severe impact on the health and life of many people globally. As of Aug 25th of 2020, more than 20 million people are infected, and more than 800,000 deaths are reported. Computed Tomography (CT) images can be used as an alternative to the time-consuming "reverse transcription polymerase chain reaction (RT-PCR)" test, to detect COVID-19. In this work we developed a deep learning framework to predict COVID-19 from CT images. We propose to use an attentional convolutional network, which can focus on the infected areas of the chest, enabling it to perform a more accurate prediction. We trained our model on a dataset of more than 2000 CT images, and report its performance in terms of various popular metrics, such as sensitivity, specificity, area under the curve, and also the precision-recall curve, and achieve very promising results. We also provide a visualization of the attention maps of the model for several test images, and show that our model is attending to the infected regions as intended. In addition to developing a machine learning modeling framework, we also provide manual annotations of the potentially infected regions of the chest, produced with the help of a board-certified radiologist, and make them publicly available for other researchers.

Shakib Yazdani, Shervin Minaee, Rahele Kafieh, Narges Saeedizadeh, Milan Sonka

2020-09-10
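The attention maps mentioned above come from soft spatial attention: each location in a convolutional feature map gets a normalized weight, and features are pooled as the attention-weighted sum. A minimal numpy sketch of that mechanism (the scoring vector and shapes are hypothetical, not the COVID CT-Net architecture):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, w_att):
    """Soft spatial attention: score each location, normalize scores into an
    attention map, and pool features as the attention-weighted sum.
    features: (H*W, C) flattened feature map; w_att: (C,) scoring vector."""
    scores = features @ w_att   # one scalar score per spatial location
    att = softmax(scores)       # attention map; weights sum to 1
    pooled = att @ features     # (C,) attention-weighted feature vector
    return pooled, att
```

Visualizing `att` reshaped back to (H, W) over the CT slice is what shows whether the model attends to infected regions.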

Surgery

Feasibility of Training a Random Forest Model With Incomplete User-Specific Data for Devising a Control Strategy for Active Biomimetic Ankle.

In Frontiers in bioengineering and biotechnology

Intelligent control strategies for active biomimetic prostheses could exploit the inter-joint coordination of limbs in human gait in order to mimic the functioning of a biological joint. A machine learning regression model could be employed to learn an input-output relationship between the coordinated limb motion in human gait and predict the motion of a particular limb/joint given the motion of other limbs/joints. Such a model could potentially be used as a controller for an intelligent prosthesis which aims to restore functioning similar to that of an intact biological joint. For this, the model needs to be tailored for each user by learning the gait pattern specific to the user. The challenge of training such machine learning regression models in prosthetic control is that the desired reference output cannot be obtained from an amputee due to the missing limb. In this study, we investigate the feasibility of using two different methods for training a random forest algorithm with incomplete amputee-specific data to predict the ankle kinematics and dynamics from hip, knee, and shank kinematics. The first is an inter-subject approach, which learns a generalized input-output relationship from a group of able-bodied individuals and then applies this generalized relationship to amputees. The second is a subject-specific approach, which maps the amputee's inputs to a desired normative reference output calculated from able-bodied individuals. The subject-specific model outperformed the inter-subject model in predicting the ankle angle and moment in most cases and could potentially be used for devising a control strategy for an intelligent biomimetic ankle.

Dey Sharmita, Yoshida Takashi, Schilling Arndt F

2020

human gait, intelligent biomimetics, prediction, prosthetic control, random forest
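The core regression task above (predicting ankle motion from hip/knee/shank inputs) can be sketched with a toy random-forest-style ensemble: bootstrap sampling plus depth-1 regression trees over random feature subsets. The gait data below is synthetic and the stump forest is a deliberately minimal stand-in for a full random forest:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_stump(X, y, n_feat):
    """Depth-1 regression tree over a random feature subset
    (the 'random' in random forest). Returns (feature, threshold,
    left-leaf mean, right-leaf mean)."""
    feats = rng.choice(X.shape[1], size=n_feat, replace=False)
    best = None
    for f in feats:
        for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            left = X[:, f] <= thr
            if left.all() or not left.any():
                continue  # skip degenerate splits
            pred = np.where(left, y[left].mean(), y[~left].mean())
            err = np.mean((y - pred) ** 2)
            if best is None or err < best[0]:
                best = (err, f, thr, y[left].mean(), y[~left].mean())
    return best[1:]

def forest_predict(trees, X):
    # Average the predictions of all stumps (bagging)
    preds = [np.where(X[:, f] <= thr, lo, hi) for f, thr, lo, hi in trees]
    return np.mean(preds, axis=0)

# Hypothetical gait samples: ankle angle as a function of hip, knee,
# and shank kinematics (linear toy relationship for illustration).
X = rng.normal(size=(300, 3))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]

trees = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    trees.append(fit_stump(X[idx], y[idx], n_feat=2))

r = float(np.corrcoef(forest_predict(trees, X), y)[0, 1])
```

In the inter-subject approach the training rows would come from able-bodied individuals; in the subject-specific approach the inputs come from the amputee, paired with a normative reference output.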

Surgery

A Machine Learning-Based Prediction of Hospital Mortality in Patients With Postoperative Sepsis.

In Frontiers in medicine

Introduction: The incidence of postoperative sepsis has continually increased, while few studies have specifically focused on the risk factors and clinical outcomes associated with the development of sepsis after surgical procedures. The present study aimed to develop a mathematical model for predicting in-hospital mortality among patients with postoperative sepsis. Materials and Methods: Surgical patients in the Medical Information Mart for Intensive Care (MIMIC-III) database who simultaneously fulfilled Sepsis 3.0 and Agency for Healthcare Research and Quality (AHRQ) criteria at ICU admission were included. We employed both an extreme gradient boosting (XGBoost) model and a stepwise logistic regression model to predict in-hospital mortality among patients with postoperative sepsis. Model performance was then assessed in terms of discrimination and calibration. Results: We included 3,713 patients who fulfilled our inclusion criteria, of whom 397 (10.7%) died during hospitalization and 3,316 (89.3%) survived through discharge. Fluid-electrolyte disturbance, coagulopathy, renal replacement therapy (RRT), urine output, and cardiovascular surgery were important features related to in-hospital mortality. The XGBoost model had better performance in both discriminatory ability (c-statistics, 0.835 vs. 0.737 and 0.621, respectively; AUPRC, 0.418 vs. 0.280 and 0.237, respectively) and goodness of fit (visualized by calibration curve) compared to the stepwise logistic regression model and the baseline model. Conclusion: The XGBoost model performs better than the stepwise logistic regression model in predicting hospital mortality among patients with postoperative sepsis. Machine learning-based algorithms might have significant application in the development of early warning systems for septic patients following major operations.

Yao Ren-Qi, Jin Xin, Wang Guo-Wei, Yu Yue, Wu Guo-Sheng, Zhu Yi-Bing, Li Lin, Li Yu-Xuan, Zhao Peng-Yue, Zhu Sheng-Yu, Xia Zhao-Fan, Ren Chao, Yao Yong-Ming

2020

coagulation, extreme gradient boosting, intensive care unit, postoperative sepsis, prediction
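The c-statistics used above to compare XGBoost against logistic regression are ROC AUCs: the probability that a randomly chosen patient who died receives a higher risk score than a randomly chosen survivor. A minimal numpy sketch with hypothetical risk scores (not the study's data):

```python
import numpy as np

def c_statistic(scores, labels):
    """c-statistic (ROC AUC): probability that a randomly chosen positive
    (death) is scored above a randomly chosen negative (survivor),
    with ties counted as one half."""
    pos = scores[labels == 1][:, None]   # positive scores as a column
    neg = scores[labels == 0][None, :]   # negative scores as a row
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

# Hypothetical risk scores for 10 patients (label 1 = died in hospital)
labels = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
scores = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.3, 0.95, 0.7, 0.2, 0.6])
auc = c_statistic(scores, labels)
```

Here one survivor outscores every death, so 18 of the 21 death-survivor pairs are ordered correctly, giving a c-statistic of about 0.857; comparing such values is exactly how the 0.835 vs. 0.737 contrast above is made.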