
General

Machine vision-driven automatic recognition of particle size and morphology in SEM images.

In Nanoscale; h5-index 139

Scanning Electron Microscopy (SEM) images provide a wealth of structural and morphological information about nanomaterials. In the materials informatics domain, automatic recognition and quantitative analysis of SEM images in a high-throughput manner are critical, but challenges remain due to the complexity and diversity of image configurations in both shape and size. In this paper, we present a generally applicable approach using computer vision and machine learning techniques to quantitatively extract particle size, size distribution, and morphology information from SEM images. The proposed pipeline offers automatic, high-throughput measurements even when overlapping nanoparticles, rod shapes, and core-shell nanostructures are present. We demonstrate the effectiveness of the proposed approach by performing experiments on SEM images of nanoscale materials and structures with different shapes and sizes. The proposed approach shows promising results (Spearman coefficients of 0.91 and 0.99 using fully automated and semi-automated processes, respectively) when compared with manually measured sizes. The code is available as open-source software at https://github.com/LLNL/LIST.
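
As an illustration of the kind of pipeline the abstract describes, here is a minimal sketch of automated particle sizing using the classic threshold, distance-transform, and watershed recipe. This is not the LLNL/LIST code (available at the link above); the input file name and the nm-per-pixel calibration are hypothetical placeholders.

```python
# Minimal sketch of automated particle sizing from an SEM image.
# Not the LLNL/LIST pipeline -- just the classic
# threshold + distance-transform + watershed recipe it builds on.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, io, measure, segmentation
from skimage.feature import peak_local_max

img = io.imread("sem_image.png", as_gray=True)  # hypothetical input file

# 1. Binarize: Otsu's threshold separates particles from background.
binary = img > filters.threshold_otsu(img)

# 2. Seed a watershed with local maxima of the distance transform so
#    that touching or overlapping particles are split apart.
distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = segmentation.watershed(-distance, markers, mask=binary)

# 3. Per-particle size and shape descriptors.
nm_per_px = 2.0  # hypothetical scale-bar calibration
for region in measure.regionprops(labels):
    d_equiv = region.equivalent_diameter * nm_per_px  # size in nm
    aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
    print(f"particle {region.label}: {d_equiv:.1f} nm, aspect {aspect:.2f}")
```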

Kim Hyojin, Han Jinkyu, Han T Yong-Jin

2020-Sep-22

Public Health

The Diabits App for Smartphone-Assisted Predictive Monitoring of Glycemia in Patients With Diabetes: Retrospective Observational Study.

In JMIR Diabetes

BACKGROUND: Diabetes mellitus, which causes dysregulation of blood glucose in humans, is a major public health challenge. Patients with diabetes must monitor their glycemic levels to keep them in a healthy range. This task is made easier by using continuous glucose monitoring (CGM) devices and relaying their output to smartphone apps, thus providing users with real-time information on their glycemic fluctuations and possibly predicting future trends.

OBJECTIVE: This study aims to discuss various challenges of predictive monitoring of glycemia and to examine the accuracy and blood glucose control effects of Diabits, a smartphone app that helps patients with diabetes monitor and manage their blood glucose levels in real time.

METHODS: Using data from CGM devices and user input, Diabits applies machine learning techniques to create personalized patient models and predict blood glucose fluctuations up to 60 min in advance. These predictions give patients an opportunity to take pre-emptive action to maintain their blood glucose values within the reference range. In this retrospective observational cohort study, the predictive accuracy of Diabits and the correlation between daily use of the app and blood glucose control metrics were examined based on real app users' data. Moreover, the accuracy of predictions on the 2018 Ohio T1DM (type 1 diabetes mellitus) data set was calculated and compared against other published results.
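
For readers unfamiliar with the setup, the following is a minimal sketch of short-horizon glucose forecasting from lagged CGM readings. It is not the Diabits model, which is personalized and proprietary; the synthetic trace, the one-hour feature window, and the gradient-boosting regressor are illustrative assumptions.

```python
# Minimal sketch of short-horizon glucose forecasting from CGM data.
# Not the Diabits model -- just the generic lagged-feature approach:
# predict glucose HORIZON steps ahead from the last few readings.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

STEP_MIN = 5               # CGM sampling interval (minutes)
HORIZON = 30 // STEP_MIN   # predict 30 minutes ahead
LAGS = 12                  # use the past hour of readings as features

def make_dataset(glucose):
    """Build (lagged-history, future-value) pairs from one CGM trace."""
    X, y = [], []
    for t in range(LAGS, len(glucose) - HORIZON):
        X.append(glucose[t - LAGS:t])
        y.append(glucose[t + HORIZON])
    return np.array(X), np.array(y)

# Hypothetical CGM trace (mg/dL); real data would come from a device.
rng = np.random.default_rng(0)
trace = 140 + 40 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 5, 2000)

X, y = make_dataset(trace)
split = int(0.8 * len(X))  # chronological train/test split
model = GradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"30-min RMSE: {rmse:.1f} mg/dL")
```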

RESULTS: On the basis of more than 6.8 million data points, 30-min Diabits predictions evaluated using the Parkes Error Grid were found to be 86.89% (5,963,930/6,864,130) clinically accurate (zone A) and 99.56% (6,833,625/6,864,130) clinically acceptable (zones A and B), whereas 60-min predictions were 70.56% (4,843,605/6,864,130) clinically accurate and 97.49% (6,692,165/6,864,130) clinically acceptable. By analyzing daily use statistics and CGM data for the 280 longest-standing users of Diabits, it was established that under free-living conditions, many common blood glucose control metrics improved with increased frequency of app use. For instance, the average blood glucose for the days these users did not interact with the app was 154.0 (SD 47.2) mg/dL, with 67.52% of the time spent in the healthy 70 to 180 mg/dL range. For days with 10 or more Diabits sessions, the average blood glucose decreased to 141.6 (SD 42.0) mg/dL (P<.001), whereas the time in euglycemic range increased to 74.28% (P<.001). On the Ohio T1DM data set of 6 patients with type 1 diabetes, 30-min predictions of the base Diabits model had an average root mean square error of 18.68 (SD 2.19) mg/dL, which is an improvement over the published state-of-the-art results for this data set.

CONCLUSIONS: Diabits accurately predicts future glycemic fluctuations, potentially making it easier for patients with diabetes to maintain their blood glucose in the reference range. Furthermore, an improvement in glucose control was observed on days with more frequent Diabits use.

Kriventsov Stan, Lindsey Alexander, Hayeri Amir

2020-Sep-22

artificial intelligence, blood glucose predictions, digital health, machine learning, mobile phone, type 1 diabetes

General

BiteOscope, an open platform to study mosquito biting behavior.

In eLife

Female mosquitoes need a blood meal to reproduce, and in obtaining this essential nutrient they transmit deadly pathogens. Although crucial for the spread of mosquito-borne diseases, blood feeding remains poorly understood due to technological limitations. Indeed, studies often expose human subjects to mosquito bites to assess biting behavior. Here, we present the biteOscope, a device that attracts mosquitoes to a host mimic which they bite to obtain an artificial blood meal. The host mimic is transparent, allowing high-resolution imaging of the feeding mosquito. Using machine learning, we extract detailed behavioral statistics describing the locomotion, pose, biting, and feeding dynamics of Aedes aegypti, Aedes albopictus, Anopheles stephensi, and Anopheles coluzzii. In addition to characterizing behavioral patterns, we discover that the common insect repellent DEET repels Anopheles coluzzii upon contact with their legs. The biteOscope provides a new perspective on mosquito blood feeding, enabling the high-throughput quantitative characterization of this lethal behavior.
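
As a rough illustration of how locomotion statistics can be extracted from such recordings, here is a minimal background-subtraction and centroid-tracking sketch in OpenCV. It is not the biteOscope software; the video file name and the single-animal assumption are hypothetical.

```python
# Minimal sketch of extracting mosquito locomotion statistics from video.
# Not the biteOscope codebase -- just a generic background-subtraction +
# centroid-tracking loop of the kind such pipelines start from.
import cv2
import numpy as np

cap = cv2.VideoCapture("mosquito.mp4")  # hypothetical recording
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

centroids = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)  # assume one animal in view
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
cap.release()

# Locomotion summary: per-frame displacement -> mean speed (px/frame).
pts = np.array(centroids)
if len(pts) > 1:
    speed = np.linalg.norm(np.diff(pts, axis=0), axis=1).mean()
    print(f"mean speed: {speed:.2f} px/frame over {len(pts)} frames")
```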

Hol Felix Jh, Lambrechts Louis, Prakash Manu

2020-Sep-22

ecology, neuroscience

Radiology

Analysis of Bone Scans in Various Tumor Entities Using a Deep-Learning-Based Artificial Neural Network Algorithm: Evaluation of Diagnostic Performance.

In Cancers

The bone scan index (BSI), initially introduced for metastatic prostate cancer, quantifies the osseous tumor load from planar bone scans. Following the basic idea of radiomics, this method incorporates specific deep-learning techniques (an artificial neural network) to provide automatic calculation, feature extraction, and diagnostic support. As its performance in tumor entities other than prostate cancer remains unclear, our aim was to obtain more data on this aspect. The results of BSI evaluation of bone scans from 951 consecutive patients with different tumors were retrospectively compared to clinical reports (bone metastases, yes/no). Statistical analysis included entity-specific receiver operating characteristics to determine optimized BSI cut-off values. In addition to prostate cancer (cut-off = 0.27%, sensitivity (SN) = 87%, specificity (SP) = 99%), the algorithm provided comparable results for breast cancer (cut-off = 0.18%, SN = 83%, SP = 87%) and colorectal cancer (cut-off = 0.10%, SN = 100%, SP = 90%). Worse performance was observed for lung cancer (cut-off = 0.06%, SN = 63%, SP = 70%) and renal cell carcinoma (cut-off = 0.30%, SN = 75%, SP = 84%). The algorithm did not perform satisfactorily in melanoma (SN = 60%). For most entities, a high negative predictive value (NPV ≥ 87.5%; melanoma, 80%) was determined, whereas the positive predictive value (PPV) was not clinically applicable. The automatically determined BSI showed good sensitivity and specificity in prostate cancer and various other entities. In particular, the high NPV encourages applying the BSI as a tool for computer-aided diagnosis in various tumor entities.
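
The entity-specific cut-off determination the abstract mentions is a standard ROC procedure; a minimal sketch follows, using Youden's J statistic to pick the operating point. The BSI values below are synthetic placeholders, not the study's patient data.

```python
# Minimal sketch of deriving an entity-specific BSI cut-off via ROC
# analysis. The data here are synthetic placeholders; the study used
# BSI values from 951 patients with clinical reports as ground truth.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical BSI values (%): metastatic patients tend to score higher.
bsi_neg = rng.exponential(0.05, 300)           # no bone metastases
bsi_pos = rng.exponential(0.05, 100) + 0.25    # bone metastases
y = np.r_[np.zeros(300), np.ones(100)]
scores = np.r_[bsi_neg, bsi_pos]

fpr, tpr, thresholds = roc_curve(y, scores)
j = tpr - fpr                    # Youden's J statistic
best = np.argmax(j)
print(f"AUC = {roc_auc_score(y, scores):.2f}")
print(f"optimal cut-off = {thresholds[best]:.2f}% "
      f"(SN = {tpr[best]:.0%}, SP = {1 - fpr[best]:.0%})")
```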

Wuestemann Jan, Hupfeld Sebastian, Kupitz Dennis, Genseke Philipp, Schenke Simone, Pech Maciej, Kreissl Michael C, Grosser Oliver S

2020-Sep-17

bone metastases, bone scan, bone scan index, deep learning, radiomics

General

COVID-CAPS: A Capsule Network-based Framework for Identification of COVID-19 cases from X-ray Images.

In Pattern recognition letters

Novel Coronavirus disease (COVID-19) has abruptly and undoubtedly changed the world as we know it at the end of the 2nd decade of the 21st century. COVID-19 is extremely contagious and quickly spreading globally, making its early diagnosis of paramount importance. Early diagnosis of COVID-19 enables health care professionals and government authorities to break the chain of transmission and flatten the epidemic curve. The common type of COVID-19 diagnosis test, however, requires specific equipment and has relatively low sensitivity. Computed tomography (CT) scans and X-ray images, on the other hand, reveal specific manifestations associated with this disease. Overlap with other lung infections makes human-centered diagnosis of COVID-19 challenging. Consequently, there has been an urgent surge of interest in developing Deep Neural Network (DNN)-based diagnosis solutions, mainly based on Convolutional Neural Networks (CNNs), to facilitate identification of positive COVID-19 cases. CNNs, however, are prone to losing spatial information between image instances and require large datasets. This paper presents an alternative modeling framework based on Capsule Networks, referred to as COVID-CAPS, that is capable of handling small datasets, which is of significant importance given the sudden and rapid emergence of COVID-19. Our results based on a dataset of X-ray images show that COVID-CAPS has an advantage over previous CNN-based models. COVID-CAPS achieved an accuracy of 95.7%, sensitivity of 90%, specificity of 95.8%, and area under the curve (AUC) of 0.97, while having far fewer trainable parameters than its counterparts. To further improve the diagnostic capabilities of COVID-CAPS, pre-training and transfer learning are utilized based on a new dataset constructed from an external dataset of X-ray images. This is in contrast to existing works on COVID-19 detection, where pre-training is performed on natural images. Pre-training with a dataset of similar nature further improved accuracy to 98.3% and specificity to 98.6%.
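
For context, capsule networks replace scalar neurons with vector "capsules" whose outputs are combined by routing-by-agreement, which is how they preserve the spatial relationships CNNs tend to lose. The following numpy sketch shows the core squash nonlinearity and dynamic routing step (after Sabour et al., 2017); it is not the COVID-CAPS architecture itself, and the toy dimensions are arbitrary.

```python
# Minimal numpy sketch of the core Capsule Network operations (squash +
# dynamic routing-by-agreement, Sabour et al. 2017). Not the COVID-CAPS
# code -- just the mechanism that lets capsules keep spatial information.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Shrink vector length into [0, 1) while preserving direction."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def route(u_hat, iterations=3):
    """u_hat: (n_in, n_out, dim) prediction vectors from lower capsules."""
    n_in, n_out, dim = u_hat.shape
    b = np.zeros((n_in, n_out))                  # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax
        s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum (n_out, dim)
        v = squash(s)                            # output capsules
        b += (u_hat * v[None]).sum(axis=-1)      # agreement update
    return v

# Toy example: 6 lower-level capsules voting for 2 output capsules of dim 4.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 2, 4))
v = route(u_hat)
print("output capsule lengths:", np.linalg.norm(v, axis=-1))  # in [0, 1)
```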

Afshar Parnian, Heidarian Shahin, Naderkhani Farnoosh, Oikonomou Anastasia, Plataniotis Konstantinos N, Mohammadi Arash

2020-Sep-16

COVID-19 Pandemic, Capsule Network, Deep Learning, X-ray Images

General

Accurate and Interpretable Machine Learning for Transparent Pricing of Health Insurance Plans

arXiv Preprint

Health insurance companies cover half of the United States population through commercial employer-sponsored health plans and pay 1.2 trillion US dollars every year to cover medical expenses for their members. The actuary and underwriter roles at a health insurance company serve to assess which risks to take on and how to price those risks to ensure profitability of the organization. While Bayesian hierarchical models are the current standard in the industry to estimate risk, interest in machine learning as a way to improve upon these existing methods is increasing. Lumiata, a healthcare analytics company, ran a study with a large health insurance company in the United States. We evaluated the ability of machine learning models to predict the per member per month cost of employer groups in their next renewal period, especially those groups that will cost less than 95% of what an actuarial model predicts (groups with "concession opportunities"). We developed a sequence of two models, an individual patient-level and an employer-group-level model, to predict the annual per member per month allowed amount for employer groups, based on a population of 14 million patients. Our models performed 20% better than the insurance carrier's existing pricing model, and identified 84% of the concession opportunities. This study demonstrates the application of a machine learning system to compute an accurate and fair price for health insurance products and analyzes how explainable machine learning models can exceed actuarial models' predictive accuracy while maintaining interpretability.
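
As a sketch of the two-stage structure the abstract describes (patient-level predictions aggregated into group-level features), here is a toy implementation. The data, features, and model choices are hypothetical placeholders, not Lumiata's actual system.

```python
# Minimal sketch of a two-stage (patient-level -> group-level) cost model
# of the kind the abstract describes. Everything here is a synthetic
# placeholder for illustration only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
patients = pd.DataFrame({
    "group_id": rng.integers(0, 100, n),          # employer group
    "age": rng.integers(18, 65, n),
    "prior_cost": rng.gamma(2.0, 150.0, n),       # last year's cost
})
patients["cost"] = (                              # next year's cost (toy)
    0.7 * patients["prior_cost"] + 2.0 * patients["age"]
    + rng.gamma(2.0, 40.0, n)
)

# Stage 1: patient-level model of individual allowed amount.
feats = ["age", "prior_cost"]
m1 = GradientBoostingRegressor().fit(patients[feats], patients["cost"])
patients["pred"] = m1.predict(patients[feats])

# Stage 2: aggregate member predictions into group features, then model
# the group's per-member-per-month (PMPM) cost.
grp = patients.groupby("group_id").agg(
    pred_mean=("pred", "mean"),
    pred_p90=("pred", lambda s: s.quantile(0.9)),
    size=("pred", "count"),
    pmpm=("cost", "mean"),
)
grp_feats = ["pred_mean", "pred_p90", "size"]
m2 = GradientBoostingRegressor().fit(grp[grp_feats], grp["pmpm"])
grp["pmpm_pred"] = m2.predict(grp[grp_feats])
print(grp[["pmpm", "pmpm_pred"]].head())
```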

Rohun Kshirsagar, Li-Yen Hsu, Charles H. Greenberg, Matthew McClelland, Anushadevi Mohan, Wideet Shende, Nicolas P. Tilmans, Min Guo, Ankit Chheda, Meredith Trotter, Shonket Ray, Miguel Alvarado

2020-09-23