Receive a weekly summary and discussion of the week's top papers, curated by leading researchers in the field.

General

Deep learning based automatic inpainting for material microscopic images.

In Journal of microscopy

Microscopic images are important data for recording the microstructure of materials. Researchers typically use image processing algorithms to extract material features from these images and then characterize the microstructure. However, images obtained by a microscope often contain randomly located damaged regions, which cause a loss of information, inevitably reduce the accuracy of microstructural characterization, and can even lead to wrong results. To address this problem, we present a fully automatic, deep learning-based method for detecting and inpainting damaged regions in material microscopic images. The method can inpaint damaged regions of varying position and shape, and we additionally use a data augmentation strategy to improve the inpainting model's performance. We evaluate our method on Al-La alloy microscopic images; the results indicate that it achieves promising inpainting and microstructure-characterization performance compared with other image inpainting software, in both accuracy and time consumption.

Ma B, Ma B, Gao M, Wang Z, Ban X, Huang H, Wu W

2020-Sep-09

Deep Learning, Image Inpainting, Microscopic Image Processing
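
The abstract does not specify the network architecture, so the following is only a minimal PyTorch sketch of the general masked-reconstruction idea behind deep learning inpainting: synthetic rectangular masks simulate damaged regions (which doubles as data augmentation), and a small encoder-decoder (the toy `TinyInpainter` below is a hypothetical stand-in for something like a U-Net or partial-convolution model) is trained to reconstruct the pixels inside the hole.

```python
# Minimal sketch of mask-based inpainting training (not the paper's actual model).
# The encoder-decoder, mask shapes, and loss are illustrative assumptions.
import torch
import torch.nn as nn

def random_rect_mask(batch, h, w, max_frac=0.3):
    """Zero out a random rectangle per image, simulating a damaged region."""
    mask = torch.ones(batch, 1, h, w)
    for i in range(batch):
        mh = int(h * max_frac * torch.rand(1))
        mw = int(w * max_frac * torch.rand(1))
        top = torch.randint(0, h - max(mh, 1) + 1, (1,)).item()
        left = torch.randint(0, w - max(mw, 1) + 1, (1,)).item()
        mask[i, :, top:top + mh, left:left + mw] = 0.0
    return mask

class TinyInpainter(nn.Module):
    """Toy encoder-decoder; a real system would use a deeper U-Net-style model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x, mask):
        # Concatenate the mask so the network knows which pixels are damaged.
        return self.net(torch.cat([x * mask, mask], dim=1))

model = TinyInpainter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)    # stand-in for grayscale micrographs
mask = random_rect_mask(8, 64, 64)   # synthetic damage = data augmentation
pred = model(images, mask)
loss = ((pred - images) ** 2 * (1 - mask)).mean()  # penalize error inside the hole
loss.backward()
opt.step()
```

Feeding the mask to the network alongside the masked image is one common design choice: it tells the model explicitly which pixels must be synthesized rather than copied.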

Radiology

Artificial Intelligence Predictive Analytics in the Management of Outpatient MRI Appointment No-Shows.

In AJR. American journal of roentgenology

OBJECTIVE. Outpatient appointment no-shows are a common problem. Artificial intelligence predictive analytics can potentially facilitate targeted interventions to improve efficiency. We describe a quality improvement project that uses machine learning techniques to predict and reduce outpatient MRI appointment no-shows. MATERIALS AND METHODS. Anonymized records from 32,957 outpatient MRI appointments between 2016 and 2018 were acquired for model training and validation along with a holdout test set of 1080 records from January 2019. The overall no-show rate was 17.4%. A predictive model developed with XGBoost, a decision tree-based ensemble machine learning algorithm that uses a gradient boosting framework, was deployed after various machine learning algorithms were evaluated. The simple intervention measure of using telephone call reminders for patients with the top 25% highest risk of an appointment no-show as predicted by the model was implemented over 6 months. RESULTS. The ROC AUC for the predictive model was 0.746 with an optimized F1 score of 0.708; at this threshold, the precision and recall were 0.606 and 0.852, respectively. The AUC for the holdout test set was 0.738 with an optimized F1 score of 0.721; at this threshold, the precision and recall were 0.605 and 0.893, respectively. The no-show rate 6 months after deployment of the predictive model was 15.9% compared with 19.3% in the preceding 12-month preintervention period, corresponding to a 17.2% improvement from the baseline no-show rate (p < 0.0001). The no-show rates of contactable and noncontactable patients in the group at high risk of appointment no-shows as predicted by the model were 17.5% and 40.3%, respectively (p < 0.0001). CONCLUSION. Machine learning predictive analytics perform moderately well in predicting complex problems involving human behavior using a modest amount of data with basic feature engineering, and they can be incorporated into routine workflow to improve health care delivery.

Chong Le Roy, Tsai Koh Tzan, Lee Lee Lian, Foo Seck Guan, Chang Piek Chim

2020-Sep-09

MRI, XGBoost, artificial intelligence, machine learning, no-show
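
A hedged sketch of the pipeline described above, using the XGBoost scikit-learn wrapper: train a gradient-boosted classifier on historical appointments, choose the decision threshold that maximizes F1 on a validation split, and flag the top 25% of predicted risk for telephone reminders. The `load_appointments` helper and all hyperparameters are hypothetical; the paper's feature set is not given in the abstract.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve, roc_auc_score

# Hypothetical loader: feature matrix plus binary label (1 = no-show).
X, y = load_appointments()
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                        eval_metric="logloss")
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_val)[:, 1]
print("validation AUC:", roc_auc_score(y_val, scores))

# Choose the decision threshold that maximizes F1 on the validation set.
prec, rec, thr = precision_recall_curve(y_val, scores)
f1 = 2 * prec * rec / np.clip(prec + rec, 1e-9, None)
best = f1[:-1].argmax()  # thresholds array is one shorter than prec/rec
print(f"F1={f1[best]:.3f} precision={prec[best]:.3f} recall={rec[best]:.3f}")

# Intervention: call everyone above the 75th percentile of predicted risk.
call_list = scores >= np.quantile(scores, 0.75)
```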

General

ADOpy: a Python package for adaptive design optimization.

In Behavior research methods

Experimental design is fundamental to research, but formal methods to identify good designs are lacking. Advances in Bayesian statistics and machine learning offer algorithm-based ways to identify good experimental designs. Adaptive design optimization (ADO; Cavagnaro, Myung, Pitt, & Kujala, 2010; Myung, Cavagnaro, & Pitt, 2013) is one such method. It works by maximizing the informativeness and efficiency of data collection, thereby improving inference. ADO is a general-purpose method for conducting adaptive experiments on the fly and can lead to rapid accumulation of information about the phenomenon of interest in the fewest trials. The nontrivial technical skills required to use ADO have been a barrier to its wider adoption. To make ADO accessible to experimentalists at large, we introduce an open-source Python package, ADOpy, that implements ADO for optimizing experimental design. The package, available on GitHub, is written around high-level, modular commands, so users do not have to understand the computational details of the ADO algorithm. In this paper, we first provide a tutorial introduction to ADOpy and to ADO itself, and then illustrate their use in three walk-through examples: psychometric function estimation, delay discounting, and risky choice. Simulation data are also provided to demonstrate how ADO designs compare with other designs (random, staircase).

Yang Jaeyeong, Pitt Mark A, Ahn Woo-Young, Myung Jay I

2020-Sep-08

Bayesian adaptive experimentation, Cognitive modeling, Delay discounting, Optimal experimental design, Psychometric function estimation, Risky choice
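
ADOpy hides the underlying computation behind high-level commands, but the core of ADO is easy to state: on each trial, choose the design that maximizes the mutual information between the model parameters and the yet-unobserved response, then update the posterior with the observed response. Below is a minimal grid-based NumPy sketch of that loop for a binary-response psychometric task; the logistic model, fixed slope, and grids are illustrative assumptions, not ADOpy's actual API.

```python
import numpy as np

intensities = np.linspace(0, 1, 21)     # candidate designs (stimulus levels)
thresholds = np.linspace(0.1, 0.9, 41)  # parameter grid
prior = np.full_like(thresholds, 1.0 / len(thresholds))

def p_correct(x, theta, slope=10.0):
    """P(response=1 | design x, threshold theta): a logistic psychometric curve."""
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

def entropy(p):
    """Entropy of a Bernoulli response with success probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def best_design(prior):
    # Likelihood table: P(y=1 | x, theta) for every design x and parameter theta.
    like = p_correct(intensities[:, None], thresholds[None, :])
    marg = like @ prior  # P(y=1 | x), marginalized over theta
    # Mutual information = H(marginal response) - E_theta[H(response | theta)].
    mi = entropy(marg) - (entropy(like) @ prior)
    return np.argmax(mi)

def update(prior, x_idx, y):
    """Bayesian posterior update after observing response y at design x_idx."""
    like = p_correct(intensities[x_idx], thresholds)
    post = prior * (like if y == 1 else 1 - like)
    return post / post.sum()

# One adaptive trial: choose a design, observe a response, update the posterior.
idx = best_design(prior)
prior = update(prior, idx, y=1)  # y would come from the participant
```

In ADOpy, these steps are wrapped in the package's high-level commands, so users specify only the task, model, and grids rather than the information-theoretic machinery.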

General

Atari-HEAD: Atari Human Eye-Tracking and Demonstration Dataset.

In Proceedings of the AAAI Conference on Artificial Intelligence

Large-scale public datasets have been shown to benefit research in multiple areas of modern artificial intelligence. For decision-making research that requires human data, high-quality datasets serve as important benchmarks that facilitate the development of new methods by providing a common, reproducible standard. Many human decision-making tasks require visual attention to achieve high levels of performance, so measuring eye movements can provide a rich source of information about the strategies humans use to solve decision-making tasks. Here, we provide a large-scale, high-quality dataset of human actions with simultaneously recorded eye movements while humans play Atari video games. The dataset consists of 117 hours of gameplay data from a diverse set of 20 games, with 8 million action demonstrations and 328 million gaze samples. We introduce a novel form of gameplay in which the human plays in a semi-frame-by-frame manner; this leads to near-optimal game decisions and game scores that are comparable to or better than known human records. We demonstrate the usefulness of the dataset through two simple applications: predicting human gaze and imitating human-demonstrated actions. The quality of the data leads to promising results in both tasks. Moreover, using a learned human gaze model to inform imitation learning leads to a 115% increase in game performance. We interpret these results as highlighting the importance of incorporating human visual attention in models of decision making and as demonstrating the value of this dataset to the research community. We hope that the scale and quality of the dataset will provide more opportunities to researchers in the areas of visual attention, imitation learning, and reinforcement learning.

Zhang Ruohan, Walshe Calen, Liu Zhuode, Guan Lin, Muller Karl S, Whritner Jake A, Zhang Luxin, Hayhoe Mary M, Ballard Dana H

2020-Feb
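
The abstract's gaze-informed imitation result can be pictured with a short sketch: a saliency map from a trained gaze predictor spatially reweights the convolutional features of a behavioral-cloning policy. This is only an illustration of the general idea, not the paper's architecture; the network shapes, the `1 + g` modulation, and the random stand-in tensors are assumptions.

```python
import torch
import torch.nn as nn

class GazeModulatedPolicy(nn.Module):
    """Behavioral cloning on Atari frames with gaze-based feature reweighting."""
    def __init__(self, n_actions=18):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),  # 4 stacked 84x84 frames
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(512), nn.ReLU(),
                                  nn.Linear(512, n_actions))

    def forward(self, frames, gaze_map):
        feats = self.conv(frames)
        # Downsample the 84x84 gaze saliency map to the feature-map resolution
        # and use it as a spatial attention mask; the "1 +" skip keeps
        # non-gazed regions from being zeroed out entirely.
        g = nn.functional.adaptive_avg_pool2d(gaze_map, feats.shape[-2:])
        return self.head(feats * (1 + g))

policy = GazeModulatedPolicy()
frames = torch.rand(8, 4, 84, 84)     # stacked grayscale frames
gaze = torch.rand(8, 1, 84, 84)       # saliency from a trained gaze predictor
actions = torch.randint(0, 18, (8,))  # human action labels from the dataset
loss = nn.functional.cross_entropy(policy(frames, gaze), actions)
loss.backward()
```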

General

Human Gaze Assisted Artificial Intelligence: A Review.

In IJCAI: Proceedings of the Conference

Human gaze reveals a wealth of information about internal cognitive state. Thus, gaze-related research has significantly increased in computer vision, natural language processing, decision learning, and robotics in recent years. We provide a high-level overview of the research efforts in these fields, including collecting human gaze data sets, modeling gaze behaviors, and utilizing gaze information in various applications, with the goal of enhancing communication between these research areas. We discuss future challenges and potential applications that work towards a common goal of human-centered artificial intelligence.

Zhang Ruohan, Saran Akanksha, Liu Bo, Zhu Yifeng, Guo Sihang, Niekum Scott, Ballard Dana, Hayhoe Mary

2020-Jul

General

The Utility of Resolving Asthma Molecular Signatures Using Tissue-Specific Transcriptome Data.

In G3 (Bethesda, Md.)

An integrative analysis focused on multi-tissue transcriptomics has not been done for asthma. Tissue-specific differentially expressed genes (DEGs) remain undetected in many multi-tissue analyses, which affects the identification of disease-relevant pathways and potential drug candidates. Transcriptome data from 609 cases and 196 controls, generated from airway, bronchial, and nasal epithelium, airway macrophages, distal and proximal lung fibroblasts, CD4+ and CD8+ lymphocytes from whole blood, and induced sputum samples, were retrieved from Gene Expression Omnibus (GEO). Differentially regulated asthma-relevant genes identified from each sample type were used to (a) identify tissue-specific and tissue-shared asthma pathways, (b) connect them to GWAS-identified disease genes to nominate candidate tissues for functional studies, (c) select surrogate samples for invasive tissues, and (d) identify potential drug candidates via connectivity map analysis. We found that inter-tissue similarity in gene expression was more pronounced at the pathway/functional level than at the gene level, with the highest similarity between bronchial epithelial cells and lung fibroblasts and the lowest between airway epithelium and whole blood samples. Although public-domain gene expression data are constrained by inadequately annotated per-sample demographic and clinical information, which limited the analysis, our tissue-resolved analysis clearly demonstrated the relative importance of unique and shared asthma pathways. At the pathway level, IL-1b signaling and ERK signaling were significant in many tissue types, while insulin-like growth factor and TGF-beta signaling were relevant only in airway epithelial tissue. IL-12 signaling (in macrophages), immunoglobulin signaling (in lymphocytes), and chemokines (in nasal epithelium) were the most highly expressed pathways. Overall, IL-1 signaling genes (inflammatory) were relevant in the airway compartment, while pro-Th2 genes, including IL-13 and STAT6, were more relevant in fibroblasts, lymphocytes, macrophages, and bronchial biopsies. These genes were also associated with asthma in the GWAS catalog. A support vector machine analysis showed that DEGs based on macrophages and epithelial cells have the highest and lowest discriminatory accuracy, respectively. Drugs (entinostat, BMS-345541) and genetic perturbagens (KLF6, BCL10, INFB1, and BAMBI) that are negatively connected to the disease at the multi-tissue level could potentially be repurposed for treating asthma. Collectively, our study indicates that DEGs, perturbagens, and disease are connected differently depending on tissue/cell type. While most of the existing literature describes asthma transcriptome data from individual sample types, the present work demonstrates the utility of multi-tissue transcriptome data. Future studies should focus on collecting transcriptomic data across multiple tissues, age and race groups, genetic backgrounds, and disease subtypes, and on making better-annotated data available in the public domain.

Ghosh Debajyoti, Ding Lili, Bernstein Jonathan A, Mersha Tesfaye B

2020-Sep-08

Connectivity Map, GWAS Catalog, asthma transcriptome, ilincs, machine learning, pathways/networks, tissue-specific analysis
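
As a rough illustration of the support vector machine step above, the sketch below scores how well one tissue's DEG set discriminates asthma cases from controls using a cross-validated linear SVM in scikit-learn. The expression matrix, label vector, and DEG lists are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def deg_discrimination(expr, labels, deg_genes, gene_index):
    """Mean cross-validated accuracy of a linear SVM restricted to one DEG set.

    expr:       samples x genes expression matrix (NumPy array)
    labels:     1 = asthma, 0 = control
    deg_genes:  DEGs identified for one tissue/cell type
    gene_index: mapping from gene name to column index in expr
    """
    cols = [gene_index[g] for g in deg_genes if g in gene_index]
    X = expr[:, cols]
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    return cross_val_score(clf, X, labels, cv=5, scoring="accuracy").mean()

# Compare discriminatory accuracy across tissue-specific DEG sets, e.g.:
# for tissue, degs in tissue_degs.items():
#     print(tissue, deg_discrimination(expr, labels, degs, gene_index))
```

Repeating this per tissue reproduces the kind of comparison the abstract reports, where macrophage-derived DEGs discriminate best and epithelial-cell DEGs worst.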