
Radiology

Artificial Intelligence in Renal Mass Characterization: A Systematic Review of Methodologic Items Related to Modeling, Performance Evaluation, Clinical Utility, and Transparency.

In AJR. American journal of roentgenology

OBJECTIVE. The objective of our study was to systematically review the literature on the application of artificial intelligence (AI) to renal mass characterization, with a focus on methodologic quality items.

MATERIALS AND METHODS. A systematic literature search was conducted using PubMed to identify original research studies applying AI to renal mass characterization. In addition to baseline study characteristics, 15 methodologic quality items were extracted and evaluated in four main categories: modeling, performance evaluation, clinical utility, and transparency. The qualitative synthesis was presented using descriptive statistics with an accompanying narrative.

RESULTS. Thirty studies were included in this systematic review. Overall, the methodologic quality items were mostly favorable for modeling (63%) and performance evaluation (63%). Even so, more than half of the studies (57%) built their models on nonrobust features, and only a few (10%) assessed generalizability with independent or external validation. The studies were mostly unsuccessful on the clinical utility (89%) and transparency (97%) items. Regarding clinical utility, most studies lacked comparisons with both radiologists' evaluations (87%) and traditional models (70%). Regarding transparency, most studies (97%) did not share their data publicly.

CONCLUSION. To bring AI-based renal mass characterization from research to practice, future studies need to improve their modeling and performance evaluation strategies and to address clinical utility and transparency.
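The review's emphasis on external validation is easy to make concrete in code. Below is a minimal, hypothetical sketch (not drawn from any reviewed study) contrasting an internal cross-validation estimate with a generalizability check on an untouched external cohort; the random matrices are mere stand-ins for radiomics features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-ins for radiomics feature matrices: a development
# cohort and an independent external cohort from another institution.
X_dev, y_dev = rng.normal(size=(200, 30)), rng.integers(0, 2, 200)
X_ext, y_ext = rng.normal(size=(80, 30)), rng.integers(0, 2, 80)

model = RandomForestClassifier(n_estimators=200, random_state=0)

# Internal estimate: cross-validation within the development cohort.
cv_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc").mean()

# Generalizability assessment: fit once, score on the untouched external cohort.
model.fit(X_dev, y_dev)
ext_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal CV AUC: {cv_auc:.2f}  |  external AUC: {ext_auc:.2f}")
```

A gap between the two AUCs is exactly the kind of overfitting that internal validation alone cannot reveal, which is why the review flags the 10% external-validation rate.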

Kocak Burak, Kaya Ozlem Korkmaz, Erdim Cagri, Kus Ece Ates, Kilickesmez Ozgur

2020-Sep-22

artificial intelligence (AI), machine learning, radiomics, renal cell carcinoma, renal mass

General

Deciphering High-order Structural Correlations within Fluxional Molecules from Classical and Quantum Configurational Entropy.

In Journal of chemical theory and computation

We employ the k-th nearest-neighbor estimator of configurational entropy to decode, within a parameter-free numerical approach, the complex high-order structural correlations in fluxional molecules, going far beyond the usual linear, bivariate correlations. This generic entropy-based scheme for determining many-body correlations is applied to the complex configurational ensemble of protonated acetylene, a prototype for fluxional molecules featuring large-amplitude motion. After revealing the importance of high-order correlations beyond the simple two-coordinate picture for this molecule, we analyze in detail the evolution of the relevant correlations with temperature as well as the impact of nuclear quantum effects down to the ultra-low-temperature regime of 1 K. We find that quantum delocalization and zero-point vibrations significantly reduce all correlations in protonated acetylene in the deep quantum regime. Even at low temperatures up to about 100 K, most correlations are essentially absent in the quantum case and only gain importance at higher temperatures. In the high-temperature regime, beyond roughly 800 K, the increasing thermal fluctuations are found to exert a destructive effect on the presence of correlations. At intermediate temperatures of approximately 100 to 800 K, a quantum-to-classical crossover regime is found where classical mechanics starts to correctly describe trends in the correlations, whereas it fails even qualitatively below 100 K. Finally, a classical description of the nuclei provides correlations in quantitative agreement with the quantum ones only at temperatures exceeding 1000 K. This data-intensive analysis has been made possible by recent developments in machine learning techniques based on high-dimensional neural network potential energy surfaces in full dimensionality, which allow us to exhaustively sample both the classical and quantum ensembles of protonated acetylene at essentially converged coupled cluster accuracy from 1 to more than 1000 K. The presented non-parametric analysis of correlations beyond the usual linear two-coordinate terms is transferable to other system classes. The technique is also expected to complement and guide the analysis of experimental measurements, in particular multi-dimensional vibrational spectroscopy, by revealing the complex coupling between various degrees of freedom.
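For readers unfamiliar with the estimator the abstract leans on, here is a minimal, self-contained sketch of the standard Kozachenko-Leonenko form of a k-th nearest-neighbor entropy estimate, together with the total correlation (multi-information) one can build from it to detect many-body dependence. This is a generic illustration on a synthetic Gaussian ensemble, not the authors' code; the covariance matrix, sample count, and k = 3 are arbitrary choices.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=3):
    """Kozachenko-Leonenko k-th nearest-neighbor estimate of the
    differential entropy (in nats) of samples x, shape (n, d)."""
    n, d = x.shape
    eps, _ = cKDTree(x).query(x, k=k + 1)       # column 0 is the point itself
    log_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log volume of unit d-ball
    return digamma(n) - digamma(k) + log_ball + d * np.mean(np.log(eps[:, -1]))

def total_correlation(x, k=3):
    """Multi-information: sum of marginal entropies minus the joint entropy.
    Zero if and only if all coordinates are statistically independent."""
    marginals = sum(knn_entropy(x[:, [j]], k) for j in range(x.shape[1]))
    return marginals - knn_entropy(x, k)

# Demo on a correlated 3D Gaussian ensemble (arbitrary covariance).
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.8, 0.5],
                [0.8, 1.0, 0.3],
                [0.5, 0.3, 1.0]])
samples = rng.multivariate_normal(np.zeros(3), cov, size=5000)
print(f"total correlation: {total_correlation(samples):.3f} nats")
```

Applied to joint versus marginal distributions of molecular coordinates, the same quantity measures how strongly those degrees of freedom are coupled, which is the sense in which the paper tracks correlations across temperature.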

Topolnicki RafaƂ, Brieuc Fabien, Schran Christoph, Marx Dominik

2020-Sep-22

General

Surrogates and Artificial Intelligence: Why AI Trumps Family.

In Science and engineering ethics

The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm (AA). Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as surrogate decision-maker in cases where the patient has not issued a medical power of attorney. We argue that in such cases, and against the standard practice of vesting familial surrogates with decision-making authority, the AA should have sole decision-making authority. This is because the AA will likely be better at predicting what treatment option the patient would have chosen. It would also be better at avoiding bias and, therefore, at choosing in a more patient-centered manner. Furthermore, we argue that these considerations override any moral weight of the patient's special relationship with their relatives.

Hubbard Ryan, Greenblum Jake

2020-Sep-22

Artificial intelligence (AI), Biomedical, Decision-making, Ethics, Surrogate

General

Image-based state-of-the-art techniques for the identification and classification of brain diseases: a review.

In Medical & biological engineering & computing ; h5-index 32.0

Detection and classification methods play a vital role in identifying brain diseases. Timely detection and classification enable accurate identification and effective management of brain impairment. Brain disorders are among the most widespread diseases, and diagnosing them is time-consuming and highly expensive, so there is a pressing need to develop effective methods for detecting and characterizing brain diseases. Magnetic resonance imaging (MRI), computed tomography (CT), and various other brain imaging scans are used to identify different brain diseases and disorders; such scans are efficient tools for understanding anatomical changes in the brain quickly and accurately. Combining these imaging scans with segmentation techniques and with machine learning and deep learning methods yields the greatest accuracy and efficiency. This paper focuses on conventional approaches as well as machine learning and deep learning techniques used for the detection and classification of brain diseases and abnormalities. It also summarizes the research gaps and problems in existing detection and classification techniques, and it compares and evaluates different machine learning and deep learning techniques in terms of efficiency and accuracy. Furthermore, brain diseases such as leukoaraiosis, Alzheimer's, Parkinson's, and Wilson's disease are studied within the scope of machine learning and deep learning techniques.
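As a concrete anchor for the kind of deep learning pipeline this review surveys, here is a minimal, hypothetical PyTorch sketch of a CNN that maps single-channel scan slices to disease-class logits. The layer sizes, four-class output, and 128x128 input are placeholders of our choosing, not a model from any reviewed study.

```python
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    """Minimal 2D CNN: single-channel scan slice -> per-disease logits."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (batch, 64, 1, 1)
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# A batch of eight 128x128 slices -> per-class logits.
logits = SliceClassifier()(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 4])
```

In practice, the reviewed pipelines typically precede such a classifier with a segmentation step that isolates the brain region or lesion of interest.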

Haq Ejaz Ul, Huang Jianjun, Kang Li, Haq Hafeez Ul, Zhan Tijiang

2020-Sep-22

Brain diseases, Brain imaging scan, Computed tomography, Deep learning, Machine learning, Magnetic resonance imaging, Segmentation techniques

General

Machine vision-driven automatic recognition of particle size and morphology in SEM images.

In Nanoscale ; h5-index 139.0

Scanning Electron Microscopy (SEM) images provide a variety of structural and morphological information about nanomaterials. In the material informatics domain, automatic recognition and quantitative analysis of SEM images in a high-throughput manner are critical, but challenges remain due to the complexity and diversity of image configurations in both shape and size. In this paper, we present a generally applicable approach using computer vision and machine learning techniques to quantitatively extract particle size, size distribution, and morphology information from SEM images. The proposed pipeline offers automatic, high-throughput measurements even when overlapping nanoparticles, rod shapes, and core-shell nanostructures are present. We demonstrate the effectiveness of the proposed approach by performing experiments on SEM images of nanoscale materials and structures with different shapes and sizes. The proposed approach shows promising results (Spearman coefficients of 0.91 and 0.99 using fully automated and semi-automated processes, respectively) when compared with manually measured sizes. The code is made available as open source software at https://github.com/LLNL/LIST.
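The authors' own implementation lives at https://github.com/LLNL/LIST. Independently of that code, the core measurement idea (segment the particles, split touching ones, then measure each region) can be sketched with scikit-image as below. The toy two-disc image and the min_distance=5 setting are illustrative assumptions; the resulting automatic diameters could then be compared against manual measurements with scipy.stats.spearmanr, mirroring the paper's evaluation.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters, measure, segmentation

def particle_diameters(image):
    """Equivalent particle diameters (pixels) via Otsu thresholding plus
    a watershed on the distance transform to split touching particles."""
    mask = image > filters.threshold_otsu(image)
    distance = ndimage.distance_transform_edt(mask)
    peaks = feature.peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = segmentation.watershed(-distance, markers, mask=mask)
    return [r.equivalent_diameter for r in measure.regionprops(labels)]

# Toy demo: two blurred discs standing in for particles.
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
img[(yy - 20) ** 2 + (xx - 20) ** 2 < 64] = 1.0
img[(yy - 42) ** 2 + (xx - 44) ** 2 < 100] = 1.0
print(particle_diameters(ndimage.gaussian_filter(img, 1)))
```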

Kim Hyojin, Han Jinkyu, Han T Yong-Jin

2020-Sep-22

Public Health

The Diabits App for Smartphone-Assisted Predictive Monitoring of Glycemia in Patients With Diabetes: Retrospective Observational Study.

In JMIR diabetes

BACKGROUND: Diabetes mellitus, which causes dysregulation of blood glucose in humans, is a major public health challenge. Patients with diabetes must monitor their glycemic levels to keep them in a healthy range. This task is made easier by using continuous glucose monitoring (CGM) devices and relaying their output to smartphone apps, thus providing users with real-time information on their glycemic fluctuations and possibly predicting future trends.

OBJECTIVE: This study aims to discuss various challenges of predictive monitoring of glycemia and to examine the accuracy and blood glucose control effects of Diabits, a smartphone app that helps patients with diabetes monitor and manage their blood glucose levels in real time.

METHODS: Using data from CGM devices and user input, Diabits applies machine learning techniques to create personalized patient models and predict blood glucose fluctuations up to 60 min in advance. These predictions give patients an opportunity to take pre-emptive action to maintain their blood glucose values within the reference range. In this retrospective observational cohort study, the predictive accuracy of Diabits and the correlation between daily use of the app and blood glucose control metrics were examined based on real app users' data. Moreover, the accuracy of predictions on the 2018 Ohio T1DM (type 1 diabetes mellitus) data set was calculated and compared against other published results.
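The Diabits models themselves are not published, but the general shape of such a forecaster is easy to sketch: turn the CGM stream into lag features and regress the value 30 minutes ahead. The following Python sketch uses a synthetic glucose trace and an off-the-shelf gradient boosting regressor purely for illustration; the 5-minute sampling interval, lag count, and model choice are all assumptions, not the app's method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Synthetic CGM trace, one reading every 5 minutes (mg/dL).
rng = np.random.default_rng(1)
t = np.arange(2000)
glucose = 140 + 30 * np.sin(t / 40) + rng.normal(0, 5, t.size)

lags, horizon = 6, 6          # 30 min of history -> predict 30 min ahead
idx = range(glucose.size - lags - horizon + 1)
X = np.stack([glucose[i : i + lags] for i in idx])            # last 6 readings
y = np.array([glucose[i + lags - 1 + horizon] for i in idx])  # value 30 min later

# Chronological split so the model never peeks at the future.
split = int(0.8 * len(X))
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(mean_squared_error(y[split:], pred))
print(f"30-min-ahead RMSE: {rmse:.1f} mg/dL")
```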

RESULTS: On the basis of more than 6.8 million data points, 30-min Diabits predictions evaluated using the Parkes Error Grid were found to be 86.89% (5,963,930/6,864,130) clinically accurate (zone A) and 99.56% (6,833,625/6,864,130) clinically acceptable (zones A and B), whereas 60-min predictions were 70.56% (4,843,605/6,864,130) clinically accurate and 97.49% (6,692,165/6,864,130) clinically acceptable. By analyzing daily use statistics and CGM data for the 280 longest-standing users of Diabits, it was established that under free-living conditions, many common blood glucose control metrics improved with increased frequency of app use. For instance, the average blood glucose for the days these users did not interact with the app was 154.0 (SD 47.2) mg/dL, with 67.52% of the time spent in the healthy 70 to 180 mg/dL range. For days with 10 or more Diabits sessions, the average blood glucose decreased to 141.6 (SD 42.0) mg/dL (P<.001), whereas the time in euglycemic range increased to 74.28% (P<.001). On the Ohio T1DM data set of 6 patients with type 1 diabetes, 30-min predictions of the base Diabits model had an average root mean square error of 18.68 (SD 2.19) mg/dL, which is an improvement over the published state-of-the-art results for this data set.
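The "time in range" figures quoted above follow from a one-line metric over the CGM readings. A minimal helper, with made-up sample readings, might look like this:

```python
import numpy as np

def time_in_range(readings_mg_dl, low=70.0, high=180.0):
    """Fraction of CGM readings inside the target euglycemic range."""
    g = np.asarray(readings_mg_dl, dtype=float)
    return float(np.mean((g >= low) & (g <= high)))

# Made-up sample readings for one day.
readings = [95, 140, 210, 160, 65, 130, 175]
print(f"time in range: {time_in_range(readings):.1%}")   # -> 71.4%
```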

CONCLUSIONS: Diabits accurately predicts future glycemic fluctuations, potentially making it easier for patients with diabetes to maintain their blood glucose in the reference range. Furthermore, an improvement in glucose control was observed on days with more frequent Diabits use.

Kriventsov Stan, Lindsey Alexander, Hayeri Amir

2020-Sep-22

artificial intelligence, blood glucose predictions, digital health, machine learning, mobile phone, type 1 diabetes