Radiology

Resting-State Brain Activity for Early Prediction Outcome in Postanoxic Patients in a Coma with Indeterminate Clinical Prognosis.

In AJNR. American journal of neuroradiology

BACKGROUND AND PURPOSE: Early outcome prediction of postanoxic patients in a coma after cardiac arrest proves challenging. Current prognostication relies on multimodal testing, using clinical examination, electrophysiologic testing, biomarkers, and structural MR imaging. While this multimodal prognostication is accurate for predicting poor outcome (ie, death), it is not sensitive enough to identify good outcome (ie, consciousness recovery), thus leaving many patients with an indeterminate prognosis. We assessed whether resting-state fMRI provides prognostic information, specifically for good outcome, in postanoxic patients in a coma with indeterminate prognosis early after cardiac arrest.

MATERIALS AND METHODS: We used resting-state fMRI in a prospective study to compare whole-brain functional connectivity between patients with good and poor outcomes, implementing support vector machine learning. We then predicted coma outcome automatically from resting-state fMRI and compared this prediction with outcome prediction based on DWI.
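
The abstract describes the pipeline only at a high level. As a rough illustration, a whole-brain functional-connectivity feature vector fed to a support vector machine with leave-one-out cross-validation (a plausible choice given the small cohort) might be set up as sketched below; feature construction, kernel, and cross-validation scheme are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: SVM classification of resting-state functional
# connectivity for coma outcome prediction. Feature construction, kernel,
# and cross-validation scheme are assumptions, not the published pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score, roc_auc_score

def connectivity_features(timeseries):
    """Upper-triangular entries of the region-by-region correlation matrix.

    timeseries: array of shape (n_timepoints, n_regions) for one subject.
    """
    corr = np.corrcoef(timeseries.T)            # (n_regions, n_regions)
    iu = np.triu_indices_from(corr, k=1)        # unique pairwise edges
    return corr[iu]

# Synthetic stand-in data: 17 patients, 9 recovered (1) and 8 comatose (0),
# mirroring the cohort sizes reported in the abstract.
rng = np.random.default_rng(0)
X = np.stack([connectivity_features(rng.normal(size=(200, 90))) for _ in range(17)])
y = np.array([1] * 9 + [0] * 8)

loo = LeaveOneOut()                              # small cohort -> leave-one-out
preds, scores = np.zeros(len(y)), np.zeros(len(y))
for train, test in loo.split(X):
    clf = SVC(kernel="linear", probability=True).fit(X[train], y[train])
    preds[test] = clf.predict(X[test])
    scores[test] = clf.predict_proba(X[test])[:, 1]

print("accuracy:", accuracy_score(y, preds))
print("AUC:", roc_auc_score(y, scores))
```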

RESULTS: Of 17 eligible patients who completed the study procedure (among 351 patients screened), 9 regained consciousness and 8 remained comatose. We found higher functional connectivity in patients recovering consciousness, with greater changes occurring within and between the occipitoparietal and temporofrontal regions. Coma outcome prognostication based on resting-state fMRI machine learning was very accurate, notably for identifying patients with good outcome (accuracy, 94.4%; area under the receiver operating characteristic curve, 0.94). Outcome predictors using resting-state fMRI performed significantly better (P < .05) than DWI (accuracy, 60.0%; area under the receiver operating characteristic curve, 0.63).

CONCLUSIONS: Indeterminate prognosis might lead to major clinical uncertainty and significant variations in life-sustaining treatments. Resting-state fMRI might bridge the gap left in early prognostication of postanoxic patients in a coma by identifying those with both good and poor outcomes.

Pugin D, Hofmeister J, Gasche Y, Vulliemoz S, Lövblad K-O, Van De Ville D, Haller S

2020-May-21

Ophthalmology

Fully Automated Segmentation of Globes for Volume Quantification in CT Images of Orbits using Deep Learning.

In AJNR. American journal of neuroradiology

BACKGROUND AND PURPOSE: Fast and accurate quantification of globe volumes in the event of an ocular trauma can provide clinicians with valuable diagnostic information. In this work, an automated workflow using a deep learning-based convolutional neural network is proposed for prediction of globe contours and their subsequent volume quantification in CT images of the orbits.

MATERIALS AND METHODS: The proposed network, 2D Modified Residual UNET (MRes-UNET2D), was trained on axial CT images from 80 subjects with no imaging or clinical findings of globe injuries. The predicted globe contours and volume estimates were compared with manual annotations by experienced observers on 2 different test cohorts.

RESULTS: On the first test cohort (n = 18), the average Dice, precision, and recall scores were 0.95, 96%, and 95%, respectively. The average 95% Hausdorff distance was only 1.5 mm, with a 5.3% error in globe volume estimates. No statistically significant differences (P = .72) were observed in the median globe volume estimates from our model and the ground truth. On the second test cohort (n = 9) in which a neuroradiologist and 2 residents independently marked the globe contours, MRes-UNET2D (Dice = 0.95) approached human interobserver variability (Dice = 0.94). We also demonstrated the utility of inter-globe volume difference as a quantitative marker for trauma in 3 subjects with known globe injuries.
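
For readers unfamiliar with the reported metrics, the sketch below shows one plausible way to compute Dice overlap, a 95th-percentile Hausdorff distance, and globe volume from binary segmentation masks; the abstract does not specify the authors' exact implementation.

```python
# Hypothetical sketch of the evaluation metrics reported above (Dice overlap,
# 95th-percentile Hausdorff distance, volume from voxel counts); not the authors' code.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hausdorff95(pred, truth, spacing):
    """95th-percentile symmetric distance (mm) between two binary masks.

    spacing: voxel size in mm, e.g. (dz, dy, dx).
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    # Distance from each foreground voxel of one mask to the nearest
    # foreground voxel of the other mask, via Euclidean distance transforms.
    dt_truth = distance_transform_edt(~truth, sampling=spacing)
    dt_pred = distance_transform_edt(~pred, sampling=spacing)
    d_pred_to_truth = dt_truth[pred]
    d_truth_to_pred = dt_pred[truth]
    return np.percentile(np.concatenate([d_pred_to_truth, d_truth_to_pred]), 95)

def volume_ml(mask, spacing):
    """Volume in millilitres from voxel count and voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0
```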

CONCLUSIONS: We showed that with fast prediction times, we can reliably detect and quantify globe volumes in CT images of the orbits across a variety of acquisition parameters.

Umapathy L, Winegar B, MacKinnon L, Hill M, Altbach M I, Miller J M, Bilgin A

2020-May-21

Surgery

Spatial immune profiling of the colorectal tumor microenvironment predicts good outcome in stage II patients.

In NPJ digital medicine

Cellular subpopulations within the colorectal tumor microenvironment (TME) include CD3+ and CD8+ lymphocytes, CD68+ and CD163+ macrophages, and tumor buds (TBs), all of which have known prognostic significance in stage II colorectal cancer. However, the prognostic relevance of their spatial interactions remains unknown. Here, by applying automated image analysis and machine learning approaches, we evaluate the prognostic significance of these cellular subpopulations and their spatial interactions. Resultant data, from a training cohort retrospectively collated from Edinburgh, UK hospitals (n = 113), were used to create a combinatorial prognostic model, which identified a subpopulation of patients who exhibited 100% survival over a 5-year follow-up period. The combinatorial model integrated lymphocytic infiltration, the number of lymphocytes within 50-μm proximity to TBs, and the CD68+/CD163+ macrophage ratio. This finding was confirmed on an independent validation cohort, which included patients treated in Japan and Scotland (n = 117). This work shows that by analyzing multiple cellular subpopulations from the complex TME, it is possible to identify patients for whom surgical resection alone may be curative.
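
The combinatorial model is only summarized above; the toy sketch below illustrates how the three reported features could be combined into a simple risk-stratifying score. All thresholds, units, and the mapping to the low-risk subgroup are hypothetical placeholders, not the published cut-offs.

```python
# Toy illustration of a combinatorial prognostic score built from the three
# features named in the abstract. Thresholds and units are hypothetical
# placeholders, not the cut-offs used in the study.
from dataclasses import dataclass

@dataclass
class TMEProfile:
    lymphocyte_density: float    # lymphocytic infiltration per mm^2 (assumed units)
    lymphocytes_near_tb: float   # mean lymphocytes within 50 um of a tumor bud
    cd68_cd163_ratio: float      # CD68+ / CD163+ macrophage ratio

def prognostic_score(p: TMEProfile,
                     density_cut: float = 500.0,   # hypothetical threshold
                     proximity_cut: float = 10.0,  # hypothetical threshold
                     ratio_cut: float = 1.0) -> int:
    """Count how many of the three features fall on the favourable side."""
    favourable = [
        p.lymphocyte_density >= density_cut,
        p.lymphocytes_near_tb >= proximity_cut,
        p.cd68_cd163_ratio >= ratio_cut,
    ]
    return sum(favourable)        # 0 (worst) .. 3 (best)

# Patients scoring 3/3 would correspond to the low-risk subgroup reported to
# have 100% 5-year survival (illustrative mapping only).
example = TMEProfile(lymphocyte_density=820.0, lymphocytes_near_tb=14.2, cd68_cd163_ratio=1.6)
print(prognostic_score(example))  # -> 3
```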

Nearchou Ines P, Gwyther Bethany M, Georgiakakis Elena C T, Gavriel Christos G, Lillard Kate, Kajiwara Yoshiki, Ueno Hideki, Harrison David J, Caie Peter D

2020

Cancer microenvironment, Computational biology and bioinformatics

Dermatology

Effects of Label Noise on Deep Learning-Based Skin Cancer Classification.

In Frontiers in medicine

Recent studies have shown that deep learning is capable of classifying dermatoscopic images at least as well as dermatologists. However, many studies in skin cancer classification utilize non-biopsy-verified training images. This imperfect ground truth introduces a systematic error, but the effects on classifier performance are currently unknown. Here, we systematically examine the effects of label noise by training and evaluating convolutional neural networks (CNNs) with 804 images of melanoma and nevi labeled either by dermatologists or by biopsy. The CNNs are evaluated on a test set of 384 images by means of 4-fold cross-validation, comparing the outputs with either the corresponding dermatological or the biopsy-verified diagnosis. When training and test labels share the same ground truth, high accuracies can be achieved: 75.03% (95% CI: 74.39-75.66%) for dermatological and 73.80% (95% CI: 73.10-74.51%) for biopsy-verified labels. However, if the CNN is trained and tested with different ground truths, accuracy drops significantly to 64.53% (95% CI: 63.12-65.94%, p < 0.01) on a non-biopsy-verified and to 64.24% (95% CI: 62.66-65.83%, p < 0.01) on a biopsy-verified test set. In conclusion, deep learning methods for skin cancer classification are highly sensitive to label noise, and future work should use biopsy-verified training images to mitigate this problem.
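
To make the evaluation design concrete, the schematic below reproduces the idea of 4-fold cross-validation with matched versus mismatched label sources (dermatologist versus biopsy-verified). A simple classifier on synthetic feature vectors stands in for the CNN and dermatoscopic images used in the study; the noise rate and all data are placeholders.

```python
# Schematic of the cross-validation design described above: the same images
# carry two label sets (dermatologist vs. biopsy-verified), and a model trained
# on one source is scored against either source. A logistic regression on
# synthetic feature vectors stands in for the CNN used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 804
X = rng.normal(size=(n, 128))                  # stand-in image features
y_biopsy = rng.integers(0, 2, size=n)          # biopsy-verified labels
# Dermatologist labels: biopsy labels corrupted with ~15% label noise (assumed rate).
flip = rng.random(n) < 0.15
y_derm = np.where(flip, 1 - y_biopsy, y_biopsy)

def cross_val_acc(train_labels, test_labels):
    """4-fold CV: fit on one label source, score against another."""
    accs = []
    cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
    for tr, te in cv.split(X, train_labels):
        model = LogisticRegression(max_iter=1000).fit(X[tr], train_labels[tr])
        accs.append(accuracy_score(test_labels[te], model.predict(X[te])))
    return np.mean(accs)

print("train derm  / test derm  :", cross_val_acc(y_derm, y_derm))
print("train derm  / test biopsy:", cross_val_acc(y_derm, y_biopsy))
print("train biopsy/ test derm  :", cross_val_acc(y_biopsy, y_derm))
print("train biopsy/ test biopsy:", cross_val_acc(y_biopsy, y_biopsy))
```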

Hekler Achim, Kather Jakob N, Krieghoff-Henning Eva, Utikal Jochen S, Meier Friedegund, Gellrich Frank F, Upmeier Zu Belzen Julius, French Lars, Schlager Justin G, Ghoreschi Kamran, Wilhelm Tabea, Kutzner Heinz, Berking Carola, Heppt Markus V, Haferkamp Sebastian, Sondermann Wiebke, Schadendorf Dirk, Schilling Bastian, Izar Benjamin, Maron Roman, Schmitt Max, Fröhling Stefan, Lipka Daniel B, Brinker Titus J

2020

artificial intelligence, dermatology, label noise, melanoma, nevi, skin cancer

General

Using the force: STEM knowledge and experience construct shared neural representations of engineering concepts.

In NPJ science of learning

How does STEM knowledge learned in school change students' brains? Using fMRI, we presented photographs of real-world structures to engineering students with classroom-based knowledge and hands-on lab experience, examining how their brain activity differentiated them from their "novice" peers not pursuing engineering degrees. A data-driven multivariate pattern analysis (MVPA) and machine-learning approach revealed that neural response patterns of engineering students were convergent with each other and distinct from novices' when considering physical forces acting on the structures. Furthermore, informational network analysis demonstrated that the distinct neural response patterns of engineering students reflected relevant concept knowledge: learned categories of mechanical structures. Information about mechanical categories was predominantly represented in bilateral anterior ventral occipitotemporal regions. Importantly, mechanical categories were not explicitly referenced in the experiment, nor does visual similarity between stimuli account for mechanical category distinctions. The results demonstrate how learning abstract STEM concepts in the classroom influences neural representations of objects in the world.
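
A hedged sketch of the kind of between-subject multivariate pattern analysis this describes is given below: a classifier trained on some subjects' response patterns is tested on held-out subjects, so above-chance accuracy indicates category information shared across brains. Feature shapes, preprocessing, and the synthetic data are assumptions for illustration.

```python
# Hypothetical sketch of a between-subject MVPA / decoding analysis in the
# spirit described above: can a classifier trained on some subjects' response
# patterns predict mechanical category for held-out subjects? Feature shapes
# and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_stimuli, n_voxels = 20, 24, 500
# One response pattern per (subject, stimulus); category = mechanical class of the stimulus.
X = rng.normal(size=(n_subjects * n_stimuli, n_voxels))
categories = np.tile(np.repeat(np.arange(4), n_stimuli // 4), n_subjects)
subjects = np.repeat(np.arange(n_subjects), n_stimuli)

# Leave-one-subject-out decoding: above-chance accuracy would indicate that
# category information is convergent across subjects' response patterns.
scores = cross_val_score(LinearSVC(dual=False), X, categories,
                         groups=subjects, cv=LeaveOneGroupOut())
print("mean leave-one-subject-out accuracy:", scores.mean())
```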

Cetron Joshua S, Connolly Andrew C, Diamond Solomon G, May Vicki V, Haxby James V, Kraemer David J M

2020

Human behaviour, Learning and memory

General

Non-destructive estimation of field maize biomass using terrestrial lidar: an evaluation from plot level to individual leaf level.

In Plant methods

Background: Precision agriculture is an emerging research field that relies on monitoring and managing field variability in phenotypic traits. An important phenotypic trait is biomass, a comprehensive indicator that can reflect crop yields. However, non-destructive biomass estimation at fine levels remains challenging and largely unexplored, owing to the lack of accurate and high-throughput phenotypic data and algorithms.

Results: In this study, we evaluated the capability of terrestrial light detection and ranging (lidar) data in estimating field maize biomass at the plot, individual plant, leaf group, and individual organ (i.e., individual leaf or stem) levels. The terrestrial lidar data of 59 maize plots with more than 1000 maize plants were collected and used to calculate phenotypes through a deep learning-based pipeline, which were then used to predict maize biomass through simple regression (SR), stepwise multiple regression (SMR), artificial neural network (ANN), and random forest (RF). The results showed that terrestrial lidar data were useful for estimating maize biomass at all levels (at each level, R2 was greater than 0.80), and biomass estimation at the leaf group level was the most precise (R2 = 0.97, RMSE = 2.22 g) among all four levels. All four regression techniques performed similarly at all levels. However, considering the transferability and interpretability of the model itself, SR is the suggested method for estimating maize biomass from terrestrial lidar-derived phenotypes. Moreover, height-related variables proved to be the most important and robust variables for predicting maize biomass from terrestrial lidar at all levels, and some two-dimensional variables (e.g., leaf area) and three-dimensional variables (e.g., volume) showed great potential as well.
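
As a schematic of the model comparison reported here, the snippet below fits a simple height-only regression (SR) and a random forest (RF) to lidar-derived traits and reports R2 and RMSE. The traits, their relationships, and the data are synthetic placeholders, not the study's measurements.

```python
# Schematic comparison of a simple height-based regression and a random forest
# for biomass prediction from lidar-derived phenotypes. Data are synthetic
# placeholders; trait names and relationships are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 300
height = rng.uniform(0.5, 2.5, n)      # plant height (m)
leaf_area = rng.uniform(0.1, 1.0, n)   # projected leaf area (m^2)
volume = rng.uniform(0.01, 0.2, n)     # crown volume (m^3)
# Synthetic biomass (g) as a noisy linear function of the traits.
biomass = 40 * height + 15 * leaf_area + 60 * volume + rng.normal(0, 5, n)

X = np.column_stack([height, leaf_area, volume])
X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=0)

# Simple regression (SR): biomass predicted from height alone.
sr = LinearRegression().fit(X_tr[:, [0]], y_tr)
pred_sr = sr.predict(X_te[:, [0]])

# Random forest (RF): biomass predicted from all lidar-derived traits.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred_rf = rf.predict(X_te)

for name, pred in [("SR (height only)", pred_sr), ("RF (all traits)", pred_rf)]:
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_te, pred):.2f}, RMSE={rmse:.2f} g")
```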

Conclusion: We believe this study is a unique effort to evaluate the capability of terrestrial lidar for estimating maize biomass at different levels, and it can provide a useful resource for the selection of the phenotypes and models required to estimate maize biomass in precision agriculture practices.

Jin Shichao, Su Yanjun, Song Shilin, Xu Kexin, Hu Tianyu, Yang Qiuli, Wu Fangfang, Xu Guangcai, Ma Qin, Guan Hongcan, Pang Shuxin, Li Yumei, Guo Qinghua

2020

Biomass, Machine learning, Phenotype, Precision agriculture, Terrestrial lidar