Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

Surgery

Image-Based Cell Profiling Enables Quantitative Tissue Microscopy in Gastroenterology.

In Cytometry Part A: The Journal of the International Society for Analytical Cytology

Immunofluorescence microscopy is an essential tool for tissue-based research, yet data reporting is almost always qualitative. Quantification of images at the per-cell level enables "flow cytometry-type" analyses with intact locational data, but achieving this is complex. Gastrointestinal tissue, for example, is highly diverse: from mixed-cell epithelial layers through to discrete lymphoid patches. Moreover, different species (e.g., rat, mouse, and human) and tissue preparations (paraffin/frozen) are all commonly studied. Here, using field-relevant examples, we develop open, user-friendly methodology that encompasses these variables to provide quantitative tissue microscopy for the field. Antibody-independent cell-labeling approaches, compatible across preparation types and species, were optimized. Per-cell data were extracted from routine confocal micrographs, with semantic machine learning employed to tackle densely packed lymphoid tissues. Data analysis was achieved by flow cytometry-type analyses alongside visualization and statistical definition of cell locations, interactions, and established microenvironments. First, quantification of Escherichia coli passage into human small bowel tissue following Ussing chamber incubations exemplified objective quantification of rare events in the context of lumen-tissue crosstalk. Second, in rat jejunum, precise histological context revealed distinct populations of intraepithelial lymphocytes between and directly below enterocytes, enabling quantification in the context of total epithelial cell numbers. Finally, mouse mononuclear phagocyte-T cell interactions, cell expression, and significant spatial cell congregations were mapped to shed light on cell-cell communication in the lymphoid Peyer's patch. Accessible, quantitative tissue microscopy provides a new window of insight into diverse questions in gastroenterology. It can also help address the data-reproducibility crisis associated with antibody technologies and over-reliance on qualitative microscopy. © 2020 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
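The "flow cytometry-type" analysis of per-cell image data described above can be illustrated with a toy example: given a table of per-cell marker intensities, a simple intensity gate splits cells into positive and negative populations while keeping their (x, y) coordinates for locational analysis. This is a minimal sketch; the data, marker name, and threshold are invented for illustration and are not from the paper's pipeline.

```python
# Toy "flow cytometry-type" gating on per-cell measurements extracted
# from microscopy images. Each record keeps its (x, y) location so the
# gated population can still be mapped back onto the tissue.
# Hypothetical data and threshold, for illustration only.

cells = [
    {"x": 12.0, "y": 30.5, "marker_intensity": 0.91},
    {"x": 14.2, "y": 31.0, "marker_intensity": 0.12},
    {"x": 40.7, "y": 10.1, "marker_intensity": 0.75},
    {"x": 41.3, "y": 11.8, "marker_intensity": 0.08},
]

def gate(cells, key, threshold):
    """Split cells into positive/negative populations by an intensity gate."""
    pos = [c for c in cells if c[key] >= threshold]
    neg = [c for c in cells if c[key] < threshold]
    return pos, neg

positive, negative = gate(cells, "marker_intensity", 0.5)
print(len(positive), len(negative))  # 2 2
```

Because each gated cell retains its coordinates, downstream steps such as the Getis-Ord spatial statistics mentioned in the keywords can operate directly on the positive population.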

Wills John W, Robertson Jack, Summers Huw D, Miniter Michelle, Barnes Claire, Hewitt Rachel E, Keita Åsa V, Söderholm Johan D, Rees Paul, Powell Jonathan J


cell segmentation, confocal microscopy, immunofluorescence, intestinal tissue, machine learning, processing tilescans in CellProfiler, Getis-Ord spatial statistics

General

Artificial intelligence models versus empirical equations for modeling monthly reference evapotranspiration.

In Environmental science and pollution research international

Accurate estimation of reference evapotranspiration (ETo) is crucial in crop modeling, sustainable management, hydrological simulation, and irrigation scheduling, since it accounts for more than two-thirds of global precipitation losses. Therefore, ETo estimation is a major concern in the hydrological cycle. ETo can be determined using various methods, including field measurement (at the lysimeter scale), experimental methods, and mathematical equations. The Food and Agriculture Organization recommended the Penman-Monteith (FAO-56 PM) method, which is identified as the standard method of ETo estimation. However, this equation requires a large number of measured climatic data (maximum and minimum air temperature, relative humidity, solar radiation, and wind speed) that are not always available at meteorological stations. Over the past decade, artificial intelligence (AI) models have received increasing attention for estimating ETo on multiple time scales. This research explores the potential of a new hybrid AI model, support vector regression integrated with the grey wolf optimizer (SVR-GWO), for estimating monthly ETo at the Algiers, Tlemcen, and Annaba stations in northern Algeria. Five climatic variables, namely relative humidity (RH), maximum and minimum air temperature (Tmax and Tmin), solar radiation (Rs), and wind speed (Us), were used for model construction and evaluation. The proposed hybrid SVR-GWO model was compared against hybrid SVR-genetic algorithm (SVR-GA), SVR-particle swarm optimizer (SVR-PSO), conventional artificial neural network (ANN), and empirical (Turc, Ritchie, Thornthwaite, and three versions of Valiantzas) models using root mean squared error (RMSE), Nash-Sutcliffe efficiency (NSE), Pearson correlation coefficient (PCC), and Willmott index (WI), and through graphical interpretation.
The results show that SVR-GWO provides very promising, and occasionally superior, performance compared to the other data-driven and empirical methods at the study stations. The proposed SVR-GWO model with five climatic input variables outperformed the other models (RMSE = 0.0776/0.0613/0.0374 mm, NSE = 0.9953/0.9990/0.9995, PCC = 0.9978/0.9995/0.9998, and WI = 0.9988/0.9997/0.9999) for estimating ETo at the Algiers, Tlemcen, and Annaba stations, respectively. In conclusion, the results of this research indicate the suitability of the proposed hybrid artificial intelligence model (SVR-GWO) at the study stations. These promising results encourage researchers to transfer and test such models at other locations worldwide in future work.
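The SVR-GWO hybrid tunes SVR hyperparameters with the grey wolf optimizer (GWO), in which candidate solutions ("wolves") move toward the three best solutions found so far (alpha, beta, delta) under an exploration coefficient that decays from 2 to 0. The sketch below implements that standard GWO update in pure Python; the SVR training objective is replaced by a toy quadratic standing in for a cross-validation error over (C, epsilon), so all names, bounds, and the surrogate function are illustrative, not the paper's setup.

```python
import random

def gwo_minimize(objective, bounds, n_wolves=20, n_iter=200, seed=42):
    """Grey wolf optimizer: each wolf is pulled toward the three current
    leaders (alpha, beta, delta); coefficient 'a' decays linearly 2 -> 0,
    shifting the swarm from exploration to exploitation."""
    rng = random.Random(seed)
    dim = len(bounds)
    wolves = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_wolves)]
    for t in range(n_iter):
        alpha, beta, delta = sorted(wolves, key=objective)[:3]
        a = 2.0 * (1 - t / n_iter)
        new_wolves = []
        for w in wolves:
            pos = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a   # encirclement coefficient
                    C = 2 * r2           # attraction coefficient
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                lo, hi = bounds[d]
                pos.append(min(max(x / 3.0, lo), hi))  # mean of the 3 pulls, clamped
            new_wolves.append(pos)
        wolves = new_wolves
    best = min(wolves, key=objective)
    return best, objective(best)

# Toy stand-in for an SVR cross-validation error over (C, epsilon);
# this surrogate has its optimum at C = 10, epsilon = 0.1.
surrogate = lambda p: (p[0] - 10.0) ** 2 + (p[1] - 0.1) ** 2
(best_C, best_eps), err = gwo_minimize(surrogate, [(0.01, 100.0), (0.001, 1.0)])
print(best_C, best_eps, err)
```

In the hybrid model, `objective` would instead train an SVR with the candidate hyperparameters and return its validation error; the GWO loop itself is unchanged.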

Tikhamarine Yazid, Malik Anurag, Souag-Gamane Doudja, Kisi Ozgur


Algeria, Empirical methods, Hybrid AI models, Metaheuristic algorithms, Reference evapotranspiration

Surgery

The effects of different levels of realism on the training of CNNs with only synthetic images for the semantic segmentation of robotic instruments in a head phantom.

In International journal of computer assisted radiology and surgery

PURPOSE: The manual generation of training data for the semantic segmentation of medical images using deep neural networks is a time-consuming and error-prone task. In this paper, we investigate the effect of different levels of realism on the training of deep neural networks for the semantic segmentation of robotic instruments. An interactive virtual-reality environment was developed to generate synthetic images for robot-aided endoscopic surgery. In contrast to earlier works, we use physically based rendering for increased realism.

METHODS: Using a virtual-reality simulator that replicates our robotic setup, three synthetic image databases with increasing levels of realism were generated: flat, basic, and realistic (using physically based rendering). Each of these databases was used to train 20 instances of a UNet-based semantic-segmentation deep-learning model. The networks trained with only synthetic images were evaluated on the segmentation of 160 endoscopic images of a phantom. The networks were compared using the Dwass-Steel-Critchlow-Fligner nonparametric test.

RESULTS: Our results show that increasing the level of realism improved the mean intersection-over-union (mIoU) of the networks on endoscopic images of a phantom ([Formula: see text]). The median mIoU values were 0.235 for the flat dataset, 0.458 for the basic dataset, and 0.729 for the realistic dataset. All the networks trained with synthetic images outperformed naive classifiers. Moreover, in an ablation study, we show that physically based rendering yields a superior mIoU to texture mapping ([Formula: see text]) of the instrument (0.606), the background (0.685), and the background and instruments combined (0.672).
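The mIoU metric used to compare the networks is the per-class intersection-over-union averaged over classes. A minimal sketch over flattened label maps follows; the toy labels are invented for illustration, not taken from the study.

```python
# Mean intersection-over-union (mIoU) over flattened label maps.
# Classes absent from both prediction and ground truth are skipped
# so they do not drag the average down. Toy labels for illustration;
# 0 = background, 1 = instrument.

def mean_iou(pred, target, n_classes):
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Background IoU = 1/2, instrument IoU = 2/3, so mIoU ≈ 0.583.
print(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], n_classes=2))
```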

CONCLUSIONS: Using physically based rendering to generate synthetic images is an effective approach to improving the training of neural networks for the semantic segmentation of surgical instruments in endoscopic images. Our results show that this strategy can be an essential step toward the broad applicability of deep neural networks in semantic-segmentation tasks and can help bridge the domain gap in machine learning.

Heredia Perez Saul Alexis, Marques Marinho Murilo, Harada Kanako, Mitsuishi Mamoru


Deep learning, Photorealistic rendering, Semantic segmentation

Surgery

Spatio-temporal deep learning methods for motion estimation using 4D OCT image data.

In International journal of computer assisted radiology and surgery

PURPOSE: Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions. Optical coherence tomography (OCT) is an imaging modality with high spatial and temporal resolution that has been used for intraoperative imaging and also for motion estimation, for example, in the context of ophthalmic surgery or cochleostomy. Recently, motion estimation between a template and a moving OCT image has been studied with deep-learning methods to overcome the shortcomings of conventional, feature-based methods.

METHODS: We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation. For this purpose, we design and evaluate several 3D and 4D deep-learning methods and propose a new deep-learning approach. We also propose a temporal regularization strategy at the model output.

RESULTS: Using a tissue dataset without additional markers, our deep-learning methods using 4D data outperform previous approaches. The best-performing 4D architecture achieves an average correlation coefficient (aCC) of 98.58%, compared to 85.0% for a previous 3D deep-learning method. Our temporal regularization strategy at the output further improves 4D model performance to an aCC of 99.06%. In particular, our 4D method works well for larger motion and is robust toward image rotations and motion distortions.

CONCLUSIONS: We propose 4D spatio-temporal deep learning for OCT-based motion estimation. On a tissue dataset, we find that using 4D information for the model input improves performance while maintaining reasonable inference times. Our regularization strategy demonstrates that additional temporal information is also beneficial at the model output.
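The abstract does not specify the form of the output regularization, so as an assumption purely for illustration, the sketch below uses an exponential moving average over a predicted per-frame motion sequence to show the general idea of temporal regularization at the model output: each estimate is pulled toward its temporal neighbours, suppressing frame-to-frame jitter.

```python
# Toy temporal smoothing of a predicted 1-D motion trajectory.
# NOTE: this EMA is a stand-in illustration, not the paper's actual
# regularization scheme, which is not described in the abstract.

def ema_smooth(predictions, alpha=0.5):
    """Exponentially smooth a sequence of per-frame motion estimates.
    Lower alpha = stronger smoothing (more weight on past estimates)."""
    smoothed = [predictions[0]]
    for p in predictions[1:]:
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return smoothed

noisy = [0.0, 1.0, 0.0, 1.0]
print(ema_smooth(noisy))  # [0.0, 0.5, 0.25, 0.625]
```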

Bengs Marcel, Gessert Nils, Schlüter Matthias, Schlaefer Alexander


4D deep learning, Motion estimation, Optical coherence tomography, Regularization

Radiology

Intensive Care Risk Estimation in COVID-19 Pneumonia Based on Clinical and Imaging Parameters: Experiences from the Munich Cohort.

In Journal of clinical medicine

The evolving dynamics of coronavirus disease 2019 (COVID-19) and the increasing infection numbers require diagnostic tools to identify patients at high risk for a severe disease course. Here, we evaluate clinical and imaging parameters for estimating the need for intensive care unit (ICU) treatment. We collected clinical, laboratory, and imaging data from 65 patients with COVID-19 infection confirmed by polymerase chain reaction (PCR) testing. Two radiologists evaluated the severity of findings in computed tomography (CT) images on a scale from 1 (no characteristic signs of COVID-19) to 5 (confluent ground-glass opacities in over 50% of the lung parenchyma). The volume of affected lung was quantified using commercially available software. Machine-learning modelling was performed to estimate the risk of ICU treatment. Patients with a severe course of COVID-19 had significantly increased interleukin (IL)-6, C-reactive protein (CRP), and leukocyte counts and significantly decreased lymphocyte counts. The radiological severity grading was significantly higher in ICU patients. Multivariate random forest modelling showed a mean ± standard deviation sensitivity, specificity, and accuracy of 0.72 ± 0.1, 0.86 ± 0.16, and 0.80 ± 0.1, respectively, and an area under the receiver operating characteristic curve (ROC-AUC) of 0.79 ± 0.1. The need for ICU treatment is independently associated with affected lung volume, radiological severity score, CRP, and IL-6.
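The sensitivity, specificity, and accuracy reported for the random forest model all follow directly from a binary confusion matrix. A minimal sketch is shown below; the confusion counts are made up for illustration and do not come from the study.

```python
# Sensitivity, specificity, and accuracy from a binary confusion matrix,
# the summary statistics reported for the ICU-risk classifier.
# The counts below are hypothetical, for illustration only.

def classifier_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)                # true-positive rate (recall)
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # fraction of correct calls
    return sensitivity, specificity, accuracy

sens, spec, acc = classifier_metrics(tp=18, fp=6, tn=37, fn=4)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.82 0.86 0.85
```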

Burian Egon, Jungmann Friederike, Kaissis Georgios A, Lohöfer Fabian K, Spinner Christoph D, Lahmer Tobias, Treiber Matthias, Dommasch Michael, Schneider Gerhard, Geisler Fabian, Huber Wolfgang, Protzer Ulrike, Schmid Roland M, Schwaiger Markus, Makowski Marcus R, Braren Rickmer F


COVID-19, clinical parameters, computed tomography, intensive care unit, radiological parameters, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)

Radiology

A Fully Automatic Deep Learning System for COVID-19 Diagnostic and Prognostic Analysis.

In The European respiratory journal

Coronavirus disease 2019 (COVID-19) has spread globally, and medical resources have become insufficient in many regions. Fast diagnosis of COVID-19, and identifying high-risk patients with worse prognosis for early prevention and medical-resource optimisation, is important. Here, we propose a fully automatic deep-learning system for COVID-19 diagnostic and prognostic analysis using routinely acquired computed tomography. We retrospectively collected computed tomography images from 5372 patients across 7 cities or provinces. Firstly, images from 4106 patients were used to pre-train the deep-learning system, making it learn lung features. Afterwards, 1266 patients (924 with COVID-19, of whom 471 had follow-up for 5+ days; 342 with other pneumonia) from 6 cities or provinces were enrolled to train and externally validate the performance of the deep-learning system. In the 4 external validation sets, the deep-learning system achieved good performance in identifying COVID-19 from other pneumonia (AUC = 0.87 and 0.88) and viral pneumonia (AUC = 0.86). Moreover, the deep-learning system succeeded in stratifying patients into high-risk and low-risk groups whose hospital-stay times differed significantly (p = 0.013 and 0.014). Without human assistance, the deep-learning system automatically focused on abnormal areas that showed characteristics consistent with reported radiological findings. Deep learning provides a convenient tool for fast COVID-19 screening and for finding potential high-risk patients, which may help with medical-resource optimisation and early prevention before patients show severe symptoms.
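The AUC values reported for the external validation sets summarize ranking quality: the probability that a randomly chosen COVID-19 case receives a higher model score than a randomly chosen non-COVID case. That pairwise interpretation can be computed directly, as sketched below; the scores and labels are invented for illustration.

```python
# ROC-AUC via the pairwise (Mann-Whitney) interpretation: count the
# fraction of positive/negative pairs the model orders correctly,
# with ties counting half. Toy scores and labels for illustration.

def roc_auc(scores, labels):
    """Compute ROC-AUC by counting correctly ordered positive/negative pairs."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```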

Wang Shuo, Zha Yunfei, Li Weimin, Wu Qingxia, Li Xiaohu, Niu Meng, Wang Meiyun, Qiu Xiaoming, Li Hongjun, Yu He, Gong Wei, Bai Yan, Li Li, Zhu Yongbei, Wang Liusu, Tian Jie