Receive a weekly summary and discussion of the week's top papers, curated by leading researchers in the field.

Pathology

HookNet: Multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images.

In Medical image analysis

We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view feed the different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information: (1) multi-class tissue segmentation in breast cancer and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions, as well as with a recently published multi-resolution model for histopathology image segmentation. We have made HookNet publicly available by releasing the source code as well as in the form of web-based applications based on the platform.
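The core idea of the hooking mechanism can be sketched in a few lines: a feature map from the low-resolution context branch is center-cropped to the spatial size of the high-resolution target branch and concatenated along the channel axis, so the target decoder sees both detail and context. This is a minimal NumPy illustration of the geometry only, not the authors' implementation; the shapes and the symmetric-crop alignment check are illustrative assumptions.

```python
import numpy as np

def center_crop(fmap, size):
    """Center-crop a (C, H, W) feature map to (C, size, size)."""
    _, h, w = fmap.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return fmap[:, top:top + size, left:left + size]

def hook(context_fmap, target_fmap):
    """Crop the context-branch feature map to the target branch's spatial
    size and concatenate along the channel axis."""
    size = target_fmap.shape[-1]
    h = context_fmap.shape[-1]
    # Pixel-wise alignment needs a symmetric crop: the size difference
    # must be even so both margins are equal.
    assert (h - size) % 2 == 0, "feature maps cannot be aligned symmetrically"
    cropped = center_crop(context_fmap, size)
    return np.concatenate([cropped, target_fmap], axis=0)

context = np.random.rand(64, 70, 70)  # low resolution, wide field of view
target = np.random.rand(64, 54, 54)   # high resolution, narrow field of view
combined = hook(context, target)      # shape (128, 54, 54)
```

In the actual model this concatenation happens between intermediate encoder-decoder representations inside the network; here plain arrays stand in for those tensors.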

van Rijthoven Mart, Balkenhol Maschenka, Siliņa Karina, van der Laak Jeroen, Ciompi Francesco


Computational pathology, Deep learning, Multi-resolution, Semantic segmentation

General

Automatic fundus image quality assessment on a continuous scale.

In Computers in biology and medicine

Fundus photography is commonly used for screening, diagnosis, and monitoring of various diseases affecting the eye. In addition, it has shown promise in the diagnosis of brain diseases and evaluation of cardiovascular risk factors. Good image quality is important if diagnosis is to be accurate and timely. Here, we propose a method that automatically grades image quality on a continuous scale, which is more flexible than binary quality classification. The method utilizes random forest regression models trained on image features discovered automatically by combining basic image filters using simulated annealing, as well as features extracted with the discrete Fourier transform. The method was developed and tested on images from two different fundus camera models. The quality of those images was rated on a continuous scale from 0.0 to 1.0 by five experts. In addition, the method was tested on DRIMDB, a publicly available dataset with binary quality ratings. On the DRIMDB dataset the method achieves an accuracy of 0.981, sensitivity of 0.993, and specificity of 0.958, which is consistent with the state of the art. When evaluating image quality on a continuous scale, the method outperforms human raters.
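The feature-discovery step uses simulated annealing to search over combinations of basic image filters. A generic annealing loop might look like the following stand-alone sketch; the toy objective, the "filter usefulness" weights, and the cooling schedule are all hypothetical placeholders, not the paper's actual search space.

```python
import math
import random

def simulated_annealing(score, neighbor, init, steps=500, t0=1.0, seed=0):
    """Generic simulated-annealing search: worse candidates are accepted
    with probability exp(delta / T), letting the search escape local optima."""
    rng = random.Random(seed)
    current, best = init, init
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = neighbor(current, rng)
        delta = score(cand) - score(current)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = cand
        if score(current) > score(best):
            best = current
    return best

# Toy stand-in objective: choose a subset of "filters" (indices) whose
# hypothetical usefulness weights sum as high as possible, with a small
# penalty per filter to discourage large combinations.
weights = [0.9, -0.2, 0.5, 0.1, -0.4, 0.7]

def score(subset):
    return sum(weights[i] for i in subset) - 0.05 * len(subset)

def neighbor(subset, rng):
    i = rng.randrange(len(weights))
    return subset ^ {i}  # toggle one filter in or out

best = simulated_annealing(score, neighbor, init=frozenset())
```

In the paper's setting, `score` would be the cross-validated quality-prediction performance of a filter combination rather than a fixed weight sum.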

Karlsson Robert A, Jonsson Benedikt A, Hardarson Sveinn H, Olafsdottir Olof B, Halldorsson Gisli H, Stefansson Einar


Fundus image quality assessment, Fundus imaging, Machine learning, Simulated annealing

General

Deep learning segmentation of Primary Sjögren's syndrome affected salivary glands from ultrasonography images.

In Computers in biology and medicine

Salivary gland ultrasonography (SGUS) has proven to be a promising tool for diagnosing various diseases manifesting with abnormalities in salivary glands (SGs), including primary Sjögren's syndrome (pSS). At present, the major obstacle for establishing SGUS as a standardized tool for pSS diagnosis is its low inter/intra observer reliability. The aim of this study was to address this problem by proposing a robust deep learning-based solution for the automated segmentation of SGUS images. For these purposes, four architectures were considered: a fully convolutional neural network, fully convolutional DenseNets (FCN-DenseNet), U-Net, and LinkNet. During the course of the study, the growing HarmonicSS cohort included 1184 annotated SGUS images. Accordingly, the algorithms were trained using a transfer learning approach. With regard to the intersection-over-union (IoU), the top-performing FCN-DenseNet network (IoU = 0.85) showed a considerable margin above the inter-observer agreement (IoU = 0.76) and slightly above the intra-observer agreement (IoU = 0.84) between clinical experts. Considering its accuracy and speed (24.5 frames per second), it was concluded that the FCN-DenseNet could have wider applications in clinical practice. Further work on the topic will consider the integration of methods for pSS scoring, with the end goal of establishing SGUS as an effective noninvasive pSS diagnostic tool. To aid this progress, we created inference (frozen models) files for the developed models, and made them publicly available.
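The IoU values used here to compare the network against inter- and intra-observer agreement are the standard Jaccard index between binary masks. A minimal NumPy version, with hypothetical toy masks for illustration:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(a, b).sum() / union

# Two overlapping 4x4 squares on an 8x8 grid (toy example).
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16 px
ref  = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True   # 16 px
# overlap = 3x3 = 9 px, union = 16 + 16 - 9 = 23 px, IoU = 9/23
```

The same function applied to two experts' delineations of one image yields the observer-agreement IoU; applied to model output versus a reference annotation, it yields the scores reported above.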

Vukicevic Arso M, Radovic Milos, Zabotti Alen, Milic Vera, Hocevar Alojzija, Callegher Sara Zandonella, De Lucia Orazio, De Vita Salvatore, Filipovic Nenad


Deep learning, HarmonicSS project, Salivary glands, Segmentation, Sjögren's syndrome

Radiology

Artificial intelligence assistance improves reporting efficiency of thoracic aortic aneurysm CT follow-up.

In European journal of radiology; h5-index 47.0

OBJECTIVE : Follow-up of aortic aneurysms by computed tomography (CT) is crucial to balance the risks of treatment and rupture. Artificial intelligence (AI)-assisted radiology reporting promises time savings and reduced inter-reader variabilities.

METHODS : The influence of AI assistance on the efficiency and accuracy of aortic aneurysm reporting according to the AHA/ESC guidelines was quantified based on 324 AI measurements and 1944 radiological measurements: 18 aortic aneurysm patients, each with two CT scans (arterial contrast phase, electrocardiogram-gated) acquired at least six months apart, were included. One board-certified radiologist and two residents (8/4/2 years of experience in vascular imaging) independently assessed aortic diameters at nine landmark positions. Aneurysm extensions were compared with the original CT reports. After a three-week washout period, the CTs were re-assessed based on graphically illustrated AI measurements.

RESULTS : Time-consuming guideline-compliant aortic measurements revealed additional affection of the root/arch for 80 % of aneurysms that had initially been reported to be limited to the ascending aorta. AI assistance reduced mean reporting time by 63 %, from 13:01 to 04:46 min, including manual corrections of AI measurements (performed for 33.6 % of all measurements, predominantly at the sinuses of Valsalva). AI assistance reduced total diameter inter-reader variability by 42.5 % (0.42 / 1.16 mm with / without AI assistance; mean over all patients and landmark positions; significant reduction for 6 out of 9 measuring positions). Conventional and AI-assisted quantification of aneurysm progression varied to a small extent (mean of 0.75 mm over all patients / landmark positions), not significantly exceeding the radiologists' inter-reader variabilities.
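The reported time saving can be checked directly from the two mean reporting times. A trivial sketch (the helper names are ours, not from the paper):

```python
def seconds(mmss):
    """Parse an 'MM:SS' duration into seconds."""
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

def percent_reduction(before, after):
    """Relative reduction of `after` versus `before`, in percent."""
    return 100.0 * (seconds(before) - seconds(after)) / seconds(before)

reduction = percent_reduction("13:01", "04:46")  # about 63 %
```

13:01 is 781 s and 04:46 is 286 s, so the reduction is 495/781, matching the 63 % stated above.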

CONCLUSIONS : Guideline-compliant aorta measurement is crucial to report detailed aneurysm extension, which might affect the strategy of interventional repair. AI assistance promises improved reporting efficiency and has high potential to reduce radiologists' inter-reader variability, which can hamper diagnostic follow-up accuracy.

KEY POINT : Time-consuming guideline-compliant aortic aneurysm assessment is crucial to report aneurysm extension in detail; AI-assisted measurement reduces reporting time, improves extension evaluation, and reduces inter-reader variability.

Rueckel J, Reidler P, Fink N, Geyer Thomas, Fabritius M P, Sperl J, Ricke J, Ingrisch M, Sabel B O


Aneurysm, Aorta, Artificial intelligence, Tomography

Radiology

Meningioma consistency can be defined by combining the radiomic features of magnetic resonance imaging and ultrasound elastography. A pilot study using machine learning classifiers.

In World neurosurgery; h5-index 47.0

BACKGROUND : The consistency of meningioma is a factor that may influence surgical planning and the extent of resection. The aim of our study is to develop a predictive model of tumor consistency using the radiomic features of preoperative magnetic resonance imaging (MRI) and the tumor elasticity measured by intraoperative ultrasound elastography (IOUS-E) as a reference parameter.

METHODS : A retrospective analysis was performed on supratentorial meningiomas that were operated on between March 2018 and July 2020. Cases with IOUS-E studies were included. A semi-quantitative analysis of elastograms was used to define the meningioma consistency. MRIs were pre-processed before extracting radiomic features. Predictive models were built using a combination of feature selection filters and machine learning algorithms: logistic regression (LR), Naive Bayes (NB), k-nearest neighbors (kNN), Random Forest (RF), Support Vector Machine (SVM), and Neural Network (NN). A stratified 5-fold cross-validation was performed. The models were then evaluated using the area under the curve (AUC) and classification accuracy (CA).

RESULTS : Eighteen patients were available for analysis. Meningiomas were classified as hard or soft according to a mean tissue elasticity (MTE) threshold of 120. The best-ranked radiomic features were obtained from T1-weighted post-contrast (T1WC), Apparent Diffusion Coefficient (ADC) map, and T2-weighted (T2W) images. The combination of Information Gain and ReliefF filters with the NB algorithm resulted in an AUC of 0.961 and CA of 94%.
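The AUC reported for the best model is equivalent to the Mann-Whitney U statistic over classifier scores: the probability that a randomly chosen hard case is ranked above a randomly chosen soft one. A self-contained NumPy sketch, with purely hypothetical scores (not the study's data):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via pairwise rank comparisons
    (ties between a positive and a negative score count one half)."""
    labels = np.asarray(labels, dtype=bool)
    pos = np.asarray(scores, dtype=float)[labels]
    neg = np.asarray(scores, dtype=float)[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores for three hard (1) and three soft (0) meningiomas.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
# 8 of the 9 positive/negative pairs are ranked correctly -> AUC = 8/9
```

In the study, the scores would come from the cross-validated NB classifier applied to the selected radiomic features.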

CONCLUSION : We have developed a high-precision classification model that is capable of predicting the consistency of meningiomas based on radiomic features from preoperative MRI (T2W, T1WC, and ADC map).

Arrese Ignacio, García-García Sergio, Velasco-Casares María, Escudero-Caro Trinidad, Zamora Tomás, Sarabia Rosario


MRI, brain tumor, elastography, intraoperative ultrasound, machine learning, meningiomas

General

A Review on Deep Learning Approaches in Healthcare Systems: Taxonomies, Challenges, and Open Issues.

In Journal of biomedical informatics; h5-index 55.0

In the last few years, the application of Machine Learning approaches such as Deep Neural Network (DNN) models has become more attractive in the healthcare system given the rising complexity of healthcare data. Machine Learning (ML) algorithms provide efficient and effective data analysis models to uncover hidden patterns and other meaningful information from the considerable amount of health data that conventional analytics cannot discover in a reasonable time. In particular, Deep Learning (DL) techniques have been shown to be promising methods for pattern recognition in healthcare systems. Motivated by this consideration, the contribution of this paper is to investigate the deep learning approaches applied to healthcare systems by reviewing the cutting-edge network architectures, applications, and industrial trends. The goal is first to provide extensive insight into the application of deep learning models in healthcare solutions, bridging deep learning techniques and human healthcare interpretability, and then to present the existing open challenges and future directions.

Shamshirband Shahab, Fathi Mahdis, Dehzangi Abdollah, Theodore Chronopoulos Anthony, Alinejad-Rokny Hamid


Deep Neural Network, Diagnostics tools, Health data analytics, Healthcare applications, Machine Learning