
Pathology

A deep learning model for lymph node metastasis prediction based on digital histopathological images of primary endometrial cancer.

In Quantitative imaging in medicine and surgery

BACKGROUND : The current study aimed to develop a deep learning (DL) model for prediction of lymph node metastasis (LNM) based on hematoxylin and eosin (HE)-stained histopathological images of endometrial cancer (EC). The model was validated using external data.

METHODS : A total of 2,104 whole slide images (WSIs) from 564 patients with pathologically confirmed LNM status were collated from West China Second University Hospital. An artificial intelligence (AI) model was built on a multiple-instance learning (MIL) framework to automatically predict the probability of LNM, and its performance was compared with the Mayo criteria. An additional external dataset comprising 533 WSIs was collected from two independent medical institutions to validate the model's robustness. Heatmaps were generated to highlight the regions of each WSI that contributed most to the DL network's output, in order to improve the interpretability of these predictions.
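The abstract does not detail the MIL aggregation used, but a common approach is attention-based pooling over patch features, which also yields per-patch weights that can be rendered as heatmaps like those described above. The PyTorch sketch below is illustrative only; the feature dimension, patch encoder, and head design are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Minimal attention-based MIL pooling: patch features -> slide-level LNM probability."""
    def __init__(self, feat_dim=512, attn_dim=128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, patch_feats):              # (n_patches, feat_dim)
        scores = self.attn(patch_feats)          # (n_patches, 1)
        weights = torch.softmax(scores, dim=0)   # attention over patches
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted slide embedding
        logit = self.classifier(slide_feat)
        return torch.sigmoid(logit), weights     # probability + weights for heatmaps

# Hypothetical usage: features from a pretrained patch encoder (shapes are illustrative)
feats = torch.randn(1000, 512)                   # 1,000 tiles from one WSI
prob, attn_weights = AttentionMILHead()(feats)
```

The attention weights can be mapped back to tile coordinates to produce slide-level heatmaps of the regions driving the prediction.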

RESULTS : The proposed MIL model achieved an area under the curve (AUC) of 0.938, a sensitivity of 0.830, and a specificity of 0.911 for LNM prediction in EC. The AUC according to the Mayo criteria was 0.666 on the same test dataset. For type I, type II, and mixed EC, AUCs were 0.927, 0.979, and 0.929, respectively. The MIL model also achieved an AUC of 0.921 for early-stage disease. On the external validation data, the proposed model achieved an AUC of 0.770, a sensitivity of 0.814, and a specificity of 0.520 for LNM prediction; AUCs were 0.783 for type I and 0.818 for early-stage EC.

CONCLUSIONS : The proposed MIL model generated from histopathological images of EC has much better LNM predictive performance than the Mayo criteria. A novel DL-based biomarker trained on slides from different histological subtypes of EC was shown to predict metastatic status with improved accuracy, especially for early-stage patients. The current study provides a proof of concept for MIL-based prediction of LNM in EC and offers new insight into improving the accuracy of LNM prediction. Multicenter prospective validation data are required to further confirm the clinical utility.

Feng Min, Zhao Yu, Chen Jie, Zhao Tingyu, Mei Juan, Fan Yingying, Lin Zhenyu, Yao Jianhua, Bu Hong

2023-Mar-01

Endometrial cancer (EC), deep learning model, lymph node metastasis (LNM), prediction

Radiology

The value of using a deep learning image reconstruction algorithm of thinner slice thickness to balance the image noise and spatial resolution in low-dose abdominal CT.

In Quantitative imaging in medicine and surgery

BACKGROUND : Traditional reconstruction techniques have certain limitations in balancing image quality and reducing radiation dose. The deep learning image reconstruction (DLIR) algorithm opens the door to a new era of medical image reconstruction. The purpose of this study was to evaluate DLIR images at a 1.25 mm slice thickness for balancing image noise and spatial resolution in low-dose abdominal computed tomography (CT), in comparison with conventional adaptive statistical iterative reconstruction-V at 40% strength (ASIR-V40%) at 5 and 1.25 mm.

METHODS : This retrospective study included 89 patients who underwent low-dose abdominal CT. Five sets of images were generated: ASIR-V40% at a 5 mm slice thickness, ASIR-V40% at 1.25 mm (high-resolution), and DLIR at 1.25 mm with three strengths: low (DLIR-L), medium (DLIR-M), and high (DLIR-H). Qualitative evaluation was performed for image noise, artifacts, and visualization of small structures, while quantitative evaluation was performed for standard deviation (SD), signal-to-noise ratio (SNR), and spatial resolution (defined as the edge rising slope).
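For readers unfamiliar with the quantitative metrics named above, the following NumPy sketch shows one plausible way to compute ROI noise (SD), SNR, and an edge rising slope from a CT slice. The ROI placement, pixel spacing, and synthetic image are assumptions, not the authors' measurement protocol.

```python
import numpy as np

def roi_noise_and_snr(image, noise_roi, signal_roi):
    """SD of a homogeneous ROI (noise) and SNR = mean(signal ROI) / SD(noise ROI)."""
    y0, y1, x0, x1 = noise_roi
    noise_sd = image[y0:y1, x0:x1].std()
    sy0, sy1, sx0, sx1 = signal_roi
    signal_mean = image[sy0:sy1, sx0:sx1].mean()
    return noise_sd, signal_mean / noise_sd

def edge_rising_slope(profile, spacing_mm=1.0):
    """Maximum gradient (HU/mm) along a line profile drawn across a sharp edge."""
    grad = np.gradient(np.asarray(profile, dtype=float), spacing_mm)
    return np.abs(grad).max()

# Illustrative use on a synthetic slice (values roughly in HU)
img = np.full((128, 128), -100.0) + np.random.normal(0, 10, (128, 128))
img[:, 64:] += 160.0                                   # a fat/liver-like edge
sd, snr = roi_noise_and_snr(img, (10, 40, 10, 40), (10, 40, 90, 120))
slope = edge_rising_slope(img[64, 56:72], spacing_mm=0.7)
```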

RESULTS : At 1.25 mm, DLIR-M and DLIR-H images had significantly lower noise (SD in fat: 14.29±3.37 and 9.65±3.44 HU, respectively), higher SNR for the liver (3.70±0.78 and 5.64±1.20, respectively), and higher overall image quality (4.30±0.44 and 4.67±0.40, respectively) than the respective values in ASIR-V40% images (20.60±4.04 HU, 2.60±0.63, and 3.77±0.43; all P values <0.05). Compared with the 5 mm ASIR-V40% images, the 1.25 mm DLIR-H images had lower noise (SD: 9.65±3.44 vs. 13.63±10.03 HU), higher SNR (5.64±1.20 vs. 4.69±1.28), and higher overall image quality scores (4.67±0.40 vs. 3.94±0.46) (all P values <0.001). In addition, DLIR-L, DLIR-M, and DLIR-H images had significantly higher spatial resolution in terms of the edge rising slope (59.66±21.46, 58.52±17.48, and 59.26±13.33, respectively, vs. 33.79±9.23) and significantly higher image quality scores for the visualization of fine structures (4.43±0.50, 4.41±0.49, and 4.38±0.49, respectively, vs. 2.62±0.49) than did the 5 mm ASIR-V40% images.

CONCLUSIONS : The 1.25 mm DLIR-M and DLIR-H images had significantly reduced image noise and improved SNR and overall image quality compared to the 1.25 mm ASIR-V40% images, and significantly improved spatial resolution and visualization of fine structures compared to the 5 mm ASIR-V40% images. DLIR-H images also further reduced image noise relative to the 5 mm ASIR-V40% images, making DLIR-H the most effective technique for balancing image noise and spatial resolution in low-dose abdominal CT.

Wang Huan, Li Xinyu, Wang Tianze, Li Jianying, Sun Tianze, Chen Lihong, Cheng Yannan, Jia Xiaoqian, Niu Xinyi, Guo Jianxin

2023-Mar-01

Deep learning, different layer thickness, image reconstruction, radiation dose

General

Novel estimation technique for the carrier-to-noise ratio of wireless medical telemetry using software-defined radio with machine-learning.

In Scientific reports ; h5-index 158.0

In this study, we developed a novel machine-learning model to estimate the carrier-to-noise ratio (CNR) of wireless medical telemetry (WMT) using time-domain waveform data measured by a low-cost software-defined radio. Automatic estimation of CNR can simplify management of the electromagnetic environment of WMT. In a performance evaluation using 5-fold cross-validation on 704 sets of measured data, a gradient boosting regression tree estimated CNR with an R² of 99.5% and a mean absolute error of 0.844 dB. A gradient boosting decision tree classifier predicted whether the CNR exceeded 30 dB with 99.5% accuracy. The proposed method is effective for investigating electromagnetic environments in clinical settings.
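A minimal version of this kind of pipeline can be sketched with scikit-learn: a gradient boosting regression tree evaluated by 5-fold cross-validation for CNR regression, plus a gradient boosting classifier for the 30 dB threshold task. The waveform features and CNR labels below are random placeholders, not the paper's measurements.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_absolute_error, accuracy_score

# Hypothetical features derived from SDR time-domain waveforms and measured CNR labels
X = np.random.randn(704, 32)            # 704 measurements x 32 waveform features (illustrative)
y_cnr = np.random.uniform(0, 60, 704)   # CNR in dB (illustrative)

# Regression: estimate CNR with 5-fold cross-validation
reg = GradientBoostingRegressor()
pred = cross_val_predict(reg, X, y_cnr, cv=5)
print("R^2:", r2_score(y_cnr, pred), "MAE (dB):", mean_absolute_error(y_cnr, pred))

# Classification: does the CNR exceed 30 dB?
y_bin = (y_cnr > 30).astype(int)
clf = GradientBoostingClassifier()
pred_cls = cross_val_predict(clf, X, y_bin, cv=5)
print("Accuracy:", accuracy_score(y_bin, pred_cls))
```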

Kai Ishida

2023-Mar-13

General

The predictive model for COVID-19 pandemic plastic pollution by using deep learning method.

In Scientific reports ; h5-index 158.0

Pandemic plastics (e.g., masks, gloves, aprons, and sanitizer bottles) are a global consequence of the COVID-19 pandemic, and this infectious waste has increased significantly throughout the world. These hazardous wastes contribute substantially to environmental pollution and can indirectly spread COVID-19. Predicting the environmental impact of this waste can support situational management, control procedures, and mitigation of COVID-19 effects. In this regard, the present study provides a deep learning-based predictive model for forecasting the expansion of pandemic plastic in the megacities of Iran. A database of COVID-19 spread and personal protective equipment usage was gathered for the period from February 27, 2020, to October 10, 2021. A deep neural network (DNN) was trained and validated on training (80%) and testing (20%) splits to forecast pandemic plastic pollution. Performance of the DNN-based model was assessed with the confusion matrix and the receiver operating characteristic (ROC) curve, and benchmarked against k-nearest neighbours, decision tree, random forest, support vector machine, Gaussian naïve Bayes, logistic regression, and multilayer perceptron methods. In the comparative modelling results, the DNN-based model predicted more accurately than the other methods, with lower error rates (MSE = 0.024, RMSE = 0.027, MAPE = 0.025). The ROC curve analysis (overall accuracy) also indicated that the DNN model had the highest AUC (0.929) among the compared methods.
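As a hedged illustration of the comparison setup described above (80%/20% split, ROC and confusion-matrix evaluation, classical baselines), the scikit-learn sketch below uses an MLP as a stand-in for the DNN; the features, labels, and model sizes are placeholders, not the study's data or architecture.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

# Placeholder features (e.g., daily cases, PPE usage) and a binary pollution-level label
X = np.random.randn(600, 8)
y = np.random.randint(0, 2, 600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "DNN (MLP stand-in)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000),
    "kNN": KNeighborsClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "AUC:", round(auc, 3))
    print(confusion_matrix(y_te, model.predict(X_te)))
```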

Nanehkaran Yaser A, Licai Zhu, Azarafza Mohammad, Talaei Sona, Jinxia Xu, Chen Junde, Derakhshani Reza

2023-Mar-13

Radiology

FactReranker: Fact-guided Reranker for Faithful Radiology Report Summarization

ArXiv Preprint

Automatic radiology report summarization is a crucial clinical task, whose key challenge is to maintain factual accuracy between produced summaries and ground-truth radiology findings. Existing research adopts reinforcement learning to directly optimize factual consistency metrics such as the CheXBert or RadGraph score. However, their decoding methods, greedy search or beam search, consider no factual consistency when picking the optimal candidate, leading to limited improvement in factual consistency. To address this, we propose FactReranker, a novel second-stage summarization approach and the first that learns to choose the best summary from all candidates based on their estimated factual consistency scores. We extract medical facts from the input report, its gold summary, and the candidate summaries based on the RadGraph schema, and design a fact-guided reranker that efficiently incorporates the extracted facts when selecting the optimal summary. We decompose the fact-guided reranker into factual knowledge graph generation and a factual scorer, which allows the reranker to model the mapping between the medical facts of the input text and those of its gold summary, and thus to select the optimal summary even when the gold summary cannot be observed during inference. We also present a fact-based ranking metric (RadMRR) for measuring the reranker's ability to select factually consistent candidates. Experimental results on two benchmark datasets demonstrate the superiority of our method in generating summaries with higher factual consistency scores compared with existing methods.
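The core second-stage idea, independent of the authors' specific graph-based design, can be sketched as: generate several candidate summaries, score each with a factual-consistency estimator, and return the highest-scoring one. The toy scorer below is a keyword-overlap stand-in for the learned RadGraph-based scorer and is purely illustrative.

```python
from typing import Callable, List

def rerank_candidates(candidates: List[str],
                      fact_score: Callable[[str], float]) -> str:
    """Second-stage selection: return the candidate with the highest
    estimated factual-consistency score instead of the top beam."""
    return max(candidates, key=fact_score)

def toy_fact_score(summary: str,
                   reference_facts=frozenset({"effusion", "cardiomegaly"})) -> float:
    """Toy stand-in: overlap between summary tokens and a reference fact set.
    In FactReranker the facts come from the RadGraph schema and the scorer is learned."""
    tokens = {w.strip(".,").lower() for w in summary.split()}
    return len(tokens & reference_facts) / max(len(reference_facts), 1)

beams = [
    "No acute findings.",
    "Small pleural effusion with stable cardiomegaly.",
]
best = rerank_candidates(beams, toy_fact_score)  # picks the second candidate
```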

Qianqian Xie, Jinpeng Hu, Jiayu Zhou, Yifan Peng, Fei Wang

2023-03-15

Surgery

UMRFormer-net: a three-dimensional U-shaped pancreas segmentation method based on a double-layer bridged transformer network.

In Quantitative imaging in medicine and surgery

BACKGROUND : Methods based on the combination of transformers and convolutional neural networks (CNNs) have achieved impressive results in the field of medical image segmentation. However, most recently proposed combination approaches simply treat transformers as auxiliary modules that help extract long-range information and encode global context into convolutional representations, and there has been little investigation of how to optimally combine self-attention with convolution.

METHODS : We designed a novel transformer block (MRFormer) that combines a multi-head self-attention layer and a residual depthwise convolutional block as the basic unit to deeply integrate both long-range and local spatial information. The MRFormer block was embedded between the encoder and decoder of U-Net at the last two layers. This framework (UMRFormer-Net) was applied to three-dimensional (3D) pancreas segmentation, and its ability to effectively capture the characteristic contextual information of the pancreas and surrounding tissues was investigated.
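A rough PyTorch sketch of such a block, pairing multi-head self-attention over flattened 3D tokens with a residual depthwise 3D convolution branch, is given below; the channel count, normalization placement, and branch ordering are assumptions, not the published MRFormer design.

```python
import torch
import torch.nn as nn

class MRFormerLikeBlock(nn.Module):
    """Illustrative block: multi-head self-attention over flattened 3D tokens
    plus a residual depthwise 3D convolution branch (dimensions are assumptions)."""
    def __init__(self, channels=96, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.dwconv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv3d(channels, channels, kernel_size=1),
        )

    def forward(self, x):                        # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, D*H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        x = x + attn_out.transpose(1, 2).reshape(b, c, d, h, w)  # long-range branch
        return x + self.dwconv(x)                # local residual depthwise branch

feat = torch.randn(1, 96, 8, 8, 8)
out = MRFormerLikeBlock()(feat)                  # same shape as the input
```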

RESULTS : Experimental results show that the proposed UMRFormer-Net achieved accuracy in pancreas segmentation that was comparable or superior to that of existing state-of-the-art 3D methods in both the Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA) dataset and the public Medical Segmentation Decathlon dataset (self-division). UMRFormer-Net statistically significantly outperformed existing transformer-related methods and state-of-the-art 3D methods (P<0.05, P<0.01, or P<0.001), with a higher Dice coefficient (85.54% and 77.36%, respectively) or a lower 95% Hausdorff distance (4.05 and 8.34 mm, respectively).

CONCLUSIONS : UMRFormer-Net obtains better-matched and more accurate boundary and region information, thus improving the accuracy of pancreas segmentation. The code is available at https://github.com/supersunshinefk/UMRFormer-Net.

Fang Kun, He Baochun, Liu Libo, Hu Haoyu, Fang Chihua, Huang Xuguang, Jia Fucang

2023-Mar-01

Pancreas, U-Net, deep learning, image segmentation, transformer