
General

Automated segmentation of the left ventricle from MR cine imaging based on deep learning architecture.

In Biomedical physics & engineering express

BACKGROUND : Magnetic resonance cine imaging is the accepted standard for cardiac functional assessment. Left ventricular (LV) segmentation plays a key role in volumetric functional quantification of the heart. Conventional manual analysis is time-consuming and observer-dependent. Automated segmentation approaches are needed to improve the clinical workflow of cardiac functional quantification. Recently, deep-learning networks have shown promise for efficient LV segmentation.

PURPOSE : The routinely used V-Net is a convolutional network that segments images by passing features from encoder to decoder. In this study, this method was extended as DenseV-Net by replacing the convolutional blocks with densely connected blocks to alleviate the vanishing-gradient problem, prevent exploding gradients, and strengthen feature propagation. Thirty patients were scanned with a 3 Tesla MR imager. ECG-free, free-breathing, real-time cines were acquired with a balanced steady-state free precession technique. Linear regression and the Dice similarity coefficient (DSC) were used to evaluate the LV segmentation performance of the classic neural networks FCN, UNet, and V-Net and the proposed DenseV-Net, using manual analysis as the reference. Slice-based LV function was compared among the four methods.

RESULTS : Thirty slices from eleven patients were randomly selected (each slice contained 73 images), and the LVs were segmented using manual analysis, UNet, FCN, V-Net, and the proposed DenseV-Net. A strong correlation of the left ventricular areas was observed between the proposed DenseV-Net and manual segmentation (R = 0.92), with a mean DSC of 0.90 ± 0.12. Weaker correlations were found between the routine V-Net, UNet, and FCN methods and manual segmentation (R = 0.77, 0.74, 0.76, respectively), with lower mean DSCs (0.85 ± 0.13, 0.84 ± 0.16, 0.79 ± 0.17, respectively). Additionally, the proposed DenseV-Net method correlated more strongly with the manual analysis in slice-based LV function quantification than the state-of-the-art neural network methods V-Net, UNet, and FCN.
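The DSC reported above is a standard overlap measure between a predicted and a reference segmentation mask. As an illustration only (not the authors' implementation), it can be sketched in NumPy on a toy pair of binary LV masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: two overlapping square "LV" masks on a 6x6 slice.
pred = np.zeros((6, 6), dtype=bool)
truth = np.zeros((6, 6), dtype=bool)
pred[1:4, 1:4] = True   # 9 pixels
truth[2:5, 2:5] = True  # 9 pixels, 4 of which overlap with pred
print(round(dice_coefficient(pred, truth), 4))  # 2*4/(9+9) = 0.4444
```

A DSC of 1.0 means perfect overlap, so the mean DSC of 0.90 reported for DenseV-Net indicates close agreement with manual contours.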

CONCLUSION : The proposed DenseV-Net method outperforms the classic convolutional networks V-Net, UNet, and FCN in automated LV segmentation, providing a novel approach to efficient cardiac functional quantification and the diagnosis of cardiac disease using cine MRI.

Qin Wenjian, Wu Yin, Li Siyue, Chen Yucheng, Yang Yongfeng, Liu Xin, Zheng Hairong, Liang Dong, Hu Zhanli


Radiology

Liver lesion localisation and classification with convolutional neural networks: a comparison between conventional and spectral computed tomography.

In Biomedical physics & engineering express

PURPOSE : To evaluate the benefit of the additional available information present in spectral CT datasets, as compared to conventional CT datasets, when utilizing convolutional neural networks for fully automatic localisation and classification of liver lesions in CT images.

MATERIALS AND METHODS : Conventional and spectral CT images (iodine maps, virtual monochromatic images (VMI)) were obtained from a spectral dual-layer CT system. Patient diagnoses were known from the clinical reports and classified as healthy, cyst, or hypodense metastasis. In order to compare the value of spectral versus conventional datasets when passed as input to machine learning algorithms, we implemented a weakly-supervised convolutional neural network (CNN) that learns liver lesion localisation without pixel-level ground truth annotations. Regions-of-interest are selected automatically based on the localisation results and are used to train a second CNN for liver lesion classification (healthy, cyst, hypodense metastasis). The accuracy of lesion localisation was evaluated using the Euclidean distances between the ground truth centres of mass and the predicted centres of mass. Lesion classification was evaluated by precision, recall, accuracy, and F1-score.
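The localisation error described here is the Euclidean distance between the ground-truth and predicted lesion centres of mass. A minimal NumPy sketch of this evaluation metric (toy masks and unit voxel size are assumptions, not the study's data):

```python
import numpy as np

def center_of_mass(mask: np.ndarray) -> np.ndarray:
    """Mean coordinate of the foreground voxels of a binary lesion mask."""
    coords = np.argwhere(mask)
    return coords.mean(axis=0)

def localisation_error(truth_mask: np.ndarray, pred_mask: np.ndarray,
                       voxel_size_mm: float = 1.0) -> float:
    """Euclidean distance (mm) between ground-truth and predicted centres of mass."""
    delta = center_of_mass(truth_mask) - center_of_mass(pred_mask)
    return float(np.linalg.norm(delta * voxel_size_mm))

# Toy 2D example with 1 mm pixels.
truth = np.zeros((10, 10), dtype=bool)
truth[2:5, 2:5] = True   # centre of mass at (3, 3)
pred = np.zeros((10, 10), dtype=bool)
pred[5:8, 6:9] = True    # centre of mass at (6, 7)
print(localisation_error(truth, pred))  # sqrt(3**2 + 4**2) = 5.0
```

In the study this distance is computed in 3D with the scanner's voxel spacing; averaging it over lesions yields the per-dataset errors reported below.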

RESULTS : Lesion localisation showed the best results for spectral information with distances of 8.22 ± 10.72 mm, 8.78 ± 15.21 mm and 8.29 ± 12.97 mm for iodine maps, 40 keV and 70 keV VMIs, respectively. With conventional data distances of 10.58 ± 17.65 mm were measured. For lesion classification, the 40 keV VMIs achieved the highest overall accuracy of 0.899 compared to 0.854 for conventional data.

CONCLUSION : Enhanced localisation and classification are reported for spectral CT data, demonstrating that combining machine-learning technology with spectral CT information may in the future improve both the clinical workflow and diagnostic accuracy.

Shapira Nadav, Fokuhl Julia, Schultheiß Manuel, Beck Stefanie, Kopp Felix K, Pfeiffer Daniela, Dangelmaier Julia, Pahn Gregor, Sauter Andreas P, Renger Bernhard, Fingerle Alexander A, Rummeny Ernst J, Albarqouni Shadi, Navab Nassir, Noël Peter B


General

Smart Insulin Pens: Advancing Digital Transformation and a Connected Diabetes Care Ecosystem.

In Journal of diabetes science and technology ; h5-index 38.0

With the first commercially available smart insulin pens, the predominant insulin delivery device for millions of people living with diabetes is now coming into the digital age. Smart insulin pens (SIPs) have the potential to reshape a connected diabetes care ecosystem for patients, providers, and health systems. Existing SIPs are enhanced with real-time wireless connectivity, digital dose capture, and integration with personalized dosing decision support. Automatic dose capture can promote effective retrospective review of insulin dose data, particularly when paired with glucose data. Patients, providers, and diabetes care teams will be able to make increasingly data-driven decisions and recommendations, in real time, during scheduled visits, and in a more continuous, asynchronous care model. As SIPs continue to progress along the path of digital transformation, we can expect additional benefits: iteratively improving software, machine learning, and advanced decision support. Both these technological advances and future care delivery models with asynchronous interactions will depend on easy, open, and continuous data exchange among the growing number of diabetes devices. SIPs have a key role in modernizing diabetes care for a large population of people living with diabetes.

Kompala Tejaswi, Neinstein Aaron B


diabetes mellitus, digital health, insulin delivery, telehealth

General

Predicting the COVID-19 infection with fourteen clinical features using machine learning classification algorithms.

In Multimedia tools and applications

While RT-PCR is the silver-bullet test for confirming COVID-19 infection, it is limited by reagent shortages, long turnaround times, and the need for specialized labs. As an alternative, most prior studies have focused on chest CT images and chest X-ray images using deep learning algorithms. However, these two approaches cannot always be used for patient screening due to the radiation doses, high costs, and the low number of available devices. Hence, there is a need for a less expensive and faster diagnostic model to identify the positive and negative cases of COVID-19. Therefore, this study develops six predictive models for COVID-19 diagnosis using six different classifiers (i.e., BayesNet, Logistic, IBk, CR, PART, and J48) based on 14 clinical features. This study retrospectively analysed 114 cases from the Taizhou hospital of Zhejiang Province in China. The results showed that the CR meta-classifier is the most accurate classifier for predicting the positive and negative COVID-19 cases, with an accuracy of 84.21%. The results could help in the early diagnosis of COVID-19, specifically when RT-PCR kits are insufficient for testing, and assist countries, particularly developing ones, that suffer from shortages of RT-PCR tests and specialized laboratories.
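The 84.21% figure is simply the fraction of the 114 cases the best classifier labelled correctly. A small sketch of that accuracy computation (the per-case label split below is hypothetical; the paper reports only the aggregate):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical hold-out labels: 1 = COVID-19 positive, 0 = negative.
# 114 cases total, of which 96 are classified correctly.
y_true = [1] * 60 + [0] * 54
y_pred = [1] * 52 + [0] * 8 + [0] * 44 + [1] * 10
print(round(accuracy(y_true, y_pred) * 100, 2))  # 96/114 -> 84.21
```

The named classifiers (BayesNet, Logistic, IBk, CR, PART, J48) are the standard Weka implementations; accuracy alone can be misleading on imbalanced data, which is why such studies usually also report precision and recall.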

Arpaci Ibrahim, Huang Shigao, Al-Emran Mostafa, Al-Kabi Mohammed N, Peng Minfei


COVID-19, Classification algorithms, Diagnosis, Machine learning, Novel coronavirus, Prediction

Oncology

Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy.

In Biomedical physics & engineering express

Electron density maps must be accurately estimated to achieve valid dose calculation in MR-only radiotherapy. The goal of this study is to assess whether two deep learning models, the conditional generative adversarial network (cGAN) and the cycle-consistent generative adversarial network (cycleGAN), can generate accurate abdominal synthetic CT (sCT) images from 0.35T MR images for MR-only liver radiotherapy. A retrospective study was performed using CT images and 0.35T MR images of 12 patients with liver (n = 8) and non-liver abdominal (n = 4) cancer. CT images were deformably registered to the corresponding MR images to generate deformed CT (dCT) images for treatment planning. Both cGAN and cycleGAN were trained using MR and dCT transverse slices. Four-fold cross-validation testing was conducted to generate sCT images for all patients. The HU prediction accuracy was evaluated by a voxel-wise similarity metric between each dCT and sCT image for all 12 patients. dCT-based and sCT-based dose distributions were compared using gamma and dose-volume histogram (DVH) metric analysis for 8 liver patients. sCTcycleGAN achieved an average mean absolute error (MAE) of 94.1 HU, while sCTcGAN achieved 89.8 HU. In both models, the average gamma passing rates within all volumes of interest were higher than 95% using a 2%, 2 mm criterion, and 99% using a 3%, 3 mm criterion. The average differences in the mean dose and DVH metrics were within ±0.6% for the planning target volume and within ±0.15% for evaluated organs in both models. Results demonstrated that abdominal sCT images generated by both cGAN and cycleGAN achieved accurate dose calculation for 8 liver radiotherapy plans. sCTcGAN images had a smaller average MAE and achieved better dose calculation accuracy than sCTcycleGAN images. More abdominal patients will be enrolled in the future to further evaluate the two models.
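The voxel-wise similarity metric quoted here, MAE in Hounsfield units between the deformed CT and the synthetic CT, is straightforward to express. A minimal NumPy sketch of that evaluation step (the toy volumes are assumptions; the GAN models themselves are not reproduced):

```python
import numpy as np

def mae_hu(dct: np.ndarray, sct: np.ndarray) -> float:
    """Voxel-wise mean absolute error in Hounsfield units between dCT and sCT."""
    return float(np.mean(np.abs(dct.astype(np.float64) - sct.astype(np.float64))))

# Toy volumes: a small "dCT" stack and a noisy "sCT" standing in for a GAN output.
rng = np.random.default_rng(0)
dct = rng.integers(-1000, 1500, size=(4, 64, 64))   # HU range of a CT volume
sct = dct + rng.normal(0.0, 100.0, size=dct.shape)  # synthetic CT with HU noise
print(mae_hu(dct, sct))
```

Lower MAE means the synthetic CT reproduces the planning CT's electron-density surrogate more faithfully, which is why the 89.8 HU (cGAN) versus 94.1 HU (cycleGAN) gap tracks the dose-calculation results.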

Fu Jie, Singhrao Kamal, Cao Minsong, Yu Victoria, Santhanam Anand P, Yang Yingli, Guo Minghao, Raldow Ann C, Ruan Dan, Lewis John H


General

An Efficient Method for Coronavirus Detection Through X-rays using deep Neural Network.

In Current medical imaging

BACKGROUND : Coronavirus (COVID-19) is a group of infectious diseases caused by related viruses called coronaviruses. In humans, the seriousness of infection caused by a coronavirus in the respiratory tract can vary from mild to lethal. Serious illness can develop in older people and in those with underlying medical problems such as diabetes, cardiovascular disease, cancer, and chronic respiratory disease. Due to the growing number of cases, only a limited number of COVID-19 test kits are available in hospitals. Hence, it is important to implement an automated system as an immediate alternative diagnostic option to slow the spread of COVID-19 in the population.

OBJECTIVE : This paper proposes a deep learning model for the detection of coronavirus-infected patients using chest X-ray radiographs.

METHODS : A convolutional neural network model is developed to classify healthy and diseased X-ray radiographs. The proposed model consists of seven convolutional layers with rectified linear unit activations, max-pooling layers, and a softmax output layer, and was trained using the publicly available COVID-19 dataset.

RESULTS AND CONCLUSION : For validation of the proposed model, a publicly available chest X-ray radiograph dataset consisting of COVID-19 and normal patients' images was used. Evaluated with metrics including precision, recall, MSE, RMSE, and accuracy, the proposed CNN model achieved an accuracy of 98.07%.
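The evaluation metrics named above all derive from the binary confusion matrix. A small self-contained sketch of those formulas (the confusion counts below are hypothetical, chosen only to exercise the arithmetic):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, accuracy, and F1-score from binary confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Hypothetical confusion counts for a COVID-19 vs. normal X-ray classifier.
p, r, a, f1 = classification_metrics(tp=45, fp=3, fn=2, tn=50)
print(round(a, 4))  # (45 + 50) / 100 = 0.95
```

For a screening model, recall (sensitivity) is usually the metric of greatest clinical interest, since a false negative sends an infected patient home untested.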

Rao P Srinivasa, Bheemavarapu Pradeep, Kalyampudi P S Latha, Rao T V Madhusudhana


Coronavirus, VGG19, chest X-ray radiographs, convolutional neural network, COVID-19, real-time polymerase chain reaction