
Oncology

Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy.

In Biomedical physics & engineering express

Electron density maps must be accurately estimated to achieve valid dose calculation in MR-only radiotherapy. The goal of this study is to assess whether two deep learning models, the conditional generative adversarial network (cGAN) and the cycle-consistent generative adversarial network (cycleGAN), can generate accurate abdominal synthetic CT (sCT) images from 0.35T MR images for MR-only liver radiotherapy. A retrospective study was performed using CT images and 0.35T MR images of 12 patients with liver (n = 8) and non-liver abdominal (n = 4) cancer. CT images were deformably registered to the corresponding MR images to generate deformed CT (dCT) images for treatment planning. Both cGAN and cycleGAN were trained using MR and dCT transverse slices. Four-fold cross-validation testing was conducted to generate sCT images for all patients. HU prediction accuracy was evaluated with voxel-wise similarity metrics between each dCT and sCT image for all 12 patients. dCT-based and sCT-based dose distributions were compared using gamma and dose-volume histogram (DVH) metric analysis for the 8 liver patients. sCTcycleGAN achieved an average mean absolute error (MAE) of 94.1 HU, while sCTcGAN achieved 89.8 HU. In both models, the average gamma passing rates within all volumes of interest were higher than 95% using a 2%, 2 mm criterion and 99% using a 3%, 3 mm criterion. The average differences in the mean dose and DVH metrics were within ±0.6% for the planning target volume and within ±0.15% for the evaluated organs in both models. The results demonstrate that abdominal sCT images generated by both cGAN and cycleGAN achieved accurate dose calculation for the 8 liver radiotherapy plans. sCTcGAN images had a smaller average MAE and achieved better dose calculation accuracy than sCTcycleGAN images. More abdominal patients will be enrolled in the future to further evaluate the two models.
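The HU evaluation described above reduces to a voxel-wise comparison between the deformed and synthetic CT volumes. A minimal sketch of the mean absolute error (MAE) in Hounsfield units, assuming both volumes are numpy arrays on the same voxel grid (the `body_mask` argument is a hypothetical refinement for restricting the comparison to the patient body, not something specified in the abstract):

```python
import numpy as np

def mae_hu(dct, sct, body_mask=None):
    """Voxel-wise mean absolute error (HU) between a deformed CT and a synthetic CT.

    dct, sct  : 3-D arrays of Hounsfield units on the same voxel grid.
    body_mask : optional boolean array selecting voxels inside the patient body.
    """
    diff = np.abs(dct.astype(np.float64) - sct.astype(np.float64))
    if body_mask is not None:
        diff = diff[body_mask]
    return float(diff.mean())

# Toy check: a synthetic CT that is uniformly 5 HU off has an MAE of exactly 5 HU.
dct = np.zeros((4, 16, 16))
sct = dct + 5.0
print(mae_hu(dct, sct))  # 5.0
```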

Fu Jie, Singhrao Kamal, Cao Minsong, Yu Victoria, Santhanam Anand P, Yang Yingli, Guo Minghao, Raldow Ann C, Ruan Dan, Lewis John H


General

An Efficient Method for Coronavirus Detection Through X-rays Using Deep Neural Network.

In Current medical imaging

BACKGROUND : Coronavirus disease (COVID-19) is an infectious disease caused by a coronavirus. In humans, the severity of a coronavirus infection of the respiratory tract can range from mild to lethal. Serious illness can develop in older people and in those with underlying medical conditions such as diabetes, cardiovascular disease, cancer, and chronic respiratory disease. Due to the growing number of cases, only a limited number of COVID-19 test kits are available in hospitals. Hence, it is important to implement an automated system as an immediate alternative diagnostic option to slow the spread of COVID-19 in the population.

OBJECTIVE : This paper proposes a deep learning model for the detection of coronavirus-infected patients using chest X-ray radiographs.

METHODS : A convolutional neural network model with fully connected layers is developed to classify healthy and diseased X-ray radiographs. The proposed model consists of seven convolutional layers with rectified linear unit activations, max pooling layers, and a softmax output layer, and was trained using the publicly available COVID-19 dataset.
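As a rough sketch of the building blocks named above (not the authors' implementation, whose exact layer sizes are not given in the abstract), the forward pass of one convolution + ReLU + max-pooling stage followed by a softmax output can be written in plain numpy:

```python
import numpy as np

def relu(x):
    """Rectified linear unit activation."""
    return np.maximum(x, 0.0)

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, k=2):
    """Non-overlapping k x k max pooling."""
    h, w = x.shape[0] // k, x.shape[1] // k
    return x[:h * k, :w * k].reshape(h, k, w, k).max(axis=(1, 3))

def softmax(z):
    """Softmax over class scores (used in the last layer)."""
    e = np.exp(z - z.max())
    return e / e.sum()

# One stage on a toy 8x8 "radiograph", then a 2-class softmax head.
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
feat = max_pool(relu(conv2d_valid(img, rng.normal(size=(3, 3)))))
scores = softmax(rng.normal(size=(2,)))  # e.g. [p_normal, p_covid]
print(feat.shape, round(float(scores.sum()), 6))  # (3, 3) 1.0
```

A full model would stack seven such convolutional stages and learn the kernels by backpropagation; this sketch only shows the forward arithmetic.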

RESULTS AND CONCLUSION : The proposed model was validated on the publicly available chest X-ray radiograph dataset consisting of COVID-19 and normal patients' images. Evaluated with metrics such as precision, recall, MSE, RMSE, and accuracy, the proposed CNN model achieves an accuracy of 98.07%.

Rao P Srinivasa, Bheemavarapu Pradeep, Kalyampudi P S Latha, Rao T V Madhusudhana


Coronavirus, VGG19, chest X-ray radiographs, convolutional neural network, COVID-19, real-time polymerase chain reaction

General

Automated Adult Epilepsy Diagnostic Tool Based on Interictal Scalp Electroencephalogram Characteristics: A Six-Center Study.

In International journal of neural systems

The diagnosis of epilepsy often relies on a reading of routine scalp electroencephalograms (EEGs). Since seizures are highly unlikely to be detected in a routine scalp EEG, the primary diagnosis depends heavily on the visual evaluation of interictal epileptiform discharges (IEDs). This process is tedious, expert-centered, and delays the treatment plan. Consequently, the development of an automated, fast, and reliable epileptic EEG diagnostic system is essential. In this study, we propose a system to classify EEG as epileptic or normal based on multiple modalities extracted from the interictal EEG. The ensemble system consists of three components: a convolutional neural network (CNN)-based IED detector, a template matching (TM)-based IED detector, and a spectral feature-based classifier. We evaluate the system on datasets from six centers in the USA, Singapore, and India. The system yields a mean leave-one-institution-out (LOIO) cross-validation (CV) area under the curve (AUC) of 0.826 (balanced accuracy (BAC) of 76.1%) and a leave-one-subject-out (LOSO) CV AUC of 0.812 (BAC of 74.8%). The LOIO results are similar to the interrater agreement (IRA) reported in the literature for epileptic EEG classification. Moreover, as the proposed system can process a routine EEG in a few seconds, it may aid clinicians in diagnosing epilepsy efficiently.
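The leave-one-institution-out protocol and the balanced accuracy (BAC) used above can be sketched as follows; this is a generic illustration, not the authors' code, and the center labels are made up:

```python
import numpy as np

def loio_splits(institutions):
    """Leave-one-institution-out CV: yield (held_out, train_idx, test_idx) per center."""
    institutions = np.asarray(institutions)
    for inst in np.unique(institutions):
        test = np.flatnonzero(institutions == inst)
        train = np.flatnonzero(institutions != inst)
        yield inst, train, test

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for binary labels (1 = epileptic)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return (sens + spec) / 2

# Eight hypothetical recordings from six centers: one fold per held-out center.
centers = ["A", "A", "B", "C", "C", "D", "E", "F"]
folds = list(loio_splits(centers))
print(len(folds))  # 6
```

Holding out an entire center at a time tests generalization across acquisition hardware and populations, which is why LOIO results are compared to interrater agreement rather than to within-center CV.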

Thomas John, Thangavel Prasanth, Peh Wei Yan, Jing Jin, Yuvaraj Rajamanickam, Cash Sydney S, Chaudhari Rima, Karia Sagar, Rathakrishnan Rahul, Saini Vinay, Shah Nilesh, Srivastava Rohit, Tan Yee-Leng, Westover Brandon, Dauwels Justin


EEG classification, Epilepsy, convolutional neural networks, deep learning, interictal epileptiform discharges, multi-center study, spike detection

General

Design of 1-year mortality forecast at hospital admission: A machine learning approach.

In Health informatics journal ; h5-index 25.0

Palliative care refers to a set of programs for patients who suffer from life-limiting illnesses. These programs aim to maximize the quality of life (QoL) during the last stage of life, and inclusion is currently based on a clinical evaluation of the risk of 1-year mortality. The main aim of this work is to develop and validate machine-learning-based models to predict the death of a patient within the next year using data gathered at hospital admission. Five machine-learning techniques were applied using a retrospective dataset. The evaluation was performed with five metrics computed by a resampling strategy: accuracy, area under the ROC curve (AUC ROC), specificity, sensitivity, and the balanced error rate (BER). All models achieved an AUC ROC from 0.857 to 0.91. The Gradient Boosting Classifier was the best model, producing an AUC ROC of 0.91, a sensitivity of 0.858, a specificity of 0.808, and a BER of 0.1687. Information from standard procedures at hospital admission combined with machine learning techniques produced models with competitive discriminative power, reaching the best results reported in the state of the art. These results demonstrate that the models can be used as an accurate data-driven inclusion criterion for palliative care.
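The balanced error rate quoted above is consistent with the reported operating point: BER = 1 - (sensitivity + specificity) / 2, which gives 0.167 from the quoted 0.858 and 0.808 (matching the reported 0.1687 up to rounding of the quoted rates). A small generic sketch, not the authors' code:

```python
def binary_rates(y_true, y_pred):
    """Sensitivity, specificity, and balanced error rate for binary labels (1 = death within 1 year)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ber = 1.0 - (sensitivity + specificity) / 2.0
    return sensitivity, specificity, ber

# BER recomputed from the paper's reported operating point.
print(round(1.0 - (0.858 + 0.808) / 2.0, 3))  # 0.167
```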

Blanes-Selva Vicent, Ruiz-García Vicente, Tortajada Salvador, Benedí José-Miguel, Valdivieso Bernardo, García-Gómez Juan M

hospital admission data, machine learning, mortality forecast, palliative care

General

Light microscopic iris classification using ensemble multi-class support vector machine.

In Microscopy research and technique

Similar to other biometric modalities such as fingerprint, face, and DNA, iris classification can assist law enforcement agencies in identifying humans by matching an individual's iris against iris data sets. However, iris classification is challenging in real environments due to the complex and variable texture of the human iris. Accordingly, this article presents an improved Oriented FAST and Rotated BRIEF (ORB) detector combined with a Bag-of-Words model to extract distinct and robust features from the iris image, followed by an ensemble multi-class SVM to classify the iris. The proposed methodology consists of four main steps: first, iris image normalization and enhancement; second, localizing the iris region; third, iris feature extraction; and finally, iris classification using an ensemble multi-class support vector machine. For preprocessing of the input images, histogram equalization, a Gaussian mask, and median filters are applied. The proposed technique is tested on two benchmark databases, CASIA-v1 and the iris image database, and achieved higher accuracy than other existing techniques reported in the state of the art.
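The feature-extraction step above quantizes local descriptors (ORB in the paper) against a learned visual vocabulary. A minimal Bag-of-Words encoding, assuming the descriptors and a k-means codebook are already given as numpy arrays (ORB detection itself would come from a library such as OpenCV and is omitted here):

```python
import numpy as np

def bow_encode(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalized histogram over the vocabulary."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / hist.sum()

# Toy example: 5 local descriptors encoded against a 3-word vocabulary.
rng = np.random.default_rng(1)
desc = rng.normal(size=(5, 32))
codebook = rng.normal(size=(3, 32))
h = bow_encode(desc, codebook)
print(h.shape, round(float(h.sum()), 6))  # (3,) 1.0
```

The fixed-length histogram `h` is what a multi-class SVM would consume, regardless of how many keypoints each iris image produces.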

Rehman Amjad


ORB, SIFT, SURF, bag-of-words (BoW), ensemble multi-class-SVM (EMC-SVM), health risks, health system

Surgery

Classification of Neurofibromatosis-related Dystrophic or Nondystrophic Scoliosis Based on Image Features Using Bilateral CNN.

In Medical physics ; h5-index 59.0

PURPOSE : We developed a system that can automatically classify cases of scoliosis secondary to neurofibromatosis type 1 (NF1-S) using deep learning algorithms (DLAs), improving the accuracy and effectiveness of classification and thereby assisting surgeons with auxiliary diagnosis.

METHODS : Comprehensive experiments in NF1 classification were performed on a dataset consisting of 211 NF1-S patients (131 dystrophic and 80 nondystrophic). Additionally, 100 congenital scoliosis (CS) patients, 100 adolescent idiopathic scoliosis (AIS) patients, and 114 normal controls were used for experiments in primary classification. To identify NF1-S with nondystrophic or dystrophic curves, we devised a novel network (Bilateral CNN) that uses a bilinear-like operation to discover shared features between whole-spine anteroposterior (AP) and lateral X-ray images. The performance of the Bilateral CNN was compared with spine surgeons, conventional DLAs (VGG-16, ResNet-50, BCNN), recently proposed DLAs (ShuffleNet, MobileNet, EfficientNet), and Two-path BCNN, an extension of BCNN that takes AP and lateral X-ray images as inputs.
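The "bilinear-like operation" is not fully specified in the abstract; in standard bilinear CNNs it amounts to an outer product of the two views' feature vectors followed by signed square-root and L2 normalization. A generic sketch under that assumption (the fusion details of the actual Bilateral CNN may differ):

```python
import numpy as np

def bilinear_fuse(feat_ap, feat_lat):
    """Fuse AP-view and lateral-view feature vectors via an outer product,
    then apply signed-sqrt and L2 normalization as in standard bilinear CNNs."""
    b = np.outer(feat_ap, feat_lat).ravel()
    b = np.sign(b) * np.sqrt(np.abs(b))
    norm = np.linalg.norm(b)
    return b / norm if norm > 0 else b

# Two hypothetical 4-D view embeddings fuse into a 16-D descriptor of unit norm.
fused = bilinear_fuse(np.array([1.0, 0.5, -0.2, 0.0]),
                      np.array([0.3, -1.0, 0.7, 0.1]))
print(fused.shape, round(float(np.linalg.norm(fused)), 6))  # (16,) 1.0
```

The outer product lets every AP feature interact multiplicatively with every lateral feature, which is one way to capture cross-view cues such as a curve pattern visible in one projection and rib morphology in the other.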

RESULTS : In NF1 classification, our proposed Bilateral CNN achieved 80.36% accuracy with five-fold cross-validation, outperforming the other seven DLAs, which ranged from 61.90% to 76.19%. It also outperformed the spine surgeons (average accuracy of 77.5% for senior surgeons and 65.0% for junior surgeons). Our method is highly generalizable due to the proposed methodology and data augmentation. Furthermore, the heatmaps extracted by the Bilateral CNN showed that the curve pattern and the morphology of the ribs and vertebrae contributed most to the classification results. In primary classification, our proposed method also outperformed all the other methods, achieving an accuracy of 87.92% with five-fold cross-validation versus accuracies ranging from 52.58% to 83.35%.

CONCLUSIONS : The proposed Bilateral CNN can automatically capture representative features for classifying NF1-S from AP and lateral X-ray images, leading to relatively good performance. Moreover, the proposed method can identify other spine deformities for auxiliary diagnosis.

He Zhong, Wang Yimu, Qin Xiaodong, Yin Rui, Qiu Yong, He Kelei, Zhu Zezhang


Bilateral CNN, Deep learning algorithms, Neurofibromatosis type 1, Scoliosis classification