
Radiology

Artificial intelligence evaluating primary thoracic lesions has an overall lower error rate compared to veterinarians or veterinarians in conjunction with the artificial intelligence.

In Veterinary radiology & ultrasound : the official journal of the American College of Veterinary Radiology and the International Veterinary Radiology Association

To date, deep learning technologies have provided powerful decision support systems to radiologists in human medicine. The aims of this retrospective, exploratory study were to develop and describe an artificial intelligence able to screen thoracic radiographs for primary thoracic lesions in feline and canine patients. Three deep learning networks using three different pretraining strategies were created to predict 15 types of primary thoracic lesions (including tracheal collapse, left atrial enlargement, alveolar pattern, pneumothorax, and pulmonary mass). Upon completion of pretraining, the algorithms were provided with over 22 000 thoracic veterinary radiographs for specific training. All radiographs had a report created by a board-certified veterinary radiologist, used as the gold standard. The performances of the three networks were compared to one another. An additional 120 radiographs were then evaluated by three types of observers: the best-performing network, veterinarians, and veterinarians aided by the network. The error rate for each observer was calculated overall and for each of the 15 labels, and the rates were compared using McNemar's test. The overall error rate of the network was significantly lower than that of the veterinarians or the veterinarians aided by the network (10.7% vs 16.8% vs 17.2%, P = .001). The network's error rate was also significantly lower for detecting cardiac enlargement and bronchial pattern. The current network only provides help in detecting various lesion types and does not provide a diagnosis. Based on its overall very good performance, it could be used as an aid to general practitioners while awaiting the radiologist's report.
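The paired comparison described above can be sketched with an exact McNemar's test, which reduces to a binomial test on the discordant pairs (cases where exactly one observer is wrong). This is a minimal illustration with synthetic labels whose error rates are set near the reported 10.7% and 16.8%; it is not the study's data or code.

```python
# Hedged sketch: McNemar's exact test comparing two observers' error rates
# against a gold standard, on 120 synthetic radiograph labels.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 120)                            # gold-standard label per radiograph
ai  = np.where(rng.random(120) < 0.89, truth, 1 - truth)   # ~11% error rate
vet = np.where(rng.random(120) < 0.83, truth, 1 - truth)   # ~17% error rate

ai_ok, vet_ok = ai == truth, vet == truth
b = int(np.sum(ai_ok & ~vet_ok))    # AI right, veterinarian wrong
c = int(np.sum(~ai_ok & vet_ok))    # veterinarian right, AI wrong

# Exact McNemar's test: are the discordant pairs split 50/50?
p = binomtest(b, b + c, 0.5).pvalue
print(f"AI error {np.mean(~ai_ok):.1%}, vet error {np.mean(~vet_ok):.1%}, p = {p:.3f}")
```

Only the discordant counts b and c enter the test; concordant cases (both right or both wrong) carry no information about which observer is better.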

Boissady Emilie, de La Comble Alois, Zhu Xiaojuan, Hespel Adrien-Maxence

2020-Sep-29

computer vision-based decision support system, convolutional neural networks, deep learning, small animal thoracic radiology

Public Health

Machine learning to predict transplant outcomes: helpful or hype? A national cohort study.

In Transplant international : official journal of the European Society for Organ Transplantation

An increasing number of studies claim that machine learning (ML) predicts transplant outcomes more accurately. However, these claims were possibly confounded by other factors, namely, supplying new variables to ML models. To better understand the prospects of ML in transplantation, we compared ML to conventional regression in a "common" analytic task: predicting kidney transplant outcomes using national registry data. We studied 133 431 adult deceased-donor kidney transplant recipients between 2005 and 2017. Transplant centers were randomly divided into a 70% training set (190 centers/97 787 recipients) and a 30% validation set (82 centers/35 644 recipients). Using the training set, we performed regression and ML procedures [gradient boosting (GB) and random forests (RF)] to predict delayed graft function, one-year acute rejection, death-censored graft failure, all-cause graft failure, and death. Their performances were compared on the validation set using C-statistics. In predicting rejection, regression (C = 0.611; 95% CI, 0.601-0.621) actually outperformed GB (C = 0.591; 95% CI, 0.581-0.601) and RF (C = 0.579; 95% CI, 0.569-0.589). For all other outcomes, the C-statistics were nearly identical across methods (delayed graft function, 0.717-0.723; death-censored graft failure, 0.637-0.642; all-cause graft failure, 0.633-0.635; and death, 0.705-0.708). Given its shortcomings in model interpretability and hypothesis testing, ML is advantageous only when it clearly outperforms conventional regression; in the case of transplant outcome prediction, ML seems more hype than helpful.
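The analytic setup, fitting regression and gradient boosting on the same training data and comparing validation C-statistics (equivalently, ROC AUC), can be sketched with scikit-learn. The features, outcome, and models below are illustrative stand-ins on synthetic data, not the registry analysis.

```python
# Hedged sketch: same features, two methods, compared by validation C-statistic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for registry data: an imbalanced binary outcome.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           weights=[0.9], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

c_stats = {}
for name, model in [("regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    # C-statistic for a binary outcome is the area under the ROC curve.
    c_stats[name] = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: C = {c_stats[name]:.3f}")
```

Holding the candidate variables fixed across methods, as the study does, isolates the contribution of the learning algorithm itself from the contribution of new predictors.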

Bae Sunjae, Massie Allan B, Caffo Brian S, Jackson Kyle R, Segev Dorry L

2020-Jul-07

kidney transplantation, machine learning, prediction, regression

General

Artificial intelligence-based framework in evaluating intrafraction motion for liver cancer robotic stereotactic body radiation therapy with fiducial tracking.

In Medical physics ; h5-index 59.0

PURPOSE : This study aimed to design a fully automated framework to evaluate intrafraction motion using orthogonal X-ray images from CyberKnife.

METHODS : The proposed framework includes three modules: (1) automated fiducial marker detection, (2) three-dimensional (3D) position reconstruction and (3) intrafraction motion evaluation. A total of 5927 images from real patients treated with CyberKnife fiducial tracking were collected. The ground truth was established by labeling coarse bounding boxes manually, and binary mask images were then obtained by applying a binary threshold and filter. These images and labels were used to train a detection model using a fully convolutional network (FCN). The output of the detection model can be used to reconstruct the 3D positions of the fiducial markers and then evaluate the intrafraction motion via a rigid transformation. For patient testing, motion amplitudes, rotations and fiducial cohort deformations were calculated using the developed framework for 13 patients with a total of 52 fractions.
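The rigid-transformation step in module (3) can be illustrated with the standard Kabsch/SVD method for aligning two 3D fiducial point sets. The paper does not specify its exact registration algorithm, so this is one common approach, with made-up fiducial coordinates and a motion roughly in the reported range.

```python
# Hedged sketch: least-squares rigid registration of fiducial point sets via SVD.
import numpy as np

def rigid_transform(P, Q):
    """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 over N x 3 point sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Planned fiducial positions vs. positions after a known rotation + translation.
P = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
theta = np.deg2rad(3.0)                       # a 3-degree roll, within the reported range
R_true = np.array([[1, 0, 0],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
t_true = np.array([13.1, 2.0, 5.2])           # translation amplitudes from the abstract
Q = P @ R_true.T + t_true

R, t = rigid_transform(P, Q)
residual = np.linalg.norm(P @ R.T + t - Q, axis=1).max()
print(f"max residual after rigid fit: {residual:.2e} mm")
```

With noise-free points the fit is exact; with real detections, the residuals after the rigid fit are what the paper reports as fiducial cohort deformation.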

RESULTS : The precision and recall of the fiducial marker detection model were 98.6% and 95.6%, respectively, showing high model performance. The mean (±SD) centroid error between the predicted fiducial markers and the ground truth was 0.25±0.47 pixels on the test data. For intrafraction motion evaluation, the mean (±SD) translations in the superior-inferior (SI), left-right (LR) and anterior-posterior (AP) directions were 13.1±2.2 mm, 2.0±0.4 mm and 5.2±1.4 mm, respectively, and the mean (±SD) rotations in the roll, pitch and yaw directions were 2.9±1.5°, 2.5±1.5° and 3.1±2.2°. Seventy-one percent of the fractions had rotations larger than the system limitations. With rotation correction during rigid registration, only 2 of the 52 fractions had residual errors larger than 2 mm in any direction, while without rotation correction, the probability of large residual errors increased to 46.2%.

CONCLUSION : We developed a framework with high performance and accuracy for automatic fiducial marker detection, which can be used to evaluate intrafraction motion using orthogonal X-ray images from CyberKnife. For liver patients, most fractions have fiducial cohort rotations larger than the system limitations; however, the fiducial cohort deformation is small, especially for the scenario with rotation correction.

Liang Zhiwen, Zhou Qichao, Yang Jing, Zhang Lian, Liu Dong, Tu Biao, Zhang Sheng

2020-Sep-29

convolutional neural network (CNN), fiducial marker detection, intrafraction motion, liver, stereotactic body radiotherapy (SBRT)

General

The application of deep learning for the classification of correct and incorrect SNP genotypes from whole-genome DNA sequencing pipelines.

In Journal of applied genetics

A downside of next-generation sequencing technology is its high technical error rate. We built a tool that uses array-based genotype information to classify next-generation sequencing-based SNPs into correct and incorrect calls. The deep learning algorithms were implemented via Keras. Several algorithms were tested: (i) the basic, naïve algorithm; (ii) the naïve algorithm modified by pre-imposing different weights on the incorrect and correct SNP classes when calculating the loss metric; and (iii)-(v) the naïve algorithm modified by random re-sampling (with replacement) of the incorrect SNPs to match 30%/60%/100% of the number of correct SNPs. The training data set was composed of data from three bulls and consisted of 2,227,995 correct (97.94%) and 46,920 incorrect SNPs, while the validation data set consisted of data from one bull with 749,506 correct (98.05%) and 14,908 incorrect SNPs. The results showed that for a rare-event classification problem, like incorrect SNP detection in NGS data, the most parsimonious naïve model and the model with weighting of SNP classes provided the best classification of the validation data set. Both classified 19% of truly incorrect SNPs as incorrect and 99% of truly correct SNPs as correct, yielding an F1 score of 0.21, the highest among the compared algorithms. We conclude that the basic models were less adapted to the specificity of the training data set and thus classified the independent validation data set better than the other tested models.
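Strategy (ii), re-weighting the loss by class, can be illustrated in plain NumPy: a large weight on the rare "incorrect SNP" class inflates the penalty that a lazy majority-class model pays. The paper implemented this inside Keras; the weight values and data below are illustrative assumptions, not the authors' settings.

```python
# Hedged illustration: class-weighted binary cross-entropy for a ~2% rare class.
import numpy as np

def weighted_bce(y_true, y_pred, w_incorrect=49.0, w_correct=1.0, eps=1e-12):
    """Binary cross-entropy with per-class weights (1 = incorrect SNP, 0 = correct)."""
    w = np.where(y_true == 1, w_incorrect, w_correct)
    loss = -(y_true * np.log(y_pred + eps) + (1 - y_true) * np.log(1 - y_pred + eps))
    return np.average(loss, weights=w)

# Imbalanced toy labels mirroring the ~98%/2% split of the training set.
rng = np.random.default_rng(1)
y = (rng.random(10_000) < 0.02).astype(float)
p_hat = np.full_like(y, 0.02)   # a lazy model predicting the base rate everywhere

loss_unweighted = weighted_bce(y, p_hat, 1.0, 1.0)
loss_weighted = weighted_bce(y, p_hat)
print(f"unweighted loss: {loss_unweighted:.3f}")
print(f"weighted loss:   {loss_weighted:.3f}")
```

Under equal weights the base-rate model looks nearly optimal; under class weighting its loss grows sharply, which is the pressure that pushes a trained network to attend to the rare class.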

Kotlarz Krzysztof, Mielczarek Magda, Suchocki Tomasz, Czech Bartosz, Guldbrandtsen Bernt, Szyda Joanna

2020-Sep-29

Classification, Keras, Next-generation sequencing, Python, SNP calling, SNP microarray, TensorFlow

General

Enhancement of needle visualization and localization in ultrasound.

In International journal of computer assisted radiology and surgery

PURPOSE : This scoping review covers needle visualization and localization techniques in ultrasound, where localization-based approaches mostly aim to compute the needle shaft (and tip) location while potentially enhancing its visibility too.

METHODS : A literature review is conducted on the state-of-the-art techniques, which could be divided into five categories: (1) signal and image processing-based techniques to augment the needle, (2) modifications to the needle and insertion to help with needle-transducer alignment and visibility, (3) changes to ultrasound image formation, (4) motion-based analysis and (5) machine learning.

RESULTS : Advantages, limitations and challenges of representative examples in each of the categories are discussed. Evaluation techniques performed in ex vivo, phantom and in vivo studies are discussed and summarized.

CONCLUSION : The greatest limitation of the majority of the literature is the reliance on the original visibility of the needle in the static image. The need for additional or modified apparatus is the main barrier to clinical utility in practice.

SIGNIFICANCE : Ultrasound-guided needle placement is performed in many clinical applications, including biopsies, treatment injections and anesthesia. Despite the wide range and long history of this technique, an ongoing challenge is needle visibility in ultrasound. A robust technique to enhance ultrasonic needle visibility, especially for steeply inserted hand-held needles, while maintaining clinical utility requirements is needed.

Beigi Parmida, Salcudean Septimiu E, Ng Gary C, Rohling Robert

2020-Sep-30

Image processing, Image-guidance, Machine learning, Needle detection, Needle visualization, Ultrasound

General

Outcomes associated with SARS-CoV-2 viral clades in COVID-19.

In medRxiv : the preprint server for health sciences

Background : The COVID-19 epidemic of 2019-20 is due to the novel coronavirus SARS-CoV-2. Following the first case description in December 2019, this virus has infected over 10 million individuals and resulted in at least 500,000 deaths worldwide. The virus is undergoing rapid mutation, with two major clades of sequence variants emerging. This study sought to determine whether SARS-CoV-2 sequence variants are associated with differing outcomes among COVID-19 patients in a single medical system.

Methods : Whole-genome SARS-CoV-2 RNA sequence was obtained from isolates collected from patients registered in the University of Washington Medicine health system between March 1 and April 15, 2020. Demographic and baseline medical data, along with outcomes of hospitalization and death, were collected. Statistical and machine learning models were applied to determine whether viral genetic variants were associated with the outcomes of hospitalization or death.

Findings : Full-length SARS-CoV-2 sequence was obtained from 190 subjects with clinical outcome data; 35 (18.4%) were hospitalized and 14 (7.4%) died from complications of infection. A total of 289 single-nucleotide variants were identified. Clustering methods demonstrated two major viral clades, which could be readily distinguished by 12 polymorphisms in 5 genes. A trend toward higher rates of hospitalization of patients with Clade 2 was observed (p = 0.06). Machine learning models utilizing patient demographics and comorbidities achieved area-under-the-curve (AUC) values of 0.93 for predicting hospitalization. Addition of viral clade or sequence information did not significantly improve the models for outcome prediction.

Conclusion : SARS-CoV-2 shows substantial sequence diversity in a community-based sample. Two dominant clades of virus are in circulation. Among patients sufficiently ill to warrant testing for virus, no significant difference in outcomes of hospitalization or death could be discerned between clades in this sample. Major risk factors for hospitalization and death for either major clade of virus include patient age and comorbid conditions.
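The clade-finding step, clustering isolates on a binary variant matrix, can be sketched with hierarchical clustering in SciPy. The matrix below is synthetic: 12 clade-defining marker columns stand in for the 12 reported polymorphisms, plus low-frequency background variants as noise.

```python
# Hedged sketch: recovering two clades from a synthetic isolate-by-SNV matrix.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
n_isolates, n_snvs = 60, 40
clade = rng.integers(0, 2, n_isolates)             # hidden clade membership

# Background variants present in ~5% of isolates, independent of clade.
X = (rng.random((n_isolates, n_snvs)) < 0.05).astype(int)
X[:, :12] = clade[:, None]                          # clade 2 carries the 12 markers

# Hamming distance between variant profiles, average-linkage tree, cut at 2 clusters.
Z = linkage(pdist(X, metric="hamming"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# Clusters should match the hidden clades up to a label swap.
agree = max(np.mean((labels == 1) == (clade == 0)),
            np.mean((labels == 1) == (clade == 1)))
print(f"cluster/clade agreement: {agree:.0%}")
```

Because the 12 shared markers dominate the pairwise Hamming distances, the two-cluster cut recovers the clade structure despite the background noise.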

Nakamichi Kenji, Shen Jolie Zhu, Lee Cecilia S, Lee Aaron Y, Roberts Emma Adaline, Simonson Paul D, Roychoudhury Pavitra, Andriesen Jessica G, Randhawa April K, Mathias Patrick C, Greninger Alex, Jerome Keith R, Van Gelder Russell N

2020-Sep-25