
Radiology

Development of a volumetric pancreas segmentation CT dataset for AI applications through trained technologists: a study during the COVID-19 containment phase.

In Abdominal radiology (New York)

PURPOSE : To evaluate the performance of trained technologists vis-à-vis radiologists for volumetric pancreas segmentation and to assess the impact of supplementary training on their performance.

METHODS : In this IRB-approved study, 22 technologists were trained in pancreas segmentation on portal venous phase CT through radiologist-led interactive videoconferencing sessions based on an image-rich curriculum. The technologists segmented the pancreas in 188 CTs using freehand tools on custom image-viewing software. Subsequent supplementary training consisted of multimedia videos focused on common errors, after which a second batch of 159 segmentations was completed. Two radiologists reviewed all cases and corrected inaccurate segmentations. The technologists' segmentations were compared against the radiologists' segmentations using the Dice-Sorenson coefficient (DSC), Jaccard coefficient (JC), and Bland-Altman analysis.
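The DSC and JC named in the methods are both simple overlap ratios between two binary masks. A minimal sketch, using toy 1D arrays standing in for 3D CT segmentation masks (variable names hypothetical):

```python
import numpy as np

def dice(a, b):
    """Dice-Sorenson coefficient: twice the overlap over the summed mask sizes."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard coefficient: intersection over union."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy 1D "masks" standing in for voxel-wise pancreas segmentations
tech = np.array([0, 1, 1, 1, 0, 0], dtype=bool)  # technologist's mask
rad  = np.array([0, 1, 1, 0, 1, 0], dtype=bool)  # radiologist's reference
print(round(dice(tech, rad), 3))     # → 0.667
print(round(jaccard(tech, rad), 3))  # → 0.5
```

The two metrics are monotonically related per case (JC = DSC / (2 − DSC)), which is why the batches' DSC and JC values track each other in the results below.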

RESULTS : Corrections were made in 71 (38%) cases from the first batch [26 (37%) oversegmentations and 45 (63%) undersegmentations] and in 77 (48%) cases from the second batch [12 (16%) oversegmentations and 65 (84%) undersegmentations]. DSC, JC, false positive (FP), and false negative (FN) [mean (SD)] in the first versus second batches were 0.63 (0.15) versus 0.63 (0.16), 0.48 (0.15) versus 0.48 (0.15), 0.29 (0.21) versus 0.21 (0.10), and 0.36 (0.20) versus 0.43 (0.19), respectively. These differences were not significant (p > 0.05). However, the range of the mean pancreatic volume difference narrowed in the second batch [-2.74 cc (min -92.96 cc, max 87.47 cc) versus -23.57 cc (min -77.32 cc, max 30.19 cc)].

CONCLUSION : Trained technologists could perform volumetric pancreas segmentation with reasonable accuracy despite its complexity. Supplementary training further reduced the range of volume differences in the segmentations. Investment in training technologists could augment and accelerate the development of body imaging datasets for AI applications.

Suman Garima, Panda Ananya, Korfiatis Panagiotis, Edwards Marie E, Garg Sushil, Blezek Daniel J, Chari Suresh T, Goenka Ajit H

2020-Sep-16

Artificial intelligence, COVID-19, Data curation, Deep learning

Radiology

SCREENet: A Multi-view Deep Convolutional Neural Network for Classification of High-resolution Synthetic Mammographic Screening Scans

ArXiv Preprint

Purpose: To develop and evaluate the accuracy of a multi-view deep learning approach to the analysis of high-resolution synthetic mammograms from digital breast tomosynthesis screening cases, and to assess the effect of image resolution and training set size on accuracy.

Materials and Methods: In a retrospective study, 21,264 screening digital breast tomosynthesis (DBT) exams obtained at our institution were collected along with the associated radiology reports. The 2D synthetic mammographic images from these exams, with varying resolutions and dataset sizes, were used to train a multi-view deep convolutional neural network (MV-CNN) to classify screening images into BI-RADS classes (BI-RADS 0, 1 and 2) before evaluation on a held-out set of exams.

Results: The area under the receiver operating characteristic curve (AUC) for the BI-RADS 0 versus non-BI-RADS 0 classes was 0.912 for the MV-CNN trained on the full dataset. The model obtained accuracy of 84.8%, recall of 95.9% and precision of 95.0%. This AUC value decreased when the same model was trained with 50% and 25% of the images (AUC = 0.877, P=0.010 and 0.834, P=0.009, respectively). The performance also dropped when the same model was trained using images that were under-sampled by 1/2 and 1/4 (AUC = 0.870, P=0.011 and 0.813, P=0.009, respectively).

Conclusion: This deep learning model classified high-resolution synthetic mammography scans into normal versus needing further workup using tens of thousands of high-resolution images. Smaller training datasets and lower-resolution images both caused a significant decrease in performance.
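The reported AUCs measure how well the model's scores separate BI-RADS 0 from non-BI-RADS 0 exams; the empirical AUC is simply the probability that a randomly chosen positive outscores a randomly chosen negative (ties counting half). A minimal sketch with hypothetical scores, not the paper's model outputs:

```python
from itertools import product

def auc(scores_pos, scores_neg):
    """Empirical AUC via the Mann-Whitney formulation: fraction of
    positive/negative pairs where the positive scores higher."""
    wins = 0.0
    for p, n in product(scores_pos, scores_neg):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for BI-RADS 0 vs non-BI-RADS 0 exams
pos = [0.9, 0.8, 0.75, 0.3]
neg = [0.6, 0.4, 0.2, 0.1]
print(auc(pos, neg))  # → 0.875
```

An AUC of 0.912 therefore means a randomly chosen BI-RADS 0 exam outscored a randomly chosen non-BI-RADS 0 exam about 91% of the time.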

Saeed Seyyedi, Margaret J. Wong, Debra M. Ikeda, Curtis P. Langlotz

2020-09-18

Surgery

Improvement of nerve imaging speed with coherent anti-Stokes Raman scattering rigid endoscope using deep-learning noise reduction.

In Scientific reports ; h5-index 158.0

A coherent anti-Stokes Raman scattering (CARS) rigid endoscope was developed to visualize peripheral nerves without labeling for nerve-sparing endoscopic surgery. The developed CARS endoscope, however, suffered from a low imaging rate. In this study, we demonstrate that noise reduction with deep learning boosts the nerve imaging speed of CARS endoscopy. We employ fine-tuning and ensemble learning and compare deep learning models with three different architectures. In the fine-tuning strategy, deep learning models are pre-trained on CARS microscopy nerve images and retrained on CARS endoscopy nerve images to compensate for the small dataset of CARS endoscopy images. We propose the equivalent imaging rate (EIR) as a new evaluation metric for quantitatively and directly assessing the imaging-rate improvement achieved by deep learning models. The highest EIR among the deep learning models was 7.0 images/min, five times higher than the 1.4 images/min of the raw endoscopic images. We believe that this improvement in nerve imaging speed will open up the possibility of reducing postoperative dysfunction through intraoperative nerve identification.
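The abstract does not give the EIR formula, but the trade-off it quantifies rests on a standard fact: averaging N independent noisy frames reduces the noise standard deviation roughly as 1/sqrt(N), so lower noise ordinarily costs acquisition time, and denoising sidesteps that cost. A sketch of the averaging trade-off on synthetic data (not CARS images; all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = np.ones(10_000)  # flat synthetic "image"

def noisy_frame(sigma=1.0):
    """One acquisition: signal plus additive Gaussian noise."""
    return true_signal + rng.normal(0.0, sigma, true_signal.shape)

# Residual noise std after averaging N frames falls roughly as 1/sqrt(N):
# halving the noise costs a 4x longer acquisition.
for n in (1, 4, 16):
    avg = np.mean([noisy_frame() for _ in range(n)], axis=0)
    print(n, round((avg - true_signal).std(), 2))
```

A denoiser that matches the quality of an N-frame average from a single frame is, in this sense, "equivalent" to an N-fold higher imaging rate, which is the intuition behind an equivalent-imaging-rate metric.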

Yamato Naoki, Niioka Hirohiko, Miyake Jun, Hashimoto Mamoru

2020-Sep-16

General

Improved haplotype inference by exploiting long-range linking and allelic imbalance in RNA-seq datasets.

In Nature communications ; h5-index 260.0

Haplotype reconstruction of distant genetic variants remains an unsolved problem because common sequencing data have short read lengths. Here, we introduce HapTree-X, a probabilistic framework that exploits latent long-range information to reconstruct unspecified haplotypes in diploid and polyploid organisms. It builds on the observation that differential allele-specific expression can link genetic variants on the same physical chromosome, enabling the use even of reads that cover only individual variants. We demonstrate HapTree-X's feasibility on in-house sequenced Genome in a Bottle RNA-seq data and on various whole-exome, whole-genome, and 10X Genomics datasets. HapTree-X produces more complete phases (up to 25%), even in clinically important genes, and phases more variants than other methods while maintaining similar or higher accuracy and running up to 10× faster than other tools. The advantage of HapTree-X's ability to use multiple lines of evidence, and to phase polyploid genomes in a single integrative framework, grows substantially as the amount of diverse data increases.
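The key observation, that allelic imbalance links variants on the same chromosome, can be illustrated with a toy: at each heterozygous site in a gene with skewed expression, the over-expressed allele most likely lies on the more highly expressed haplotype, so single-variant reads still carry phase information. A deliberately simplified sketch (counts and variant names hypothetical; this is the intuition only, not the HapTree-X algorithm):

```python
# ref/alt read counts at two heterozygous variants in one imbalanced gene
counts = {"var1": {"ref": 30, "alt": 10},
          "var2": {"ref": 8, "alt": 25}}

def phase_by_imbalance(counts):
    """Assign the more-expressed allele at each site to haplotype A
    (the higher-expressed copy) and the other allele to haplotype B."""
    hap_a, hap_b = [], []
    for var, c in counts.items():
        hi, lo = ("ref", "alt") if c["ref"] >= c["alt"] else ("alt", "ref")
        hap_a.append((var, hi))
        hap_b.append((var, lo))
    return hap_a, hap_b

a, b = phase_by_imbalance(counts)
print(a)  # → [('var1', 'ref'), ('var2', 'alt')]
print(b)  # → [('var1', 'alt'), ('var2', 'ref')]
```

The real framework treats this probabilistically and combines it with read-based evidence, but the toy shows why coverage of individual variants, useless to purely read-linked phasing, still contributes phase signal.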

Berger Emily, Yorukoglu Deniz, Zhang Lillian, Nyquist Sarah K, Shalek Alex K, Kellis Manolis, Numanagić Ibrahim, Berger Bonnie

2020-09-16

General

Publisher Correction: Improving the accuracy of medical diagnosis with causal machine learning.

In Nature communications ; h5-index 260.0

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

Richens Jonathan G, Lee Ciarán M, Johri Saurabh

2020-09-16

General

A system for designing removable partial dentures using artificial intelligence. Part 1. Classification of partially edentulous arches using a convolutional neural network.

In Journal of prosthodontic research

PURPOSE : The purpose of this study was to develop a method for classifying dental arches using a convolutional neural network (CNN) as the first step in a system for designing removable partial dentures.

METHODS : A total of 1184 images of dental arches (maxilla: 748 images; mandible: 436 images) were classified into four arch types: edentulous, intact dentition, arches with posterior tooth loss, and arches with bounded edentulous space. A CNN method to classify the images was developed using the TensorFlow and Keras deep learning libraries. After the learning procedure was completed, the diagnostic accuracy, precision, recall, F-measure, and area under the curve (AUC) were calculated for each jaw to assess the diagnostic performance of learning. The classification was also predicted for other images, and the percentages of correct predictions (PCPs) were calculated. The PCPs were compared with the Kruskal-Wallis test (p = 0.05).

RESULTS : The diagnostic accuracy was 99.5% for the maxilla and 99.7% for the mandible. The precision, recall, and F-measure for both jaws were 0.25, 1.0 and 0.4, respectively. The AUC was 0.99 for the maxilla and 0.98 for the mandible. The PCPs of the classifications were more than 95% for all types of dental arch. There were no significant differences among the four types of dental arches in the mandible.
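The reported precision of 0.25 with recall of 1.0 fixes the F-measure at 0.4, since F is the harmonic mean of the two. A quick check with hypothetical confusion-matrix counts chosen to reproduce those values:

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall, and F-measure from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Hypothetical counts: every true case found (recall 1.0), but three
# false alarms per true positive (precision 0.25)
p, r, f = precision_recall_f(tp=40, fp=120, fn=0)
print(p, r, round(f, 2))  # → 0.25 1.0 0.4
```

Because the harmonic mean is dominated by the smaller term, a perfect recall cannot compensate for low precision, which is why the F-measure sits much closer to 0.25 than to 1.0.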

CONCLUSIONS : The results of this study suggest that dental arches can be classified and predicted using a CNN. Future development of systems for designing removable partial dentures will be made possible using this and other AI technologies.

Takahashi Toshihito, Nozaki Kazunori, Gonda Tomoya, Ikebe Kazunori

2020-Sep-09

Artificial intelligence, Convolutional neural network, Machine learning, Removable partial denture