
Surgery

Augmented Realities, Artificial Intelligence, and Machine Learning: Clinical Implications and How Technology Is Shaping the Future of Medicine.

In Journal of Clinical Medicine

Technology has been integrated into every facet of human life, and whether it is completely advantageous remains unknown, but one thing is certain: we are dependent on technology. Medical advances from the integration of artificial intelligence, machine learning, and augmented realities are widespread and have helped countless patients. Much of the advanced technology utilized by medical providers today has been borrowed and extrapolated from other industries. There remains little meaningful collaboration between providers and engineers, which may be why medicine is only in its infancy of innovation with regard to advanced technologic integration. The purpose of this narrative review is to highlight the different technologies currently being utilized in a variety of medical specialties. Furthermore, we hope that by bringing attention to this shortcoming of the medical community, we may inspire future innovators to seek collaboration outside of the purely medical community for the betterment of all patients seeking care.

Moawad Gaby N, Elkhalil Jad, Klebanoff Jordan S, Rahman Sara, Habib Nassir, Alkatout Ibrahim

2020-Nov-25

artificial intelligence, augmented reality, machine learning, surgery

Radiology

A Deep-Learning Diagnostic Support System for the Detection of COVID-19 Using Chest Radiographs: A Multireader Validation Study.

In Investigative Radiology; h5-index 46.0

OBJECTIVES : The aim of this study was to compare a diagnosis support system for detecting COVID-19 pneumonia on chest radiographs (CXRs) against radiologists with various levels of expertise in chest imaging.

MATERIALS AND METHODS : Five publicly available databases comprising normal CXRs, confirmed COVID-19 pneumonia cases, and other pneumonias were used. After harmonization of the data, the training set included 7966 normal cases, 5451 cases with other pneumonia, and 258 CXRs with COVID-19 pneumonia, whereas in the testing data set each category was represented by 100 cases. Eleven blinded radiologists with various levels of expertise independently read the testing data set. The data were analyzed separately with the newly proposed artificial intelligence-based system and by consultant radiologists and residents, with respect to positive predictive value (PPV), sensitivity, and F-score (the harmonic mean of PPV and sensitivity). The χ² test was used to compare the sensitivity, specificity, accuracy, PPV, and F-scores of the readers and the system.

RESULTS : The proposed system achieved higher overall diagnostic accuracy (94.3%) than the radiologists (61.4% ± 5.3%). The radiologists reached average sensitivities for normal CXR, other types of pneumonia, and COVID-19 pneumonia of 85.0% ± 12.8%, 60.1% ± 12.2%, and 53.2% ± 11.2%, respectively, which were significantly lower than the results achieved by the algorithm (98.0%, 88.0%, and 97.0%; P < 0.00032). The mean PPVs of all 11 radiologists for the healthy, other pneumonia, and COVID-19 pneumonia categories were 82.4%, 59.0%, and 59.0%, respectively, resulting in an F-score of 65.5% ± 12.4%, which was significantly lower than the F-score of the algorithm (94.3% ± 2.0%, P < 0.00001). When other pneumonia and COVID-19 pneumonia cases were pooled, the proposed system reached an accuracy of 95.7% for detecting any pathology, compared with 88.8% for the radiologists. The overall accuracy of consultants did not differ significantly from that of residents (65.0% ± 5.8% vs 67.4% ± 4.2%); however, consultants detected significantly more COVID-19 pneumonia cases (P = 0.008) and fewer healthy cases (P < 0.00001).

CONCLUSIONS : The system showed robust accuracy for COVID-19 pneumonia detection on CXR and surpassed radiologists at various training levels.
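
As a quick illustration of the reader-versus-algorithm comparison described above, the sketch below computes per-class sensitivity, PPV, and F-score (the harmonic mean of PPV and sensitivity) from a 3-class confusion matrix. The counts are hypothetical and do not come from the study.

```python
# Minimal sketch (not the authors' code): per-class sensitivity, positive
# predictive value (PPV), and F-score from a 3-class confusion matrix.
# The counts below are hypothetical, not values from the study.
import numpy as np

# Rows = true class, columns = predicted class,
# order: [normal, other pneumonia, COVID-19 pneumonia]; 100 cases per class.
cm = np.array([
    [98,  1,  1],
    [ 5, 88,  7],
    [ 1,  2, 97],
])

for i, name in enumerate(["normal", "other pneumonia", "COVID-19 pneumonia"]):
    tp = cm[i, i]
    sensitivity = tp / cm[i, :].sum()                        # recall for class i
    ppv = tp / cm[:, i].sum()                                # precision for class i
    f_score = 2 * ppv * sensitivity / (ppv + sensitivity)    # harmonic mean of PPV and sensitivity
    print(f"{name}: sensitivity={sensitivity:.3f}, PPV={ppv:.3f}, F={f_score:.3f}")

# Overall accuracy = correctly classified cases / all cases
print("accuracy:", cm.trace() / cm.sum())
```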

Fontanellaz Matthias, Ebner Lukas, Huber Adrian, Peters Alan, Löbelenz Laura, Hourscht Cynthia, Klaus Jeremias, Munz Jaro, Ruder Thomas, Drakopoulos Dionysios, Sieron Dominik, Primetis Elias, Heverhagen Johannes T, Mougiakakou Stavroula, Christe Andreas

2020-Nov-30

Radiology

Quantifying and leveraging predictive uncertainty for medical image assessment.

In Medical Image Analysis

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast, and more. Most notable is the case of chest radiography, where there is high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of the underlying models to adapt to limited information and a high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams, including computed radiography, ultrasonography, and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that by using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
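
The sample-rejection idea can be sketched as follows: score each test case with an uncertainty measure, discard the most uncertain fraction, and evaluate on the rest. The minimal example below uses predictive entropy as a stand-in for the learned evidential uncertainty described in the paper; the labels and probabilities are hypothetical.

```python
# Minimal sketch (not the authors' implementation): reject the most uncertain
# test samples before evaluation. Uncertainty is approximated here by the
# predictive entropy of a binary classifier's output probability.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                        # hypothetical binary labels
p = np.clip(rng.beta(2, 2, size=1000), 1e-6, 1 - 1e-6)        # hypothetical predicted probabilities

# Binary predictive entropy as a stand-in uncertainty measure.
uncertainty = -(p * np.log(p) + (1 - p) * np.log(1 - p))

reject_rate = 0.25                                            # keep the 75% most confident samples
keep = uncertainty <= np.quantile(uncertainty, 1 - reject_rate)

print("AUC on all samples:     ", roc_auc_score(y_true, p))
print("AUC on retained samples:", roc_auc_score(y_true[keep], p[keep]))
```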

Ghesu Florin C, Georgescu Bogdan, Mansoor Awais, Yoo Youngjin, Gibson Eli, Vishwanath R S, Balachandran Abishek, Balter James M, Cao Yue, Singh Ramandeep, Digumarthy Subba R, Kalra Mannudeep K, Grbic Sasa, Comaniciu Dorin

2020-Oct-14

Belief estimation, Building user trust, Classification uncertainty, Predictive uncertainty quantification, Sample rejection, Theory of evidence

General

Structured layer surface segmentation for retina OCT using fully convolutional regression networks.

In Medical Image Analysis

Optical coherence tomography (OCT) is a noninvasive imaging modality with micrometer resolution which has been widely used for scanning the retina. Retinal layers are important biomarkers for many diseases. Accurate automated algorithms for segmenting smooth continuous layer surfaces with the correct hierarchy (topology) are important for automated retinal thickness and surface shape analysis. State-of-the-art methods typically use a two-step process. First, a trained classifier is used to label each pixel as either background or layer, or as boundary or non-boundary. Second, the desired smooth surfaces with the correct topology are extracted by graph methods (e.g., graph cut). Data-driven methods like deep networks have shown great ability for the pixel classification step, but to date have not been able to extract structured smooth continuous surfaces with topological constraints in the second step. In this paper, we combine these two steps into a unified deep learning framework by directly modeling the distribution of the surface positions. Smooth, continuous, and topologically correct surfaces are obtained in a single feed-forward operation. The proposed method was evaluated on two publicly available data sets of healthy controls and subjects with either multiple sclerosis or diabetic macular edema, and is shown to achieve state-of-the-art performance with sub-pixel accuracy.
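
A minimal sketch of the regression idea, under our own assumptions rather than the authors' implementation: the network outputs a score distribution over image rows for each surface and each column, a soft-argmax converts it to a sub-pixel surface position, and a running maximum enforces the layer ordering (topology).

```python
# Minimal sketch (assumed formulation, not the authors' code): soft-argmax
# over per-column surface-position distributions, plus a simple topology
# constraint (surface k must lie at or below surface k-1). Values are hypothetical.
import numpy as np

def soft_argmax(logits):
    """logits: (num_surfaces, height, width) -> (num_surfaces, width) expected row positions."""
    z = logits - logits.max(axis=1, keepdims=True)             # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)       # softmax over image rows
    rows = np.arange(logits.shape[1])[None, :, None]
    return (p * rows).sum(axis=1)                               # expectation gives sub-pixel positions

def enforce_topology(surfaces):
    """Make each surface lie at or below the previous one along the depth axis."""
    return np.maximum.accumulate(surfaces, axis=0)

logits = np.random.randn(3, 128, 64)        # 3 surfaces, 128 rows, 64 columns (hypothetical output)
positions = enforce_topology(soft_argmax(logits))
print(positions.shape)                       # (3, 64) sub-pixel row positions per column
```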

He Yufan, Carass Aaron, Liu Yihao, Jedynak Bruno M, Solomon Sharon D, Saidha Shiv, Calabresi Peter A, Prince Jerry L

2020-Oct-14

Deep learning segmentation, Retina OCT, Surface segmentation

Pathology

Interpretable deep learning systems for multi-class segmentation and classification of non-melanoma skin cancer.

In Medical Image Analysis

We apply, for the first time, interpretable deep learning methods simultaneously to the most common skin cancers (basal cell carcinoma, squamous cell carcinoma, and intraepidermal carcinoma) in a histological setting. As these three cancer types constitute more than 90% of diagnoses, we demonstrate that the majority of dermatopathology work is amenable to automatic machine analysis. A major feature of this work is characterising the tissue by classifying it into 12 meaningful dermatological classes, including hair follicles and sweat glands, as well as the well-defined stratified layers of the skin. These provide highly interpretable outputs, as the network is trained to represent the problem domain in the same way a pathologist would. While this enables a high accuracy of whole-image classification (93.6-97.9%), by characterising the full context of the tissue we can also work towards performing routine pathologist tasks, for instance orientating sections and automatically assessing and measuring surgical margins. This work seeks to inform ways in which future computer-aided diagnosis systems could be applied usefully in a clinical setting with human-interpretable outcomes.
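
One way such a segmentation-first pipeline can yield an interpretable whole-image label is to aggregate the area fractions of the carcinoma classes. The sketch below illustrates this with hypothetical class names and an arbitrary area threshold, not the authors' actual rule.

```python
# Minimal sketch (hypothetical classes and decision rule, not the authors' pipeline):
# derive a whole-image diagnosis from a 12-class tissue segmentation by the
# area fraction occupied by each carcinoma class.
import numpy as np

CLASSES = [
    "background", "epidermis", "dermis", "hair follicle", "sweat gland",
    "subcutis", "keratin", "inflammation", "vessel",            # hypothetical non-cancer classes
    "BCC", "SCC", "IEC",                                        # carcinoma classes
]
CANCER = {"BCC", "SCC", "IEC"}

def classify_image(seg_map, min_fraction=0.01):
    """seg_map: 2D array of class indices -> image-level diagnosis string."""
    fractions = np.bincount(seg_map.ravel(), minlength=len(CLASSES)) / seg_map.size
    cancer_fracs = {c: fractions[CLASSES.index(c)] for c in CANCER}
    best, frac = max(cancer_fracs.items(), key=lambda kv: kv[1])
    return best if frac >= min_fraction else "benign / no carcinoma detected"

seg = np.random.randint(0, len(CLASSES), size=(512, 512))       # hypothetical segmentation output
print(classify_image(seg))
```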

Thomas Simon M, Lefevre James G, Baxter Glenn, Hamilton Nicholas A

2020-Nov-21

Classification, Computational Pathology, Deep learning, Machine learning, Segmentation, Skin cancer

General

Toward real-time polyp detection using fully CNNs for 2D Gaussian shapes prediction.

In Medical Image Analysis

To decrease the colon polyp miss rate during colonoscopy, a real-time detection system with high accuracy is needed. Recently, there have been many efforts to develop models for real-time polyp detection, but work is still required to develop real-time detection algorithms with reliable results. We use single-shot feed-forward fully convolutional neural networks (F-CNN) to develop an accurate real-time polyp detection system. F-CNNs are usually trained on binary masks for object segmentation. We propose the use of 2D Gaussian masks instead of binary masks to enable these models to detect different types of polyps more effectively and efficiently and to reduce the number of false positives. The experimental results showed that the proposed 2D Gaussian masks are efficient for the detection of flat and small polyps with unclear boundaries between background and polyp parts. The masks provide a better training signal for discriminating polyps from polyp-like false positives. The proposed method achieved state-of-the-art results on two polyp datasets. On the ETIS-LARIB dataset we achieved 86.54% recall, 86.12% precision, and an 86.33% F1-score, and on the CVC-ColonDB dataset we achieved 91% recall, 88.35% precision, and an 89.65% F1-score.
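
The 2D Gaussian target can be illustrated in a few lines: given a polyp bounding box, build a mask that peaks at the box centre and decays with distance, scaled to the box size. The frame size, box coordinates, and sigma scaling below are hypothetical choices, not the paper's settings.

```python
# Minimal sketch (not the authors' code): a 2D Gaussian regression target
# built from a polyp bounding box, as an alternative to a binary mask.
import numpy as np

def gaussian_mask(height, width, box, sigma_scale=0.25):
    """box = (x_min, y_min, x_max, y_max); returns a (height, width) float mask in [0, 1]."""
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0     # polyp centre
    sx = max((x_max - x_min) * sigma_scale, 1.0)              # spread follows box width
    sy = max((y_max - y_min) * sigma_scale, 1.0)              # spread follows box height
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-(((xs - cx) ** 2) / (2 * sx ** 2) + ((ys - cy) ** 2) / (2 * sy ** 2)))

mask = gaussian_mask(288, 384, box=(120, 80, 200, 160))       # hypothetical frame size and box
print(mask.shape, mask.max())                                  # peak value 1.0 at the polyp centre
```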

Qadir Hemin Ali, Shin Younghak, Solhusvik Johannes, Bergsland Jacob, Aabakken Lars, Balasingham Ilangko

2020-Nov-12

Colonoscopy, Convolutional neural networks, Deep learning, Polyp detection, Real-time detection