
Radiology

Validation of cervical vertebral maturation stages: Artificial intelligence vs human observer visual analysis.

In American Journal of Orthodontics and Dentofacial Orthopedics

INTRODUCTION: This study aimed to develop an artificial neural network (ANN) model for cervical vertebral maturation (CVM) analysis and to validate the model's output against the results of human observers.

METHODS: A total of 647 lateral cephalograms were selected from patients aged 10-30 years (mean ± standard deviation, 15.36 ± 4.13 years). New software with a decision support system was developed for manual labeling of the dataset. A total of 26 points were marked on each radiograph, and the CVM stage was saved on the basis of the observer's final decision. Fifty-four image features were saved in text format. A new subset of 72 radiographs was created according to the classification results, and these 72 radiographs were visually evaluated by 4 observers. Weighted kappa (wκ) and Cohen's kappa (cκ) coefficients and percentage agreement were calculated to evaluate the compatibility of the results.

RESULTS: Intraobserver agreement ranges were as follows: wκ = 0.92-0.98, cκ = 0.65-0.85, and 70.8%-87.5%. Interobserver agreement ranges were as follows: wκ = 0.76-0.92, cκ = 0.4-0.65, and 50%-72.2%. Agreement between the ANN model and observers 1, 2, 3, and 4 was as follows: wκ = 0.85 (cκ = 0.52, 59.7%), wκ = 0.8 (cκ = 0.4, 50%), wκ = 0.87 (cκ = 0.55, 62.5%), and wκ = 0.91 (cκ = 0.53, 61.1%), respectively (P < 0.001). On average, 58.3% agreement was observed between the ANN model and the human observers.
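The gap between the wκ and cκ values in these results comes from weighting: quadratically weighted kappa discounts near-miss stage calls, while plain Cohen's kappa counts only exact matches. A minimal sketch of how both coefficients and percentage agreement are computed, using made-up ratings for 8 radiographs rather than the study's data:

```python
def kappa(r1, r2, quadratic=False):
    """Cohen's kappa between two raters; quadratically weighted if requested."""
    cats = sorted(set(r1) | set(r2))
    k, n = len(cats), len(r1)
    idx = {c: i for i, c in enumerate(cats)}

    # Disagreement weight: 0/1 for plain kappa, normalized squared
    # stage distance for the quadratically weighted variant.
    def w(i, j):
        return (i - j) ** 2 / (k - 1) ** 2 if quadratic else float(i != j)

    # Observed weighted disagreement.
    obs = sum(w(idx[a], idx[b]) for a, b in zip(r1, r2)) / n
    # Expected weighted disagreement under independent rater marginals.
    p1 = [sum(a == c for a in r1) / n for c in cats]
    p2 = [sum(b == c for b in r2) / n for c in cats]
    exp = sum(p1[i] * p2[j] * w(i, j) for i in range(k) for j in range(k))
    return 1.0 - obs / exp

# Hypothetical CVM stages (1-6) for 8 radiographs: ANN vs one observer.
ann = [1, 2, 3, 3, 4, 5, 6, 6]
human = [1, 2, 2, 3, 4, 4, 6, 5]

ck = kappa(ann, human)                   # plain Cohen's kappa
wk = kappa(ann, human, quadratic=True)   # weighted kappa
pct = sum(a == b for a, b in zip(ann, human)) / len(ann)
```

Because every disagreement in this toy example is only one stage off, wκ (≈0.93) comes out far higher than cκ (≈0.56) at 62.5% raw agreement, mirroring the wκ-vs-cκ pattern in the results above.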

CONCLUSIONS: This study demonstrated that the developed ANN model performed comparably to, if not better than, human observers in CVM analysis. With further algorithm development, automatic classification of CVM with artificial intelligence may in the future replace conventional evaluation methods.

Amasya Hakan, Cesur Emre, Yıldırım Derya, Orhan Kaan

2020-Dec

General

CPAS: the UK's national machine learning-based hospital capacity planning system for COVID-19.

In Machine Learning

The coronavirus disease 2019 (COVID-19) global pandemic poses the threat of overwhelming healthcare systems with unprecedented demands for intensive care resources. Managing these demands cannot be done effectively without a nationwide collective effort that relies on data to forecast hospital demand at the national, regional, hospital, and individual levels. To this end, we developed the COVID-19 Capacity Planning and Analysis System (CPAS), a machine learning-based system for hospital resource planning that we have successfully deployed at individual hospitals and across regions in the UK in coordination with NHS Digital. In this paper, we discuss the main challenges of deploying a machine learning-based decision support system at national scale, and explain how CPAS addresses them by (1) defining the appropriate learning problem, (2) combining bottom-up and top-down analytical approaches, (3) using state-of-the-art machine learning algorithms, (4) integrating heterogeneous data sources, and (5) presenting the results through an interactive and transparent interface. CPAS is one of the first machine learning-based systems to be deployed in hospitals on a national scale to address the COVID-19 pandemic; we conclude the paper with a summary of the lessons learned from this experience.
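Of the techniques the keywords list, Gaussian processes are the natural fit for the hospital-level demand-forecasting step: they give a predictive mean plus an uncertainty band for capacity planning. A minimal numpy-only sketch of GP regression on synthetic admission counts (the data, kernel, and hyperparameters here are illustrative, not CPAS's actual model or NHS data):

```python
import numpy as np

def rbf(a, b, length=3.0, var=25.0):
    """Squared-exponential kernel on 1-D inputs (days)."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_forecast(days, counts, query, noise=1.0):
    """GP posterior mean and predictive std. dev. for daily admission counts."""
    mu0 = counts.mean()                              # centre data on its mean
    K = rbf(days, days) + noise * np.eye(len(days))  # kernel + observation noise
    Ks = rbf(query, days)
    alpha = np.linalg.solve(K, counts - mu0)
    mean = mu0 + Ks @ alpha
    cov = rbf(query, query) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov) + noise, 0.0, None))

# Two weeks of synthetic ICU admissions at one hypothetical hospital.
days = np.arange(14.0)
counts = 10.0 + 0.8 * days + np.sin(days)
future = np.arange(14.0, 21.0)
mean, sd = gp_forecast(days, counts, future)
```

The predictive standard deviation widens as the forecast horizon grows, which is exactly what makes GP forecasts useful for planning against worst-case capacity rather than a single point estimate.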

Qian Zhaozhi, Alaa Ahmed M, van der Schaar Mihaela

2020-Nov-24

Automated machine learning, COVID-19, Compartmental models, Gaussian processes, Healthcare, Resource planning

Pathology

Semi-Supervised Noisy Student Pre-training on EfficientNet Architectures for Plant Pathology Classification

ArXiv Preprint

In recent years, deep learning has vastly improved the identification and diagnosis of plant diseases. In this report, we investigate the problem of pathology classification using images of a single leaf. We explore the use of standard benchmark models such as VGG16, ResNet101, and DenseNet-161 to achieve a score of 0.945 on the task. Furthermore, we explore the use of the newer EfficientNet model, improving the accuracy to 0.962. Finally, we apply the state-of-the-art idea of semi-supervised Noisy Student training to the EfficientNet architecture, obtaining significant improvements in both accuracy and convergence rate. The final ensembled Noisy Student model performs very well on the task, achieving a test score of 0.982.
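The Noisy Student recipe itself is simple: a teacher trained on the labeled set pseudo-labels a larger unlabeled pool, then a student is trained on the union under noise. A toy sketch of that loop, with synthetic 2-D features and a nearest-centroid classifier standing in for EfficientNet (all sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Synthetic stand-in for leaf-image features: two Gaussian classes."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2)) + 3.0 * y[:, None]
    return X, y

def fit_centroids(X, y):
    """Nearest-centroid 'model': one prototype per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

X_lab, y_lab = make_data(50)    # small labeled set
X_unl, _ = make_data(500)       # large unlabeled pool
X_test, y_test = make_data(200)

# 1. Train a teacher on the labeled data only.
teacher = fit_centroids(X_lab, y_lab)

# 2. The teacher pseudo-labels the unlabeled pool.
pseudo = predict(teacher, X_unl)

# 3. The student trains on labeled + pseudo-labeled data with input noise
#    (the "noisy" part; the actual method also uses dropout, data
#    augmentation, and a larger student network, then iterates).
X_all = np.vstack([X_lab, X_unl + rng.normal(0.0, 0.3, X_unl.shape)])
y_all = np.concatenate([y_lab, pseudo])
student = fit_centroids(X_all, y_all)

acc_teacher = (predict(teacher, X_test) == y_test).mean()
acc_student = (predict(student, X_test) == y_test).mean()
```

In the full method the student then becomes the next teacher, and the cycle repeats, which is where the reported gains in convergence rate come from.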

Sedrick Scott Keh

2020-12-01

General

Detecting functional field units from satellite images in smallholder farming systems using a deep learning based computer vision approach: A case study from Bangladesh.

In Remote Sensing Applications: Society and Environment

Improving the agricultural productivity of smallholder farms (typically less than 2 ha) is key to food security for millions of people in developing nations. Knowledge of the size and location of crop fields forms the basis for crop statistics, yield forecasting, resource allocation, economic planning, and monitoring the effectiveness of development interventions and investments. We evaluated three fully convolutional neural network (F-CNN) models (U-Net, SegNet, and DenseNet) for detecting functional field boundaries from very high resolution (VHR) WorldView-3 satellite imagery of Southern Bangladesh. The precision of the three F-CNN models was up to 0.8; among them, the highest precision, recall, and F1 score were obtained with the DenseNet model, which also provided the highest area under the receiver operating characteristic (ROC) curve (AUC) when tested on independent images. We also found that 4-channel images (blue, green, red, and near-infrared) provided small gains in performance compared with 3-channel images (blue, green, and red). Our results indicate the potential of CNN-based computer vision techniques to detect the boundaries of small, irregularly shaped agricultural fields.
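The reported precision, recall, and F1 are pixel-wise scores over the predicted boundary masks. A small sketch of how those mask-level scores are computed, using toy 4×4 masks rather than WorldView-3 data:

```python
import numpy as np

def seg_scores(pred, truth):
    """Pixel-wise precision, recall, and F1 for binary boundary masks."""
    tp = np.sum((pred == 1) & (truth == 1))  # boundary pixels found
    fp = np.sum((pred == 1) & (truth == 0))  # false boundary pixels
    fn = np.sum((pred == 0) & (truth == 1))  # boundary pixels missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy masks: 1 = field-boundary pixel, 0 = background.
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 1]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 1, 1, 1]])
p, r, f = seg_scores(pred, truth)
```

Here the prediction misses one true boundary pixel and adds one spurious one, so precision, recall, and F1 all equal 5/6; on real imagery these three numbers typically diverge, which is why the paper reports all of them plus AUC.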

Yang Ruoyu, Ahmed Zia U, Schulthess Urs C, Kamal Mustafa, Rai Rahul

2020-Nov

CNN, Deep learning, Field boundaries, Smallholder farming

General

Alzheimer's Disease Classification With a Cascade Neural Network.

In Frontiers in Public Health

Classification of Alzheimer's Disease (AD) has become a pressing issue as the number of patients rapidly increases. The task remains tremendously challenging owing to limited data and the difficulty of detecting mild cognitive impairment (MCI). Existing methods tackle this task using gait or EEG (electroencephalogram) data alone. Although gait data acquisition is cheap and simple, methods relying on gait data often fail to detect the slight difference between MCI and AD. Methods that use EEG data can detect the difference more precisely, but collecting EEG data from both HC (healthy controls) and patients is very time-consuming. More critically, these methods often convert EEG records into the frequency domain and thus inevitably lose the spatial and temporal information that is essential to capture the connectivity and synchronization among different brain regions. This paper proposes a two-step cascade neural network that achieves faster and more accurate AD classification by exploiting gait and EEG data simultaneously. In the first step, we propose attention-based spatial temporal graph convolutional networks to extract features from the skeleton sequences (i.e., gait) captured by Kinect (a commonly used sensor) and distinguish between HC and patients. In the second step, we propose spatial temporal convolutional networks to fully exploit the spatial and temporal information of the EEG data and finally classify the patients as MCI or AD. We collected gait and EEG data from 35 cognitively healthy controls, 35 MCI patients, and 17 AD patients to evaluate the proposed method. Experimental results show that our method significantly outperforms other AD diagnosis methods (91.07% vs. 68.18%) in the three-way classification task (HC, MCI, and AD). Moreover, we empirically found that the lower body and right upper limb are more important for the early diagnosis of AD than other body parts. We believe this finding can be helpful for clinical research.
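Abstracted away from the paper's ST-GCN and ST-CNN feature extractors, the cascade is just two binary classifiers in sequence: cheap gait features screen HC from patients, and EEG features then separate MCI from AD only for flagged patients. A toy sketch of that control flow with synthetic embeddings and nearest-centroid classifiers (all data and sizes are illustrative, and train = test here purely to show the cascade logic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Labels: 0 = HC, 1 = MCI, 2 = AD. Synthetic stand-ins for the learned
# gait and EEG embeddings described in the abstract.
n = 300
labels = rng.integers(0, 3, n)
gait = rng.normal(0.0, 1.0, (n, 4)) + 2.5 * (labels > 0)[:, None]   # separates HC vs patient
eeg = rng.normal(0.0, 1.0, (n, 4)) + 2.5 * (labels == 2)[:, None]   # separates MCI vs AD

def centroids(X, y):
    """One prototype per binary class (y in {0, 1})."""
    return np.stack([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])

def predict(c, X):
    d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
    return d.argmin(axis=1)

# Stage 1: gait decides HC vs patient.
stage1 = centroids(gait, (labels > 0).astype(int))
# Stage 2: EEG decides MCI vs AD, trained on patients only.
patient_mask = labels > 0
stage2 = centroids(eeg[patient_mask], (labels[patient_mask] == 2).astype(int))

# Cascade inference: stage 2 runs only for subjects stage 1 flags as patients.
is_patient = predict(stage1, gait).astype(bool)
preds = np.zeros(n, dtype=int)                             # default: HC
preds[is_patient] = 1 + predict(stage2, eeg[is_patient])   # MCI = 1, AD = 2

acc = (preds == labels).mean()
```

The design point the cascade illustrates: expensive EEG processing is only spent on subjects the cheap gait screen already flags, which is where the speed advantage comes from.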

You Zeng, Zeng Runhao, Lan Xiaoyong, Ren Huixia, You Zhiyang, Shi Xue, Zhao Shipeng, Guo Yi, Jiang Xin, Hu Xiping

2020

Alzheimer's disease, EEG, automatic diagnosis, deep learning, gait

Radiology

Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction.

In IEEE Access

Breast CT provides image volumes with isotropic resolution and high contrast, enabling the detection of small calcifications (down to a few hundred microns) and subtle density differences. Since the breast is sensitive to x-ray radiation, dose reduction is an important topic in breast CT, and few-view scanning is a primary approach for this purpose. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network are high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of the image to be reconstructed, an improvement of orders of magnitude over state-of-the-art deep-learning-based reconstruction methods that map raw data directly to tomographic images. Validated on a cone-beam breast CT dataset acquired by Koning Corporation on a commercial scanner, our method demonstrates competitive image quality relative to state-of-the-art reconstruction networks. The source code of this paper is available at: https://github.com/HuidongXie/DEER.
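The O(N) claim is easiest to appreciate next to the naive alternative: a single dense layer mapping few-view projection data (V views × N detector bins) straight to an N × N image. A back-of-the-envelope comparison with hypothetical sizes (not the paper's actual configuration):

```python
# Hypothetical few-view breast CT sizes: 75 views, 512 detector bins,
# reconstructing a 512 x 512 slice.
N, V = 512, 75

# One fully connected layer from raw data to image: (V*N) inputs times
# (N*N) outputs, i.e. O(N^3) weights even in the few-view setting
# (and O(N^4) with full-view data, where V itself scales with N).
dense_params = (V * N) * (N * N)

# A network whose learned-parameter count scales as O(N), as DEER claims,
# stays around c*N for a small constant c; take c = 1 for the comparison.
linear_params = N

ratio = dense_params // linear_params
print(f"{dense_params:,} dense weights vs ~{linear_params:,}; ratio {ratio:,}x")
```

At these sizes the direct dense mapping needs about 10 billion weights, which is the "orders of magnitude" gap the abstract refers to.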

Xie Huidong, Shan Hongming, Cong Wenxiang, Liu Chi, Zhang Xiaohua, Liu Shaohua, Ning Ruola, Wang G E

2020

Breast CT, Deep learning, Few-view CT, Low-dose CT, X-ray CT