
General

Application of MobileNetV2 to waste classification.

In PLOS ONE; h5-index: 176

Waste separation has been a topic of public discussion for a long time, and sorting devices have been installed in some large residential communities. However, the vast majority of domestic waste is still not properly sorted before disposal, and waste processing still relies largely on manual classification. This paper applies deep learning to this persistent problem. Domestic waste is classified into four categories: recyclable waste, kitchen waste, hazardous waste, and other waste. A classification model based on the MobileNetV2 deep neural network can sort domestic garbage quickly and accurately, saving substantial labor, material, and time costs. The trained model achieves an absolute accuracy of 82.92%, which is 15.42% higher than that of a baseline CNN model. In addition, the trained model is lightweight enough to be deployed on mobile devices.
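
A minimal sketch of the transfer-learning setup this kind of study typically uses: an ImageNet-pretrained MobileNetV2 with its classifier head replaced by a four-way layer for the categories named above. The frozen backbone, hyperparameters, and training loop below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: fine-tuning MobileNetV2 for four waste categories.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # recyclable, kitchen, hazardous, other (from the abstract)

# Load an ImageNet-pretrained MobileNetV2 backbone.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Replace the 1000-way ImageNet head with a 4-way classification head.
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

# Freeze the feature extractor so only the new head is trained (assumption).
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of (N, 3, 224, 224) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

MobileNetV2's depthwise-separable convolutions keep the parameter count low, which is what makes the mobile deployment mentioned above plausible.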

Yong Liying, Ma Le, Sun Dandan, Du Liping

2023

Radiology

Comparison of a machine and deep learning model for automated tumor annotation on digitized whole slide prostate cancer histology.

In PLOS ONE; h5-index: 176

One in eight men will be affected by prostate cancer (PCa) in their lives. While the current clinical standard prognostic marker for PCa is the Gleason score, it is subject to inter-reviewer variability. This study compares two machine learning methods for discriminating cancerous from noncancerous regions on digitized histology from 47 PCa patients. Whole-slide images were annotated by a GU fellowship-trained pathologist for each Gleason pattern. High-resolution tiles were extracted from annotated and unlabeled tissue. Patients were separated into a training set of 31 patients (Cohort A, n = 9345 tiles) and a testing cohort of 16 patients (Cohort B, n = 4375 tiles). Tiles from Cohort A were used to train a ResNet model, and glands from these tiles were segmented to calculate pathomic features used to train a bagged ensemble model to classify tissue as (1) cancer versus noncancer, (2) high- and low-grade cancer versus noncancer, and (3) all Gleason patterns. The outputs of these models were compared to ground-truth pathologist annotations. The ensemble and ResNet models had overall accuracies of 89% and 88%, respectively, at discriminating cancer from noncancer. The ResNet model was additionally able to differentiate Gleason patterns on data from Cohort B, while the ensemble model was not. Our results suggest that quantitative pathomic features calculated from PCa histology can distinguish regions of cancer; however, texture features captured by deep learning frameworks better differentiate unique Gleason patterns.
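
For the ensemble arm, the abstract specifies bagging over hand-crafted pathomic features but not the base learner or the feature set; the sketch below shows one plausible realization with scikit-learn, using decision trees and synthetic stand-in features (everything here except the tile counts is an assumption).

```python
# Illustrative bagged ensemble over per-gland pathomic features.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in data shaped like the study's tile counts: Cohort A for training,
# Cohort B for testing. Real features would be gland-level measurements.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(9345, 20)), rng.integers(0, 2, 9345)
X_test, y_test = rng.normal(size=(4375, 20)), rng.integers(0, 2, 4375)

ensemble = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # assumed base learner
    n_estimators=100,                    # assumed ensemble size
    bootstrap=True,                      # bagging: resample tiles with replacement
    random_state=0,
)
ensemble.fit(X_train, y_train)
print("Cohort B accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```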

Duenweg Savannah R, Brehler Michael, Bobholz Samuel A, Lowman Allison K, Winiarz Aleksandra, Kyereme Fitzgerald, Nencka Andrew, Iczkowski Kenneth A, LaViolette Peter S

2023

General

Small training dataset convolutional neural networks for application-specific super-resolution microscopy.

In Journal of Biomedical Optics

SIGNIFICANCE: Machine learning (ML) models based on deep convolutional neural networks have been used to significantly increase microscopy resolution, speed [signal-to-noise ratio (SNR)], and data interpretation. The bottleneck in developing effective ML systems is often the need to acquire large datasets to train the neural network. We demonstrate how adding a "dense encoder-decoder" (DenseED) block enables effective training of a neural network that produces super-resolution (SR) images from conventional diffraction-limited (DL) microscopy images using a small training dataset [15 fields of view (FOVs)].

AIM: ML can retrieve SR information from a DL image, but conventionally only when trained with a massive dataset. The aim of this work is to demonstrate a neural network that estimates SR images from DL images using modifications that enable training with a small dataset.

APPROACH: We employ "DenseED" blocks in existing SR ML network architectures. A DenseED block uses dense layers that concatenate the features from each convolutional layer onto the input of the next. DenseED blocks in fully convolutional networks (FCNs) estimate SR images when trained with a small training dataset (15 FOVs) of human cells from the Widefield2SIM dataset and of fluorescently labeled fixed bovine pulmonary artery endothelial cell samples.
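
The core idea, dense connectivity via channel-wise concatenation, can be sketched in a few lines of PyTorch. The layer count, growth rate, and widths below are illustrative, not the paper's configuration.

```python
# Minimal sketch of dense connectivity as used in a DenseED-style block:
# each convolution receives the concatenation of all earlier feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # concatenation grows the channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate all previous feature maps along the channel axis.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=32)
y = block(torch.randn(1, 32, 64, 64))  # -> (1, 32 + 3*16, 64, 64)
```

Reusing earlier feature maps this way gives each layer direct access to low-level features, which is one intuition for why such blocks train well on small datasets.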

RESULTS: Conventional ML models without DenseED blocks fail to accurately estimate SR images when trained on small datasets, while models that include DenseED blocks succeed. Networks containing DenseED blocks achieve an average peak SNR (PSNR) improvement of 3.2 dB and a 2× resolution improvement. We evaluated various target-image generation methods (e.g., experimentally captured targets and computationally generated targets) for training FCNs with and without DenseED blocks, and found that simple FCNs with DenseED blocks outperform those without.

CONCLUSIONS: DenseED blocks in neural networks enable accurate extraction of SR images even when the ML model is trained with a small training dataset of 15 FOVs. This approach shows that application-specific microscopy platforms can use DenseED blocks to train on smaller datasets, and it holds promise for other imaging modalities such as MRI and x-ray.

Mannam Varun, Howard Scott

2023-Mar

biomedical imaging, convolutional neural networks, dense encoder-decoder, dense layer, diffraction-limited, fluorescence microscopy, fully convolutional networks, generative adversarial networks, machine learning, small datasets, super-resolution

Internal Medicine

Artificial intelligence-based point-of-care lung ultrasound for screening COVID-19 pneumonia: Comparison with CT scans.

In PLOS ONE; h5-index: 176

BACKGROUND: Although lung ultrasound has been reported to be a portable, cost-effective, and accurate method for detecting pneumonia, it has not been widely used because of the difficulty of its interpretation. Here, we aimed to investigate the effectiveness of a novel artificial intelligence-based automated pneumonia detection method using point-of-care lung ultrasound (AI-POCUS) for coronavirus disease 2019 (COVID-19).

METHODS: We enrolled consecutive patients admitted with COVID-19 who underwent computed tomography (CT) in August and September 2021. A 12-zone AI-POCUS was performed by a novice observer using a pocket-size device within 24 h of the CT scan. Fifteen control subjects were also scanned. Additionally, the accuracy of a simplified 8-zone scan, which excluded the dorsal chest, was assessed. More than three B-lines detected in one lung zone were considered zone-level positive, and a positive finding in any lung zone was considered patient-level positive. No sample size calculation was performed, given the retrospective all-comer nature of the study.
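
The zone-to-patient positivity rule is simple enough to state directly in code; the sketch below encodes the thresholds quoted above, with hypothetical zone names and data structures.

```python
# Sketch of the positivity rule: a zone is positive when more than three
# B-lines are detected, and a patient is positive when any zone is positive.
from typing import Dict

B_LINE_THRESHOLD = 3  # "more than three B-lines" per the abstract

def zone_positive(b_line_count: int) -> bool:
    return b_line_count > B_LINE_THRESHOLD

def patient_positive(zone_b_lines: Dict[str, int]) -> bool:
    """zone_b_lines maps zone name -> detected B-line count (12 or 8 zones)."""
    return any(zone_positive(n) for n in zone_b_lines.values())

scan = {"R1": 0, "R2": 5, "L1": 1}  # toy example, not a full 12-zone scan
print(patient_positive(scan))       # True: zone R2 exceeds the threshold
```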

RESULTS: A total of 577 lung zones from 56 subjects (59.4 ± 14.8 years, 23% female) were evaluated using AI-POCUS. The mean number of days from disease onset was 9, and 14% of patients were under mechanical ventilation. CT-validated pneumonia was seen in 71.4% of patients and in 53.3% of the 577 lung zones. At the patient level, the 12-zone AI-POCUS detected CT-validated pneumonia with an accuracy of 94.5% (85.1%-98.1%), sensitivity of 92.3% (79.7%-97.3%), specificity of 100% (80.6%-100%), positive predictive value of 95.0% (89.6%-97.7%), and kappa of 0.33 (0.27-0.40). When simplified to the 8-zone scan, the accuracy, sensitivity, and specificity were 83.9% (72.2%-91.3%), 77.5% (62.5%-87.7%), and 100% (80.6%-100%), respectively. The zone-level accuracy, sensitivity, and specificity of AI-POCUS were 65.3% (61.4%-69.1%), 37.2% (32.0%-42.7%), and 97.8% (95.2%-99.0%), respectively.
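
The abstract reports each proportion with an interval estimate but does not name the interval method; for readers wanting to reproduce such ranges, here is a 95% Wilson score interval, one common choice in diagnostic-accuracy studies (the counts in the example are illustrative, not the study's).

```python
# 95% Wilson score confidence interval for a binomial proportion.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g. 52 of 56 patients correctly classified (illustrative counts only).
lo, hi = wilson_ci(52, 56)
print(f"{52/56:.1%} ({lo:.1%}-{hi:.1%})")
```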

INTERPRETATION: AI-POCUS using a novel pocket-size ultrasound system showed excellent agreement with CT-validated COVID-19 pneumonia, even when used by a novice observer.

Kuroda Yumi, Kaneko Tomohiro, Yoshikawa Hitomi, Uchiyama Saori, Nagata Yuichi, Matsushita Yasushi, Hiki Makoto, Minamino Tohru, Takahashi Kazuhisa, Daida Hiroyuki, Kagiyama Nobuyuki

2023

Ophthalmology

Visual electrophysiology and "the potential of the potentials".

In Eye (London, England); h5-index: 41

Visual electrophysiology affords direct, quantitative, objective assessment of visual pathway function at different levels, and thus yields information complementary to, and not necessarily obtainable from, imaging or psychophysical testing. The tests available, and their indications, have evolved, with many advances, both in technology and in our understanding of the neural basis of the waveforms, now facilitating more precise evaluation of physiology and pathophysiology. After summarising the visual pathway and current standard clinical testing methods, this review discusses, non-exhaustively, several developments, focusing particularly on human electroretinogram recordings. These include new devices (portable, non-mydriatic, multimodal), novel testing protocols (including those aiming to separate rod-driven and cone-driven responses, and to monitor retinal adaptation), and developments in methods of analysis, including the use of modelling and machine learning. It is likely that several tests will become more accessible and useful in both clinical and research settings. In future, these methods will further aid our understanding of common and rare eye diseases, will help in assessing novel therapies, and will potentially yield information relevant to neurological and neuro-psychiatric conditions.

Mahroo Omar A

2023-Mar-16

Ophthalmology

A deep learning-based framework for retinal fundus image enhancement.

In PLOS ONE; h5-index: 176

PROBLEM: Low-quality fundus images with complex degradation can cause costly re-examinations of patients or inaccurate clinical diagnoses.

AIM: This study aims to create an automatic fundus macular image enhancement framework that improves low-quality fundus images and removes complex image degradation.

METHOD: We propose a new deep learning-based model that automatically enhances low-quality retinal fundus images suffering from complex degradation. We collected a dataset comprising 1068 pairs of high-quality (HQ) and low-quality (LQ) fundus images from the Kangbuk Samsung Hospital's health screening program and ophthalmology department from 2017 to 2019. We then used this dataset to develop data augmentation methods that simulate major aspects of retinal image degradation and to design a customized convolutional neural network (CNN) architecture that enhances LQ images according to the nature of the degradation. Peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), r-value (linear index of fuzziness), and the proportion of ungradable fundus photographs before and after enhancement were calculated to assess the performance of the proposed model. A comparative evaluation was conducted on an external database and four different open-source databases.
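
Of the metrics listed, PSNR and SSIM are standard and available in scikit-image; a minimal sketch follows, using random arrays as stand-ins for an HQ/enhanced fundus image pair.

```python
# Computing PSNR and SSIM between a reference and an enhanced image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
hq = rng.random((512, 512)).astype(np.float32)  # stand-in for an HQ fundus image
enhanced = np.clip(hq + 0.05 * rng.standard_normal(hq.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(hq, enhanced, data_range=1.0)  # in dB
ssim = structural_similarity(hq, enhanced, data_range=1.0)    # in [-1, 1]
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```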

RESULTS: Evaluation on the external test dataset showed a significant increase in PSNR and SSIM compared with the original LQ images. Moreover, PSNR and SSIM increased by over 4 dB and 0.04, respectively, compared with the previous state-of-the-art methods (P < 0.05). The proportion of ungradable fundus photographs decreased from 42.6% to 26.4% (P = 0.012).

CONCLUSION: Our enhancement process significantly improves LQ fundus images that suffer from complex degradation. Moreover, our customized CNN achieves better performance than the existing state-of-the-art methods. Overall, our framework can have a clinical impact by reducing re-examinations and improving diagnostic accuracy.

Lee Kang Geon, Song Su Jeong, Lee Soochahn, Yu Hyeong Gon, Kim Dong Ik, Lee Kyoung Mu

2023