
General

Artificial intelligence and anorectal manometry: automatic detection and differentiation of anorectal motility patterns - a proof of concept study.

In Clinical and translational gastroenterology

BACKGROUND : Anorectal manometry (ARM) is the gold standard for the evaluation of anorectal functional disorders, which are prevalent in the population. Nevertheless, access to this exam is limited, and the complexity of data analysis and reporting is a significant drawback. This pilot study aimed to develop and validate an artificial intelligence (AI) model to automatically differentiate motility patterns of fecal incontinence (FI) from obstructed defecation (OD) using ARM data.

METHODS : We developed and tested multiple machine learning algorithms for the automatic interpretation of ARM data. Four models were tested: k-nearest neighbors (KNN), support vector machines (SVM), random forests (RF), and gradient boosting (xGB). These models were trained using a stratified 5-fold strategy, and their performance was assessed after fine-tuning each model's hyperparameters, using 90% of the data for training and 10% for testing.
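
As an illustration only (the paper does not publish code), below is a minimal scikit-learn sketch of this evaluation protocol: a stratified 90/10 split, 5-fold grid search for each of the four model families, and held-out accuracy and AUC. The feature matrix, labels, and hyperparameter grids are placeholders, and scikit-learn's GradientBoostingClassifier stands in for the study's xGB model.

```python
# Hedged sketch of the evaluation protocol described above: four classifiers,
# stratified 5-fold tuning on a 90% training split, final scoring on the held-out 10%.
# Feature matrix X and binary FI/OD labels y are placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(827, 20))          # placeholder for per-exam ARM features
y = rng.integers(0, 2, size=827)        # placeholder for FI (1) vs. OD (0) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, stratify=y, random_state=0)

models = {
    "KNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 11]}),
    "SVM": (SVC(probability=True), {"C": [0.1, 1, 10]}),
    "RF":  (RandomForestClassifier(random_state=0), {"n_estimators": [100, 300]}),
    "GB":  (GradientBoostingClassifier(random_state=0), {"learning_rate": [0.05, 0.1]}),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, (model, grid) in models.items():
    search = GridSearchCV(model, grid, cv=cv, scoring="accuracy").fit(X_tr, y_tr)
    proba = search.predict_proba(X_te)[:, 1]
    print(name,
          f"acc={accuracy_score(y_te, search.predict(X_te)):.3f}",
          f"auc={roc_auc_score(y_te, proba):.3f}")
```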

RESULTS : A total of 827 ARM exams were used in this study. After fine-tuning, the xGB model presented an overall accuracy of 84.6% ± 2.9%, similar to that of RF (82.7% ± 4.8%) and SVM (81.0% ± 8.0%), and higher than that of KNN (74.4% ± 3.8%). The xGB model showed the highest discriminating performance between OD and FI, with an area under the curve of 0.939.

CONCLUSION : The tested machine learning algorithms, particularly the xGB model, accurately differentiated between FI and OD manometric patterns. Further development of these tools may improve access to ARM studies, which could have a significant impact on the management of patients with anorectal functional diseases.

Saraiva Miguel Mascarenhas, Pouca Maria Vila, Ribeiro Tiago, Afonso João, Cardoso Hélder, Sousa Pedro, Ferreira João, Macedo Guilherme, Junior Ilario Froehner

2022-Dec-15

Pathology

Improving quality control in the routine practice for histopathological interpretation of gastrointestinal endoscopic biopsies using artificial intelligence.

In PLoS ONE; h5-index 176.0

BACKGROUND : Colorectal and gastric cancer are major causes of cancer-related deaths. In Korea, gastrointestinal (GI) endoscopic biopsy specimens account for a high percentage of histopathologic examinations. Lack of a sufficient pathologist workforce can increase human errors, threatening patient safety. We therefore developed a digital pathology total solution combining artificial intelligence (AI) classifier models and a pathology laboratory information system for GI endoscopic biopsy specimens, establishing a post-analytic daily fast quality control (QC) system that was applied in clinical practice in a 3-month trial run by four pathologists.

METHODS AND FINDINGS : Our whole slide image (WSI) classification framework comprised a patch generator, a patch-level classifier, and a WSI-level classifier; both classifiers were based on DenseNet (Dense Convolutional Network). In laboratory tests, the WSI classifier achieved accuracy rates of 95.8% and 96.0% in classifying histopathological WSIs of colorectal and gastric endoscopic biopsy specimens, respectively, into three classes (Negative for dysplasia, Dysplasia, and Malignant). Classifications by pathologic diagnosis and AI prediction were compared, and daily reviews focusing on discordant cases were conducted for early detection of potential human errors by the pathologists, allowing immediate correction before an erroneous pathology report reached the patient. During the 3-month AI-assisted daily QC trial run, pathologists reviewed approximately 7-10 times as many slides as under the conventional monthly QC (spanning 33 months), and nearly 100% of GI endoscopy biopsy slides were double-checked by the AI models. Further, approximately 17-30 times as many potential human errors were detected, within an average of 1.2 days.
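
As a hedged illustration of the patch-to-slide pipeline (not the authors' released code), the sketch below uses torchvision's DenseNet-121 as a stand-in patch-level three-class classifier and aggregates patch predictions into a slide-level call with a simple worst-patch rule; the paper instead trains a dedicated WSI-level classifier, so the aggregation here is an assumption for illustration.

```python
# Hedged sketch of a patch-level DenseNet classifier with a simple WSI-level
# aggregation into three classes (Negative, Dysplasia, Malignant).
# The aggregation rule (most severe patch wins) is an illustrative assumption;
# the paper trains a separate WSI-level classifier on top of patch predictions.
import torch
import torch.nn as nn
from torchvision.models import densenet121

NUM_CLASSES = 3  # 0: Negative for dysplasia, 1: Dysplasia, 2: Malignant

patch_model = densenet121(weights=None)
patch_model.classifier = nn.Linear(patch_model.classifier.in_features, NUM_CLASSES)
patch_model.eval()

def classify_wsi(patches: torch.Tensor) -> int:
    """patches: (N, 3, 224, 224) tensor of tissue patches extracted from one slide."""
    with torch.no_grad():
        probs = patch_model(patches).softmax(dim=1)   # (N, 3) patch-level predictions
    patch_labels = probs.argmax(dim=1)
    return int(patch_labels.max())                     # most severe patch class as slide label

# Usage with dummy patches (replace with real patch extraction from the WSI):
dummy_patches = torch.randn(8, 3, 224, 224)
print(classify_wsi(dummy_patches))
```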

CONCLUSIONS : The AI-assisted daily QC system that we developed and established demonstrated notable improvements in QC in quantitative, qualitative, and time-utility terms. Ultimately, we developed an independent AI-assisted post-analytic daily fast QC system that was clinically applicable and influential, and that could enhance patient safety.

Ko Young Sin, Choi Yoo Mi, Kim Mujin, Park Youngjin, Ashraf Murtaza, Quiñones Robles Willmer Rafell, Kim Min-Ju, Jang Jiwook, Yun Seokju, Hwang Yuri, Jang Hani, Yi Mun Yong

2022

General

ULD-Net: 3D unsupervised learning by dense similarity learning with equivariant-crop.

In Journal of the Optical Society of America. A, Optics, image science, and vision

Although many recent deep learning methods have achieved good performance in point cloud analysis, most of them are built on costly manual labeling. Unsupervised representation learning methods have attracted increasing attention due to their high label efficiency, but how to learn more useful representations from unlabeled 3D point clouds remains a challenging problem. To address it, we propose a novel unsupervised learning approach for point cloud analysis, named ULD-Net, which uses an equivariant-crop (equiv-crop) module to achieve dense similarity learning. Dense similarity learning maximizes consistency across two randomly transformed global-local views at both the instance level and the point level. To build feature correspondence between the global and local views, the equiv-crop transforms features from the global scope to the local scope. Unlike previous methods that require complicated designs such as negative pairs and momentum encoders, our ULD-Net benefits from a simple Siamese network that relies solely on a stop-gradient operation to prevent the network from collapsing. We also apply a feature separability constraint for more representative embeddings. Experimental results show that our ULD-Net achieves the best results among context-based unsupervised methods and performance comparable to supervised models on shape classification and segmentation tasks. On the linear support vector machine classification benchmark, our ULD-Net surpasses the best context-based method, spatiotemporal self-supervised representation learning (STRL), by 1.1% in overall accuracy. With fine-tuning, our ULD-Net outperforms STRL under both fully supervised and semisupervised settings, with a 0.1% accuracy gain on the ModelNet40 classification benchmark and a 0.6% mean intersection-over-union gain on the ShapeNet part segmentation benchmark.
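
A minimal sketch of the stop-gradient Siamese similarity objective the abstract describes (in the style of SimSiam), with placeholder encoder and predictor networks; the equiv-crop module and the feature separability constraint are not reproduced here, and the network shapes are assumptions for illustration.

```python
# Hedged sketch of a Siamese stop-gradient similarity objective applied to
# per-point features from two randomly transformed views of the same cloud.
# The actual ULD-Net encoder, equiv-crop module, and feature-separability
# constraint are not reproduced; f and h below are placeholder networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

def neg_cosine(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # Stop-gradient on the target branch prevents representational collapse.
    z = z.detach()
    return -F.cosine_similarity(p, z, dim=-1).mean()

f = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 128))   # encoder (placeholder)
h = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))   # predictor head

x1 = torch.randn(16, 1024, 3)   # view 1: transformed point cloud (B, N, 3)
x2 = torch.randn(16, 1024, 3)   # view 2: another random transform of the same cloud

z1, z2 = f(x1), f(x2)           # per-point embeddings from the shared encoder
p1, p2 = h(z1), h(z2)
loss = 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))   # symmetric point-level consistency
loss.backward()
```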

Tian Yu, Song Da, Yang Mengna, Liu Jie, Geng Guohua, Zhou Mingquan, Li Kang, Cao Xin

2022-Dec-01

General

One-to-all lightweight Fourier channel attention convolutional neural network for speckle reconstructions.

In Journal of the Optical Society of America. A, Optics, image science, and vision

Speckle reconstruction is a classical inverse problem in computational imaging. Inspired by the memory effect of scattering media, deep learning methods show excellent performance in extracting the correlation of speckle patterns. Advanced models nowadays generally include more than 10M parameters and mostly attend to spatial feature information, yet the frequency domain of images also contains precise hierarchical representations. Here we propose a one-to-all lightweight Fourier channel attention convolutional neural network (FCACNN) with Fourier channel attention and a res-connected bottleneck structure. Compared with the state-of-the-art model, the self-attention armed convolutional neural network (SACNN), our architecture has better feature extraction and reconstruction ability. The Pearson correlation coefficient and Jaccard index scores of FCACNN increased by at least 5.2% and 13.6%, respectively, compared with task-related models, and the lightweight FCACNN has only 1.15M parameters. Furthermore, validation results show that the one-to-all FCACNN generalizes well to unseen speckle patterns such as handwritten letters and Quickdraws.
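
A minimal sketch of one plausible reading of Fourier channel attention, in which channel weights are computed from frequency-domain energy rather than plain spatial pooling; the block below is an assumption for illustration, not the paper's exact layer.

```python
# Hedged sketch of a Fourier channel attention block: channel weights are derived
# from frequency-domain statistics of each feature map rather than spatial pooling.
# This is an illustrative reading of the idea, not the published FCACNN layer.
import torch
import torch.nn as nn

class FourierChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        spectrum = torch.fft.rfft2(x, norm="ortho")        # per-channel 2D FFT
        energy = spectrum.abs().mean(dim=(2, 3))           # frequency-energy descriptor (B, C)
        weights = self.fc(energy).unsqueeze(-1).unsqueeze(-1)
        return x * weights                                 # reweight channels

fca = FourierChannelAttention(channels=32)
print(fca(torch.randn(2, 32, 64, 64)).shape)   # torch.Size([2, 32, 64, 64])
```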

Lan Botian, Wang Hao, Wang Yangyundou

2022-Dec-01

General

Multiscale feature pyramid network based on activity level weight selection for infrared and visible image fusion.

In Journal of the Optical Society of America. A, Optics, image science, and vision

At present, deep-learning-based infrared and visible image fusion methods extract insufficient source-image features, causing imbalanced infrared and visible information in the fused images. To solve this problem, a multiscale feature pyramid network based on activity-level weight selection (MFPN-AWS), with a complete downsampling-upsampling structure, is proposed. The network consists of three parts: a downsampling convolutional network, an AWS fusion layer, and an upsampling convolutional network. First, multiscale deep features are extracted by the downsampling convolutional network, capturing rich information from intermediate layers. Second, AWS combines the advantages of an l1-norm and global-pooling dual fusion strategy to describe target saliency and texture detail, effectively balancing the multiscale infrared and visible features. Finally, the multiscale fused features are reconstructed by the upsampling convolutional network to obtain the fused images. Compared with nine state-of-the-art methods on the publicly available TNO and VIFB datasets, MFPN-AWS achieves more natural and balanced fusion results, such as better overall clarity and more salient targets, and attains optimal values on two metrics: mutual information and visual fidelity.
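
A minimal sketch of an l1-norm activity-level fusion rule in the spirit of the AWS layer described above; the soft weighting below, and the omission of the paper's global-pooling branch and learned downsampling/upsampling networks, are simplifying assumptions.

```python
# Hedged sketch of an l1-norm activity-level fusion rule for infrared and visible
# feature maps. The paper's AWS layer additionally uses a global-pooling strategy
# and sits between learned convolutional networks, which are not reproduced here.
import torch

def l1_activity_fusion(feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
    """feat_ir, feat_vis: (B, C, H, W) feature maps from the two source images."""
    # Per-pixel activity level: l1-norm across channels.
    a_ir = feat_ir.abs().sum(dim=1, keepdim=True)
    a_vis = feat_vis.abs().sum(dim=1, keepdim=True)
    w_ir = a_ir / (a_ir + a_vis + 1e-8)        # soft selection weights in [0, 1]
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis

fused = l1_activity_fusion(torch.randn(1, 64, 120, 160), torch.randn(1, 64, 120, 160))
print(fused.shape)   # torch.Size([1, 64, 120, 160])
```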

Xu Rui, Liu Gang, Xie Yuning, Prasad Bavirisetti Durga, Qian Yao, Xing Mengliang

2022-Dec-01

General

Eleven quick tips for data cleaning and feature engineering.

In PLoS computational biology

Applying computational statistics or machine learning methods to data is a key component of many scientific studies, in any field, but alone it might not be sufficient to generate robust and reliable outcomes. Before applying any discovery method, preprocessing steps are necessary to prepare the data for computational analysis. In this framework, data cleaning and feature engineering are key pillars of any scientific study involving data analysis and should be adequately designed and performed from the first phases of the project. We call a "feature" a variable describing a particular trait of a person or an observation, usually recorded as a column in a dataset. Even though they are pivotal, these data cleaning and feature engineering steps are sometimes done poorly or inefficiently, especially by beginners and inexperienced researchers. For this reason, we propose here our quick tips for data cleaning and feature engineering, explaining how to carry out these important preprocessing steps correctly while avoiding common mistakes and pitfalls. Although we designed these guidelines with bioinformatics and health informatics scenarios in mind, we believe they can be applied more generally to any scientific area. We therefore target these guidelines at any researcher or practitioner wanting to perform data cleaning or feature engineering. We believe our simple recommendations can help researchers and scholars perform better computational analyses that can lead, in turn, to more solid outcomes and more reliable discoveries.
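
A generic, hedged illustration of the kind of preprocessing these tips address (missing-value imputation, dropping uninformative columns, categorical encoding, and scaling); the column names and rules are invented for the example and are not taken from the paper.

```python
# Hedged, generic illustration of common data cleaning and feature engineering steps:
# handling missing values, removing constant columns, and encoding/scaling features.
# Column names are invented for the example.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [34, 51, None, 29],
    "sex": ["F", "M", "M", "F"],
    "site": ["A", "A", "A", "A"],          # constant column, carries no information
    "weight_kg": [61.0, 82.5, 74.0, None],
})

df = df.drop(columns=[c for c in df.columns if df[c].nunique(dropna=False) <= 1])  # drop constants
df["age"] = df["age"].fillna(df["age"].median())             # impute missing numeric values
df["weight_kg"] = df["weight_kg"].fillna(df["weight_kg"].median())
df = pd.get_dummies(df, columns=["sex"], drop_first=True)    # one-hot encode categoricals

num_cols = ["age", "weight_kg"]
df[num_cols] = StandardScaler().fit_transform(df[num_cols])  # scale numeric features
print(df.head())
```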

Chicco Davide, Oneto Luca, Tavazzi Erica

2022-Dec