Receive a weekly summary and discussion of the week's top papers, curated by leading researchers in the field.

General

Cropformer: A new generalized deep learning classification approach for multi-scenario crop classification.

In Frontiers in plant science

Accurate and efficient crop classification using remotely sensed data can provide fundamental information for crop yield estimation. Existing crop classification approaches are usually designed to be strong in specific scenarios but not for multi-scenario crop classification. In this study, we propose a new deep learning approach for multi-scenario crop classification, named Cropformer. Cropformer extracts both global and local features, addressing the limitation that current crop classification methods extract only a single type of feature. Specifically, Cropformer is a two-step classification approach: the first step is self-supervised pre-training to accumulate knowledge of crop growth, and the second step is fine-tuned supervised classification based on the weights from the first step. Unlabeled time series and labeled time series are used as input for the first and second steps, respectively. Multi-scenario crop classification experiments, including full-season crop classification, in-season crop classification, few-sample crop classification, and transfer of classification models, were conducted in five study areas with complex crop types and compared with several existing competitive approaches. Experimental results showed that Cropformer not only obtains a significant accuracy advantage in crop classification but also achieves higher accuracy with fewer samples. Compared with other approaches, Cropformer's classification performance during model transfer and its classification efficiency were outstanding. The results showed that Cropformer can build prior knowledge from unlabeled data and learn generalized features from labeled data, making it applicable to crop classification in multiple scenarios.
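The two-step recipe (self-supervised pre-training on unlabeled time series, then supervised fine-tuning on labeled ones) can be sketched in miniature. This is an illustrative numpy stand-in, not Cropformer's actual Transformer architecture: the masked-reconstruction pretext task, the tied linear encoder, and all dimensions below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 unlabeled and 50 labeled time series of length 16
# (e.g. a vegetation-index curve over the season); 3 hypothetical crop classes.
unlabeled = rng.normal(size=(200, 16))
labeled = rng.normal(size=(50, 16))
labels = rng.integers(0, 3, size=50)

# Step 1: self-supervised pre-training -- learn encoder weights W by
# reconstructing each series from a randomly masked copy (tied decoder W.T).
d = 8                                           # latent dimension
W = rng.normal(scale=0.1, size=(16, d))
for _ in range(100):
    mask = rng.random(unlabeled.shape) < 0.25   # hide 25% of time steps
    x_m = np.where(mask, 0.0, unlabeled)
    z = x_m @ W                                 # encode
    err = z @ W.T - unlabeled                   # reconstruction error
    grad = 2 * (x_m.T @ err @ W + err.T @ z) / len(unlabeled)
    W -= 0.005 * grad                           # gradient step on MSE

# Step 2: supervised fine-tuning -- keep the pre-trained encoder and fit a
# softmax classification head on the labeled series.
Z = labeled @ W
V = rng.normal(scale=0.1, size=(d, 3))
onehot = np.eye(3)[labels]
for _ in range(200):
    logits = Z @ V
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    V -= 0.1 * (Z.T @ (p - onehot)) / len(Z)

probs = p                                       # per-sample class probabilities
```

The point of the sketch is the division of labor: the encoder is trained without labels, and only the small head needs labeled samples, which is what makes the few-sample setting workable.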

Wang Hengbin, Chang Wanqiu, Yao Yu, Yao Zhiying, Zhao Yuanyuan, Li Shaoming, Liu Zhe, Zhang Xiaodong

2023

Cropformer, deep learning, multi-scenario crop classification, pre-training, time series

General

A learning-based image processing approach for pulse wave velocity estimation using spectrogram from peripheral pulse wave signals: An in silico study.

In Frontiers in physiology

Carotid-to-femoral pulse wave velocity (cf-PWV) is considered a critical index for evaluating arterial stiffness; estimating cf-PWV is therefore essential for diagnosing and analyzing different cardiovascular diseases. Despite its broad adoption in clinical routine, the measurement process of cf-PWV is a demanding task for clinicians and patients, making it prone to inaccuracies and estimation errors. A smart, non-invasive, peripheral measurement of cf-PWV could overcome the challenges of the classical assessment process and improve the quality of patient care. This paper proposes a novel methodology for cf-PWV estimation based on the spectrogram representation of single non-invasive peripheral pulse wave signals [photoplethysmography (PPG) or blood pressure (BP)]. The methodology was tested using three feature extraction methods: the semi-classical signal analysis (SCSA) method, Laws' masks for texture energy extraction, and central statistical moments. Each feature set was then fed into different machine learning models for cf-PWV estimation. The proposed methodology obtained an $R^2 \geq 0.90$ for all peripheral signals in the noise-free case using the MLP model; for the different noise levels added to the original signal, the SCSA-based features with the MLP model achieved an $R^2 \geq 0.91$ for all peripheral signals at each noise level. These results provide evidence of the capacity of the spectrogram representation to efficiently support cf-PWV estimation with different feature methods. Future work will test the proposed methodology on in vivo signals.
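The front end of this pipeline, signal to spectrogram to feature vector, can be sketched with numpy; here the central statistical moments (one of the three feature methods) summarize the spectrogram of a synthetic PPG-like signal. The window size, hop, and toy signal are assumptions for illustration.

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Magnitude STFT: one FFT per Hann-windowed frame."""
    w = np.hanning(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq, time)

def central_moments(img, orders=(2, 3, 4)):
    """Mean plus central statistical moments of the spectrogram intensities."""
    mu = img.mean()
    return np.array([mu] + [((img - mu) ** k).mean() for k in orders])

# Toy PPG-like signal: 4 s at 250 Hz, a 1.2 Hz pulse plus one harmonic.
t = np.arange(0, 4, 1 / 250)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 2.4 * t)

spec = spectrogram(ppg)
features = central_moments(spec)  # would be fed to e.g. an MLP regressor
```

The resulting low-dimensional feature vector is what the downstream regression models consume, which is why the choice of spectrogram summary (SCSA, Laws' masks, or moments) is the main axis of comparison in the paper.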

Vargas Juan M, Bahloul Mohamed A, Laleg-Kirati Taous-Meriem

2023

PPG, distal blood pressure, image processing, machine learning (ML), pulse wave velocity, semi-classical signal analysis, spectrogram

General

COVID-19 and pneumonia diagnosis from chest X-ray images using convolutional neural networks.

In Network modeling and analysis in health informatics and bioinformatics

X-ray is a useful imaging modality widely utilized for diagnosing COVID-19, which has infected a large number of people around the world. Manual examination of these X-ray images can be problematic, especially when there is a lack of medical staff. Deep learning models are known to be helpful for automated diagnosis of COVID-19 from X-ray images. However, widely used convolutional neural network architectures typically have many layers, making them computationally expensive. To address these problems, this study aims to design a lightweight differential diagnosis model based on convolutional neural networks. The proposed model classifies X-ray images into one of four classes: healthy, COVID-19, viral pneumonia, and bacterial pneumonia. To evaluate model performance, accuracy, precision, recall, and F1-score were calculated. The performance of the proposed model was compared with that obtained by applying transfer learning to widely used convolutional neural network models. The results showed that the proposed model, with its low number of layers, outperforms the pre-trained benchmark models, achieving an accuracy of 89.89%, while the best pre-trained model (EfficientNet-B2) achieved an accuracy of 85.7%. In conclusion, the proposed lightweight model achieved the best overall result in classifying lung diseases, allowing it to be used on devices with limited computational power. On the other hand, all models showed poor precision on the viral pneumonia class and confusion in distinguishing it from the bacterial pneumonia class, leading to a decrease in overall accuracy.
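The "lightweight" idea, few convolutional layers feeding a small four-way head, can be illustrated with a tiny numpy forward pass. This is a one-layer toy, not the paper's architecture; the kernel count, image size, and random weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernels):
    """Valid 2-D convolution of one grayscale image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.empty((len(kernels), H - kh + 1, W - kw + 1))
    for c, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[c, i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

xray = rng.random((32, 32))                       # toy grayscale chest image
kernels = rng.normal(scale=0.1, size=(8, 3, 3))   # one small conv layer
feat = np.maximum(conv2d(xray, kernels), 0)       # ReLU activations
pooled = feat.mean(axis=(1, 2))                   # global average pooling
W_head = rng.normal(scale=0.1, size=(8, 4))       # 4 classes: healthy,
probs = softmax(pooled @ W_head)                  # COVID-19, viral, bacterial
```

Global average pooling instead of large dense layers is a standard way to keep the parameter count low, which is what makes such models usable on devices with limited computational power.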

Hariri Muhab, Avşar Ercan

2023

COVID-19, Classification, Convolutional neural networks, Deep learning, Lung diseases, Transfer learning

General

Study on the nitrogen content estimation model of cotton leaves based on "image-spectrum-fluorescence" data fusion.

In Frontiers in plant science

OBJECTIVE : Precise monitoring of the nitrogen content of cotton leaves is important for increasing yield and reducing fertilizer application. Spectra and images are both used to monitor crop nitrogen status. However, nitrogen monitoring based on a single data source conveys limited information and cannot capture multiple phenotypic and physiological parameters simultaneously, which can affect inversion accuracy. Introducing a multi-source data-fusion mechanism can improve the accuracy and stability of cotton nitrogen content monitoring from the perspective of information complementarity.

METHODS : Five nitrogen treatments were applied to the test crop, Xinluzao No. 53 cotton, grown indoors. Cotton leaf hyperspectral, chlorophyll fluorescence, and digital image data were collected and screened. A multilevel data-fusion model combining multiple machine learning and stacking integration learning was built from three dimensions: feature-level fusion, decision-level fusion, and hybrid fusion.

RESULTS : The determination coefficients (R²) of the feature-level fusion, decision-level fusion, and hybrid-fusion models were 0.752, 0.771, and 0.848, and the root-mean-square errors (RMSE) were 3.806, 3.558, and 2.898, respectively. Compared with the nitrogen estimation models of the three single data sources, R² increased by 5.0%, 6.8%, and 14.6%, and the RMSE decreased by 3.2%, 9.5%, and 26.3%, respectively.

CONCLUSION : The multilevel fusion model can improve accuracy to varying degrees, and the accuracy and stability were highest with the hybrid-fusion model; these results provide theoretical and technical support for optimizing an accurate method of monitoring cotton leaf nitrogen content.
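The three fusion levels can be sketched with a closed-form ridge regressor standing in for the paper's machine-learning and stacking models; the feature dimensions and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression weights (stand-in for the base learners)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Toy features for 60 leaf samples from three sources: spectra (5 bands),
# fluorescence (3 params), image (4 indices); y = leaf nitrogen content.
n = 60
spec = rng.normal(size=(n, 5))
fluo = rng.normal(size=(n, 3))
img = rng.normal(size=(n, 4))
y = spec[:, 0] + 0.5 * fluo[:, 1] + 0.3 * img[:, 2] + 0.1 * rng.normal(size=n)

# Feature-level fusion: concatenate all features, fit one model.
X_feat = np.hstack([spec, fluo, img])
pred_feat = X_feat @ ridge_fit(X_feat, y)

# Decision-level fusion: one model per source, then a meta-learner
# combines the stacked base predictions (stacking).
base_preds = np.column_stack([S @ ridge_fit(S, y) for S in (spec, fluo, img)])
pred_dec = base_preds @ ridge_fit(base_preds, y)

# Hybrid fusion: meta-learner sees base predictions AND the fused features.
X_hyb = np.hstack([base_preds, X_feat])
pred_hyb = X_hyb @ ridge_fit(X_hyb, y)
```

The hybrid model subsumes the other two input spaces, which matches the paper's finding that it gives the highest accuracy of the three fusion levels.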

Qin Shizhe, Ding Yiren, Zhou Zexuan, Zhou Meng, Wang Hongyu, Xu Feng, Yao Qiushuang, Lv Xin, Zhang Ze, Zhang Lifu

2023

chlorophyll fluorescence, cotton, data fusion, digital images, hyperspectral, nitrogen

Radiology

Deep-learning convolutional neural network-based scatter correction for contrast enhanced digital breast tomosynthesis in both cranio-caudal and mediolateral-oblique views.

In Journal of medical imaging (Bellingham, Wash.)

PURPOSE : Scatter radiation in contrast-enhanced digital breast tomosynthesis (CEDBT) reduces the image quality and iodinated lesion contrast. Monte Carlo simulation can provide accurate scatter estimation at the cost of computational burden. A model-based convolutional method trades off accuracy for processing speed. The purpose of this study is to develop a fast and robust deep-learning (DL) convolutional neural network (CNN)-based scatter correction method for CEDBT.

APPROACH : Projection images and scatter maps of digital anthropomorphic breast phantoms were generated using Monte Carlo simulations. Experiments were conducted to validate the simulated scatter-to-primary ratio (SPR) at different locations of a breast phantom. Simulated projection images were used for CNN training and testing. Two separate U-Nets [low-energy (LE)-CNN and high-energy (HE)-CNN] were trained for LE and HE spectrum, respectively. CNN-based scatter correction was applied to a clinical case with a malignant iodinated mass to evaluate the influence on the lesion detection.

RESULTS : The mean absolute percentage errors of the LE-CNN and HE-CNN estimated scatter maps are 2% ± 0.4% and 2.4% ± 0.8% (average ± standard deviation), respectively. For the clinical case, the average improvement in lesion signal difference to noise ratio was 190% after CNN-based scatter correction. To conduct scatter correction on clinical CEDBT images, the whole process of loading the CNN parameters and correcting the LE and HE images took < 4 s, with a 9 GB GPU memory cost. The SPR variation across the breast agrees between experimental measurements and Monte Carlo simulations.

CONCLUSIONS : We developed a CNN-based scatter correction method for CEDBT in both cranio-caudal and mediolateral-oblique views with high accuracy and fast speed.
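Once a scatter map has been estimated, the correction step itself is a subtraction from the measured projection. The sketch below uses a heavy blur as a crude stand-in for the trained U-Net's scatter estimate; the phantom, magnitudes, and blur radius are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy low-energy projection: primary signal plus a smooth scatter field.
H, W = 64, 64
primary = 1000.0 + 50.0 * rng.random((H, W))
yy, xx = np.mgrid[0:H, 0:W]
scatter_true = 300.0 * np.exp(
    -((yy - H / 2) ** 2 + (xx - W / 2) ** 2) / (2 * 30.0 ** 2))
measured = primary + scatter_true

def box_blur(img, r=8):
    """Simple box blur; a crude stand-in for the CNN's smooth scatter map."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy:r + dy + H, r + dx:r + dx + W]
    return out / (2 * r + 1) ** 2

scatter_est = 0.2 * box_blur(measured)   # the paper's U-Nets produce this map
corrected = measured - scatter_est        # scatter-corrected projection
spr = scatter_true / primary              # scatter-to-primary ratio map
```

Because scatter is low-frequency, subtracting the estimated map restores lesion contrast without sharpening noise, which is why the paper reports a large gain in lesion signal difference to noise ratio.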

Duan Xiaoyu, Sahu Pranjal, Huang Hailiang, Zhao Wei

2023-Feb

contrast-enhanced digital breast tomosynthesis, convolutional neural network, scatter correction

General

Adversarial-based latent space alignment network for left atrial appendage segmentation in transesophageal echocardiography images.

In Frontiers in cardiovascular medicine

The left atrial appendage (LAA) is a major source of thrombus in patients with atrial fibrillation and a leading contributor to ischaemic stroke in cardiovascular disease. Clinicians rely on LAA occlusion (LAAO) to effectively prevent and treat ischaemic strokes attributed to the LAA. Correct selection of the LAAO device is one of the most critical stages of a successful procedure, and it relies on quantification of the anatomical structure of the LAA. In this paper, we propose an adversarial-based latent space alignment framework for LAA segmentation in transesophageal echocardiography (TEE) images that introduces prior knowledge from the labels. The proposed method consists of an LAA segmentation network, a label reconstruction network, and a latent space alignment loss. Specifically, we first employ ConvNeXt as the backbone of the segmentation and reconstruction networks to enhance the feature extraction capability of the encoder. The label reconstruction network then encodes prior shape features from the LAA labels into the latent space. The latent space alignment loss consists of an adversarial alignment loss and a contrastive learning loss; it encourages the segmentation network to learn the prior shape features of the labels, thus improving the accuracy of LAA edge segmentation. The proposed method was evaluated on a TEE dataset of 1,783 images, and the experimental results showed that it outperformed other state-of-the-art LAA segmentation methods with Dice coefficient, AUC, ACC, G-mean, and Kappa of 0.831, 0.917, 0.989, 0.911, and 0.825, respectively.
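The contrastive part of the latent space alignment can be sketched as an InfoNCE-style objective over paired latents: each segmentation-network code is pulled toward the code of its own label reconstruction and pushed away from the codes of other samples. The encoders, the adversarial term, and all dimensions below are omitted or assumed; only the loss shape is illustrated.

```python
import numpy as np

rng = np.random.default_rng(4)

def contrastive_alignment_loss(z_seg, z_lab, tau=0.1):
    """InfoNCE-style loss; matched (segmentation, label) pairs sit on the
    diagonal of the cosine-similarity matrix and act as positives."""
    z_seg = z_seg / np.linalg.norm(z_seg, axis=1, keepdims=True)
    z_lab = z_lab / np.linalg.norm(z_lab, axis=1, keepdims=True)
    sim = z_seg @ z_lab.T / tau                  # pairwise similarities
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))               # cross-entropy on positives

# Toy latents: 8 TEE images, 32-dim codes from the two encoders.
z_seg = rng.normal(size=(8, 32))
z_lab_matched = z_seg + 0.05 * rng.normal(size=(8, 32))  # well aligned
z_lab_random = rng.normal(size=(8, 32))                  # unaligned

loss_aligned = contrastive_alignment_loss(z_seg, z_lab_matched)
loss_random = contrastive_alignment_loss(z_seg, z_lab_random)
```

A low loss means the segmentation encoder's latent space already carries the label's shape prior, which is the mechanism the paper credits for sharper LAA edge segmentation.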

Zhu Xueli, Zhang Shengmin, Hao Huaying, Zhao Yitian

2023

deep learning, latent space, left atrial appendage, segmentation, transesophageal echocardiography