
General

DAN-Net: Dual-domain adaptive-scaling non-local network for CT metal artifact reduction.

In Physics in medicine and biology

Metal implants can heavily attenuate X-rays in computed tomography (CT) scans, leading to severe artifacts in reconstructed images that significantly degrade image quality and negatively impact subsequent diagnosis and treatment planning. With the rapid development of deep learning in medical imaging, several network models have been proposed for metal artifact reduction (MAR) in CT. Despite the encouraging results achieved by these methods, there is still considerable room to improve performance. In this paper, a novel Dual-domain Adaptive-scaling Non-local network (DAN-Net) is proposed for MAR. The corrupted sinogram is first corrected using adaptive scaling to preserve more tissue and bone detail, yielding a more informative input. Then, an end-to-end dual-domain network successively processes the sinogram and its corresponding reconstructed image generated by an analytical reconstruction layer. In addition, to better suppress existing artifacts and restrain potential secondary artifacts caused by inaccurate results from the sinogram-domain network, a novel residual sinogram learning strategy and a non-local module are leveraged in the proposed model. In experiments, the proposed DAN-Net demonstrates performance competitive with several state-of-the-art MAR methods in both qualitative and quantitative terms.
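The sinogram pre-correction idea can be pictured with a minimal NumPy sketch. This is our illustration, not the paper's code: the function name is ours, and a constant blending factor `alpha` stands in for the scale factor that DAN-Net derives adaptively.

```python
import numpy as np

def adaptive_scale_sinogram(sino, metal_mask, alpha=0.5):
    """Inside the metal trace, blend raw (artifact-corrupted) sinogram
    values with a row-wise linear interpolation across the trace.
    `alpha` stands in for the paper's adaptive scale factor (here a
    constant for simplicity)."""
    corrected = sino.astype(float).copy()
    cols = np.arange(sino.shape[1])
    for row in range(sino.shape[0]):
        mask = metal_mask[row]
        if mask.any() and (~mask).any():
            # interpolate over the metal trace from uncorrupted columns
            interp = np.interp(cols[mask], cols[~mask], sino[row, ~mask])
            corrected[row, mask] = alpha * sino[row, mask] + (1 - alpha) * interp
    return corrected
```

With `alpha = 0` this degenerates to classical linear-interpolation MAR; the point of the adaptive scaling in the paper is to keep tissue and bone detail that pure interpolation would erase.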

Wang Tao, Xia Wenjun, Huang Yongqiang, Sun Huaiqiang, Liu Yan, Chen Hu, Zhou Jiliu, Zhang Yi

2021-Jul-05

computed tomography, deep learning, image reconstruction, metal artifact reduction

Public Health

Common and specific determinants of 9-year depression and anxiety course-trajectories: A machine-learning investigation in the Netherlands Study of Depression and Anxiety (NESDA).

In Journal of affective disorders ; h5-index 79.0

BACKGROUND : Given the strong relationship between depression and anxiety, there is a pressing need to investigate their shared and specific long-term course determinants. The current study aimed to identify and compare the main determinants of the 9-year trajectories of combined and pure depression and anxiety symptom severity.

METHODS : Respondents with a 6-month depression and/or anxiety diagnosis (n=1,701) provided baseline data on 152 sociodemographic, clinical and biological variables. Depression and anxiety symptom severity, assessed at baseline and at 2-, 4-, 6- and 9-year follow-up, were used to identify data-driven course-trajectory subgroups for general psychological distress, pure depression, and pure anxiety severity scores. For each outcome (class-probability), a SuperLearner (SL) algorithm identified an optimally weighted (minimum mean squared error) combination of machine-learning prediction algorithms. For each outcome, the top determinants in the SL were identified by assessing variable importance, and the correlation between each SL-predicted and observed outcome (ρpred) was calculated.
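The SL's "optimally weighted (minimum mean squared error) combination" can be sketched for the two-learner case. This is a hypothetical toy, not the NESDA pipeline: real SuperLearner uses cross-validated predictions from many candidate learners and a constrained optimizer rather than a grid.

```python
import numpy as np

def superlearner_weights(base_preds, y, n_grid=101):
    """Grid-search a convex weight for two base learners so that the
    blended prediction minimizes mean squared error.

    base_preds: (2, n) array of (ideally cross-validated) predictions.
    y:          (n,) observed outcomes.
    """
    best_w, best_mse = 0.0, np.inf
    for w in np.linspace(0.0, 1.0, n_grid):
        blend = w * base_preds[0] + (1.0 - w) * base_preds[1]
        mse = np.mean((blend - y) ** 2)
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w, best_mse
```

If one learner predicts the outcome perfectly, all the weight goes to it; in practice the blend usually beats any single learner.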

RESULTS : Low to high prediction correlations (ρpred: 0.41-0.91, median = 0.73) were found. In the SL, important determinants of psychological distress were age, young age of onset, respiratory rate, participation disability, somatic disease, low income, minor depressive disorder, and mastery score. Similar determinants were found for the course of pure depression and pure anxiety symptom severity. Specific determinants of the pure-depression course included several types of healthcare use, and specific determinants of the pure-anxiety course included somatic arousal and psychological distress.

LIMITATIONS : Limited sample size for machine learning.

CONCLUSIONS : The determinants of the depression- and anxiety-severity course are mostly shared. Domain-specific exceptions are healthcare use for the depression course, and somatic arousal and distress for the anxiety-severity course.

Wardenaar Klaas J, Riese Harriëtte, Giltay Erik J, Eikelenboom Merijn, van Hemert Albert J, Beekman Aartjan F, Penninx Brenda W J H, Schoevers Robert A

2021-Jun-24

Anxiety, Course, Depression, Machine Learning, Prediction, SuperLearner

General

Parameter importance assessment improves efficacy of machine learning methods for predicting snow avalanche sites in Leh-Manali Highway, India.

In The Science of the total environment

Due to ongoing climate change, water mass redistribution and related hazards are becoming stronger and more frequent. Predicting extreme hydrological events and related hazards is therefore one of the highest priorities in the geosciences. Machine learning (ML) methods have shown promise in this venture. Every ML method requires training, where both the output (extreme event) and input (relevant physical parameters and variables) are known; this step is critical to the efficacy of the method. The usual approach is to include a wide variety of hydro-meteorological observations and physical parameters, but recent advances in ML indicate that efficacy may not improve with more input parameters. In fact, including unimportant parameters decreases the efficacy of ML algorithms, so it is imperative to identify the most relevant parameters prior to training. In this study, we demonstrate this concept by predicting avalanche susceptibility along the Leh-Manali highway (one of the most severely affected regions in India) with and without Parameter Importance Assessment (PIA). The avalanche locations were randomly divided into two groups: 70% for training and 30% for testing. Based on temporal and spatial sensor data, eleven avalanche-influencing parameters were considered. The Boruta algorithm, an extension of the Random Forest (RF) ML method that uses an importance measure to rank predictors, found nine of the eleven parameters to be important. A Support Vector Machine (SVM) was used for avalanche prediction and, to be comprehensive, four kernel functions were employed: linear, polynomial, sigmoid, and radial basis function (RBF). With all eleven parameters, the prediction accuracies for the linear, polynomial, sigmoid, and RBF kernels were 80.4%, 81.7%, 39.2%, and 85.7%, respectively; with the selected parameters, they were 84.1%, 86.6%, 43.0%, and 87.8%, respectively. We also identified the locations where avalanches are most likely to occur. We conclude that parameter selection should be considered when applying ML methods in the geosciences.
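Boruta's core idea is that a real predictor should outperform its best shuffled "shadow" copy. The toy screen below illustrates that idea only: it uses absolute correlation with the target as the importance measure, whereas Boruta proper uses Random Forest importances and a formal statistical test.

```python
import numpy as np

def boruta_like_selection(X, y, n_rounds=50, seed=0):
    """Toy Boruta-style screen: keep a feature if its |correlation|
    with y beats the best shuffled (shadow) feature in most rounds.
    Illustrative only -- real Boruta uses Random Forest importances."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    real = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p)])
    hits = np.zeros(p)
    for _ in range(n_rounds):
        shadow = X[rng.permutation(n)]  # shuffle rows: breaks relation to y
        shadow_best = max(abs(np.corrcoef(shadow[:, j], y)[0, 1])
                          for j in range(p))
        hits += real > shadow_best
    return hits / n_rounds > 0.5  # features that usually beat all shadows
```

A genuinely informative feature clears this bar easily; a noise feature only wins by chance, which is how the nine-of-eleven screening in the study works in spirit.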

Tiwari Anuj, G Arun, Vishwakarma Bramha Dutt

2021-Jun-29

Avalanche susceptibility modeling, Boruta algorithm, Machine learning (ML), Parameter Importance Assessment (PIA), Support Vector Machine (SVM)

General

Deep learning assistance for tuberculosis diagnosis with chest radiography in low-resource settings.

In Journal of X-ray science and technology

Tuberculosis (TB) is a major health issue with high mortality rates worldwide. Recently, substantial artificial intelligence (AI) research has targeted TB to reduce the diagnostic burden. However, most studies have been conducted in developed urban areas, and the feasibility of applying AI in low-resource settings remains unexplored. In this study, we apply an automated detection (AI) system to screen a large population in an underdeveloped area and evaluate the feasibility and contribution of applying AI to help local radiologists detect and diagnose TB using chest X-ray (CXR) images. First, we divide the image data into a training dataset of 2627 TB-positive and 7375 TB-negative cases and a testing dataset of 276 TB-positive and 619 TB-negative cases. Next, in building the AI system, the experiment includes image labeling and preprocessing, followed by model training and testing. A segmentation model named TB-UNet, which uses ResNeXt as the encoder of a U-Net, is also built to detect diseased regions. We use the AI-generated confidence score to predict the likelihood of each testing case being TB-positive. We then conduct two experiments to compare results between the AI system and radiologists with and without AI assistance. The AI system yields a TB detection accuracy of 85%, much higher than that of radiologists without AI assistance (62%). In addition, with AI assistance, the TB diagnostic sensitivity of local radiologists improves by 11.8%. This study therefore demonstrates that AI has great potential to help the detection, prevention, and control of TB in low-resource settings, particularly in areas with few doctors and higher infection rates.
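The accuracy and sensitivity figures above follow from standard confusion-matrix definitions; a minimal sketch (the counts in the example are hypothetical, not the study's data):

```python
def screening_metrics(tp, fp, tn, fn):
    """Basic screening metrics used to compare readers with and
    without AI assistance (tp/fp/tn/fn are confusion-matrix counts)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,        # overall agreement
        "sensitivity": tp / (tp + fn),        # fraction of TB cases caught
        "specificity": tn / (tn + fp),        # fraction of negatives cleared
    }
```

For screening, sensitivity is usually the metric that matters most, which is why the 11.8% sensitivity gain with AI assistance is the study's headline result.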

Nijiati Mayidili, Zhang Ziqi, Abulizi Abudoukeyoumujiang, Miao Hengyuan, Tuluhong Aikebaierjiang, Quan Shenwen, Guo Lin, Xu Tao, Zou Xiaoguang

2021-Jun-29

Artificial intelligence (AI), assistance, chest X-rays (CXRs), convolutional neural network, low-resource settings, radiologists, tuberculosis (TB) diagnosis

General

Time series forecasting of new cases and new deaths rate for COVID-19 using deep learning methods.

In Results in physics

The first known case of coronavirus disease 2019 (COVID-19) was identified in December 2019. It has since spread worldwide, leading to an ongoing pandemic that has imposed restrictions and costs on many countries. Predicting the numbers of new cases and deaths during this period can be a useful step in anticipating the costs and facilities required in the future. The purpose of this study is to predict new case and death rates one, three, and seven days ahead over the next 100 days. The motivation for predicting every n days (instead of just every day) is to investigate whether computational cost can be reduced while still achieving reasonable performance, a scenario that may arise in real-time forecasting of time series. Six deep learning methods are examined on data adopted from the WHO website. Three methods are LSTM, convolutional LSTM, and GRU; the bidirectional extension of each is then considered, and all are used to forecast the rates of new cases and new deaths in Australia and Iran. This study is novel in that it comprehensively evaluates these three deep learning methods and their bidirectional extensions on COVID-19 new-case and new-death time series. To the best of our knowledge, this is the first time the Bi-GRU and Bi-Conv-LSTM models have been used for prediction on such series. The evaluation of the methods is presented with graphs and the Friedman statistical test. The results show that the bidirectional models have lower errors than the other models. Several error metrics are presented to compare all models, and the superiority of the bidirectional methods is established. This research could be useful for organisations working against COVID-19 in determining their long-term plans.
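Before any LSTM or GRU is trained, the one-, three-, and seven-day-ahead setups reduce to windowing the series into supervised (input, target) pairs. A minimal sketch of that preprocessing step (the function name and window length are our assumptions, not the paper's):

```python
import numpy as np

def make_supervised(series, window=7, horizon=3):
    """Turn a 1-D series into (X, y) pairs for horizon-step-ahead
    forecasting: each input row is `window` consecutive values and
    the target is the value `horizon` steps after the window ends."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)
```

Changing `horizon` from 1 to 3 or 7 is the only difference between the study's prediction settings at the data level; the network architectures stay the same.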

Ayoobi Nooshin, Sharifrazi Danial, Alizadehsani Roohallah, Shoeibi Afshin, Gorriz Juan M, Moosaei Hossein, Khosravi Abbas, Nahavandi Saeid, Gholamzadeh Chofreh Abdoulmohammad, Goni Feybi Ariani, Klemeš Jiří Jaromír, Mosavi Amir

2021-Aug

ANFIS, Adaptive Network-based Fuzzy Inference System, ANN, Artificial Neural Network, AU, Australia, Bi-Conv-LSTM, Bidirectional Convolutional Long Short Term Memory, Bi-GRU, Bidirectional Gated Recurrent Unit, Bi-LSTM, Bidirectional Long Short-Term Memory, Bidirectional, COVID-19 Prediction, COVID-19, Coronavirus Disease 2019, Conv-LSTM, Convolutional Long Short Term Memory, Convolutional Long Short Term Memory (Conv-LSTM), DL, Deep Learning, DLSTM, Delayed Long Short-Term Memory, Deep learning, EMRO, Eastern Mediterranean Regional Office, ES, Exponential Smoothing, EV, Explained Variance, GRU, Gated Recurrent Unit, Gated Recurrent Unit (GRU), IR, Iran, LR, Linear Regression, LSTM, Long Short-Term Memory, Lasso, Least Absolute Shrinkage and Selection Operator, Long Short Term Memory (LSTM), MAE, Mean Absolute Error, MAPE, Mean Absolute Percentage Error, MERS, Middle East Respiratory Syndrome, ML, Machine Learning, MLP-ICA, Multi-Layered Perceptron-Imperialist Competitive Algorithm, MSE, Mean Square Error, MSLE, Mean Squared Log Error, Machine learning, New Cases of COVID-19, New Deaths of COVID-19, PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses, RMSE, Root Mean Square Error, RMSLE, Root Mean Squared Log Error, RNN, Recurrent Neural Network, ReLU, Rectified Linear Unit, SARS, Severe Acute Respiratory Syndrome, SARS-COV, SARS Coronavirus, SARS-COV-2, Severe Acute Respiratory Syndrome Coronavirus 2, SVM, Support Vector Machine, VAE, Variational Auto Encoder, WHO, World Health Organization, WPRO, Western Pacific Regional Office

Cardiology

Detection and classification of arrhythmia using an explainable deep learning model.

In Journal of electrocardiology

BACKGROUND : Early detection and intervention are the cornerstone of appropriate treatment of arrhythmia and prevention of complications and mortality. Although diverse deep learning models have been developed to detect arrhythmia, they have been criticized for their unexplainable nature. In this study, we developed an explainable deep learning model (XDM) to classify arrhythmia and validated its performance using diverse external validation data.

METHODS : In this retrospective study, the Sejong dataset comprising 86,802 electrocardiograms (ECGs) was used to develop and internally validate the XDM. The XDM, based on a neural-network-backed ensemble tree, was developed with six feature modules that are able to explain the reasons for its decisions. The model was externally validated using data from 36,961 ECGs from four non-restricted datasets.

RESULTS : During internal and external validation of the XDM, the average areas under the receiver operating characteristic curve (AUCs) using a 12-lead ECG for arrhythmia classification were 0.976 and 0.966, respectively. The XDM outperformed a previous simple multi-classification deep learning model that used the same method. During internal and external validation, the AUCs of explainability were 0.925-0.991.
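The reported AUCs can be computed directly from labels and model scores via the Mann-Whitney formulation, where AUC is the probability that a random positive case outranks a random negative one. A generic sketch, not the authors' evaluation code:

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive gets a
    higher score than a randomly chosen negative; ties count half.
    Quadratic in sample size -- fine for illustration, not production."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.976 thus means a model-scored arrhythmia case outranks a non-case about 97.6% of the time, regardless of any decision threshold.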

CONCLUSION : Our XDM successfully classified arrhythmia using diverse ECG formats and could effectively explain the reasons for its decisions. An explainable deep learning methodology can therefore improve accuracy compared to conventional deep learning methods, and the transparency of the XDM supports its application in clinical practice.

Jo Yong-Yeon, Kwon Joon-Myoung, Jeon Ki-Hyun, Cho Yong-Hyeon, Shin Jae-Hyun, Lee Yoon-Ji, Jung Min-Seung, Ban Jang-Hyeon, Kim Kyung-Hee, Lee Soo Youn, Park Jinsik, Oh Byung-Hee

2021-Jun-26

Arrhythmia, Artificial intelligence, Deep learning, Electrocardiography