
General

Automated COVID-19 Grading With Convolutional Neural Networks in Computed Tomography Scans: A Systematic Comparison.

In IEEE transactions on artificial intelligence

Amidst the ongoing pandemic, the assessment of computed tomography (CT) images for COVID-19 presence can exceed the workload capacity of radiologists. Several studies have addressed this issue by automating COVID-19 classification and grading from CT images with convolutional neural networks (CNNs). Many of these studies reported initial results of algorithms assembled from commonly used components; however, the choice of components was often pragmatic rather than systematic, and systems were not compared fairly across papers. We systematically investigated the effectiveness of using 3-D CNNs instead of 2-D CNNs for seven commonly used architectures, including DenseNet, Inception, and ResNet variants. For the best-performing architecture, we furthermore investigated the effect of initializing the network with pretrained weights, providing automatically computed lesion maps as additional network input, and predicting a continuous instead of a categorical output. A 3-D DenseNet-201 with these components achieved an area under the receiver operating characteristic curve (AUC) of 0.930 on our test set of 105 CT scans and an AUC of 0.919 on a publicly available set of 742 CT scans, a substantial improvement over a previously published 2-D CNN. This article provides insights into the performance benefits of various components for COVID-19 classification and grading systems. We have created a public challenge to allow for a fair comparison between the results of this and future research.
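The scan-level AUC comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's pipeline: the labels and model scores below are synthetic stand-ins for per-scan CNN outputs.

```python
# Sketch: comparing two scan-level classifiers by ROC AUC, as in the
# paper's 2-D vs 3-D CNN comparison. All numbers here are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_scans = 105                          # size of the paper's test set
labels = rng.integers(0, 2, n_scans)   # 1 = COVID-19 positive (synthetic)

# Simulated continuous outputs: the "3-D" model separates classes better.
scores_2d = labels * 0.6 + rng.normal(0.2, 0.35, n_scans)
scores_3d = labels * 1.2 + rng.normal(0.2, 0.35, n_scans)

auc_2d = roc_auc_score(labels, scores_2d)
auc_3d = roc_auc_score(labels, scores_3d)
print(f"2-D AUC: {auc_2d:.3f}, 3-D AUC: {auc_3d:.3f}")
```

Continuous scores (as in the paper's continuous-output variant) plug directly into `roc_auc_score`, since AUC only depends on the ranking of scans.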

Coen de Vente, Luuk H Boulogne, Kiran Vaidhya Venkadesh, Cheryl Sital, Nikolas Lessmann, Colin Jacobs, Clara I Sanchez, Bram van Ginneken


CO-RADS, 3-D convolutional neural network (CNN), COVID-19, deep learning, medical imaging

Public Health

Contribution of Deep-Learning Techniques Toward Fighting COVID-19: A Bibliometric Analysis of Scholarly Production During 2020.

In IEEE access : practical innovations, open solutions

COVID-19 has dramatically affected various aspects of human society, with worldwide repercussions. First, it has created a serious public health crisis, resulting in millions of deaths. The global economy, social coexistence, psychological status, mental health, and human-environment dynamics have also been seriously affected. Indeed, abrupt changes to daily life have been enforced, starting with mandatory quarantine and the application of biosafety measures. Due to the magnitude of these effects, research efforts from different fields were rapidly concentrated on the current pandemic to mitigate its impact. Among these fields, Artificial Intelligence (AI) and Deep Learning (DL) have supported many research papers aimed at combating COVID-19. The present work presents a bibliometric analysis of this scholarly production during 2020. Specifically, we analyze quantitative and qualitative indicators that give insight into the factors that have allowed papers to reach a significant impact on traditional metrics as well as alternative ones registered in social networks, digital mainstream media, and public policy documents. In this regard, we study the correlations between these different metrics and attributes. Finally, we analyze how the latest DL advances have been exploited in the context of the COVID-19 situation.
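The metric-correlation analysis described above can be illustrated in a few lines. This is a hedged sketch under invented data, not the study's corpus: the citation and mention counts below are fabricated examples of traditional vs. alternative metrics.

```python
# Sketch: rank correlation between a traditional metric (citations)
# and an altmetric-style indicator (social-media mentions).
# All values are synthetic placeholders, not the study's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
citations = rng.poisson(20, size=200)
# Mentions loosely coupled to citations plus independent noise.
mentions = citations * 2 + rng.poisson(15, size=200)

rho, pvalue = spearmanr(citations, mentions)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3g})")
```

Spearman's rho is a natural choice for bibliometric counts, since both citations and altmetrics are heavy-tailed and only their ranking is comparable.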

Janneth Chicaiza, Stephany D Villota, Paola G Vinueza-Naranjo, Ruben Rumipamba-Zambrano


Bibliometric analysis, COVID-19, deep learning, scholarly production

General

Disinformation detection on social media: An integrated approach.

In Multimedia tools and applications

The emergence of social media platforms has amplified the dissemination of false information in various forms. Social media gives rise to virtual societies by providing users with freedom of expression. Because of the echo chambers present on social media, social science theories play a vital role in explaining the spread of false news. To this end, we provide a comprehensive framework adapted from several scholarly studies. The framework classifies information into various types, namely real, disinformation, and satire, based on both authenticity and intention. The process highlights the use of interdisciplinary approaches: fundamental theories of social science are integrated with modern computational tools and techniques. Some of these theories suggest that malicious users write fabricated content in a distinctive style to attract an audience. Style-based methods evaluate intention, i.e., whether the content is written with an intent to mislead the audience. However, writing style can be deceptive; it is therefore important to involve user-oriented social information to improve model strength. The paper thus uses an integrated approach that combines style-based and propagation-based features, thirty-one in total. The extracted features fall into ten categories: relative frequency, quantity, complexity, uncertainty, sentiment, subjectivity, diversity, informality, additional, and popularity. The features were iteratively used to train supervised classifiers, and the best-correlated ones were selected using an ANOVA test. Our experimental results show that the selected features are able to distinguish real news from disinformation and satirical news, and that an ensemble machine learning model outperformed the other models on the developed multi-labelled corpus.
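The pipeline described above (hand-crafted features, ANOVA-based selection, ensemble classifier) can be sketched with scikit-learn. The data and the thirty-one features are synthetic placeholders, and a random forest stands in for whichever ensemble the paper used; this only illustrates the shape of the approach.

```python
# Sketch: ANOVA (f_classif) feature selection feeding an ensemble
# classifier, on synthetic stand-ins for the paper's 31 features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Three classes standing in for real / disinformation / satire.
X, y = make_classification(n_samples=600, n_features=31, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(SelectKBest(f_classif, k=15),
                     RandomForestClassifier(n_estimators=200, random_state=0))
pipe.fit(X_tr, y_tr)
acc = pipe.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Putting the ANOVA selector inside the pipeline ensures the F-test is fit only on training folds, avoiding selection leakage into the held-out evaluation.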

Shubhangi Rastogi, Divya Bansal


Covid-19, Disinformation, Ensemble, Fake, Machine learning, Neural network, Satire

General

Crowd dynamics research in the era of Covid-19 pandemic: Challenges and opportunities.

In Safety science

With the issues of crowd control and physical distancing becoming central to disease prevention measures, one would expect crowd research to have become a focus of attention during the Covid-19 pandemic. However, I show, based on a variety of metrics, that not only has this not been the case, but the first two years of the pandemic have posed an indisputable setback to the development and growth of crowd science. Without intervention, this could worsen further and cause a long-lasting recession in the field. This article, in addition to documenting and highlighting this issue, aims to outline potential avenues through which crowd research can reshape itself in the era of the Covid-19 pandemic, maintain its pre-pandemic momentum, and even further expand the diversity of its topics. Despite the significant changes the pandemic has brought to human life, issues related to the congregation and mobility of pedestrians, building fires, crowd incidents, rallying crowds, and the like have not disappeared from societies and remain relevant. Moreover, the diversity of pandemic-related problems itself creates rich ground for novel scientific discoveries, which could establish fresh dimensions in crowd dynamics research. These potential new dimensions extend to all areas of the field, including numerical and experimental investigations, crowd psychology, and applications of computer vision and artificial intelligence methods in crowd management. The Covid-19 pandemic may have posed challenges to crowd researchers, but it has also created ample opportunities, as further evidenced by the efforts taken thus far in pandemic-related crowd research.

Milad Haghani


Covid-19, Crowd dynamics, Evacuation dynamics, Pandemic, Pedestrian dynamics

General

Simple Regularisation for Uncertainty-Aware Knowledge Distillation

ArXiv Preprint

Accounting for the uncertainty estimates of modern neural networks (NNs) is one of the most important steps towards deploying machine learning systems in meaningful real-world applications such as medicine, finance, or autonomous systems. At the moment, ensembles of different NNs constitute the state of the art in both accuracy and uncertainty estimation across different tasks. However, ensembles of NNs are impractical under real-world constraints, since their computation and memory consumption scale linearly with the size of the ensemble, which increases their latency and deployment cost. In this work, we examine a simple regularisation approach for distribution-free knowledge distillation of an ensemble of machine learning models into a single NN. The aim of the regularisation is to preserve the diversity, accuracy, and uncertainty estimation characteristics of the original ensemble without any intricacies such as fine-tuning. We demonstrate the generality of the approach on combinations of toy data, SVHN/CIFAR-10, simple to complex NN architectures, and different tasks.

Martin Ferianc, Miguel Rodrigues


Pathology

Explainable Biomarkers for Automated Glomerular and Patient-Level Disease Classification.

In Kidney360

Pathologists use multiple microscopy modalities to assess renal biopsy specimens. Besides the usual diagnostic features, some changes are too subtle to be properly defined. Computational approaches have the potential to systematically quantitate subvisual clues, provide pathogenetic insight, and link to clinical outcomes. To this end, a proof-of-principle study is presented demonstrating that explainable biomarkers derived through machine learning can distinguish between glomerular disorders at the light-microscopy level. The proposed system used image analysis techniques and extracted 233 explainable biomarkers related to color, morphology, and microstructural texture. Traditional machine learning was then used to classify minimal change disease (MCD), membranous nephropathy (MN), and thin basement membrane nephropathy (TBMN) on a glomerular and patient-level basis. The final model combined the Gini feature importance set and a linear discriminant analysis classifier. Six morphologic features (nuclei-to-glomerular tuft area, nuclei-to-glomerular area, glomerular tuft thickness greater than ten, glomerular tuft thickness greater than three, total glomerular tuft thickness, and glomerular circularity) and four microstructural texture features (luminal contrast using wavelets, nuclei energy using wavelets, nuclei variance using color vector LBP, and glomerular correlation using GLCM) were, together, the best-performing biomarkers. Accuracies of 77% and 87% were obtained for glomerular and patient-level classification, respectively. Computational methods, using explainable glomerular biomarkers, have diagnostic value and are compatible with our existing knowledge of disease pathogenesis. Furthermore, this algorithm can be applied to clinical datasets for novel prognostic and mechanistic biomarker discovery.
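The final model described above (Gini feature importances followed by a linear discriminant analysis classifier) can be sketched as follows. The data are synthetic placeholders for the 233 biomarkers, a random forest supplies the Gini importances, and the three classes stand in for MCD, MN, and TBMN; none of this reproduces the study's actual features or results.

```python
# Sketch: rank features by Gini importance (random forest), keep the
# top ones, and classify with linear discriminant analysis.
# Synthetic stand-in data for the 233 explainable biomarkers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=233, n_informative=10,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# Gini importances from an impurity-based ensemble rank the biomarkers.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:10]

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X[:, top], y, cv=5).mean()
print(f"cross-validated accuracy on top-10 features: {acc:.2f}")
```

Reducing 233 candidate biomarkers to a short ranked list before fitting a simple linear classifier is what keeps the resulting model interpretable, which matches the study's emphasis on explainable biomarkers.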

Matthew Nicholas Basso, Moumita Barua, Rohan John, April Khademi


basic science, computational pathology, explainable biomarkers, glomerular and tubulointerstitial diseases, machine learning, membranous nephropathy, minimal change disease, thin-basement membrane nephropathy