Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

General

CGNet: A graph-knowledge embedded convolutional neural network for detection of pneumonia.

In Information processing & management

Pneumonia is a global disease that causes high child mortality. The situation has been worsened by the outbreak of the new coronavirus named COVID-19, which has killed more than 983,907 people so far. People infected by the virus show symptoms such as fever and coughing, as well as pneumonia as the infection progresses. There is a public consensus that timely detection would benefit possible treatments and therefore help contain the spread of COVID-19. X-ray, an expedient imaging technique, has been widely used for the detection of pneumonia caused by COVID-19 and other viruses. To facilitate the diagnosis of pneumonia, we developed a deep learning framework, based on our proposed CGNet, for the binary task of classifying chest X-ray images as normal or pneumonia. CGNet has three components: feature extraction, graph-based feature reconstruction, and classification. We first use transfer learning to train state-of-the-art convolutional neural networks (CNNs) for binary classification; the trained CNNs then produce features for the following two components. Next, graph-based feature reconstruction combines features through a graph to reconstruct them. Finally, GNet, a shallow one-layer graph neural network, takes the combined features as input and classifies chest X-ray images as normal or pneumonia. Our model achieved the best accuracy of 0.9872, sensitivity of 1, and specificity of 0.9795 on a public pneumonia dataset of 5,856 chest X-ray images. To evaluate the performance of the proposed method on detection of pneumonia caused by COVID-19, we also tested it on a public COVID-19 CT dataset, where it achieved the highest performance, with an accuracy of 0.99, specificity of 1, and sensitivity of 0.98.

Yu Xiang, Wang Shui-Hua, Zhang Yu-Dong


COVID-19, Chest X-ray images, Feature reconstruction, Graph, Transfer learning
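The classification stage described in the abstract, a one-layer graph neural network over CNN-derived features, can be sketched in miniature. Everything below is illustrative: the tiny two-node graph, the 4-dimensional "CNN features", and the weight matrix are invented stand-ins, and the paper's actual CNN backbone and training are omitted.

```python
# Minimal sketch of the CGNet idea: CNN-derived feature vectors are combined
# through a graph (a hand-built adjacency matrix), then a one-layer graph
# neural network produces per-image class probabilities.
import math

def matmul(A, B):
    """Plain-Python matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def normalize_adjacency(A):
    """Row-normalize A + I so each node averages over itself and its neighbours."""
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    return [[v / sum(row) for v in row] for row in A_hat]

def gnn_layer(A, X, W):
    """One graph-convolution layer: softmax(normalize(A) @ X @ W), per node."""
    H = matmul(matmul(normalize_adjacency(A), X), W)
    out = []
    for row in H:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

# Two images whose (invented) 4-dim CNN features are linked in a 2-node graph.
A = [[0, 1], [1, 0]]
X = [[0.9, 0.1, 0.8, 0.2],   # image 1 features
     [0.2, 0.7, 0.1, 0.9]]   # image 2 features
W = [[1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, 1.0]]  # 4 features -> 2 classes

probs = gnn_layer(A, X, W)                         # each row sums to 1
preds = [max(range(2), key=lambda c: p[c]) for p in probs]  # 0 = normal, 1 = pneumonia
```

Row-normalizing the adjacency (with self-loops added) is one common graph-convolution choice; the paper does not specify its normalization, so this is an assumption.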

General

Estimation of laryngeal closure duration during swallowing without invasive X-rays.

In Future generations computer systems: FGCS

Laryngeal vestibule (LV) closure is a critical physiologic event during swallowing, since it is the first line of defense against a food bolus entering the airway. Identifying the laryngeal vestibule status, including closure, reopening, and closure duration, provides indispensable references for assessing the risk of dysphagia and neuromuscular function. However, commonly used radiographic examinations, known as videofluoroscopic swallowing studies, are highly constrained by their radiation exposure and cost. Here, we introduce a non-invasive sensor-based system that acquires high-resolution cervical auscultation signals from the neck and uses advanced deep learning techniques to detect LV behaviors. The deep learning algorithm, which combines convolutional and recurrent neural networks, was developed with a dataset of 588 swallows from 120 patients with suspected dysphagia and further clinically tested on 45 samples from 16 healthy participants. For classifying the LV closure and opening statuses, our method achieved accuracies of 78.94% and 74.89% on these two datasets, suggesting the feasibility of using sensor signals for LV prediction without traditional videofluoroscopic screening. The sensor-supported system offers a broadly applicable computational approach for clinical diagnosis and biofeedback in patients with swallowing disorders, without the use of radiographic examination.

Mao Shitong, Sabry Aliaa, Khalifa Yassin, Coyle James L, Sejdic Ervin


Deep learning, Dysphagia, Health-care, High resolution cervical auscultation (HRCA), Laryngeal vestibule closure
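The "convolutional plus recurrent" architecture the abstract describes can be illustrated at toy scale: a 1-D convolution extracts local patterns from an auscultation-like signal, and a simple recurrent cell integrates them over time into a binary closed/open decision. The signal, kernel, weights, and labels below are all invented for illustration; the paper's actual network is far larger and is trained on real sensor data.

```python
# Toy sketch of a convolutional + recurrent classifier for a 1-D sensor signal.
import math

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def rnn_classify(features, w_in=1.0, w_rec=0.5):
    """Elman-style cell: h_t = tanh(w_in * x_t + w_rec * h_{t-1}); last h decides."""
    h = 0.0
    for x in features:
        h = math.tanh(w_in * x + w_rec * h)
    return "closed" if h > 0 else "open"

signal = [0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.9, 0.3]   # invented auscultation frame
features = conv1d(signal, kernel=[0.5, 1.0, 0.5])    # smooths / detects energy bursts
label = rnn_classify(features)
```

The convolution here plays the role of the feature extractor and the recurrent cell the role of the temporal integrator; a real system would stack many such layers and learn the weights from labeled swallows.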

General

Using artificial intelligence to overcome over-indebtedness and fight poverty.

In Journal of business research

This research examines how artificial intelligence may contribute to better understanding and overcoming over-indebtedness in contexts of high poverty risk. It uses Automated Machine Learning (AutoML) on a field database of 1,654 over-indebted households to identify distinguishable clusters and to predict their risk factors. First, unsupervised machine learning using Self-Organizing Maps generated three over-indebtedness clusters: low-income (31.27%), low credit control (37.40%), and crisis-affected households (31.33%). Second, supervised machine learning with an exhaustive grid search over hyperparameters (32,730 predictive models) suggests that a Nu-Support Vector Machine had the best accuracy (89.5%) in predicting families' over-indebtedness risk factors. By proposing an AutoML approach to over-indebtedness, our research contributes both theoretically and methodologically to current models of scarcity, with important practical implications for business research and society. Our findings also point to novel ways to identify and characterize poverty risk at earlier stages, allowing customized interventions for different profiles of over-indebtedness.

Boto Ferreira Mário, Costa Pinto Diego, Maurer Herter Márcia, Soro Jerônimo, Vanneschi Leonardo, Castelli Mauro, Peres Fernando


Artificial intelligence, Automated machine learning, Credit control, Economic austerity, Over-indebtedness, Poverty risk
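The unsupervised clustering step above can be sketched with a stripped-down Self-Organizing Map whose neighbourhood is collapsed to the single best-matching unit (which makes it equivalent to online k-means). The household feature vectors (normalized income, credit-control score), learning rate, and epoch count are invented; a real AutoML pipeline would tune these and use a full SOM grid with a neighbourhood function.

```python
# Illustrative mini-SOM: assign made-up household vectors to one of three units.
import random

def train_som(data, n_units=3, epochs=50, lr=0.3, seed=0):
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]  # init from data points
    for _ in range(epochs):
        for x in data:
            # Best-matching unit = nearest prototype (squared Euclidean distance).
            bmu = min(range(n_units),
                      key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], x)))
            # Move the winning prototype toward the sample.
            units[bmu] = [a + lr * (b - a) for a, b in zip(units[bmu], x)]
    return units

def assign(x, units):
    return min(range(len(units)),
               key=lambda u: sum((a - b) ** 2 for a, b in zip(units[u], x)))

# Invented households: (normalized income, credit-control score).
data = [(0.1, 0.8), (0.2, 0.9), (0.9, 0.2), (0.8, 0.1), (0.5, 0.5), (0.4, 0.6)]
units = train_som(data)
clusters = [assign(x, units) for x in data]
```

In the paper, each resulting cluster (low-income, low credit control, crisis-affected) would then be profiled and fed into the supervised risk-prediction stage.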

General

COSMO-RS-Based Descriptors for the Machine Learning-Enabled Screening of Nucleotide Analogue Drugs against SARS-CoV-2.

In The journal of physical chemistry letters; h5-index 129.0

Chemical similarity-based approaches employed to repurpose or develop new treatments for emerging diseases, such as COVID-19, correlate molecular structure-based descriptors of drugs with those of a physiological counterpart or clinical phenotype. We propose novel descriptors based on COSMO-RS (short for conductor-like screening model for real solvents) σ-profiles for enhanced drug screening enabled by machine learning (ML). The descriptors' performance is illustrated here for nucleotide analogue drugs that inhibit the ribonucleic acid-dependent ribonucleic acid polymerase, key to viral transcription and genome replication. The COSMO-RS-based descriptors account for both chemical reactivity and structure, and are more effective for ML-based screening than fingerprints based on molecular structure and simple physical/chemical properties. The descriptors are evaluated using principal component analysis, an unsupervised ML technique. Our results correlate the active monophosphate forms of the leading drug remdesivir and the prospective drug EIDD-2801 with nucleotides, followed by other promising drugs, and are superior to those from molecular structure-based descriptors and molecular docking. The COSMO-RS-based descriptors could help accelerate drug discovery for the treatment of emerging diseases.

Gusarov Sergey, Stoyanov Stanislav R
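The evaluation step named in the abstract, principal component analysis of descriptor vectors, can be sketched via power iteration on the covariance matrix. The four 3-dimensional "σ-profile-like" descriptor vectors below are invented (real COSMO-RS σ-profiles have many more bins), so the resulting components illustrate only the mechanics, not the paper's findings.

```python
# First principal component of invented descriptor vectors via power iteration.
import math

def first_pc(X, iters=200):
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]  # center the data
    # Sample covariance matrix.
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                      # power iteration: v <- Cv / |Cv|
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    scores = [sum(x * y for x, y in zip(row, v)) for row in Xc]  # PC1 projections
    return v, scores

# Invented descriptor vectors for four hypothetical drug candidates
# (two similar pairs, so PC1 should separate them).
X = [[0.2, 1.1, 0.3],
     [0.3, 1.0, 0.4],
     [1.2, 0.2, 1.1],
     [1.1, 0.3, 1.0]]
pc1, scores = first_pc(X)
```

Compounds whose PC1 scores fall close together are "similar" under these descriptors, which is the sense in which the paper groups active drug forms with natural nucleotides.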


General

Obstetric ultrasound: where are we and where are we going?

In Ultrasonography (Seoul, Korea)

Diagnostic ultrasound (DUS) is, arguably, the most common technique used in obstetric practice. From A-mode, first described by Ian Donald for gynecology in the late 1950s, to B-mode in the 1970s, real-time and gray-scale imaging in the early 1980s, Doppler a little later, sophisticated color Doppler in the 1990s, and three-dimensional/four-dimensional ultrasound in the 2000s, DUS has never ceased to be closely associated with the practice of obstetrics. The latest innovation is the use of artificial intelligence, which will, undoubtedly, take an increasing role in all aspects of our lives, including medicine and, specifically, obstetric ultrasound. In addition, in the future, new visualization methods may be developed, training methods expanded, and workflow and ergonomics improved.

Abramowicz Jacques S


3-D, 4-D, Artificial intelligence, Doppler, Obstetrics, Training, Ultrasound

General

Potential of Augmented Reality Platforms to Improve Individual Hearing Aids and to Support More Ecologically Valid Research.

In Ear and hearing; h5-index 39.0

An augmented reality (AR) platform combines several technologies in a system that can render individual "digital objects" that can be manipulated for a given purpose. In the audio domain, these may be generated, for example, by speaker separation, noise suppression, and signal enhancement. Access to the "digital objects" could be used to augment the auditory objects that the user wants to hear better. Such AR platforms, in conjunction with traditional hearing aids, may help close the gap for people with hearing loss through multimodal sensor integration, leveraging extensive current artificial intelligence research and machine-learning frameworks. This could take the form of an attention-driven signal enhancement and noise suppression platform with context awareness, which would improve the interpersonal communication experience in complex real-life situations. In that sense, an AR platform could serve as a frontend to current and future hearing solutions: the AR device would enhance the signals to be attended, while hearing amplification would still be handled by the hearing aids. In this article, suggestions are made about why AR platforms may offer ideal affordances to compensate for hearing loss, and how research-focused AR platforms could help toward a better understanding of the role of hearing in everyday life.

Mehra Ravish, Brimijoin Owen, Robinson Philip, Lunner Thomas
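The "digital objects" idea above amounts to re-mixing separated sources under an attention signal: boost the attended source, attenuate the rest, and let the hearing aid handle amplification. The sketch below is a minimal illustration under invented assumptions: the two source signals, the attention scores (which in practice might come from gaze or head pose), and the gain values are all made up.

```python
# Attention-driven re-mix of separated audio "digital objects".
def remix(sources, attention, boost_db=6.0, cut_db=-12.0):
    """Apply a per-source gain (in dB): boost the attended source, cut the rest."""
    attended = max(attention, key=attention.get)
    out = [0.0] * len(next(iter(sources.values())))
    for name, samples in sources.items():
        gain = 10 ** ((boost_db if name == attended else cut_db) / 20)
        out = [o + gain * s for o, s in zip(out, samples)]
    return attended, out

# Invented separated sources (short sample buffers) and attention scores.
sources = {
    "talker": [0.1, 0.2, -0.1, 0.05],
    "traffic": [0.3, -0.2, 0.25, -0.3],
}
attention = {"talker": 0.9, "traffic": 0.1}   # e.g. estimated from gaze
who, mixed = remix(sources, attention)
```

A real platform would do this per frame with continuously varying, smoothed gains; hard-switching gains as above would produce audible artifacts.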