Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

Surgery

A Novel Scoring System to Predict Length of Stay After Anterior Cervical Discectomy and Fusion.

In The Journal of the American Academy of Orthopaedic Surgeons

INTRODUCTION : The movement toward reducing healthcare expenditures has led to an increased volume of outpatient anterior cervical discectomy and fusions (ACDFs). Appropriateness for outpatient surgery can be gauged by the duration of recovery each patient will likely need.

METHODS : Patients undergoing 1- or 2-level ACDFs were retrospectively identified at a single Level I spine surgery referral institution. Length of stay (LOS) was categorized binarily as either less than two midnights or two or more midnights. The data were split into training (80%) and test (20%) sets. Two multivariate regressions and three machine learning models were developed to predict the probability of LOS ≥ 2 based on preoperative patient characteristics. Using each model, coefficients were computed for each risk factor from the training data set and used to create a calculable ACDF Predictive Scoring System (APSS). The performance of each APSS was then evaluated on the subsample of the data set withheld from training. Decision curve analysis was performed to evaluate benefit across probability thresholds for the best-performing model.
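The coefficient-to-score construction described above can be sketched in a few lines. This is a minimal illustration only: the risk-factor names, weights, intercept, and point scale below are hypothetical, since the abstract does not report the actual APSS weights.

```python
import math

# Hypothetical lasso logistic-regression coefficients (illustrative only;
# the paper's actual APSS weights are not given in the abstract).
COEFFS = {"asa_score": 0.60, "age_per_decade": 0.25, "two_level": 0.40,
          "depression": 0.30, "diabetes": 0.20}
INTERCEPT = -2.0
POINT_SCALE = 5  # points per unit of log-odds, a common scoring-system convention

def apss_points(risk_factors):
    """Convert model coefficients into integer points for each present risk factor."""
    return sum(round(COEFFS[f] * POINT_SCALE) for f in risk_factors)

def prob_los_ge_2(points):
    """Map a total score back to a predicted probability of LOS >= 2 midnights."""
    log_odds = INTERCEPT + points / POINT_SCALE
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Rounding coefficients onto a fixed point scale is what makes the score calculable at the bedside while staying monotone in the underlying predicted probability.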

RESULTS : In the final analysis, 1,516 patients had a LOS <2 and 643 had a LOS ≥2. Patient characteristics used for predictive modeling were American Society of Anesthesiologists score, age, body mass index, sex, procedure type, and history of chronic pulmonary disease, depression, diabetes, hypertension, and hypothyroidism. The best-performing APSS was modeled after a lasso regression. When applied to the withheld test data set, the APSS-lasso had an area under the receiver operating characteristic curve of 0.68, with a specificity of 0.78 and a sensitivity of 0.49. Calculated APSS scores ranged between 0 and 45 and corresponded to probabilities of LOS ≥2 between 4% and 97%.

CONCLUSION : Using classic statistics and machine learning, this scoring system provides a platform for stratifying patients undergoing ACDF into an inpatient or outpatient surgical setting.

Russo Glenn S, Canseco Jose A, Chang Michael, Levy Hannah A, Nicholson Kristen, Karamian Brian A, Mangan John, Fang Taolin, Vaccaro Alexander R, Kepler Christopher K

2021-Jan-07

General

Convolutional Neural Networks for Semantic Segmentation as a Tool for Multiclass Face Analysis in Thermal Infrared.

In Journal of nondestructive evaluation

Convolutional neural networks were used for multiclass segmentation in thermal infrared face analysis. The principle is based on existing image-to-image translation approaches, where each pixel in an image is assigned a class label. We show that established network architectures can be trained for the task of multiclass face analysis in thermal infrared. The created class annotations consisted of pixel-accurate locations of different face classes. Subsequently, the trained network can segment an acquired, unknown infrared face image into the defined classes. Furthermore, face classification in live image acquisition is shown, in order to display the relative temperature of the learned areas in real time. This allows pixel-accurate temperature face analysis, e.g. for detecting infections such as COVID-19. At the same time, our approach offers the advantage of concentrating on the relevant areas of the face: areas irrelevant for the relative temperature calculation, or accessories such as glasses, masks, and jewelry, are not considered. A custom database was created to train the network. The results were quantitatively evaluated with the intersection over union (IoU) metric. The methodology shown can be transferred to similar problems in more quantitative thermography tasks, such as materials characterization or quality control in production.
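The IoU metric used for evaluation has a simple definition: per class, the overlap between predicted and annotated pixels divided by their union. A minimal sketch (representing pixel sets as coordinate collections; the class names in the test are hypothetical):

```python
def iou(pred, target):
    """Intersection over union for one class, given the pixel coordinates
    predicted and annotated as belonging to that class."""
    pred, target = set(pred), set(target)
    union = pred | target
    if not union:
        return 1.0  # both empty: perfect agreement by convention
    return len(pred & target) / len(union)

def mean_iou(preds_by_class, targets_by_class):
    """Average IoU over all annotated classes, as in multiclass segmentation
    evaluation."""
    return sum(iou(preds_by_class[c], targets_by_class[c])
               for c in targets_by_class) / len(targets_by_class)
```

In practice this is computed over boolean masks per class; the set formulation above is equivalent and keeps the arithmetic explicit.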

Müller David, Ehlen Andreas, Valeske Bernd

2021

Artificial intelligence, Health monitoring, Intelligent sensors, Machine learning, Thermography

General

Sociological modeling of smart city with the implementation of UN sustainable development goals.

In Sustainability science

The COVID-19 pandemic, before mass vaccination, can be restrained only by limiting contacts between people, which makes the digital economy a key condition for survival. More than half of the world's population lives in urban areas, and many cities have already transformed into "smart" digital/virtual hubs. Digital services keep city life safe without an economic lockdown and unemployment. Urban society strives to be safe, sustainable, healthy, and to secure well-being. We set the task of constructing a hybrid sociological and technological concept of a smart city with matched, mutually complementary solutions. Our modeling, with the elaborated digital architectures and with a bionic solution for ensuring sufficient data governance, showed that a smart city, compared with a traditional city, is tightly interconnected internally like a social "organism". Society has entered a decisive decade during which the world will change by moving closer to the 2030 SDG targets as well as through the transformation of cities and their digital infrastructures. It is important to recognize the larger vector of sociological transformation, as smart cities are just a transition phase toward a human-centered personal space or smart home. The "atomization" of the world's urban population raises a gap problem in achieving the SDGs, because countries take different approaches to constructing digital architectures for smart cities or smart homes. The strategy of creating smart cities should bring each citizen closer to the SDGs at the individual level, embedding in personal space the principles of sustainable development and personal wellness.

Supplementary Information : The online version contains supplementary material available at 10.1007/s11625-020-00889-5.

Kolesnichenko Olga, Mazelis Lev, Sotnik Alexander, Yakovleva Dariya, Amelkin Sergey, Grigorevsky Ivan, Kolesnichenko Yuriy

2021-Jan-03

API-sociology, Community wellness, Logical artificial intelligence, Smart and healthy city, Sociology of smart city, Sustainable development goals

Oncology

Machine Learning in Liver Transplantation: a tool for some unsolved questions?

In Transplant international : official journal of the European Society for Organ Transplantation

Machine learning has recently been proposed as a useful tool in many fields of medicine, with the aim of increasing diagnostic and prognostic accuracy. Models based on machine learning have also been introduced in the setting of solid organ transplantation, where prognosis depends on a complex, multidimensional and non-linear relationship between variables pertaining to the donor, the recipient, and the surgical procedure. In the setting of liver transplantation, machine learning models have been developed to predict pre-transplant survival in patients with cirrhosis, to assess the best donor-to-recipient match during allocation, and to foresee postoperative complications and outcomes. This narrative review covers the role of machine learning in liver transplantation, highlighting strengths, pitfalls, and future perspectives.

Ferrarese Alberto, Sartori Giuseppe, Orrù Graziella, Frigo Anna Chiara, Pelizzaro Filippo, Burra Patrizia, Senzolo Marco

2021-Jan-11

acute liver failure, cirrhosis, liver transplantation, machine learning, neural network

General

Synthetic CT image generation of shape-controlled lung cancer using semi-conditional InfoGAN and its applicability for type classification.

In International journal of computer assisted radiology and surgery

PURPOSE : In recent years, the convolutional neural network (CNN), an artificial intelligence technology with superior image recognition, has become increasingly popular and is frequently used for classification tasks in medical imaging. However, the amount of labelled data available for classifying medical images is often significantly less than that for natural images, and handling rare diseases is often challenging. To overcome these problems, data augmentation has been performed using generative adversarial networks (GANs). However, a conventional GAN cannot effectively handle the various shapes of tumours because it generates images randomly. In this study, we introduced semi-conditional InfoGAN, which enables some labels to be added to InfoGAN, for the generation of shape-controlled tumour images. InfoGAN is a model derived from GAN that can represent object features in images without any labels.
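The semi-conditional idea above can be sketched at the level of the generator input: an InfoGAN generator receives random noise concatenated with structured latent codes, and the semi-conditional variant fixes some of those codes to known labels instead of sampling them. The dimensions and code meanings below (a categorical histological-type code plus continuous "diameter" and "chest wall" controls) are assumptions for illustration, not the paper's exact configuration.

```python
import random

def one_hot(index, size):
    """One-hot encoding for a categorical latent code."""
    v = [0.0] * size
    v[index] = 1.0
    return v

def make_latent(noise_dim, n_types, type_label=None,
                diameter=0.0, chest_wall=0.0, rng=random):
    """Build an InfoGAN-style generator input: noise plus structured codes.
    In the semi-conditional variant, the categorical code can be fixed to a
    known label (e.g. a histological type) rather than sampled, while
    continuous codes steer lesion shape at generation time."""
    noise = [rng.uniform(-1.0, 1.0) for _ in range(noise_dim)]
    if type_label is None:
        type_label = rng.randrange(n_types)  # unsupervised case: sample the code
    return noise + one_hot(type_label, n_types) + [diameter, chest_wall]
```

Sweeping the continuous codes while holding noise and label fixed is what yields the diameter- and chest-wall-controlled images described in the results.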

METHODS : Chest computed tomography images of 66 patients diagnosed with three histological types of lung cancer (adenocarcinoma, squamous cell carcinoma, and small cell lung cancer) were used for analysis. To investigate the applicability of the generated images, we classified the histological types of lung cancer using a CNN that was pre-trained with the generated images.

RESULTS : After training, InfoGAN was able to generate images that controlled the diameter of each lesion and the presence or absence of the chest wall. The classification accuracy of the pre-trained CNN was 57.7%, higher than that of a CNN trained only with real images (34.2%), suggesting the potential of image generation.

CONCLUSION : The applicability of semi-conditional InfoGAN for feature learning and representation in medical images was demonstrated in this study. InfoGAN can perform constant feature learning and generate images with a variety of shapes using a small dataset.

Toda Ryo, Teramoto Atsushi, Tsujimoto Masakazu, Toyama Hiroshi, Imaizumi Kazuyoshi, Saito Kuniaki, Fujita Hiroshi

2021-Jan-11

CNN, CT imaging, Classification, GAN, Image synthesis, Lung cancer

General

Does co-presence affect the way we perceive and respond to emotional interactions?

In Experimental brain research

This study compared how two virtual display conditions of human body expressions influenced explicit and implicit dimensions of emotion perception and response behavior in women and men. Two avatars displayed emotional interactions (angry, sad, affectionate, happy) in a "pictorial" condition, depicting the emotional interaction partners on a screen within a virtual environment, and a "visual" condition, allowing participants to share space with the avatars, thereby enhancing co-presence and agency. Following stimulus presentation, explicit valence perception and response tendency (i.e. the explicit tendency to avoid or approach the situation) were assessed on rating scales. Implicit responses, i.e. postural and autonomic responses to the observed interactions, were measured by means of postural displacement and changes in skin conductance. Results showed that self-reported presence differed between the pictorial and visual conditions; however, it was not correlated with skin conductance responses. Valence perception was only marginally influenced by the virtual condition and not at all by explicit response behavior. There were gender-mediated effects on postural response tendencies as well as gender differences in explicit response behavior, but not in valence perception. Exploratory analyses revealed a link between valence perception and preferred behavioral response in women but not in men. We conclude that the display condition seems to influence automatic motivational tendencies but not higher-level cognitive evaluations. Moreover, intragroup differences in explicit and implicit response behavior highlight the importance of individual factors beyond gender.

Bachmann Julia, Zabicki Adam, Gradl Stefan, Kurz Johannes, Munzert Jörn, Troje Nikolaus F, Krueger Britta

2021-Jan-11

Co-presence, Emotion perception, Explicit response behavior, Gender differences, Implicit response behavior