
Public Health

Liquid Health. Medicine in the age of surveillance capitalism.

In Social science & medicine (1982)

Digital health technologies transform practices, roles, and relationships in medicine. New possibilities for ubiquitous, constant data collection and real-time data processing enable more personalized health services. These technologies might also allow users to actively participate in health practices, potentially changing the role of patients from passive receivers of healthcare to active agents. The crucial driving force of this transformation is the implementation of data-intensive surveillance and monitoring as well as self-monitoring technologies. Some commentators use terms like revolution, democratization, and empowerment to describe this transformation process in medicine. The public debate, as well as most of the ethical discourse on digital health, tends to focus on the technologies themselves, mostly ignoring the economic framework of their design and implementation. Analyzing the transformation process connected to digital health technologies requires an epistemic lens that also considers this economic framework, which I argue is surveillance capitalism. This paper introduces the concept of liquid health as such an epistemic lens. Liquid health is based on Zygmunt Bauman's framing of modernity as a process of liquefaction that dissolves traditional norms and standards, roles, and relations. By using liquid health as an epistemic lens, I aim to show how digital health technologies reshape concepts of health and illness, change the scope of the medical domain, and liquefy the roles and relationships that surround health and healthcare. The basic hypothesis is that although digital health technologies can lead to personalization of treatment and empowerment of users, their economic framework of surveillance capitalism may undermine these very goals. The concept of liquid health allows us to better understand and describe practices of health and healthcare that are shaped by digital technologies and the specific economic practices they are inseparably attached to.

Rubeis Giovanni

2023-Feb-25

Artificial intelligence, Big data, Digital health, Ethics, Surveillance capitalism, Zygmunt Bauman

General

Convolutional neural network classifies visual stimuli from cortical response recorded with wide-field imaging in mice.

In Journal of neural engineering; h5-index 52.0

OBJECTIVE : The optic nerve is a good location for a visual neuroprosthesis: it can be targeted when a subject cannot receive a retinal prosthesis, and it is less invasive than a cortical implant. The effectiveness of an electrical neuroprosthesis depends on the combination of stimulation parameters, which must be optimized; one optimization strategy is closed-loop stimulation using the evoked cortical response as feedback. This requires identifying target cortical activation patterns and associating the cortical activity with the visual stimuli present in the visual field of the subjects. Visual stimulus decoding should be performed over large areas of the visual cortex, with a method as translational as possible so the study can be shifted to human subjects in the future. The aim of this work is to develop an algorithm that meets these requirements and can be leveraged to automatically associate a cortical activation pattern with the visual stimulus that generated it.

APPROACH : Three mice were presented with 10 different visual stimuli, and their primary visual cortex responses were recorded using wide-field calcium imaging. Our decoding algorithm relies on a convolutional neural network (CNN) trained to classify the visual stimuli from the corresponding wide-field images. Several experiments were performed to identify the best training strategy and to investigate the possibility of generalization.

MAIN RESULTS : The best classification accuracy was 75.38% ± 4.77%, obtained by pre-training the CNN on the MNIST digits dataset and fine-tuning it on our dataset. Generalization was possible by pre-training the CNN on the Mouse 1 dataset and fine-tuning it on Mouse 2 and Mouse 3, with accuracies of 64.14% ± 10.81% and 51.53% ± 6.48%, respectively.

SIGNIFICANCE : The combination of wide-field calcium imaging and CNNs can be used to classify cortical responses to simple visual stimuli and might be a viable alternative to existing decoding methodologies. It also allows cortical activation to serve as reliable feedback in future optic nerve stimulation experiments.
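As a rough illustration of the training strategy described above (pre-train on MNIST, then fine-tune on wide-field images), here is a minimal, hypothetical PyTorch sketch. The architecture, the 28×28 input size, and the random tensors standing in for pre-processed calcium frames are all assumptions; the paper's actual network and pre-processing are not reproduced here.

```python
# Hypothetical PyTorch sketch of the pre-train/fine-tune strategy: the
# architecture, 28x28 input size, and the random stand-in data are all
# assumptions, not the paper's actual network or pre-processing.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets, transforms

class StimulusCNN(nn.Module):
    """Small CNN: two conv blocks followed by a linear classification head."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Stage 1: pre-train on MNIST (10 digit classes, 28x28 grayscale).
mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
model = StimulusCNN(n_classes=10)
train(model, DataLoader(mnist, batch_size=64, shuffle=True), epochs=1)

# Stage 2: fine-tune on the 10-stimulus wide-field dataset. Random tensors
# stand in for pre-processed calcium frames resized to 28x28.
widefield = TensorDataset(torch.randn(200, 1, 28, 28),
                          torch.randint(0, 10, (200,)))
model.classifier = nn.Linear(32 * 7 * 7, 10)  # re-initialize the head only
train(model, DataLoader(widefield, batch_size=16, shuffle=True),
      epochs=10, lr=1e-4)
```

Re-initializing only the classification head while keeping the pre-trained convolutional features is one common way to transfer across such dissimilar domains.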

De Luca Daniela, Moccia Sara, Lupori Leonardo, Mazziotti Raffaele, Pizzorusso Tommaso, Micera Silvestro

2023-Mar-09

deep learning, optic nerve, visual prostheses, visual stimuli decoding, wide-field imaging

General

High-Density Guide RNA Tiling and Machine Learning for Designing CRISPR Interference in Synechococcus sp. PCC 7002.

In ACS synthetic biology

While CRISPRi was previously established in Synechococcus sp. PCC 7002 (hereafter 7002), the design principles for guide RNA (gRNA) effectiveness remain largely unknown. Here, 76 strains of 7002 were constructed with gRNAs targeting three reporter systems to evaluate features that impact gRNA efficiency. Correlation analysis of the data revealed that important features of gRNA design include the position relative to the start codon, GC content, protospacer adjacent motif (PAM) site, minimum free energy, and targeted DNA strand. Unexpectedly, some gRNAs targeting upstream of the promoter region showed small but significant increases in reporter expression, and gRNAs targeting the terminator region showed greater repression than gRNAs targeting the 3' end of the coding sequence. Machine learning algorithms enabled prediction of gRNA effectiveness, with Random Forest having the best performance across all training sets. This study demonstrates that high-density gRNA data and machine learning can improve gRNA design for tuning gene expression in 7002.
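As a sketch of how such design features can drive a Random Forest predictor, the snippet below trains scikit-learn's RandomForestRegressor on the features named above. The data are synthetic and the feature encoding is an assumption; a real pipeline would also encode the PAM site categorically (e.g. one-hot) and use measured repression values.

```python
# Hypothetical scikit-learn sketch: predicting gRNA effectiveness from the
# design features named above. Values are synthetic; a real pipeline would
# also encode the PAM site and use measured repression scores.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 76  # the study built 76 strains; these values are synthetic stand-ins
X = pd.DataFrame({
    "position_from_start": rng.integers(-100, 500, n),  # nt relative to ATG
    "gc_content":          rng.uniform(0.2, 0.8, n),
    "min_free_energy":     rng.uniform(-12.0, 0.0, n),  # kcal/mol
    "template_strand":     rng.integers(0, 2, n),       # 1 = template strand
})
y = rng.uniform(0.0, 1.0, n)  # synthetic repression efficiency (0 = none)

# Random Forest, reported in the paper as the best performer, with 5-fold CV.
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

# Feature importances indicate which design features drive the predictions.
model.fit(X, y)
for name, imp in zip(X.columns, model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```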

Dallo Tessa, Krishnakumar Raga, Kolker Stephanie D, Ruffing Anne M

2023-Mar-09

CRISPRi, Synechococcus, Synechococcus sp. PCC 7002, cyanobacteria, gRNA design, machine learning

General

Using Explainable Artificial Intelligence to Predict Potentially Preventable Hospitalizations: A Population-Based Cohort Study in Denmark.

In Medical care

BACKGROUND : The growing aging population and limited healthcare resources have placed new demands on the healthcare sector. Reducing the number of hospitalizations has become a political priority in many countries, and special focus has been directed at potentially preventable hospitalizations.

OBJECTIVES : We aimed to develop an artificial intelligence (AI) prediction model for potentially preventable hospitalizations in the coming year, and to apply explainable AI to identify predictors of hospitalization and their interaction.

METHODS : We used the Danish CROSS-TRACKS cohort and included citizens in 2016-2017. We predicted potentially preventable hospitalizations within the following year using the citizens' sociodemographic characteristics, clinical characteristics, and health care utilization as predictors. Extreme gradient boosting was used to predict potentially preventable hospitalizations, with Shapley additive explanations (SHAP) values serving to explain the impact of each predictor. We reported the area under the receiver operating characteristic curve, the area under the precision-recall curve, and 95% confidence intervals (CI) based on five-fold cross-validation. (A hedged code sketch of this pipeline follows the abstract.)

RESULTS : The best performing prediction model showed an area under the receiver operating characteristic curve of 0.789 (CI: 0.782-0.795) and an area under the precision-recall curve of 0.232 (CI: 0.219-0.246). The predictors with the highest impact on the prediction model were age, prescription drugs for obstructive airway diseases, antibiotics, and use of municipality services. We found an interaction between age and use of municipality services, suggesting that citizens aged 75+ years receiving municipality services had a lower risk of potentially preventable hospitalization.

CONCLUSION : AI is suitable for predicting potentially preventable hospitalizations. The municipality-based health services seem to have a preventive effect on potentially preventable hospitalizations.
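For readers who want to see the shape of this pipeline in code, here is a minimal, hypothetical sketch: gradient boosting on a synthetic cohort, five-fold cross-validation reporting AUROC and AUPRC, and SHAP values for per-predictor explanations. The predictor names and data are placeholders; the CROSS-TRACKS data and the authors' implementation are not reproduced here.

```python
# Hypothetical sketch of the modelling approach described in METHODS.
# Predictor names and the synthetic outcome are placeholders only.
import numpy as np
import pandas as pd
import shap
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "age":                   rng.integers(18, 100, n),
    "airway_drug":           rng.integers(0, 2, n),  # obstructive airway Rx
    "antibiotics":           rng.integers(0, 2, n),
    "municipality_services": rng.integers(0, 2, n),
})
# Synthetic outcome: potentially preventable hospitalization within one year.
logit = (0.04 * (X["age"] - 60) + 0.8 * X["airway_drug"]
         - 0.5 * X["municipality_services"]).to_numpy()
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Five-fold cross-validation, reporting AUROC and AUPRC as in the paper.
aurocs, auprcs = [], []
for train_idx, test_idx in StratifiedKFold(5, shuffle=True,
                                           random_state=0).split(X, y):
    model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    model.fit(X.iloc[train_idx], y[train_idx])
    p = model.predict_proba(X.iloc[test_idx])[:, 1]
    aurocs.append(roc_auc_score(y[test_idx], p))
    auprcs.append(average_precision_score(y[test_idx], p))
print(f"AUROC {np.mean(aurocs):.3f}, AUPRC {np.mean(auprcs):.3f}")

# SHAP values explain each predictor's contribution per citizen (here for the
# model from the last fold); interactions such as age x municipality services
# can be inspected via shap_interaction_values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per predictor:", np.abs(shap_values).mean(axis=0))
```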

Riis Anders Hammerich, Kristensen Pia Kjær, Lauritsen Simon Meyer, Thiesson Bo, Jørgensen Marianne Johansson

2023-Apr-01

Public Health

Ethics Principles for Artificial Intelligence-Based Telemedicine for Public Health.

In American journal of public health; h5-index 90.0

The use of artificial intelligence (AI) in the field of telemedicine has grown exponentially over the past decade, along with the adoption of AI-based telemedicine to support public health systems. Although AI-based telemedicine can open up novel opportunities for the delivery of clinical health and care and become a strong aid to public health systems worldwide, it also comes with ethical risks that should be detected, prevented, or mitigated for the responsible use of AI-based telemedicine in and for public health. However, despite the current proliferation of AI ethics frameworks, thus far, none have been developed for the design of AI-based telemedicine, especially for the adoption of AI-based telemedicine in and for public health. We aimed to fill this gap by mapping the most relevant AI ethics principles for AI-based telemedicine for public health and by showing the need to revise them via major ethical themes emerging from bioethics, medical ethics, and public health ethics toward the definition of a unified set of 6 AI ethics principles for the implementation of AI-based telemedicine. (Am J Public Health. Published online ahead of print March 9, 2023:e1-e8. https://doi.org/10.2105/AJPH.2022.307225).

Tiribelli Simona, Monnot Annabelle, Shah Syed F H, Arora Anmol, Toong Ping J, Kong Sokanha

2023-Mar-09

General

Using Decomposed Error for Reproducing Implicit Understanding of Algorithms.

In Evolutionary computation

Reproducibility is important for having confidence in evolutionary machine learning algorithms. Although the focus of reproducibility is usually on recreating an aggregate prediction error score using fixed random seeds, this is not sufficient. First, multiple runs of an algorithm, without a fixed random seed, should ideally return statistically equivalent results. Second, it should be confirmed whether the expected behaviour of an algorithm matches its actual behaviour, in terms of how the algorithm targets a reduction in prediction error. Confirming the behaviour of an algorithm is not possible when using a total aggregate error score. Using an error decomposition framework as a methodology for improving the reproducibility of results in evolutionary computation addresses both of these factors. By estimating decomposed error using multiple runs of an algorithm and multiple training sets, the framework provides a greater degree of certainty about the prediction error. Also, decomposing error into bias, variance due to the algorithm (internal variance), and variance due to the training data (external variance) more fully characterises evolutionary algorithms. This allows the behaviour of an algorithm to be confirmed. Applying the framework to a number of evolutionary algorithms shows that their expected behaviour can differ from their actual behaviour. Identifying such a behaviour mismatch is important for understanding how to further refine an algorithm as well as how to effectively apply it to a problem.
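The decomposition itself is easy to sketch. Below is a minimal, hypothetical Python example under squared error, with a seeded random forest standing in for a stochastic (e.g. evolutionary) learner: internal variance is estimated across algorithm seeds for a fixed training set, external variance across resampled training sets, in the spirit of the framework described above.

```python
# Minimal sketch of the bias / internal-variance / external-variance
# decomposition under squared error. A seeded random forest stands in for a
# stochastic (e.g. evolutionary) learner: internal variance comes from the
# algorithm seed, external variance from resampled training sets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)                      # true (noise-free) target
x_test = np.linspace(-1, 1, 50).reshape(-1, 1)
y_test = f(x_test.ravel())

n_datasets, n_seeds = 20, 10
preds = np.empty((n_datasets, n_seeds, len(x_test)))
for d in range(n_datasets):                      # external source: new data
    x_tr = rng.uniform(-1, 1, (100, 1))
    y_tr = f(x_tr.ravel()) + rng.normal(0, 0.3, 100)
    for s in range(n_seeds):                     # internal source: seed
        model = RandomForestRegressor(n_estimators=20, max_depth=3,
                                      random_state=s)
        preds[d, s] = model.fit(x_tr, y_tr).predict(x_test)

mean_pred = preds.mean(axis=(0, 1))
bias_sq  = ((mean_pred - y_test) ** 2).mean()
internal = preds.var(axis=1).mean()              # E_D[Var_seed], avg over x
external = preds.mean(axis=1).var(axis=0).mean() # Var_D[E_seed], avg over x
print(f"bias^2={bias_sq:.4f}  internal={internal:.4f}  external={external:.4f}")
```

By the law of total variance, the internal and external terms sum to the overall prediction variance, so a mismatch between an algorithm's expected and actual behaviour shows up directly in how these two terms move.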

Owen Caitlin A, Dick Grant, Whigham Peter A

2023-Mar-09

Genetic programming, bias-variance trade-off, ensemble learning, error decomposition, evolutionary machine learning, geometric semantic genetic programming, stochastic algorithms, symbolic regression