
General

Story Arcs in Serious Illness: Natural Language Processing features of Palliative Care Conversations.

In Patient education and counseling ; h5-index 0.0

OBJECTIVE : Serious illness conversations are complex clinical narratives that remain poorly understood. Natural Language Processing (NLP) offers new approaches for identifying hidden patterns within the lexicon of stories that may reveal insights about the taxonomy of serious illness conversations.

METHODS : We analyzed verbatim transcripts from 354 consultations involving 231 patients and 45 palliative care clinicians from the Palliative Care Communication Research Initiative. We stratified each conversation into deciles of "narrative time" based on word counts. We used standard NLP analyses to examine the frequency and distribution of words and phrases indicating temporal reference, illness terminology, sentiment and modal verbs (indicating possibility/desirability).
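The decile stratification step can be sketched as follows. This is a minimal illustration assuming simple whitespace tokenization; the marker word lists are invented stand-ins, not the study's actual temporal-reference lexicons.

```python
# Sketch: split a transcript into ten "narrative time" bins by word count,
# then count past- and future-referencing words in each bin.

PAST_MARKERS = {"was", "were", "had", "ago", "before"}     # illustrative only
FUTURE_MARKERS = {"will", "going", "might", "hope", "later"}  # illustrative only

def narrative_deciles(transcript: str, n_bins: int = 10):
    """Split a transcript's words into n_bins contiguous, near-equal bins."""
    words = transcript.lower().split()
    size = max(1, len(words) // n_bins)
    bins = [words[i * size:(i + 1) * size] for i in range(n_bins - 1)]
    bins.append(words[(n_bins - 1) * size:])  # last bin takes the remainder
    return bins

def temporal_profile(transcript: str):
    """Per-decile counts of (past, future) marker words."""
    profile = []
    for decile in narrative_deciles(transcript):
        past = sum(w in PAST_MARKERS for w in decile)
        future = sum(w in FUTURE_MARKERS for w in decile)
        profile.append((past, future))
    return profile
```

The same per-decile counting extends directly to illness terminology, sentiment lexicons, or modal verbs by swapping in the corresponding word lists.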

RESULTS : Temporal references shifted steadily from talking about the past to talking about the future over deciles of narrative time. Conversations progressed incrementally from "sadder" to "happier" lexicon; reduction in illness terminology accounted substantially for this pattern. We observed the following sequence in peak frequency over narrative time: symptom terms, treatment terms, prognosis terms and modal verbs indicating possibility.

CONCLUSIONS : NLP methods can identify narrative arcs in serious illness conversations.

PRACTICE IMPLICATIONS : Fully automating NLP methods will allow for efficient, large-scale, real-time measurement of serious illness conversations for research, education and system re-design.

Ross Lindsay, Danforth Christopher M, Eppstein Margaret J, Clarfeld Laurence A, Durieux Brigitte N, Gramling Cailin J, Hirsch Laura, Rizzo Donna M, Gramling Robert


Artificial Intelligence, Communication, Conversation, Machine Learning, Narrative Analysis, Natural Language Processing, Palliative Care, Stories

General

Computational modeling of the monoaminergic neurotransmitter and male neuroendocrine systems in an analysis of therapeutic neuroadaptation to chronic antidepressant.

In European neuropsychopharmacology : the journal of the European College of Neuropsychopharmacology ; h5-index 0.0

Second-line depression treatment involves augmenting chronic administration of a selective serotonin reuptake inhibitor (SSRI), the first-line treatment, with one (or rarely two) additional drugs. Unfortunately, many depressed patients still fail to respond even after months to years of searching for an effective combination. To aid in the identification of potentially effective antidepressant combinations, we created a computational model of the monoaminergic neurotransmitter (serotonin, norepinephrine, and dopamine), stress-hormone (cortisol), and male sex hormone (testosterone) systems. The model was trained via machine learning to represent a broad range of empirical observations. Neuroadaptation to chronic drug administration was simulated through incremental adjustments in model parameters corresponding to key regulatory components of the neurotransmitter and neurohormone systems. Analysis revealed that neuroadaptation in the model depended on all of the regulatory components in complicated ways and did not single out any one or a few components that could be targeted in the design of antidepressant treatments. We used large sets of neuroadapted model states to screen 74 different drug and hormone combinations and identified several that could potentially be therapeutic for a higher proportion of male patients than SSRIs by themselves.
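The neuroadaptation procedure described above, incremental parameter adjustment toward a drug-adapted state, can be illustrated with a toy hill-climbing sketch. The `simulate` function and its weights are invented stand-ins for the authors' actual monoaminergic model, not a reproduction of it.

```python
# Toy sketch of neuroadaptation as incremental parameter adjustment:
# accept a random small parameter change only if it moves the simulated
# output closer to the drug-adapted target state.
import random

def simulate(params):
    # Hypothetical stand-in for the model's steady-state monoamine output.
    return sum(p * w for p, w in zip(params, (0.5, 0.3, 0.2)))

def neuroadapt(params, target, step=0.05, iters=2000, seed=0):
    """Hill-climb model parameters toward a target steady state."""
    rng = random.Random(seed)
    params = list(params)
    best = abs(simulate(params) - target)
    for _ in range(iters):
        i = rng.randrange(len(params))
        trial = list(params)
        trial[i] += rng.uniform(-step, step)  # incremental adjustment
        err = abs(simulate(trial) - target)
        if err < best:                        # keep only improving changes
            params, best = trial, err
    return params, best
```

Collecting many such adapted states (from different seeds and starting points) mirrors the idea of screening drug combinations across large sets of neuroadapted model states.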

Camacho Mariam Bonyadi, Vijitbenjaronk Warut D, Anastasio Thomas J


Depression, Drug discovery, Gonadal hormones, Machine learning, Neuropharmacology, Systems biology

General

Towards the automation of early-stage human embryo development detection.

In Biomedical engineering online ; h5-index 0.0

BACKGROUND : Infertility and subfertility affect a significant proportion of humanity. Assisted reproductive technology has been proven capable of alleviating infertility issues. In vitro fertilisation is one such option whose success is highly dependent on the selection of a high-quality embryo for transfer. This is typically done manually by analysing embryos under a microscope. However, evidence has shown that the success rate of manual selection remains low. The use of new incubators with integrated time-lapse imaging systems is providing new possibilities for embryo assessment. We address this problem by proposing a deep-learning-based approach for automated embryo quality evaluation through the analysis of time-lapse images. Automatic embryo detection is complicated by the topological changes of the tracked object. Moreover, the algorithm should process a large number of image files of varying quality in a reasonable amount of time.

METHODS : We propose an automated approach to detect human embryo development stages during incubation and to highlight embryos with abnormal behaviour, focusing on five stages. The method comprises two major steps. First, the location of the embryo in the image is detected by employing a Haar feature-based cascade classifier and leveraging radiating lines. Then, a multi-class prediction model based on deep learning is developed to identify the total cell number in the embryo.

RESULTS : The experimental results demonstrate that the proposed method achieves an accuracy of at least 90% in the detection of embryo location. The implemented deep learning approach to identify the early stages of embryo development resulted in an overall accuracy of over 92% using the selected architectures of convolutional neural networks. The most problematic stage was the 3-cell stage, presumably due to its short duration during development.
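The per-stage evaluation implied by these results (an overall accuracy plus a per-stage breakdown that exposes the problematic 3-cell stage) can be sketched as follows; the stage labels and records below are illustrative, not the study's data.

```python
# Sketch: overall and per-class accuracy for multi-class cell-stage
# predictions, to identify which development stage is hardest to classify.
from collections import defaultdict

def accuracies(y_true, y_pred):
    """Return (overall accuracy, per-class accuracy dict)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += (t == p)
    overall = sum(correct.values()) / len(y_true)
    per_class = {label: correct[label] / total[label] for label in total}
    return overall, per_class
```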

CONCLUSION : This research contributes to the field by proposing a model to automate the monitoring of early-stage human embryo development. Unlike in other imaging fields, few published attempts have leveraged deep learning in this domain. Therefore, the approach presented in this study could be used in the creation of novel algorithms integrated into the assisted reproductive technology used by embryologists.

Raudonis Vidas, Paulauskaite-Taraseviciene Agne, Sutiene Kristina, Jonaitis Domas


Deep learning, Embryo development, Image recognition, Location detection, Multi-class prediction

General

A low-cost vision system based on the analysis of motor features for recognition and severity rating of Parkinson's Disease.

In BMC medical informatics and decision making ; h5-index 38.0

BACKGROUND : Assessment and rating of Parkinson's Disease (PD) are commonly based on the medical observation of several clinical manifestations, including the analysis of motor activities. In particular, medical specialists refer to the MDS-UPDRS (Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale), the most widely used clinical scale for PD rating. However, clinical scales rely on the observation of subtle motor phenomena that are difficult to capture with the human eye or can be misclassified. This limitation has motivated several researchers to develop intelligent systems based on machine learning algorithms able to automatically recognize PD. Nevertheless, most previous studies investigated the classification between healthy subjects and PD patients without considering the automatic rating of different levels of severity.

METHODS : In this context, we implemented a simple, low-cost clinical tool that extracts postural and kinematic features with the Microsoft Kinect v2 sensor in order to classify and rate PD. Thirty participants were enrolled for the present study: sixteen PD patients rated according to the MDS-UPDRS and fourteen paired healthy subjects. To investigate the motor abilities of the upper and lower body, we acquired and analyzed three main motor tasks: (1) gait, (2) finger tapping, and (3) foot tapping. After preliminary feature selection, different classifiers based on Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were trained and evaluated to find the best solution.
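As a minimal illustration of the kind of kinematic feature extraction described, the sketch below derives simple step-length statistics from a sequence of 3-D ankle positions such as a Kinect skeleton stream might provide. The feature definitions are assumptions for illustration, not the paper's exact feature set.

```python
# Sketch: simple kinematic features (step-length mean and variability)
# from consecutive 3-D ankle positions, as input to a downstream classifier.
import math
from statistics import mean, stdev

def gait_features(ankle_positions):
    """Compute step-length statistics from a list of (x, y, z) positions."""
    steps = [
        math.dist(a, b)  # Euclidean distance between consecutive frames
        for a, b in zip(ankle_positions, ankle_positions[1:])
    ]
    return {"step_mean": mean(steps), "step_std": stdev(steps)}
```

Feature vectors like this, concatenated across tasks, are the kind of input an SVM or ANN classifier would then be trained on.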

RESULTS : In the gait analysis, the ANN classifier performed best, reaching 89.4% accuracy with only nine features in diagnosing PD and 95.0% accuracy with only six features in rating PD severity. In the finger and foot tapping analysis, an SVM using the extracted features classified healthy subjects versus PD patients with high performance, reaching 87.1% accuracy. For the classification between mild and moderate PD patients, the foot tapping features were the most discriminative (81.0% accuracy).

CONCLUSIONS : The results of this study show how a low-cost vision-based system can automatically detect the subtle motor phenomena that characterize PD. Our findings suggest that the proposed tool can support medical specialists in the assessment and rating of PD patients in a real clinical scenario.

Buongiorno Domenico, Bortone Ilaria, Cascarano Giacomo Donato, Trotta Gianpaolo Francesco, Brunetti Antonio, Bevilacqua Vitoantonio


Artificial neural network, Classification, Feature selection, Finger tapping, Foot tapping, Gait analysis, MDS-UPDRS, Microsoft Kinect v2, Parkinson’s disease, Support vector machine

General

Implementation of machine learning algorithms to create diabetic patient re-admission profiles.

In BMC medical informatics and decision making ; h5-index 38.0

BACKGROUND : Machine learning is a branch of Artificial Intelligence concerned with the design and development of algorithms that enable computers to learn. It is gradually growing into a critical approach in many domains, such as health, education, and business.

METHODS : In this paper, we applied machine learning to a diabetes dataset with the aim of recognizing patterns and combinations of factors that characterize or explain re-admission among diabetes patients. The classifiers used include Linear Discriminant Analysis, Random Forest, k-Nearest Neighbor, Naïve Bayes, J48, and Support Vector Machine.
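As a toy illustration of one of the listed classifiers, the sketch below implements categorical Naïve Bayes with add-one smoothing. The feature values and records are invented for illustration and are not drawn from the study's 100,000-encounter dataset.

```python
# Sketch: categorical Naive Bayes for a readmission-style prediction task.
import math
from collections import Counter, defaultdict

def train_nb(records, labels):
    """Count class priors and per-feature value frequencies per class."""
    classes = Counter(labels)
    counts = defaultdict(Counter)  # (feature index, class) -> value counts
    for rec, y in zip(records, labels):
        for i, v in enumerate(rec):
            counts[(i, y)][v] += 1
    return classes, counts

def predict_nb(model, rec):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    classes, counts = model
    n = sum(classes.values())
    best, best_score = None, float("-inf")
    for y, cy in classes.items():
        score = math.log(cy / n)
        for i, v in enumerate(rec):
            c = counts[(i, y)]
            # add-one smoothing over the values seen for this class
            score += math.log((c[v] + 1) / (cy + len(c) + 1))
        if score > best_score:
            best, best_score = y, score
    return best
```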

RESULTS : Of the 100,000 cases, 78,363 were diabetic and over 47% were readmitted. Based on the classes the models produced, diabetic patients more likely to be readmitted are women, Caucasians, outpatients, those who undergo less rigorous lab or treatment procedures, or those who receive less medication and are thus discharged without proper improvement or administration of insulin despite having tested positive for HbA1c.

CONCLUSION : Diabetic patients who do not undergo rigorous lab assessments, diagnoses, and medication are more likely to be readmitted when discharged without improvement and without receiving insulin, especially if they are women, Caucasians, or both.

Alloghani Mohamed, Aljaaf Ahmed, Hussain Abir, Baker Thar, Mustafina Jamila, Al-Jumeily Dhiya, Khalaf Mohammed


Algorithms, Diabetes re-admission, HbA1c, Linear discriminant, Machine learning, Support vector machine

General

A comparison between two semantic deep learning frameworks for the autosomal dominant polycystic kidney disease segmentation based on magnetic resonance images.

In BMC medical informatics and decision making ; h5-index 38.0

BACKGROUND : The automatic segmentation of kidneys in medical images is not a trivial task when the subjects undergoing the examination are affected by Autosomal Dominant Polycystic Kidney Disease (ADPKD). Several works dealing with the segmentation of Computed Tomography images from pathological subjects have been proposed, but they either involve a highly invasive examination or require user interaction to perform the segmentation. In this work, we propose a fully automated approach for the segmentation of Magnetic Resonance images, which both reduces the invasiveness of the acquisition and requires no user interaction.

METHODS : Two approaches are proposed, both based on Deep Learning architectures using Convolutional Neural Networks (CNN) for the semantic segmentation of images, without the need to extract any hand-crafted features. In detail, the first approach performs the automatic segmentation of images without any pre-processing of the input. Conversely, the second approach follows a two-step classification strategy: a first CNN automatically detects Regions Of Interest (ROIs), and a subsequent classifier performs the semantic segmentation on the previously extracted ROIs.

RESULTS : Although ROI detection produced an overall high number of false positives, the subsequent semantic segmentation on the extracted ROIs achieved high performance in terms of mean Accuracy. However, the segmentation of the entire input images remained the more accurate and reliable approach, outperforming the two-step strategy.
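The pixel-wise Accuracy reported here can be computed as the fraction of pixels on which a predicted kidney mask agrees with the ground truth. A minimal sketch on flattened binary masks:

```python
# Sketch: pixel accuracy between a predicted and a ground-truth binary
# segmentation mask, each given as a flat sequence of 0/1 labels.
def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels where prediction matches ground truth."""
    assert len(pred_mask) == len(true_mask), "masks must be the same size"
    correct = sum(p == t for p, t in zip(pred_mask, true_mask))
    return correct / len(true_mask)
```

Note that pixel accuracy can look favorable even with many background pixels; overlap measures such as the Dice coefficient are a common complement in segmentation work.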

CONCLUSION : The results show that both investigated approaches are reliable for the semantic segmentation of polycystic kidneys, since both strategies reach an Accuracy higher than 85%. Moreover, both methodologies show performance comparable to and consistent with other approaches in the literature working on images from different sources, while reducing both the invasiveness of the analyses and the user interaction needed to perform the segmentation task.

Bevilacqua Vitoantonio, Brunetti Antonio, Cascarano Giacomo Donato, Guerriero Andrea, Pesce Francesco, Moschetta Marco, Gesualdo Loreto


ADPKD, Convolutional neural network, Deep learning, Magnetic resonance, R-CNN, Semantic segmentation