
General

Using Machine Learning and Smartphone and Smartwatch Data to Detect Emotional States and Transitions: Exploratory Study.

In JMIR mHealth and uHealth

BACKGROUND : Emotional state in everyday life is an essential indicator of health and well-being. However, daily assessment of emotional states largely depends on active self-reports, which are often inconvenient and prone to incomplete information. Automated detection of emotional states and transitions on a daily basis could be an effective solution to this problem. However, the relationship between emotional transitions and everyday context remains unexplored.

OBJECTIVE : This study aims to explore the relationship between contextual information and emotional transitions and states to evaluate the feasibility of detecting emotional transitions and states from daily contextual information using machine learning (ML) techniques.

METHODS : This study was conducted on the data of 18 individuals from a publicly available data set called ExtraSensory. Contextual and sensor data were collected using smartphone and smartwatch sensors under free-living conditions, with 3 to 9 days of data per person. Sensors included an accelerometer, a gyroscope, a compass, location services, a microphone, a phone state indicator, light, temperature, and a barometer. The users self-reported approximately 49 discrete emotions at different intervals via a smartphone app throughout the data collection period. We mapped the 49 reported discrete emotions to the 3 dimensions of the pleasure, arousal, and dominance model and considered 6 emotional states: discordant, pleased, dissuaded, aroused, submissive, and dominant. We built general and personalized models for detecting emotional transitions and states every 5 min. The transition detection problem is a binary classification problem that detects whether a person's emotional state has changed over time, whereas state detection is a multiclass classification problem. In both cases, a wide range of supervised ML algorithms was applied, together with data preprocessing, feature selection, and data imbalance handling techniques. Finally, an assessment was conducted to shed light on the association between everyday context and emotional states.
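To make the pipeline above concrete, here is a minimal sketch in scikit-learn. It is not the authors' code: the features, window counts, and labels are random placeholders for the ExtraSensory data, and simple class weighting stands in for the imbalance-handling techniques the paper mentions.

```python
# Minimal sketch of the transition/state detection setup described above (assumed, not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_windows, n_features = 2000, 40            # 5-minute windows x context/sensor features (placeholder)
X = rng.normal(size=(n_windows, n_features))

# Binary transition labels (changed vs. unchanged), highly imbalanced as in the study.
y_transition = rng.choice([0, 1], size=n_windows, p=[0.95, 0.05])
# Multiclass state labels: the 6 states derived from the pleasure-arousal-dominance model.
y_state = rng.integers(0, 6, size=n_windows)

X_tr, X_te, yt_tr, yt_te, ys_tr, ys_te = train_test_split(
    X, y_transition, y_state, test_size=0.3, random_state=0, stratify=y_transition)

# class_weight="balanced" is one simple stand-in for the imbalance handling the paper mentions.
transition_clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
transition_clf.fit(X_tr, yt_tr)
print("transition AUROC:", roc_auc_score(yt_te, transition_clf.predict_proba(X_te)[:, 1]))

state_clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
state_clf.fit(X_tr, ys_tr)
print("state AUROC (macro, one-vs-rest):",
      roc_auc_score(ys_te, state_clf.predict_proba(X_te), multi_class="ovr", average="macro"))
```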

RESULTS : This study obtained promising results for emotional state and transition detection. The best area under the receiver operating characteristic curve (AUROC) for emotional state detection reached 60.55% in the general models and an average of 96.33% across personalized models. Despite the highly imbalanced data, the best AUROC for emotional transition detection reached 90.5% in the general models and an average of 88.73% across personalized models. In general, feature analyses show that spatiotemporal context, phone state, and motion-related information are the most informative factors for emotional state and transition detection. Our assessment showed that lifestyle has an impact on the predictability of emotion.

CONCLUSIONS : Our results demonstrate a strong association of daily context with emotional states and transitions as well as the feasibility of detecting emotional states and transitions using data from smartphone and smartwatch sensors.

Sultana Madeena, Al-Jefri Majed, Lee Joon

2020-Sep-29

artificial intelligence, digital biomarkers, digital phenotyping, emotion detection, emotional transition detection, mHealth, mental health, mobile phone, spatiotemporal context, supervised machine learning

General

Development of a Social Network for People Without a Diagnosis (RarePairs): Evaluation Study.

In Journal of medical Internet research ; h5-index 88.0

BACKGROUND : Diagnostic delay in rare disease (RD) is common, sometimes lasting more than 20 years. In attempts to reduce it, diagnostic support tools have been studied extensively. However, social platforms have not yet been used for systematic diagnostic support. This paper illustrates the development and prototypic application of a social network that uses scientifically developed questions to match individuals without a diagnosis.

OBJECTIVE : The study aimed to outline, create, and evaluate a prototype tool (a social network platform named RarePairs), helping patients with undiagnosed RDs to find individuals with similar symptoms. The prototype includes a matching algorithm, bringing together individuals with similar disease burden in the lead-up to diagnosis.

METHODS : We divided our project into 4 phases. In phase 1, we used known data and findings in the literature to understand and specify the context of use. In phase 2, we specified the user requirements. In phase 3, we designed a prototype based on the results of phases 1 and 2, incorporating a state-of-the-art questionnaire with 53 items for recognizing an RD. Lastly, we evaluated this prototype with a data set of 973 questionnaires from individuals with different RDs, using 24 distance-calculation methods.

RESULTS : Based on a step-by-step construction process, the digital patient platform prototype, RarePairs, was developed. To match individuals with similar experiences, it uses answer patterns generated by a specifically designed questionnaire (Q53). A total of 973 questionnaires answered by patients with RDs were used to construct and test an artificial intelligence (AI) algorithm based on k-nearest neighbor search. With this, we found matches for every one of the 973 records. Cross-validation of these matches showed that the algorithm significantly outperforms random matching; for every data set, it found at least one other record (match) with the same diagnosis.
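The matching step can be illustrated with a small k-nearest-neighbour sketch. The answer scale, distance metric, and data below are assumptions for illustration, not the RarePairs implementation, which compared 24 distance measures on real Q53 answer patterns.

```python
# Illustrative questionnaire-based matching with k-nearest neighbours (assumed setup).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_patients, n_items = 973, 53
answers = rng.integers(0, 5, size=(n_patients, n_items))   # placeholder Q53 answer patterns (5-point scale assumed)

# Fit a kNN index; the metric here is just one of many possible distance functions.
nn = NearestNeighbors(n_neighbors=4, metric="manhattan").fit(answers)
dist, idx = nn.kneighbors(answers)

# idx[:, 0] is the record itself, so idx[:, 1:] are the 3 closest other patients.
for patient in range(3):
    print(f"patient {patient}: closest matches {idx[patient, 1:].tolist()}")
```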

CONCLUSIONS : Diagnostic delay is torturous for patients without a diagnosis, and shortening it is important for both doctors and patients. Diagnostic support using AI can take different forms. The prototype of the social media platform RarePairs might be a low-threshold patient platform, and it proved suitable for matching and connecting individuals with comparable symptoms. The exchange promoted through RarePairs might be used to speed up the diagnostic process. Further work includes evaluation in a prospective setting and implementation of RarePairs as a mobile phone app.

Kühnle Lara, Mücke Urs, Lechner Werner M, Klawonn Frank, Grigull Lorenz

2020-Sep-29

artificial intelligence, diagnostic support tool, machine learning, prototype, rare disease, social network

General

Achieving better connections between deposited lines in additive manufacturing via machine learning.

In Mathematical biosciences and engineering : MBE

Additive manufacturing is becoming increasingly popular because of its unique advantages; fused deposition modelling (FDM) in particular has been widely used owing to its simplicity and comparatively low price. All FDM process parameters can be changed to achieve different goals. For example, a lower print speed may lead to higher strength of the fabricated parts. Changing these parameters (e.g. print speed, layer height, filament extrusion speed and path distance in a layer) also changes the connection between paths (lines) within a layer. To achieve the best connection among paths in a real printing process, the relationship between these parameters and the resulting connection should be studied. In this paper, a machine learning (deep neural network) model is proposed to predict the connection between paths under different process parameters. Four hundred experiments were conducted on an FDM machine to obtain the corresponding connection status data. Of these, 280 groups of data were used to train the machine learning model, while the remaining 120 groups were used for testing. The results show that this machine learning model can predict the connection status with an accuracy of around 83%. In the future, this model can be used to select the best process parameters in additive manufacturing processes with corresponding objectives.
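For a rough sense of the kind of model involved, the sketch below trains a small feed-forward network that maps four process parameters to a connection-status class, mirroring the 280/120 split on placeholder data. The architecture, label encoding, and data are assumptions rather than the published model.

```python
# Small feed-forward network predicting connection status from FDM process parameters (assumed setup).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 4)).astype("float32")      # 400 experiments x 4 parameters (placeholder values)
y = rng.integers(0, 3, size=400)                      # 3 connection-status classes (assumed encoding)

X_train, y_train = torch.tensor(X[:280]), torch.tensor(y[:280])
X_test, y_test = torch.tensor(X[280:]), torch.tensor(y[280:])

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(500):                                  # simple full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

acc = (model(X_test).argmax(dim=1) == y_test).float().mean()
print(f"test accuracy: {acc:.2f}")  # placeholder data; the paper reports ~83% on its real experiments
```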

Jiang Jing Chao, Yu Chun Ling, Xu Xun, Ma Yong Sheng, Liu Ji Kai

2020-Apr-30

additive manufacturing, connection, deep neural network, machine learning

Radiology

Detecting Large Vessel Occlusion at Multiphase CT Angiography by Using a Deep Convolutional Neural Network.

In Radiology ; h5-index 91.0

Background Large vessel occlusion (LVO) stroke is one of the most time-sensitive diagnoses in medicine and requires emergent endovascular therapy to reduce morbidity and mortality. Leveraging recent advances in deep learning may facilitate rapid detection and reduce time to treatment.

Purpose To develop a convolutional neural network to detect LVOs at multiphase CT angiography.

Materials and Methods This multicenter retrospective study evaluated 540 adults with CT angiography examinations for suspected acute ischemic stroke from February 2017 to June 2018. Examinations positive for LVO (n = 270) were confirmed by catheter angiography, and LVO-negative examinations (n = 270) were confirmed through review of clinical and radiology reports. Preprocessing of the CT angiography examinations included vasculature segmentation and the creation of maximum intensity projection images to emphasize the contrast agent-enhanced vasculature. Seven experiments were performed by using combinations of the three phases (arterial, phase 1; peak venous, phase 2; and late venous, phase 3) of the CT angiography. Model performance was evaluated on the held-out test set. Metrics included area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.

Results The test set included 62 patients (mean age, 69.5 years; 48% women). Single-phase CT angiography achieved an AUC of 0.74 (95% confidence interval [CI]: 0.63, 0.85) with sensitivity of 77% (24 of 31; 95% CI: 59%, 89%) and specificity of 71% (22 of 31; 95% CI: 53%, 84%). Phases 1, 2, and 3 together achieved an AUC of 0.89 (95% CI: 0.81, 0.96), sensitivity of 100% (31 of 31; 95% CI: 99%, 100%), and specificity of 77% (24 of 31; 95% CI: 59%, 89%), a statistically significant improvement relative to single-phase CT angiography (P = .01). Likewise, phases 1 and 3 and phases 2 and 3 also demonstrated improved fit relative to single phase (P = .03).

Conclusion This deep learning model was able to detect the presence of large vessel occlusion, and its diagnostic performance was enhanced by using delayed phases at multiphase CT angiography examinations. © RSNA, 2020. Online supplemental material is available for this article. See also the editorial by Ospel and Goyal in this issue.
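The multiphase-input idea can be pictured as a convolutional network that receives the three CT angiography phases as channels of a maximum intensity projection image. The toy network below is purely illustrative and does not reproduce the published architecture, preprocessing, or weights.

```python
# Conceptual sketch: a tiny CNN taking the three CTA phases as input channels (not the published model).
import torch
import torch.nn as nn

class MultiphaseLVONet(nn.Module):
    def __init__(self, n_phases=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_phases, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)   # single logit: LVO present vs. absent

    def forward(self, x):                    # x: (batch, n_phases, H, W) maximum intensity projections
        return self.classifier(self.features(x).flatten(1))

# Dummy forward pass with a batch of 2 three-phase MIP images.
logits = MultiphaseLVONet()(torch.randn(2, 3, 256, 256))
print(torch.sigmoid(logits))                 # predicted probability of LVO
```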

Stib Matthew T, Vasquez Justin, Dong Mary P, Kim Yun Ho, Subzwari Sumera S, Triedman Harold J, Wang Amy, Wang Hsin-Lei Charlene, Yao Anthony D, Jayaraman Mahesh, Boxerman Jerrold L, Eickhoff Carsten, Cetintemel Ugur, Baird Grayson L, McTaggart Ryan A

2020-Sep-29

Public Health

Machine learning prediction of the adverse outcome for nontraumatic subarachnoid hemorrhage patients.

In Annals of clinical and translational neurology

OBJECTIVE : Subarachnoid hemorrhage (SAH) is often devastating with increased early mortality, particularly in those with presumed delayed cerebral ischemia (DCI). The ability to accurately predict survival for SAH patients during the hospital course would provide valuable information for healthcare providers, patients, and families. This study aims to utilize electronic health record (EHR) data and machine learning approaches to predict the adverse outcome for nontraumatic SAH adult patients.

METHODS : The cohort included nontraumatic SAH patients treated with vasopressors for presumed DCI from a large EHR database, the Cerner Health Facts® EMR database (2000-2014). The outcome of interest was the adverse outcome, defined as death in hospital or discharge to hospice. Machine learning-based models were developed and primarily assessed by area under the receiver operating characteristic curve (AUC).

RESULTS : A total of 2467 nontraumatic SAH patients (64% female; median age [interquartile range]: 56 [47-66]) who were treated with vasopressors for presumed DCI were included in the study. Of these, 934 (38%) died or were discharged to hospice. The model achieved an AUC of 0.88 (95% CI, 0.84-0.92) with only the initial 24 h of EHR data, and 0.94 (95% CI, 0.92-0.96) after the next 24 h.
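As a hedged illustration of this setup, the snippet below fits a generic gradient-boosted classifier on synthetic stand-ins for the first-24-hour EHR features and reports cross-validated AUC; the features, model choice, and data are assumptions, not the study's pipeline.

```python
# Illustrative sketch of predicting the adverse outcome from early EHR features (assumed setup).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 2467, 30
X_first_24h = rng.normal(size=(n_patients, n_features))          # placeholder first-24-hour EHR features
y_adverse = rng.choice([0, 1], size=n_patients, p=[0.62, 0.38])  # ~38% adverse outcomes, as in the cohort

model = GradientBoostingClassifier(random_state=0)
auc = cross_val_score(model, X_first_24h, y_adverse, cv=5, scoring="roc_auc")
print("mean cross-validated AUC:", auc.mean())   # random placeholder data; the paper reports 0.88 at 24 h
```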

INTERPRETATION : EHR data and machine learning models can accurately predict the risk of the adverse outcome for critically ill nontraumatic SAH patients. It is possible to use EHR data and machine learning techniques to help with clinical decision-making.

Yu Duo, Williams George W, Aguilar David, Yamal José-Miguel, Maroufy Vahed, Wang Xueying, Zhang Chenguang, Huang Yuefan, Gu Yuxuan, Talebi Yashar, Wu Hulin

2020-Sep-29

Oncology

Deep learning analysis of the primary tumour and the prediction of lymph node metastases in gastric cancer.

In The British journal of surgery

BACKGROUND : Lymph node metastasis (LNM) in gastric cancer is a prognostic factor and has implications for the extent of lymph node dissection. The lymphatic drainage of the stomach involves multiple nodal stations with different risks of metastases. The aim of this study was to develop a deep learning system for predicting LNMs in multiple nodal stations based on preoperative CT images in patients with gastric cancer.

METHODS : Preoperative CT images from patients who underwent gastrectomy with lymph node dissection at two medical centres were analysed retrospectively. Using a discovery patient cohort, a system of deep convolutional neural networks was developed to predict pathologically confirmed LNMs at 11 regional nodal stations. To gain understanding about the networks' prediction ability, gradient-weighted class activation mapping for visualization was assessed. The performance was tested in an external cohort of patients by analysis of area under the receiver operating characteristic (ROC) curves (AUC), sensitivity and specificity.
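One way to picture the per-station design is as a bank of 11 small binary classifiers, one per regional nodal station, each returning a probability of metastasis from the tumour region of the CT scan. The sketch below is an assumed simplification, not the authors' networks, and omits the gradient-weighted class activation mapping step.

```python
# Sketch of a per-nodal-station prediction structure (assumed, not the authors' code).
import torch
import torch.nn as nn

def station_cnn():
    """A tiny stand-in for each station-specific convolutional network."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(32, 1),
    )

N_STATIONS = 11
models = nn.ModuleList([station_cnn() for _ in range(N_STATIONS)])

ct_patch = torch.randn(1, 1, 128, 128)       # placeholder tumour-centred CT patch
probs = torch.cat([torch.sigmoid(m(ct_patch)) for m in models], dim=1)
print(probs)                                  # per-station LNM probabilities, shape (1, 11)
```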

RESULTS : The discovery and external cohorts included 1172 and 527 patients respectively. The deep learning system demonstrated excellent prediction accuracy in the external validation cohort, with a median AUC of 0.876 (range 0.856-0.893), sensitivity of 0.743 (0.551-0.859) and specificity of 0.936 (0.672-0.966) for 11 nodal stations. The imaging models substantially outperformed clinicopathological variables for predicting LNMs (median AUC 0.652, range 0.571-0.763). By visualizing nearly 19,000 subnetworks, imaging features related to intratumoral heterogeneity and the invasive front were found to be most useful for predicting LNMs.

CONCLUSION : A deep learning system for the prediction of LNMs was developed based on preoperative CT images of gastric cancer. The models require further validation but may be used to inform prognosis and guide individualized surgical treatment.

Jin C, Jiang Y, Yu H, Wang W, Li B, Chen C, Yuan Q, Hu Y, Xu Y, Zhou Z, Li G, Li R

2020-Sep-29