
General

Work effort, readability and quality of pharmacy transcription of patient directions from electronic prescriptions: a retrospective observational cohort analysis.

In BMJ quality & safety

BACKGROUND : Free-text directions generated by prescribers in electronic prescriptions can be difficult for patients to understand due to their variability, complexity and ambiguity. Pharmacy staff are responsible for transcribing these directions so that patients can take their medication as prescribed. However, little is known about the quality of these transcribed directions received by patients.

METHODS : A retrospective observational analysis of 529 990 e-prescription directions processed at a mail-order pharmacy in the USA. We measured pharmacy staff editing of directions using string edit distance and execution time using the Keystroke-Level Model. Using the New Dale-Chall (NDC) readability formula, we calculated NDC cloze scores of the patient directions before and after transcription. We also evaluated the quality of directions (eg, included a dose, dose unit, frequency of administration) before and after transcription with a random sample of 966 patient directions.
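
Below is a minimal illustrative sketch, in Python, of the string edit distance metric used to quantify transcription effort (a standard Levenshtein distance); the example direction strings are hypothetical and this is not the authors' implementation.

```python
def edit_distance(source: str, target: str) -> int:
    """Levenshtein distance: the minimum number of single-character
    insertions, deletions and substitutions turning source into target."""
    m, n = len(source), len(target)
    prev = list(range(n + 1))          # distances for the empty source prefix
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            curr[j] = min(prev[j] + 1,          # deletion
                          curr[j - 1] + 1,      # insertion
                          prev[j - 1] + cost)   # substitution
        prev = curr
    return prev[n]

# Hypothetical prescriber direction vs. pharmacy-transcribed direction
print(edit_distance("1 tab po qd", "Take 1 tablet by mouth once daily"))
```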

RESULTS : Pharmacy staff edited 83.8% of all e-prescription directions received, with a median edit distance of 18 per e-prescription. We estimated a median of 6.64 s spent transcribing each e-prescription. The median NDC score increased by 68.6% after transcription (26.12 vs 44.03, p<0.001), indicating a significant readability improvement. In our sample, 51.4% of patient directions on e-prescriptions contained at least one pre-defined direction quality issue. Pharmacy staff corrected 79.5% of the quality issues.

CONCLUSION : Pharmacy staff put significant effort into transcribing e-prescription directions. Manual transcription removed the majority of quality issues; however, pharmacy staff still missed some issues or introduced new ones during their manual transcription processes. The development of tools and techniques, such as a comprehensive set of structured direction components or machine learning-based natural language processing, may help produce clear directions.

Zheng Yifan, Jiang Yun, Dorsch Michael P, Ding Yuting, Vydiswaran V G Vinod, Lester Corey A

2020-May-25

human error, human factors, information technology, medication safety, pharmacists

General

How can endoscopists adapt and collaborate with artificial intelligence for early gastric cancer detection?

In Digestive endoscopy : official journal of the Japan Gastroenterological Endoscopy Society

Early detection is essential to improve the prognosis and reduce the mortality of gastric cancer, particularly in countries with a high incidence of gastric cancer such as Japan and Korea. Endoscopy has recently been accepted as a primary tool in population-based gastric cancer screening [1]. Early detection also allows for minimally invasive endoscopic resection, which has been shown to offer excellent overall survival comparable to gastrectomy while preserving stomach function.

Abe Seiichiro, Oda Ichiro

2020-May-26

General

Development and validation of explainable AI-based decision-supporting tool for prostate biopsy.

In BJU international; h5-index 62.0

OBJECTIVES : To develop and validate a risk calculator for prostate cancer (PC) and clinically significant PC (csPC) using explainable artificial intelligence (XAI).

MATERIALS AND METHODS : We used data from 3791 patients to develop and validate the risk calculator. We initially divided the data into development and validation sets. An extreme gradient-boosting algorithm was applied to develop the calculator, using five-fold cross-validation with hyperparameter tuning following feature selection in the development set. Model feature importance was determined based on Shapley values. The area under the receiver operating characteristic curve (AUC) was analysed for each calculator on the validation set.
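
As a rough sketch of the modelling pipeline described above (penalised feature selection, gradient boosting with five-fold cross-validated tuning, Shapley-value importance and AUC evaluation), the following Python outline may help; the file name, column names, hyperparameter grid and the use of scikit-learn, XGBoost and SHAP are assumptions for illustration, not the authors' code. An L1-penalised logistic regression stands in for the LASSO selector because the outcome is binary.

```python
import pandas as pd
import shap
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression   # L1 penalty as a LASSO-style selector
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# Hypothetical cohort: tabular clinical features and a binary prostate-cancer label
df = pd.read_csv("biopsy_cohort.csv")                  # placeholder file name
X, y = df.drop(columns=["cancer"]), df["cancer"]
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# L1-penalised feature selection fitted on the development set only
selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
X_dev_sel = selector.fit_transform(X_dev, y_dev)
X_val_sel = selector.transform(X_val)

# Extreme gradient boosting with five-fold cross-validated hyperparameter tuning
grid = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [3, 5], "n_estimators": [200, 400], "learning_rate": [0.05, 0.1]},
    scoring="roc_auc",
    cv=5,
)
grid.fit(X_dev_sel, y_dev)

# Validation AUC and Shapley-value feature importance
auc = roc_auc_score(y_val, grid.best_estimator_.predict_proba(X_val_sel)[:, 1])
shap_values = shap.TreeExplainer(grid.best_estimator_).shap_values(X_val_sel)
print(f"validation AUC = {auc:.3f}")
```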

RESULTS : In total, 1216 (32.7%) and 562 (14.8%) patients were diagnosed with PC and csPC, respectively. The data of 2843 patients were used for development, whereas the data of 948 patients were used as a test set. We selected the variables for each PC and csPC risk calculation using least absolute shrinkage and selection operator (LASSO) regression. The AUC of the final PC model was 0.869 (95% confidence interval [CI] 0.844 to 0.893), whereas that of the csPC model was 0.945 (95% CI 0.927 to 0.963). Prostate-specific antigen (PSA), free PSA, age, prostate volume (both transition zone and total), hypoechoic lesions on ultrasound, and testosterone level were found to be important parameters in the PC model. The number of previous biopsies was not associated with the risk of csPC, but was negatively associated with the risk of PC.

CONCLUSION : We successfully developed and validated a decision-supporting tool using XAI for calculating the probability of PC and csPC prior to prostate biopsy.

Suh Jungyo, Yoo Sangjun, Park Juhyun, Cho Sung Yong, Cho Min Chul, Son Hwancheol, Jeong Hyeon

2020-May-26

Decision-supporting tool, Explainable AI, Machine learning, Prediction model, Prostate cancer, Web-based model

General

Targeting Precision with Data Augmented Samples in Deep Learning.

In Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

In the last five years, deep learning (DL) has become the state-of-the-art tool for solving various tasks in medical image analysis. Among the different methods that have been proposed to improve the performance of convolutional neural networks (CNNs), one typical approach is the augmentation of the training data set through various transformations of the input image. Data augmentation is typically used where only a small amount of data is available, as in the majority of medical imaging problems, to present a more substantial amount of data to the network and improve overall accuracy. However, the ability of the network to improve the accuracy of its results when a slightly modified version of the same input is presented is often overestimated. This overestimation results from the strong correlation between data samples when they are treated as independent in the training phase. In this paper, we emphasize the importance of optimizing for accuracy as well as precision among multiple replicates of the same training data in the context of data augmentation. To this end, we propose a new approach that leverages the augmented data to help the network focus on precision through a specifically designed loss function, with the ultimate goal of improving both overall performance and the network's precision at the same time. We present two different applications of DL (regression and segmentation) to demonstrate the strength of the proposed strategy. We think that this work will pave the way to an explicit use of data augmentation within the loss function, helping the network become invariant to small variations of the same input sample, a characteristic that is required in every application in the medical imaging field.
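
The abstract does not spell out the proposed loss function; the following PyTorch sketch shows one plausible reading, in which a standard task loss (the accuracy term) is combined with a penalty on the spread of predictions across augmented replicates of the same input (the precision term). The tensor layout, the MSE task loss and the weighting factor lambda_var are assumptions for illustration, not the authors' formulation.

```python
import torch

def augmentation_consistency_loss(preds, targets, lambda_var=0.1):
    """preds: (R, B, ...) predictions for R augmented replicates of B inputs.
    targets: (B, ...) ground truth shared by all replicates of each input.
    Returns task loss (accuracy term) plus replicate variance (precision term)."""
    # Accuracy term: mean task loss over all replicates (MSE as a stand-in)
    task = torch.mean((preds - targets.unsqueeze(0)) ** 2)
    # Precision term: predictions for the same input should agree across replicates
    spread = preds.var(dim=0, unbiased=False).mean()
    return task + lambda_var * spread

# Hypothetical regression batch: 4 augmented replicates of 8 inputs
preds = torch.randn(4, 8, 1, requires_grad=True)
targets = torch.randn(8, 1)
loss = augmentation_consistency_loss(preds, targets)
loss.backward()
```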

Nardelli Pietro, Estépar Raúl San José

2019-Oct

Accuracy, Data augmentation, Deep learning, Precision

General

Conceptual Organization is Revealed by Consumer Activity Patterns.

In Computational brain & behavior

Computational models using text corpora have proved useful in understanding the nature of language and human concepts. One appeal of this work is that text, such as from newspaper articles, should reflect human behaviour and conceptual organization outside the laboratory. However, texts do not directly reflect human activity, but instead serve a communicative function and are highly curated or edited to suit an audience. Here, we apply methods devised for text to a data source that directly reflects thousands of individuals' activity patterns. Using product co-occurrence data from nearly 1.3 million supermarket shopping baskets, we trained a topic model to learn 25 high-level concepts (or topics). These topics were found to be comprehensible and coherent by both retail experts and consumers. The topics indicated that human concepts are primarily organized around goals and interactions (e.g. tomatoes go well with vegetables in a salad), rather than their intrinsic features (e.g. defining a tomato by the fact that it has seeds and is fleshy). These results are consistent with the notion that human conceptual knowledge is tailored to support action. Individual differences in the topics sampled predicted basic demographic characteristics. Our findings suggest that human activity patterns can reveal conceptual organization and may give rise to it.
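
As a small illustrative sketch of this kind of topic modelling (not the authors' pipeline), Latent Dirichlet Allocation can be fitted by treating each basket as a "document" and each product as a "word"; the basket strings below are hypothetical, and the toy data supports far fewer than the 25 topics used in the study.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical baskets: each basket is a space-separated list of product identifiers
baskets = [
    "tomatoes lettuce cucumber olive_oil",
    "pasta tomatoes parmesan basil",
    "nappies baby_wipes infant_formula",
]

# Treat each basket as a document and each product as a word
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(baskets)

# The study learned 25 high-level topics on ~1.3 million baskets; 3 here for the toy data
lda = LatentDirichletAllocation(n_components=3, random_state=0)
basket_topic_mixtures = lda.fit_transform(X)   # per-basket topic proportions

# Inspect the most probable products per topic
products = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [products[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: {top}")
```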

Hornsby Adam N, Evans Thomas, Riefer Peter S, Prior Rosie, Love Bradley C

2020

Big data, Cognition, Computational social science, Decision making, Machine learning

Radiology

Abdominal musculature segmentation and surface prediction from CT using deep learning for sarcopenia assessment.

In Diagnostic and interventional imaging

PURPOSE : The purpose of this study was to build and train a deep convolutional neural network (CNN) algorithm to segment muscular body mass (MBM) and predict muscular surface area from a two-dimensional axial computed tomography (CT) slice through the L3 vertebra.

MATERIALS AND METHODS : An ensemble of 15 deep learning models with a two-dimensional U-net architecture, a 4-level depth and 18 initial filters was trained to segment MBM. The muscular surface values were computed from the predicted masks and corrected for the algorithm's estimated bias. The resulting mask and surface predictions were assessed using the Dice similarity coefficient (DSC) and root mean squared error (RMSE), respectively, with ground-truth masks as the standard of reference.
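
For reference, a minimal Python sketch of the two evaluation metrics named above (Dice similarity coefficient for the predicted masks, RMSE for the derived surface values); the arrays and pixel spacing are hypothetical and this is not the authors' code.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

def surface_cm2(mask, pixel_area_mm2):
    """Muscle surface on one axial slice from mask area and pixel spacing (mm^2 -> cm^2)."""
    return mask.sum() * pixel_area_mm2 / 100.0

# Hypothetical predicted and ground-truth L3 muscle masks (512 x 512, ~0.7 mm pixels)
rng = np.random.default_rng(0)
pred = rng.random((512, 512)) > 0.5
gt = rng.random((512, 512)) > 0.5

dsc = dice_coefficient(pred, gt)
# RMSE would be computed over all test slices; a single slice is shown here
rmse = np.sqrt(np.mean((surface_cm2(pred, 0.49) - surface_cm2(gt, 0.49)) ** 2))
print(f"DSC = {dsc:.3f}, surface RMSE = {rmse:.2f} cm^2")
```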

RESULTS : A total of 1025 individual CT slices were used for training and validation, and 500 additional axial CT slices were used for testing. The mean DSC and RMSE on the test set were 0.97 and 3.7 cm² respectively.

CONCLUSION : Deep learning methods using a convolutional neural network enable robust and automated extraction of CT-derived MBM for sarcopenia assessment, which could be implemented in a clinical workflow.

Blanc-Durand P, Schiratti J-B, Schutte K, Jehanno P, Herent P, Pigneur F, Lucidarme O, Benaceur Y, Sadate A, Luciani A, Ernst O, Rouchaud A, Creuze M, Dallongeville A, Banaste N, Cadi M, Bousaid I, Lassau N, Jegou S

2020-May-22

Convolutional neural networks (CNN), Deep learning, Muscular body mass, Sarcopenia, Tomography, X-ray computed