
In BMC Medical Informatics and Decision Making

BACKGROUND: The interpretability of predictions made by machine learning models is vital, especially in critical fields such as healthcare. With the increasing adoption of electronic health records (EHRs) by medical organizations over the last decade, which has accumulated abundant electronic patient data, neural networks and other deep learning techniques are gradually being applied to clinical tasks to exploit the huge potential of EHR data. However, typical deep learning models are black boxes: they are not transparent, and their prediction outcomes are difficult to interpret.

METHODS: To remedy this limitation, we propose an attention-based neural network model for interpretable clinical prediction. Specifically, the proposed model employs an attention mechanism to identify critical features, attaching attention weights that quantify each feature's contribution to the prediction, so that the predictions generated by the neural network can be interpreted.
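A minimal sketch of the kind of feature-level attention described above, in numpy. This is an illustrative toy, not the authors' architecture: the embedding matrix, scoring vector, and output weights here are random placeholders standing in for learned parameters, and the feature count is arbitrary.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(0)

n_features, d = 5, 8                   # e.g. 5 clinical variables, 8-dim embeddings (illustrative)
E = rng.normal(size=(n_features, d))   # one patient's feature embeddings (placeholder for learned values)
w_att = rng.normal(size=d)             # attention scoring vector (learned in practice)
w_out = rng.normal(size=d)             # output-layer weights (learned in practice)

scores = E @ w_att                     # one attention score per clinical feature
alpha = softmax(scores)                # attention weights: non-negative, sum to 1
context = alpha @ E                    # attention-weighted patient representation
p_readmit = 1.0 / (1.0 + np.exp(-(context @ w_out)))  # sigmoid -> readmission probability

print(alpha)                           # per-feature weights: the interpretability signal
print(p_readmit)
```

The vector `alpha` is what makes the prediction inspectable: each entry says how strongly one clinical feature influenced this patient's predicted risk, which is the kind of patient-specific weight the RESULTS section visualizes.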

RESULTS: We evaluated the proposed model on a real-world clinical dataset of 736 samples for predicting readmission of heart failure patients. The model achieved 66.7% accuracy and 69.1% AUC, outperforming the baseline models. In addition, we visualized patient-specific attention weights, which can not only help clinicians understand the prediction outcomes but also assist them in selecting individualized treatment strategies or intervention plans.

CONCLUSIONS: The experimental results demonstrate that equipping the model with an attention mechanism improves both prediction performance and interpretability.

Chen Peipei, Dong Wei, Wang Jinliang, Lu Xudong, Kaymak Uzay, Huang Zhengxing


Keywords: Attention mechanism, Clinical prediction, Deep learning, Interpretability