Statistics in Medicine
The interpretability of machine learning models, even those with excellent prediction performance, remains a challenge in practical applications. In this study, we investigate model interpretability and variable importance for well-performing supervised machine learning models. Building on the widely accepted concept of the odds ratio (OR), we propose a novel and computationally efficient Variable Importance evaluation framework based on the Personalized Odds Ratio (VIPOR). It is a model-agnostic interpretation method that evaluates variable importance both locally and globally. Locally, variable importance is quantified by the personalized odds ratio (POR), which accounts for subject heterogeneity in machine learning. Globally, we use a hierarchical tree to group the predictors into five groups: completely positive, completely negative, positive dominated, negative dominated, and neutral. The relative importance of predictors within each group is ranked using different statistics of the PORs across subjects, depending on the application purpose. For illustration, we apply the proposed VIPOR method to interpret a multilayer perceptron (MLP) model that predicts the mortality of subarachnoid hemorrhage (SAH) patients using real-world electronic health record (EHR) data. We compare the important variables derived from the MLP with those from other machine learning models, including tree-based models and the L1-regularized logistic regression model. The top-ranked variables are consistently identified by VIPOR across the different prediction models. Comparisons with existing interpretation methods are also conducted and discussed using publicly available data sets.
Yu Duo, Wu Hulin
2023-Jan-05
electronic health records, interpretable machine learning, predictive modeling, variable importance
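
The local-to-global pipeline described in the abstract (per-subject PORs, then sign-based grouping and ranking) can be sketched in a few lines. The following Python sketch is not the authors' implementation: the use of an sklearn MLPClassifier, the +1 standard-deviation perturbation, the probability clipping, and the majority-vote grouping thresholds are all illustrative assumptions made here for concreteness.

# A minimal sketch of the POR idea, assuming a fitted classifier with
# predict_proba and standardized predictors. Perturbation size, clipping,
# and grouping thresholds below are illustrative choices, not the paper's.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def personalized_odds_ratios(model, X, j, delta=1.0):
    """POR of predictor j for each subject: odds(x with x_j + delta) / odds(x)."""
    X_pert = X.copy()
    X_pert[:, j] += delta
    p0 = model.predict_proba(X)[:, 1]
    p1 = model.predict_proba(X_pert)[:, 1]
    eps = 1e-12  # guard against division by zero at extreme probabilities
    odds0 = p0 / np.clip(1.0 - p0, eps, None)
    odds1 = p1 / np.clip(1.0 - p1, eps, None)
    return odds1 / np.clip(odds0, eps, None)

def group_predictor(por, tol=0.05):
    """Assign one of the five sign-based groups from the share of PORs > 1."""
    frac_pos = np.mean(por > 1.0)
    if frac_pos == 1.0:
        return "completely positive"
    if frac_pos == 0.0:
        return "completely negative"
    if frac_pos > 0.5 + tol:
        return "positive dominated"
    if frac_pos < 0.5 - tol:
        return "negative dominated"
    return "neutral"

# Toy usage: fit an MLP on synthetic data, then summarize each predictor
# by the median POR across subjects and its sign-based group.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)
X = StandardScaler().fit_transform(X)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
for j in range(X.shape[1]):
    por = personalized_odds_ratios(mlp, X, j)
    print(f"x{j}: median POR = {np.median(por):.2f}, group = {group_predictor(por)}")

Because each POR is computed per subject, any summary statistic (median, mean, a tail quantile) can be used for within-group ranking, which mirrors the abstract's point that different statistics of the PORs suit different application purposes.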