In Neural Networks: The Official Journal of the International Neural Network Society
BACKGROUND AND OBJECTIVE: Deep learning is applied in medicine mostly for its state-of-the-art performance in diagnostic imaging. Regulatory authorities also require models to be explainable, but most approaches explain the model after development (post hoc) rather than incorporating explanation into the design (ante hoc). This study aimed to demonstrate human-guided deep learning with ante-hoc explainability, using a convolutional neural network on non-image data, to develop, validate, and deploy a prognostic prediction model for prelabor rupture of membranes (PROM) and an estimator of time of delivery based on a nationwide health insurance database.
METHODS: To guide modeling, we constructed and verified association diagrams from the literature and electronic health records, respectively. Non-image data were transformed into meaningful images using predictor-to-predictor similarities, harnessing the power of convolutional neural networks, which are mostly used for diagnostic imaging. The network architecture was also inferred from these similarities.
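The transformation described above can be sketched minimally: compute a predictor-to-predictor similarity matrix, order the predictors so that similar ones sit adjacent, and reshape each sample's reordered feature vector into a small 2D grid suitable for a convolutional network. This is an illustrative assumption about the general approach, not the authors' exact procedure; the similarity measure (absolute Pearson correlation), the greedy ordering, and the grid size are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))  # 200 samples, 16 tabular (non-image) predictors

# Predictor-to-predictor similarity: here, absolute Pearson correlation
# (an assumed choice for illustration).
sim = np.abs(np.corrcoef(X, rowvar=False))  # shape (16, 16)

# Greedy chain ordering: start from an arbitrary predictor and repeatedly
# append the most similar unplaced predictor, so similar predictors
# end up adjacent in the final layout.
order = [0]
remaining = set(range(1, sim.shape[0]))
while remaining:
    last = order[-1]
    nxt = max(remaining, key=lambda j: sim[last, j])
    order.append(nxt)
    remaining.remove(nxt)

# Reshape each sample's reordered features into a 4x4 single-channel
# "image" that a CNN can consume as (N, channels, H, W).
side = 4
images = X[:, order].reshape(-1, 1, side, side)
print(images.shape)  # (200, 1, 4, 4)
```

Published tabular-to-image methods such as DeepInsight use more elaborate layouts (e.g. dimensionality reduction of the feature similarity space), but the sketch conveys the core idea of letting predictor similarity determine spatial adjacency.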
RESULTS: This approach yielded the best model for prelabor rupture of membranes (n=883,376), with areas under the curve of 0.73 (95% CI 0.72 to 0.75) and 0.70 (95% CI 0.69 to 0.71) by internal and external validation, respectively, and it outperformed previous models identified by systematic review. The model was explainable by knowledge-based diagrams and model representation.
CONCLUSIONS: The model enables prognostication with actionable insights for preventive medicine.
Sufriyana Herdiantri, Wu Yu-Wei, Su Emily Chia-Yu
2023-Feb-24
Causal diagram, Deep learning, Electronic health records, Explainable artificial intelligence, Prelabor rupture of membranes