In Briefings in Bioinformatics

Qualitative and quantitative prediction models of structure-activity relationships based on graph neural networks (GNNs) are prevalent in drug discovery applications and commonly have excellent predictive power. However, the information flows within GNNs are highly complex, which makes these models poorly interpretable. Unfortunately, studies on GNN attribution remain relatively scarce, and their development in drug research is still at an early stage. In this work, we adopted several advanced attribution techniques for different GNN frameworks and applied them to explain multiple drug molecule property prediction tasks, enabling the identification and visualization of vital chemical information in the networks. In addition, we evaluated them quantitatively with attribution metrics such as accuracy, sparsity, fidelity and infidelity, and stability and sensitivity; discussed their applicability and limitations; and provided an open-source benchmark platform for researchers. The results showed that all attribution techniques were effective, while those directly related to the predicted labels, such as integrated gradients, tended to have better attribution performance. The attribution techniques we have implemented can be applied directly to the vast majority of chemical GNN interpretation tasks.
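To make the idea of a label-related attribution technique concrete, below is a minimal sketch of integrated gradients applied to a toy graph neural network in plain PyTorch. This is not the authors' benchmark code; the model (`TinyGNN`), the all-zero feature baseline, and the step count `n_steps` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """Toy one-layer GCN-style model with mean pooling and a scalar head."""
    def __init__(self, in_dim, hidden, out_dim=1):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, out_dim)

    def forward(self, x, adj):
        # adj: normalized adjacency (n, n); x: node features (n, in_dim)
        h = torch.relu(adj @ self.lin1(x))   # one message-passing step
        return self.lin2(h.mean(dim=0))      # mean-pool to a graph-level score

def integrated_gradients(model, x, adj, baseline=None, n_steps=50):
    """Approximate IG attributions for node features via a Riemann sum
    along the straight-line path from the baseline to the input."""
    if baseline is None:
        baseline = torch.zeros_like(x)       # illustrative zero-feature baseline
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, n_steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        out = model(point, adj)
        grad, = torch.autograd.grad(out.sum(), point)
        total_grads += grad
    # IG = (input - baseline) * average gradient along the path
    return (x - baseline) * total_grads / n_steps

# Usage on a random 5-node graph with 8-dimensional node features
n, d = 5, 8
x = torch.randn(n, d)
adj = torch.eye(n)                 # trivial self-loop adjacency, demo only
model = TinyGNN(d, 16)
attr = integrated_gradients(model, x, adj)
node_importance = attr.sum(dim=1)  # one attribution score per node (atom)
```

Summing the attribution matrix over the feature dimension yields one importance score per node, which is the quantity typically mapped as a heat map onto the molecular graph for visualization.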

Wang Yimeng, Huang Mengting, Deng Hua, Li Weihua, Wu Zengrui, Tang Yun, Liu Guixia

2022-Dec-19

SAR, explainable artificial intelligence, graph neural networks, interpretability, visualization